Open source project: philips-labs/terraform-aws-github-runner
Repository: https://github.com/philips-labs/terraform-aws-github-runner
Primary language: TypeScript (50.6%)

# Terraform module for scalable self-hosted GitHub Actions runners

This Terraform module creates the required infrastructure needed to host GitHub Actions self-hosted, auto-scaling runners on AWS spot instances. It provides the required logic to handle the life cycle for scaling up and down using a set of AWS Lambda functions. Runners are scaled down to zero to avoid costs when no workflows are active.
## Motivation

Lambda is chosen as the runtime for two major reasons. First, it allows the creation of small components with minimal access to AWS and GitHub. Secondly, it provides a scalable setup with minimal costs that works at repository level and scales to organization level. The lambdas create Linux-based EC2 instances with Docker to serve CI workloads that can run on Linux and/or Docker. The main goal is to support Docker-based workloads.

A logical question would be: why not Kubernetes? In the current approach, we stay close to how the GitHub Actions runners are available today: the runner is installed on a host where the required software is available. With this setup, we stay quite close to the current GitHub approach. Another logical choice would be AWS Auto Scaling groups. However, this choice would typically require far more instance-level permissions towards GitHub, and besides that, scaling up and down is not trivial.

## Overview

The moment a GitHub Actions workflow requiring a self-hosted runner is triggered, GitHub will try to find a runner which can execute the workload; this module reacts by creating a new runner when needed.

For receiving the workflow events, a GitHub App needs to be created with a webhook to which the events will be published.
In AWS, an API Gateway endpoint is created that is able to receive the GitHub webhook events via HTTP POST. The gateway triggers the webhook lambda, which verifies the signature of the event. This check guarantees the event is sent by the GitHub App. The lambda only handles the expected events; accepted events are posted on an SQS queue, others are dropped.

The "scale up runner" lambda listens to the SQS queue and picks up events. The lambda runs various checks to decide whether a new EC2 spot instance needs to be created. For example, the instance is not created if the build is already started by an existing runner, or if the maximum number of runners is reached. The lambda first requests a registration token from GitHub, which is needed later by the runner to register itself. This avoids the need for the EC2 instance, which installs the agent later in the process, to have administration permissions to register the runner. Next, the EC2 spot instance is created via the launch template. The launch template defines the specifications of the required instance and contains a user_data script that installs the required software, fetches the registration token from the SSM Parameter Store, and registers the runner with GitHub.

Scaling down the runners is at the moment brute-forced: every configurable number of minutes, a lambda checks every runner (instance) to see whether it is busy. In case the runner is not busy, it is removed from GitHub and the instance is terminated in AWS. At the moment there seems to be no other option to scale down more smoothly.

Downloading the GitHub Actions runner distribution can occasionally be slow (more than 10 minutes). Therefore a lambda is introduced that synchronizes the action runner binary from GitHub to an S3 bucket. The EC2 instance fetches the distribution from the S3 bucket instead of from the internet.

Secrets and private keys are stored in the SSM Parameter Store. These values are encrypted using either the default KMS key for SSM or a custom KMS key that you pass in.

Permissions are managed in several places; the most important ones are outlined below. For details, check the Terraform sources.
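As an illustrative sketch only (not the module's actual policies; the real policies live in the Terraform sources and are scoped far more tightly than the wildcards here), the flow above implies permissions for the scale-up lambda along these lines:

```hcl
# Illustrative only: the kind of permissions implied by the scale-up flow.
data "aws_iam_policy_document" "scale_up_sketch" {
  statement {
    # Launch runner instances from the launch template.
    actions   = ["ec2:RunInstances", "ec2:CreateTags"]
    resources = ["*"]
  }

  statement {
    # Store the GitHub registration token for the instance's user_data script.
    actions   = ["ssm:PutParameter"]
    resources = ["*"]
  }

  statement {
    # Pass the instance profile role to the newly created runner instance.
    actions   = ["iam:PassRole"]
    resources = ["*"]
  }
}
```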
Besides these permissions, the lambdas also need permission to CloudWatch (for logging and scheduling), SSM, and S3. For more details about the required permissions, see the documentation of the IAM module, which uses permission boundaries.

## Major configuration options

To be able to support a number of use cases, the module has quite a lot of configuration options. We try to choose reasonable defaults. The examples also show how to configure the runners for the main use cases.
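As a hedged sketch of what such configuration looks like (the variable names below are assumptions based on the module's inputs; verify them against the documentation of the version you use):

```hcl
module "github-runner" {
  # ... required configuration, see the full example below ...

  # Commonly tuned options (names assumed; check the module's inputs).
  instance_types                  = ["m5.large", "c5.large"] # spot candidates
  runners_maximum_count           = 10                       # upper bound for scale up
  minimum_running_time_in_minutes = 5                        # grace period before scale down
  runner_extra_labels             = "default,example"        # labels used to match workflows
  enable_organization_runners     = true                     # org-level instead of repo-level
}
```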
### ARM64 support via Graviton/Graviton2 instance types

When using the default example or top-level module, specifying an instance type that matches a Graviton/Graviton2 (ARM64) architecture causes the submodules to be configured for ARM64 AMIs and the ARM64 GitHub runner binary; see the ARM64 configuration notes for the submodules below.

## Usages

Examples are provided in the examples directory. Please ensure you have installed the required tools: Terraform (or tfenv), a Bash-compatible shell, and optionally Docker, the AWS CLI, and Node with yarn for lambda development.
The module supports two main scenarios for creating runners. On repository level, a runner will be dedicated to only one repository; no other repository can use the runner. On organization level, you can use the runner(s) for all the repositories within the organization. See the GitHub self-hosted runner instructions for more information. Before starting the deployment you have to choose one option.

The setup consists of running Terraform to create all AWS resources and manually configuring the GitHub App. The Terraform module requires configuration from the GitHub App, and the GitHub App requires output from Terraform. Therefore you first create the GitHub App and configure the basics, then run Terraform, and afterwards finalize the configuration of the GitHub App.

### Setup GitHub App (part 1)

Go to GitHub and create a new app. Beware that you can create apps for your organization or for a user. For now, we support only organization-level apps.
### Setup Terraform module

#### Download lambdas

To apply the Terraform module, the compiled lambdas (.zip files) need to be available either locally or in an S3 bucket. They can be either downloaded from the GitHub release page or built locally. To read the files from S3, set the S3 bucket variable and the object keys for the individual lambdas.

The lambdas can be downloaded manually from the release page or by using the download-lambda Terraform module (requires `curl` to be installed). For local development you can build all the lambdas at once with the repository's build script, or individually per lambda.

#### Service-linked role

To create spot instances, the `AWSServiceRoleForEC2Spot` role needs to be added to your account. You can do that manually, or by adding the following resource:

```hcl
resource "aws_iam_service_linked_role" "spot" {
  aws_service_name = "spot.amazonaws.com"
}
```

#### Terraform module

Next, create a second Terraform workspace and initiate the module, or adapt one of the examples. Note that `key_base64` needs to be the base64-encoded contents of the GitHub App's private key (`.pem` file).

```hcl
module "github-runner" {
  source  = "philips-labs/github-runner/aws"
  version = "REPLACE_WITH_VERSION"

  aws_region = "eu-west-1"
  vpc_id     = "vpc-123"
  subnet_ids = ["subnet-123", "subnet-456"]

  environment = "gh-ci"

  github_app = {
    key_base64     = "base64string"
    id             = "1"
    webhook_secret = "webhook_secret"
  }

  webhook_lambda_zip                = "lambdas-download/webhook.zip"
  runner_binaries_syncer_lambda_zip = "lambdas-download/runner-binaries-syncer.zip"
  runners_lambda_zip                = "lambdas-download/runners.zip"

  enable_organization_runners = true
}
```

Run Terraform with the following commands:

```
terraform init
terraform apply
```

The Terraform output displays the API Gateway URL (endpoint) and secret, which you need in the next step.

The lambda for syncing the GitHub distribution to S3 is triggered via CloudWatch (by default once per hour). After deployment, the function is triggered via S3 to ensure the distribution is cached.

### Setup the webhook / GitHub App (part 2)

At this point you have two options: either create a separate webhook (enterprise, org, or repo), or create the webhook in the App.

#### Option 1: Webhook

Create a new webhook at enterprise, organization, or repository level, and configure it with the endpoint and secret from the Terraform output.
#### Option 2: App

Go back to the GitHub App and update its webhook settings with the endpoint and secret from the Terraform output.
### Install app

Finally, you need to ensure the app is installed to all or selected repositories. Go back to the GitHub App and complete the installation for the required repositories.
## Encryption

The module supports two scenarios to manage the environment secrets and the private key of the Lambda functions.

### Encrypted via a module-managed KMS key (default)

This is the default; no additional configuration is required.

### Encrypted via a provided KMS key

You have to create and configure your own KMS key. The module will use an encryption context with key `Environment` and the configured environment as value; this context can be used in key policies to restrict usage of the key.

```hcl
resource "aws_kms_key" "github" {
  is_enabled = true
}

module "runners" {
  ...
  kms_key_arn = aws_kms_key.github.arn
  ...
}
```

## Pool

The module supports two options for keeping a pool of runners: the pool itself, which only supports org-level runners, and keeping runners idle.

The pool is introduced in combination with the ephemeral runners and is primarily meant to ensure that, if an event is unexpectedly dropped and no runner was created, the pool can pick up the job. The pool is maintained by a lambda. Each time the lambda is triggered, it checks whether the number of idle runners managed by the module meets the expected pool size. If not, the pool is adjusted. Keep in mind that the scale-down function is still active and will terminate instances that are detected as idle.

```hcl
pool_runner_owner = "my-org"                  # Org to which the runners are added
pool_config = [{
  size                = 20                    # size of the pool
  schedule_expression = "cron(* * * * ? *)"   # cron expression to trigger the adjustment of the pool
}]
```

The pool is NOT enabled by default; it is enabled by setting at least one object in the pool config list. The ephemeral example contains the configuration options (commented out).

### Idle runners

The module will scale down to zero runners by default. By specifying an `idle_config`, runners can be kept idle ("warm") during the configured times, for example during working hours:

```hcl
idle_config = [{
  cron      = "* * 9-17 * * 1-5"
  timeZone  = "Europe/Amsterdam"
  idleCount = 2
}]
```

Note: when using Windows runners, it is recommended to keep a few runners warmed up due to the minutes-long cold start time.

### Supported config

Cron expressions are parsed by cron-parser. The supported syntax:

```
*    *    *    *    *    *
┬    ┬    ┬    ┬    ┬    ┬
│    │    │    │    │    │
│    │    │    │    │    └ day of week (0 - 7) (0 or 7 is Sun)
│    │    │    │    └───── month (1 - 12)
│    │    │    └────────── day of month (1 - 31)
│    │    └─────────────── hour (0 - 23)
│    └──────────────────── minute (0 - 59)
└───────────────────────── second (0 - 59, optional)
```

For time zones, please check the TZ database name column for the supported values.

## Ephemeral runners

Currently a beta feature! You can configure runners to be ephemeral: a runner is used for only one job. The feature should be used in conjunction with listening for the workflow job event. Keep in mind that the scale-down lambda remains active, and that a pool (see above) can compensate for unexpectedly dropped events.
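A minimal sketch of enabling the feature, combined with a small pool as suggested above (the `enable_ephemeral_runners` input name is an assumption; check the module's inputs for your version):

```hcl
module "github-runner" {
  # ... base configuration ...

  # Assumed input name for the beta feature; verify before use.
  enable_ephemeral_runners = true

  # Optional: a small pool (org-level only) to catch unexpectedly dropped events.
  pool_runner_owner = "my-org"
  pool_config = [{
    size                = 3
    schedule_expression = "cron(0/15 * * * ? *)" # adjust the pool every 15 minutes
  }]
}
```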
The example for ephemeral runners is based on the default example. Have a look at the diff to see the major configuration differences.

## Prebuilt images

This module also allows you to run agents from a prebuilt AMI to gain faster startup times. You can find more information in the image README.md.

## Examples

Examples are located in the examples directory.
## Sub modules

The module contains several submodules; you can use them via the main module, or assemble your own setup by initializing the submodules yourself (see the sketch below). The core, mandatory submodules are the webhook, the runners, and the runner binaries syncer.
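A minimal sketch of initializing a submodule directly, using Terraform's double-slash path syntax (the required inputs differ per submodule and are omitted here):

```hcl
module "webhook" {
  source  = "philips-labs/github-runner/aws//modules/webhook"
  version = "REPLACE_WITH_VERSION"

  # Each submodule has its own required inputs; see its README in the
  # repository's modules directory.
}
```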
The remaining submodules are optional and are provided as examples or utilities, such as the download-lambda module mentioned above.

### ARM64 configuration for submodules

When using the top-level module, configure `runner_architecture = "arm64"` and ensure the list of instance types matches (see the sketch below). When initializing the submodules yourself, set the corresponding architecture variable on each submodule.
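A hedged sketch for the top-level module (`runner_architecture` and `instance_types` are the assumed input names; verify against the module's inputs):

```hcl
module "github-runner" {
  # ... base configuration ...

  runner_architecture = "arm64"
  # All instance types must be Graviton/Graviton2 (ARM64) based to match.
  instance_types = ["t4g.large", "m6g.large"]
}
```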
ARM64 configuration for submodulesWhen using the top level module configure DebuggingIn case the setup does not work as intended follow the trace of events:
## Module reference

The generated terraform-docs sections (requirements, providers, modules, resources, and inputs) are available in the repository README.
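As a rough illustration only (the version constraints below are assumptions, not taken from the module), a consuming configuration typically pins Terraform and the AWS provider along these lines:

```hcl
terraform {
  # Assumed constraints for illustration; use the values from the module's
  # generated requirements and providers tables.
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.50"
    }
  }
}
```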