Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes
For a more in-depth best-practices guide, see the solution posted here.
Introduction
This guide will take you through the steps necessary to continuously deliver
your software to end users by leveraging Google Kubernetes Engine (GKE)
and Jenkins to orchestrate the software delivery pipeline.
If you are not familiar with basic Kubernetes concepts, have a look at
Kubernetes 101.
In order to accomplish this goal you will use the following Jenkins plugins:
Jenkins Kubernetes Plugin - starts Jenkins build executor containers in the Kubernetes cluster when builds are requested and terminates those containers when builds complete, freeing resources up for the rest of the cluster
Jenkins Pipelines - define our build pipeline declaratively and keep it checked into source code management alongside our application code
Google OAuth Plugin - allows you to add your Google OAuth credentials to Jenkins
In order to deploy the application with Kubernetes you will use the following resources:
Deployments - replicate our application across our Kubernetes nodes and allow us to do a controlled rolling update of our software across the fleet of application instances
Services - load balancing and service discovery for our internal services
Ingress - external load balancing and SSL termination for our external service
Secrets - secure storage of non-public configuration information, specifically SSL certs in our case
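The lab's sample code lives in a Git repository that you clone into Cloud Shell. Assuming the standard GoogleCloudPlatform sample repo (the URL is an assumption; use the one given in your lab environment if it differs), the command is:
git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git
Output (do not copy):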
Cloning into 'continuous-deployment-on-kubernetes'...
...
Change into the cloned directory:
cd continuous-deployment-on-kubernetes
Create a Service Account with permissions
Create a service account on Google Cloud Platform (GCP).
Using a dedicated, new service account is the recommended way to avoid granting
Jenkins and the cluster more permissions than they need.
gcloud iam service-accounts create jenkins-sa \
--display-name "jenkins-sa"
Output (do not copy):
Created service account [jenkins-sa].
Add the required permissions to the service account using predefined roles.
Most of these permissions relate to Jenkins' use of Cloud Build and to
storing and retrieving build artifacts in Cloud Storage. The service account
also needs to allow the Jenkins agent to read from a repo you will create in
Cloud Source Repositories (CSR).
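A sketch of those bindings, assuming your project ID is stored in PROJECT_ID and that the roles below (Cloud Build, Cloud Storage, and Source Repositories access) match the ones your lab specifies:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:jenkins-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:jenkins-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/storage.admin"
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:jenkins-sa@$PROJECT_ID.iam.gserviceaccount.com" \
  --role "roles/source.reader"
Repeat the command for any additional roles your lab calls for.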
Output (do not copy):
Your active configuration is: [cloudshell-...]
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
The last line confirms that your account has been bound to the cluster-admin
role on the cluster, which you need in order to configure RBAC for Jenkins.
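A binding like the one reported above is typically created by granting your own Google account the cluster-admin role, for example (this is a generic RBAC command rather than anything lab-specific):
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)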
Install Helm
In this lab, you will use Helm to install Jenkins with a stable chart. Helm
is a package manager that makes it easy to configure and deploy Kubernetes
applications. Once you have Jenkins installed, you'll be able to set up your
CI/CD pipeline.
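A minimal sketch of the install, assuming Helm 3, the archived stable chart repository, and a values file shipped with the lab repo (the jenkins/values.yaml path is an assumption):
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install cd-jenkins stable/jenkins -f jenkins/values.yaml --wait
The release name cd-jenkins matches the service names you will verify in the next step.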
Now, check that the Jenkins Service was created properly:
kubectl get svc
Output (do not copy):
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cd-jenkins 10.35.249.67 <none> 8080/TCP 3h
cd-jenkins-agent 10.35.248.1 <none> 50000/TCP 3h
kubernetes 10.35.240.1 <none> 443/TCP 9h
This Jenkins configuration is using the Kubernetes Plugin,
so that builder nodes will be automatically launched as necessary when the
Jenkins master requests them. Upon completion of the work, the builder nodes
will be automatically turned down, and their resources added back to the
cluster's resource pool.
Notice that this service exposes ports 8080 and 50000 for any pods that
match the selector. This will expose the Jenkins web UI and builder/agent
registration ports within the Kubernetes cluster. Additionally, the Jenkins UI
service is exposed using a ClusterIP so that it is not accessible from outside
the cluster.
Connect to Jenkins
The Jenkins chart will automatically create an admin password for you. To
retrieve it, run:
printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
To get to the Jenkins user interface, click on the Web Preview
button in cloud shell, then click
Preview on port 8080:
You should now be able to log in with username admin and your auto-generated
password.
Your progress, and what's next
You've got a Kubernetes cluster managed by GKE. You've deployed:
a Jenkins Deployment
a (non-public) service that exposes Jenkins to its agent containers
You have the tools to build a continuous deployment pipeline. Now you need a
sample app to deploy continuously.
The sample app
You'll use a very simple sample application - gceme - as the basis for your CD
pipeline. gceme is written in Go and is located in the sample-app directory
in this repo. When you run the gceme binary on a GCE instance, it displays the
instance's metadata in a pretty card:
The binary supports two modes of operation, designed to mimic a microservice. In
backend mode, gceme will listen on a port (8080 by default) and return GCE
instance metadata as JSON, with content-type=application/json. In frontend mode,
gceme will query a backend gceme service and render that JSON in the UI you
saw above. It looks roughly like this:
Both the frontend and backend modes of the application support two additional URLs:
/version prints the version of the binary (declared as a const in
main.go)
/healthz reports the health of the application. In frontend mode, health
will be OK if the backend is reachable.
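For example, once a gceme frontend is reachable, you can exercise both endpoints with curl (the address below is a placeholder for your frontend service's IP):
curl http://FRONTEND_SERVICE_IP/version
curl http://FRONTEND_SERVICE_IP/healthz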
Deploy the sample app to Kubernetes
In this section you will deploy the gceme frontend and backend to Kubernetes
using Kubernetes manifest files (included in this repo) that describe the
environment that the gceme binary/Docker image will be deployed to. They use a
default gceme Docker image that you will be updating with your own in a later
section.
You'll have two primary environments -
canary and production - and
use Kubernetes to manage them.
Note: The manifest files for this section of the tutorial are in
sample-app/k8s. You are encouraged to open and read each one before creating
it per the instructions.
Back in Cloud Shell, first change directories to the sample-app:
cd sample-app
Create the namespace for production:
kubectl create ns production
Output (do not copy):
namespace/production created
Create the production Deployments for frontend and backend:
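A sketch of the commands, assuming the manifests are organized under sample-app/k8s as described above (the production, canary, and services subdirectory names are assumptions based on this lab's usual layout):
kubectl --namespace=production apply -f k8s/production
# The full lab also applies the canary and services manifests:
kubectl --namespace=production apply -f k8s/canary
kubectl --namespace=production apply -f k8s/services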
Next, push the sample app's source code to the Cloud Source Repositories repo
that the Jenkins pipeline will build from.
Output (do not copy):
To https://source.developers.google.com/p/myproject/r/gceme
 * [new branch]      master -> master
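A sketch of the commands that produce that output, assuming the repo is named gceme (taken from the URL above) and that the gcloud credential helper is configured for Cloud Source Repositories:
git init
git config credential.helper gcloud.sh
git remote add origin https://source.developers.google.com/p/$PROJECT_ID/r/gceme
git add .
git commit -m "Initial commit"
git push origin master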
Create a pipeline
You'll now use Jenkins to define and run a pipeline that will test, build,
and deploy your copy of gceme to your Kubernetes cluster. You'll approach this
in phases. Let's get started with the first.
Phase 1: Add your service account credentials
First, you will need to configure GCP credentials in order for Jenkins to be
able to access the code repository:
In the Jenkins UI, click Credentials on the left
Click the (global) link
Click Add Credentials on the left
From the Kind dropdown, select Google Service Account from private key
Enter the Project Name from your project
Leave JSON key selected, and click Choose File.
Select the jenkins-sa-key.json file downloaded earlier, then click
Open.
Click OK
You should now see 1 global credential. Make a note of the name of the
credential, as you will reference this in Phase 2.
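If you have not yet exported a key for the jenkins-sa service account (the jenkins-sa-key.json file referenced above), one can be created from Cloud Shell with a command like this (the PROJECT_ID variable is an assumption):
gcloud iam service-accounts keys create ~/jenkins-sa-key.json \
  --iam-account "jenkins-sa@$PROJECT_ID.iam.gserviceaccount.com"
Download the resulting file to your workstation before selecting it in the Jenkins UI.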
Phase 2: Create a job
This lab uses Jenkins Pipeline to
define builds as groovy scripts.
Navigate to your Jenkins UI and follow these steps to configure a Pipeline job
(hot tip: you can find the IP address of your Jenkins install with kubectl get ingress --namespace jenkins):
Click the Jenkins link in the top-left toolbar of the UI
Click the New Item link in the left nav
For item name use sample-app, choose the Multibranch Pipeline
option, then click OK
From the Credentials dropdown, select the name of the credential from
Phase 1. It should have the format PROJECT_ID service account.
Under Scan Multibranch Pipeline Triggers section, check the
Periodically if not otherwise run box, then set the Interval value to
1 minute.
Click Save, leaving all other options with default values.
A Branch indexing job was kicked off to identify any branches in your
repository.
Click Jenkins > sample-app in the top menu.
You should see that the master branch now has a job created for it.
The first run of the job will fail until the project name is set properly
in the Jenkinsfile in the next step.
Phase 3: Modify Jenkinsfile, then build and test the app
Create a branch for the canary environment called canary
git checkout -b canary
Output (do not copy):
Switched to a new branch 'canary'
The Jenkinsfile is
written using the Jenkins Workflow DSL, which is Groovy-based. It allows an
entire build pipeline to be expressed in a single script that lives alongside
your source code and supports powerful features like parallelization, stages,
and user input.
Update your Jenkinsfile script with the correct PROJECT environment value.
Be sure to replace REPLACE_WITH_YOUR_PROJECT_ID with your project name.
Save your changes, but don't commit the new Jenkinsfile change just yet.
You'll make one more change in the next section, then commit and push them
together.
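If you prefer to make the substitution from the shell instead of an editor, a one-liner like the following works, assuming the placeholder string appears verbatim in the Jenkinsfile:
sed -i "s/REPLACE_WITH_YOUR_PROJECT_ID/$(gcloud config get-value project)/g" Jenkinsfile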
Now that your pipeline is working, it's time to make a change to the gceme app
and let your pipeline test, package, and deploy it.
The canary environment is rolled out as a percentage of the pods behind the
production load balancer. In this case we have 1 out of 5 of our frontends
running the canary code and the other 4 running the production code. This allows
you to ensure that the canary code is not negatively affecting users before
rolling out to your full fleet. You can use the
labels env: production and
env: canary in Google Cloud Monitoring in order to monitor the performance of
each version individually.
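For example, you can see the canary-to-production split directly from the pod labels (the app and role label selectors here are assumptions based on the sample manifests; drop them if yours differ):
kubectl --namespace=production get pods -l app=gceme,role=frontend -L env
With 5 frontend replicas you would expect 4 pods reporting env=production and 1 reporting env=canary.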
In the sample-app repository on your workstation open html.go and replace
the word blue with orange (there should be exactly two occurrences):
//snip
<divclass="card orange"><divclass="card-content white-text"><divclass="card-title">Backend that serviced this request</div>
//snip
In the same repository, open main.go and change the version number from
1.0.0 to 2.0.0.
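If you prefer to make both edits from the shell rather than an editor, a quick sketch, assuming the strings blue and 1.0.0 occur only where you intend to change them:
sed -i 's/blue/orange/g' html.go    # the two card color occurrences
sed -i 's/1.0.0/2.0.0/g' main.go    # the version const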