Deploying your Java application in a Kubernetes cluster can feel like Alice in Wonderland: you keep going down the rabbit hole and don't know how to make the ride comfortable. This repository explains how a Java application can be deployed, tested, debugged, and monitored in Kubernetes. It also covers canary deployments and a deployment pipeline.
We will use a simple Java application built using Spring Boot. The application publishes a REST endpoint that can be invoked at http://{host}:{port}/hello.
The source code is in the app directory.
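The endpoint itself is minimal; the following is a sketch of what the controller might look like (the actual source in the app directory is authoritative, and the class and method names here are assumptions):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller sketch; the real class in the app directory may differ.
@RestController
public class GreetingController {

    // GET /hello returns a simple greeting
    @GetMapping("/hello")
    public String hello() {
        return "Hello";
    }
}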
Build and Test using Maven
Run the application:
cd app
mvn spring-boot:run
Test the application:
curl http://localhost:8080/hello
Build and Test using Docker
Build Docker Image using multi-stage Dockerfile
Create m2.tar.gz:
mvn -Dmaven.repo.local=./m2 clean package
tar czf m2.tar.gz ./m2
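The m2.tar.gz archive lets the image build reuse the locally downloaded dependencies. The Dockerfile in the app directory is authoritative; the following is a hedged sketch of how such a multi-stage Dockerfile might consume the archive (base images and JAR name are assumptions):

# Stage 1: build the application, priming the Maven repository from m2.tar.gz
FROM maven:3-jdk-8 AS build
ADD m2.tar.gz /
COPY . /app
WORKDIR /app
RUN mvn -o -Dmaven.repo.local=/m2 package

# Stage 2: copy only the packaged JAR onto a slim JRE base
FROM openjdk:8-jre-slim
COPY --from=build /app/target/*.jar /app.jar
EXPOSE 8080
CMD ["java", "-jar", "/app.jar"]

Build the image from the app directory (the tag matches the jre-slim image listed below):
docker image build -t arungupta/greeting:jre-slim .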
List the Docker images and show the difference in sizes:
[ec2-user@ip-172-31-21-7 app]$ docker image ls | grep greeting
arungupta/greeting jre-slim 9eed25582f36 6 seconds ago 162MB
arungupta/greeting latest 1b7c061dad60 10 hours ago 490MB
Run the container:
docker container run -d -p 8080:8080 arungupta/greeting:jre-slim
Access the application:
curl http://localhost:8080/hello
Build and Test using Kubernetes
A single-node Kubernetes cluster can be easily created on a development machine using Minikube, MicroK8s, KIND, or Docker for Mac. Note, however, that these local development environments do not truly represent your production cluster.
This tutorial will use Docker for Mac.
Ensure that Kubernetes is enabled in Docker for Mac
Show the list of contexts:
kubectl config get-contexts
Configure kubectl CLI for Kubernetes cluster
kubectl config use-context docker-for-desktop
Install the Helm CLI:
brew install kubernetes-helm
If the Helm CLI is already installed, use brew upgrade kubernetes-helm instead.
Check Helm version:
helm version
Install Helm in Kubernetes cluster:
helm init
If Helm has already been initialized on the cluster, then you may have to upgrade Tiller:
helm init --upgrade
Install the Helm chart:
cd ..
helm install --name myapp manifests/myapp
Check that the pod is running:
kubectl get pods
Check that the service is up:
kubectl get svc
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
Debug Docker and Kubernetes using IntelliJ
You can debug a Docker container and a Kubernetes Pod if they’re running locally on your machine.
Debug using Kubernetes
This was tested using Docker for Mac/Kubernetes. Use the previously deployed Helm chart.
Show service:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeting-service LoadBalancer 10.101.39.100 <pending> 80:30854/TCP 8m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 90d
myapp-greeting LoadBalancer 10.108.104.178 localhost 8080:32189/TCP,5005:31117/TCP 4s
Note that the debug port (5005) is also exposed by the myapp-greeting service.
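The chart's service template publishes the JVM debug port alongside the application port; a hedged excerpt of what that Service definition might look like (everything beyond the two ports is an assumption, and the chart in manifests/myapp is authoritative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-greeting
spec:
  type: LoadBalancer          # Docker for Mac exposes this on localhost
  selector:
    app: greeting
  ports:
  - name: http
    port: 8080                # application port
    targetPort: 8080
  - name: debug
    port: 5005                # JVM remote debugging (JDWP) port
    targetPort: 5005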
In IntelliJ, create a Run > Debug > Remote configuration; the default settings attach to localhost on port 5005.
Click on Debug and set up a breakpoint in the class:
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
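Debug using Docker
The standalone Docker container can be debugged the same way, provided the JDWP debug port is published alongside the application port. A minimal sketch of starting such a container (the image name matches the earlier build; the JAVA_TOOL_OPTIONS agent settings are an assumption and must match how the image actually starts the JVM):
docker container run -d \
  --name greeting \
  -p 8080:8080 \
  -p 5005:5005 \
  -e JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" \
  arungupta/greeting
List the containers and check that both ports are published: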
$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
724313157e3c arungupta/greeting "java -jar app-swarm…" 3 seconds ago Up 2 seconds 0.0.0.0:5005->5005/tcp, 0.0.0.0:8080->8080/tcp greeting
Deploy Application to Amazon EKS
This application will now be deployed to an Amazon EKS cluster. If you're looking for a self-paced workshop that provides detailed instructions to get you started with EKS, then eksworkshop.com is your place.
Create the cluster using eksctl:
eksctl create cluster --name myeks --nodes 4 --region us-west-2
2018-10-25T13:45:38+02:00 [ℹ] setting availability zones to [us-west-2a us-west-2c us-west-2b]
2018-10-25T13:45:39+02:00 [ℹ] using "ami-0a54c984b9f908c81" for nodes
2018-10-25T13:45:39+02:00 [ℹ] creating EKS cluster "myeks" in "us-west-2" region
2018-10-25T13:45:39+02:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-10-25T13:45:39+02:00 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=myeks'
2018-10-25T13:45:39+02:00 [ℹ] creating cluster stack "eksctl-myeks-cluster"
2018-10-25T13:57:33+02:00 [ℹ] creating nodegroup stack "eksctl-myeks-nodegroup-0"
2018-10-25T14:01:18+02:00 [✔] all EKS cluster resource for "myeks" had been created
2018-10-25T14:01:18+02:00 [✔] saved kubeconfig as "/Users/argu/.kube/config"
2018-10-25T14:01:19+02:00 [ℹ] the cluster has 0 nodes
2018-10-25T14:01:19+02:00 [ℹ] waiting for at least 4 nodes to become ready
2018-10-25T14:01:50+02:00 [ℹ] the cluster has 4 nodes
2018-10-25T14:01:50+02:00 [ℹ] node "ip-192-168-161-180.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ] node "ip-192-168-214-48.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ] node "ip-192-168-75-44.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ] node "ip-192-168-82-236.us-west-2.compute.internal" is ready
2018-10-25T14:01:52+02:00 [ℹ] kubectl command should work with "/Users/argu/.kube/config", try 'kubectl get nodes'
2018-10-25T14:01:52+02:00 [✔] EKS cluster "myeks" in "us-west-2" region is ready
Check the nodes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-161-180.us-west-2.compute.internal Ready <none> 52s v1.10.3
ip-192-168-214-48.us-west-2.compute.internal Ready <none> 57s v1.10.3
ip-192-168-75-44.us-west-2.compute.internal Ready <none> 57s v1.10.3
ip-192-168-82-236.us-west-2.compute.internal Ready <none> 54s v1.10.3
Get the list of configs:
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* [email protected] myeks.us-west-2.eksctl.io [email protected]
docker-for-desktop docker-for-desktop-cluster docker-for-desktop
As indicated by the *, the kubectl CLI is now configured for the newly created cluster.
Install Helm and the application's Helm chart on this cluster as before, then check the services:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 17m
myapp-greeting LoadBalancer 10.100.241.250 a8713338abef211e8970816cb629d414-71232674.us-east-1.elb.amazonaws.com 8080:32626/TCP,5005:30739/TCP 2m
This shows that ports 8080 and 5005 are published and that an Elastic Load Balancer is provisioned. It takes about three minutes for the load balancer to be ready.
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
Delete the application:
helm delete --purge myapp
Service Mesh using AWS App Mesh
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh can be used with Amazon EKS or Kubernetes running on AWS. It also works with other container services offered by AWS, such as AWS Fargate and Amazon ECS, and with microservices deployed on Amazon EC2.
A thorough, detailed example that shows how to use App Mesh with EKS is available at Service Mesh with App Mesh. This section provides a simplified setup using the configuration files from there.
All scripts used in this section are in the manifests/appmesh directory.
Setup IAM Permissions
Set a variable ROLE_NAME to the IAM role for the EKS worker nodes:
ROLE_NAME=$(aws iam list-roles \
--query \
'Roles[?contains(RoleName,`eksctl-myeks-nodegroup`)].RoleName' --output text)
Set up permissions for the worker nodes:
aws iam attach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess
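Optionally, verify that the policy is attached:
aws iam list-attached-role-policies --role-name $ROLE_NAME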
With the App Mesh sample configuration from the manifests/appmesh directory deployed in the prod namespace, get the name of the talker pod:
TALKER_POD=$(kubectl get pods \
-nprod -lgreeting=talker \
-o jsonpath='{.items[0].metadata.name}')
Exec into the talker pod:
kubectl exec -nprod $TALKER_POD -it bash
Invoke the mostly-hello service; most of the responses will be Hello:
while [ 1 ]; do curl http://mostly-hello.prod.svc.cluster.local:8080/hello; echo;done
CTRL+C to break the loop.
Invoke the mostly-howdy service; most of the responses will be Howdy:
while [ 1 ]; do curl http://mostly-howdy.prod.svc.cluster.local:8080/hello; echo;done
CTRL+C to break the loop.
Service Mesh using Istio
Istio is a service mesh that uses a layer 4/7 proxy to route and load balance traffic over HTTP, WebSocket, HTTP/2, and gRPC, and it supports application protocols such as MongoDB and Redis. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh.
Istio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.
With Istio installed and the greeting application deployed with sidecar injection enabled, check that each pod runs two containers (the application and the Envoy proxy):
kubectl get pods -l app=greeting
NAME READY STATUS RESTARTS AGE
greeting-hello-69cc7684d-7g4bx 2/2 Running 0 1m
greeting-howdy-788b5d4b44-g7pml 2/2 Running 0 1m
Access the application multiple times to see the different responses:
for i in {1..10}
do
curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
echo
done
Set up an Istio rule to split traffic, sending 75% to the Hello and 25% to the Howdy version of the greeting service:
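The following is a hedged sketch of such a traffic split using an Istio VirtualService and DestinationRule (the host and subset names are assumptions based on the greeting-hello and greeting-howdy deployments shown above):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: greeting
spec:
  host: greeting
  subsets:
  - name: hello
    labels:
      version: hello      # assumed label on the greeting-hello pods
  - name: howdy
    labels:
      version: howdy      # assumed label on the greeting-howdy pods
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting
spec:
  hosts:
  - greeting
  http:
  - route:
    - destination:
        host: greeting
        subset: hello
      weight: 75          # 75% of requests are answered with Hello
    - destination:
        host: greeting
        subset: howdy
      weight: 25          # 25% of requests are answered with Howdy

Apply the rule with kubectl apply -f; depending on how traffic enters the mesh, the VirtualService may also need to be bound to an Istio Gateway. The manifests in the repository are authoritative.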
Access the application multiple times; roughly a quarter of the greeting messages should now be Howdy:
for i in {1..50}
do
curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
echo
done
Distributed Tracing
Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the application you deployed in the previous step to demonstrate this.
By default, tracing is disabled. --set tracing.enabled=true was used during Istio installation to ensure tracing was enabled.
Setup access to the tracing dashboard URL using port-forwarding:
kubectl port-forward \
-n istio-system \
pod/$(kubectl get pod \
-n istio-system \
-l app=jaeger \
-o jsonpath='{.items[0].metadata.name}') 16686:16686 &
By default, Grafana is disabled. --set grafana.enabled=true was used during Istio installation to ensure Grafana was enabled. Alternatively, the Grafana add-on can be installed separately as described in the Istio documentation.
Determine the ingress gateway host and port, then access the application through the gateway:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl --connect-timeout 2 http://$GATEWAY_URL/resources/greeting
The request times out if the service cannot be reached within 2 seconds.
Chaos using kube-monkey
kube-monkey is an implementation of Netflix's Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes pods in the cluster, encouraging and validating the development of failure-resilient services.
This application opts in to chaos and allows up to 40% of its pods to be killed. The deletion schedule is defined by the kube-monkey configuration and is set to weekdays between 10am and 4pm.
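Opt-in is expressed through kube-monkey labels on the application's Deployment and its pod template. A hedged sketch of those labels, matching the 40% budget described above (the identifier value is an assumption):

metadata:
  labels:
    kube-monkey/enabled: enabled                 # opt this Deployment in to chaos
    kube-monkey/identifier: myapp-greeting       # hypothetical identifier tying victim pods to the Deployment
    kube-monkey/mtbf: "1"                        # mean time between failures, in days
    kube-monkey/kill-mode: "random-max-percent"  # kill a random number of pods per run...
    kube-monkey/kill-value: "40"                 # ...up to 40% of the Deployment's pods

The 10am to 4pm weekday window comes from kube-monkey's own configuration (its start_hour and end_hour settings), not from the application.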
Deployment Pipeline using Skaffold
Skaffold is a command line utility that facilitates continuous development for Kubernetes applications. With Skaffold, you can iterate on your application source code locally then deploy it to a remote Kubernetes cluster.
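A hypothetical minimal skaffold.yaml for this repository could tie the Docker image build to the Helm chart used earlier (the apiVersion, field names, and paths depend on the Skaffold release and the repository layout, and are assumptions here):

apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
  - image: arungupta/greeting      # image built from the app directory
    context: app
deploy:
  helm:
    releases:
    - name: myapp
      chartPath: manifests/myapp
      values:
        image: arungupta/greeting  # wire the freshly built image into the chart (value key is an assumption)

Running skaffold dev from the repository root then watches the source code, rebuilds the image, and redeploys the chart on every change.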
Create a new GitHub token at https://github.com/settings/tokens/new, select repo as the scope, and click on Generate Token. Copy the generated token.