Gracefully handle EC2 instance shutdown within Kubernetes
Project Summary
This project ensures that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable, such as EC2 maintenance events, EC2 Spot interruptions, ASG Scale-In, ASG AZ Rebalance, and EC2 Instance Termination via the API or Console. If these events are not handled, your application code may not stop gracefully, may take longer to recover full availability, or may accidentally schedule work on nodes that are going down.
The aws-node-termination-handler (NTH) can operate in two different modes: Instance Metadata Service (IMDS) or the Queue Processor.
The aws-node-termination-handler Instance Metadata Service Monitor runs a small pod on each host that monitors IMDS paths like /spot or /events and reacts accordingly, cordoning and/or draining the corresponding node.
The aws-node-termination-handler Queue Processor monitors an SQS queue of events from Amazon EventBridge for ASG lifecycle events, EC2 status change events, Spot Interruption Termination Notice events, and Spot Rebalance Recommendation events. When NTH detects that an instance is going down, it uses the Kubernetes API to cordon the node, ensuring no new work is scheduled there, and then drains it, evicting any existing work. The termination handler Queue Processor requires AWS IAM permissions to monitor and manage the SQS queue and to query the EC2 API.
You can run the termination handler on any Kubernetes cluster running on AWS, including self-managed clusters and those created with Amazon Elastic Kubernetes Service.
Major Features
Instance Metadata Service Processor
Monitors EC2 Metadata for Scheduled Maintenance Events
Monitors EC2 Metadata for Spot Instance Termination Notifications
Monitors EC2 Metadata for Rebalance Recommendation Notifications
Helm installation and event configuration support
Webhook feature to send shutdown or restart notification messages
Unit & Integration Tests
Queue Processor
Monitors an SQS Queue for:
EC2 Spot Interruption Notifications
EC2 Instance Rebalance Recommendation
EC2 Auto-Scaling Group Termination Lifecycle Hooks to take care of ASG Scale-In, AZ-Rebalance, Unhealthy Instances, and more!
EC2 Status Change Events
EC2 Scheduled Change events from AWS Health
Helm installation and event configuration support
Webhook feature to send shutdown or restart notification messages
Unit & Integration Tests
Which one should I use?
| Feature                                | IMDS Processor | Queue Processor |
|----------------------------------------|----------------|-----------------|
| K8s DaemonSet                          | ✅             | ❌              |
| K8s Deployment                         | ❌             | ✅              |
| Spot Instance Interruptions (ITN)      | ✅             | ✅              |
| Scheduled Events                       | ✅             | ✅              |
| EC2 Instance Rebalance Recommendation  | ✅             | ✅              |
| ASG Lifecycle Hooks                    | ❌             | ✅              |
| EC2 Status Changes                     | ❌             | ✅              |
| Setup Required                         | ❌             | ✅              |
Installation and Configuration
The aws-node-termination-handler can operate in two different modes: IMDS Processor and Queue Processor. The enableSqsTerminationDraining helm configuration key or the ENABLE_SQS_TERMINATION_DRAINING environment variable enables the Queue Processor mode of operation. If enableSqsTerminationDraining is set to true, then IMDS paths will NOT be monitored. If enableSqsTerminationDraining is set to false, then IMDS Processor Mode will be enabled. Queue Processor Mode and IMDS Processor Mode cannot be run at the same time.
IMDS Processor Mode allows for fine-grained configuration of which IMDS paths are monitored. Three paths are currently supported; each can be enabled or disabled with the following helm configuration keys:
enableSpotInterruptionDraining
enableRebalanceMonitoring
enableScheduledEventDraining
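As a sketch of how these keys combine, here is a hypothetical helm invocation that runs NTH in IMDS mode with Spot and scheduled-event draining enabled but rebalance monitoring disabled (the release name, namespace, and the eks-charts repo alias are assumptions for illustration):

```shell
# Hypothetical example: IMDS mode with per-path monitoring toggled.
# Assumes the eks-charts repo has been added under the alias "eks".
helm upgrade --install aws-node-termination-handler \
  --namespace kube-system \
  --set enableSqsTerminationDraining=false \
  --set enableSpotInterruptionDraining=true \
  --set enableScheduledEventDraining=true \
  --set enableRebalanceMonitoring=false \
  eks/aws-node-termination-handler
```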
By default, IMDS mode will only Cordon in response to a Rebalance Recommendation event (all other events are both Cordoned and Drained). Cordon is the default for a rebalance event because it is not known whether an ASG is being utilized, or whether that ASG is configured to replace the instance on a rebalance event. If you are using an ASG with rebalance recommendations enabled, then you can set the enableRebalanceDraining flag to true to perform both a Cordon and a Drain when a rebalance event is received.
The enableSqsTerminationDraining must be set to false for these configuration values to be considered.
The Queue Processor Mode does not allow for fine-grained configuration of which events are handled through helm configuration keys. Instead, you can modify your Amazon EventBridge rules to not send certain types of events to the SQS Queue so that NTH does not process those events. All events when operating in Queue Processor mode are Cordoned and Drained unless the cordon-only flag is set to true.
The enableSqsTerminationDraining flag turns on Queue Processor Mode. When Queue Processor Mode is enabled, IMDS mode cannot be active. NTH cannot respond to queue events AND monitor IMDS paths. Queue Processor Mode still queries for node information on startup, but this information is not required for normal operation, so it is safe to disable IMDS for the NTH pod.
AWS Node Termination Handler - IMDS Processor
Installation and Configuration
Installing the termination handler DaemonSet adds a ServiceAccount, ClusterRole, ClusterRoleBinding, and DaemonSet to your cluster. All four of these Kubernetes constructs are required for the termination handler to run properly.
Kubectl Apply
You can use kubectl to directly add all of the above resources with the default configuration into your cluster.
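A minimal sketch of that kubectl invocation, assuming a release artifact named all-resources.yaml; substitute a real version tag from the releases page:

```shell
# Hypothetical example: apply the pre-rendered IMDS-mode manifest from a
# release. The version tag below is an assumption — check the releases page.
kubectl apply -f https://github.com/aws/aws-node-termination-handler/releases/download/v1.19.0/all-resources.yaml
```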
For a full list of releases and associated artifacts see our releases page.
Helm
The easiest way to configure the various options of the termination handler is via helm. The chart for this project is hosted in the eks-charts repository.
To get started you need to add the eks-charts repo to helm
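The steps above can be sketched as follows (the release name and namespace are illustrative choices):

```shell
# Add the eks-charts repository, then install the chart with defaults.
helm repo add eks https://aws.github.io/eks-charts

helm upgrade --install aws-node-termination-handler \
  --namespace kube-system \
  eks/aws-node-termination-handler
```

Chart values such as the enable* keys described earlier can be supplied with additional --set flags or a values file.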
AWS Node Termination Handler - Queue Processor
Infrastructure Setup
The termination handler requires some infrastructure to be prepared before deploying the application. In a multi-cluster environment, you will need to repeat the following steps for each cluster.
You'll need the following AWS infrastructure components:
Amazon Simple Queue Service (SQS) Queue
AutoScaling Group Termination Lifecycle Hook
Amazon EventBridge Rule
IAM Role for the aws-node-termination-handler Queue Processing Pods
1. Create an SQS Queue:
Here is the AWS CLI command to create an SQS queue to hold termination events from ASG and EC2, although this should really be configured via your favorite infrastructure-as-code tool like CloudFormation (template here) or Terraform:
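Here is a hedged sketch of that CLI command; the queue name, region, and account ID are placeholders. The queue policy allows EventBridge (and, for direct delivery, ASG lifecycle hooks) to send messages to the queue:

```shell
# Placeholder values — replace with your own queue name, region, and account.
SQS_QUEUE_NAME="MyK8sTermQueue"

QUEUE_POLICY=$(cat <<EOF
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": ["events.amazonaws.com", "sqs.amazonaws.com"]
        },
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:${SQS_QUEUE_NAME}"
    }]
}
EOF
)

# The policy must be embedded in the attributes file as an escaped JSON string.
cat <<EOF > /tmp/queue-attributes.json
{
    "MessageRetentionPeriod": "300",
    "Policy": "$(echo "$QUEUE_POLICY" | sed 's/"/\\\\"/g' | tr -d '\n')"
}
EOF

aws sqs create-queue --queue-name "${SQS_QUEUE_NAME}" \
  --attributes file:///tmp/queue-attributes.json
```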
If you are sending Lifecycle termination events from ASG directly to SQS, instead of through EventBridge, then you will also need to create an IAM service role to give Amazon EC2 Auto Scaling access to your SQS queue. Please follow these linked instructions to create the IAM service role: link.
Note the ARNs for the SQS queue and the associated IAM role for Step 2.
Note: using SSE-KMS with an AWS managed key is not supported, because the KMS key policy can't be updated to allow EventBridge to publish events to SQS. Using SSE-SQS doesn't require extra setup and works out of the box, just like SQS queues without encryption at rest.
2. Create an ASG Termination Lifecycle Hook:
Here is the AWS CLI command to create a termination lifecycle hook on an existing ASG when using EventBridge, although this should really be configured via your favorite infrastructure-as-code tool like CloudFormation or Terraform:
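A sketch of that command, with placeholder hook and ASG names:

```shell
# Create a termination lifecycle hook on an existing ASG. EventBridge picks
# up the resulting lifecycle event, so no notification target is needed here.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name=my-k8s-term-hook \
  --auto-scaling-group-name=my-k8s-asg \
  --lifecycle-transition=autoscaling:EC2_INSTANCE_TERMINATING \
  --default-result=CONTINUE \
  --heartbeat-timeout=300
```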
If you want to avoid using EventBridge and instead send ASG Lifecycle events directly to SQS, instead use the following command, using the ARNs from Step 1:
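For the direct-to-SQS variant, the same command gains a notification target and the service role from Step 1 (the ARN placeholders below are stand-ins for your own values):

```shell
# Deliver lifecycle events straight to SQS, bypassing EventBridge.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name=my-k8s-term-hook \
  --auto-scaling-group-name=my-k8s-asg \
  --lifecycle-transition=autoscaling:EC2_INSTANCE_TERMINATING \
  --default-result=CONTINUE \
  --heartbeat-timeout=300 \
  --notification-target-arn arn:aws:sqs:us-east-1:123456789012:MyK8sTermQueue \
  --role-arn arn:aws:iam::123456789012:role/my-sqs-access-role
```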
By default, NTH only manages ASGs that are tagged with the key aws-node-termination-handler/managed. This is helpful in accounts that contain ASGs that do not run Kubernetes nodes, or ASGs whose termination lifecycle you do not want aws-node-termination-handler to manage.
However, if your account is dedicated to ASGs for your Kubernetes cluster, then you can turn off the ASG tag check by setting the flag --check-asg-tag-before-draining=false or the environment variable CHECK_ASG_TAG_BEFORE_DRAINING=false.
You can also control what resources NTH manages by adding the resource ARNs to your Amazon EventBridge rules.
Take a look at the docs on how to create rules that only manage certain ASGs here.
3. Create Amazon EventBridge Rules:
You may skip this step if you are sending events from ASG to SQS directly.
Here are AWS CLI commands to create Amazon EventBridge rules so that ASG termination events, Spot Interruptions, Instance state changes, Rebalance Recommendations, and AWS Health Scheduled Changes are sent to the SQS queue created in the previous step. This should really be configured via your favorite infrastructure-as-code tool like CloudFormation (template here) or Terraform:
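A sketch of two of those rules with placeholder names and a placeholder queue ARN; rules for instance state changes, rebalance recommendations, and AWS Health scheduled changes follow the same put-rule/put-targets pattern with their own source and detail-type values:

```shell
# Route ASG termination lifecycle events to the queue.
aws events put-rule \
  --name MyK8sASGTermRule \
  --event-pattern "{\"source\":[\"aws.autoscaling\"],\"detail-type\":[\"EC2 Instance-terminate Lifecycle Action\"]}"

aws events put-targets --rule MyK8sASGTermRule \
  --targets "Id"="1","Arn"="arn:aws:sqs:us-east-1:123456789012:MyK8sTermQueue"

# Route Spot interruption warnings to the same queue.
aws events put-rule \
  --name MyK8sSpotTermRule \
  --event-pattern "{\"source\":[\"aws.ec2\"],\"detail-type\":[\"EC2 Spot Instance Interruption Warning\"]}"

aws events put-targets --rule MyK8sSpotTermRule \
  --targets "Id"="1","Arn"="arn:aws:sqs:us-east-1:123456789012:MyK8sTermQueue"
```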
The easiest and most commonly used method to configure the termination handler is via helm. The chart for this project is hosted in the eks-charts repository.
To get started you need to add the eks-charts repo to helm
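For Queue Processor mode, the install additionally enables SQS draining and points NTH at the queue from Step 1 (queue URL below is a placeholder):

```shell
helm repo add eks https://aws.github.io/eks-charts

helm upgrade --install aws-node-termination-handler \
  --namespace kube-system \
  --set enableSqsTerminationDraining=true \
  --set queueURL=https://sqs.us-east-1.amazonaws.com/123456789012/MyK8sTermQueue \
  eks/aws-node-termination-handler
```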
For a full list of configuration options see our Helm readme.
Kubectl Apply
The Queue Processor needs an SQS queue URL to function; therefore, manifest changes are REQUIRED before using kubectl to directly add all of the above resources into your cluster.
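A hedged sketch of that workflow; the artifact name and version tag are assumptions, so check the releases page for the actual manifest:

```shell
# Download the queue-processor manifest (version tag is an assumption),
# set the queue URL in it, then apply.
curl -L -o all-resources-queue-processor.yaml \
  https://github.com/aws/aws-node-termination-handler/releases/download/v1.19.0/all-resources-queue-processor.yaml

# Edit the manifest to set your SQS queue URL before applying:
kubectl apply -f all-resources-queue-processor.yaml
```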
For a full list of releases and associated artifacts see our releases page.
Use with Kiam
If you are using IMDS mode, which defaults to hostNetworking: true, or if you are using queue-processor mode, then this section does not apply. The configuration below is only needed if you are explicitly changing NTH IMDS mode to hostNetworking: false.
Using the termination handler alongside Kiam requires some extra configuration on Kiam's end. By default, Kiam blocks all access to the metadata address, so you need to make sure it passes through the requests the termination handler relies on.
To add a whitelist configuration, use the following fields in the Kiam Helm chart values:
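As a sketch, the whitelist value could be supplied when installing the Kiam chart; note that the value key name (agent.whiteListRouteRegexp) and the chart repo alias are assumptions here, so verify them against the Kiam chart's values file:

```shell
# Hypothetical: the chart value key name is assumed — confirm it in the
# Kiam Helm chart before using.
helm upgrade --install kiam uswitch/kiam \
  --set agent.whiteListRouteRegexp='^\/latest\/meta-data\/(spot\/instance-action|events\/maintenance\/scheduled|instance-(id|type)|public-(hostname|ipv4)|local-(hostname|ipv4)|placement\/availability-zone)|\/latest\/dynamic\/instance-identity\/document$'
```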
Or just pass it as an argument to the kiam agents:
```shell
kiam agent --whitelist-route-regexp='^\/latest\/meta-data\/(spot\/instance-action|events\/maintenance\/scheduled|instance-(id|type)|public-(hostname|ipv4)|local-(hostname|ipv4)|placement\/availability-zone)|\/latest\/dynamic\/instance-identity\/document$'
```
Metadata endpoints
The termination handler relies on the following metadata endpoints to function properly; at minimum, these are the paths covered by the Kiam whitelist shown above:
/latest/meta-data/spot/instance-action
/latest/meta-data/events/maintenance/scheduled
/latest/meta-data/instance-id
/latest/meta-data/instance-type
/latest/meta-data/public-hostname
/latest/meta-data/public-ipv4
/latest/meta-data/local-hostname
/latest/meta-data/local-ipv4
/latest/meta-data/placement/availability-zone
/latest/dynamic/instance-identity/document