This is an ingress controller for Kubernetes — the open-source container deployment,
scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress
resources and orchestrate AWS Load Balancers accordingly.
This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to find
additional details about the cluster provisioned by Kubernetes on top of AWS.
This information is used to manage AWS resources for each ingress object of the cluster.
Features
Uses CloudFormation to guarantee consistent state
Automatic discovery of SSL certificates
Automatic forwarding of requests to all Worker Nodes, even with auto scaling
Automatic cleanup of unnecessary managed resources
Version v0.13.0 uses Ingress version v1 as the default. You can downgrade
the ingress version to an earlier version via a flag. You will also need to
allow the access via RBAC; see more information in <v0.11.0 to >=v0.11.0 below.
<v0.12.17 to >=v0.12.17
Please see the release note
and the related issue:
this update can cause 30s of downtime if you don't use AWS CNI mode.
<v0.12.0 to <=v0.12.16
Version v0.12.0 changes Network Load Balancer type handling if Application Load Balancer type feature is requested. See Load Balancers types notes for details.
<v0.11.0 to >=v0.11.0
Version v0.11.0 changes the default apiVersion used for fetching/updating
ingresses from extensions/v1beta1 to networking.k8s.io/v1beta1. For this to
work the controller needs to have permissions to list ingresses and to
update and patch ingresses/status from the networking.k8s.io apiGroup.
See the deployment example. To fall back to
the old behavior you can set the apiVersion via the --ingress-api-version
flag. The value must be extensions/v1beta1, networking.k8s.io/v1beta1
(default) or networking.k8s.io/v1.
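As a minimal sketch, the RBAC rules described above could look like this (the ClusterRole name is illustrative; see the deployment example for the authoritative manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller   # illustrative name
rules:
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["list"]                    # the controller lists ingresses
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update", "patch"]         # and updates/patches their status
```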
<v0.9.0 to >=v0.9.0
Version v0.9.0 changes the internal flag parsing library to
kingpin; this means flags are now defined with -- (two dashes)
instead of a single dash. You need to change all flags accordingly, e.g.
-stack-termination-protection -> --stack-termination-protection, before
running v0.9.0 of the controller.
<v0.8.0 to >=v0.8.0
Version v0.8.0 added a certificate verification check to automatically ignore
self-signed certificates and certificates from internal CAs. The IAM role used by the controller
now needs the acm:GetCertificate permission. The acm:DescribeCertificate permission
is no longer needed and can be removed from the role.
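For illustration, the relevant statement in the controller's IAM role could look like this after the upgrade (a sketch in CloudFormation YAML; the full policy contains more permissions than shown here):

```yaml
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - acm:GetCertificate   # required since v0.8.0
        # acm:DescribeCertificate is no longer needed and can be dropped
      Resource: "*"
```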
<v0.7.0 to >=v0.7.0
Version v0.7.0 drops support for the annotation
zalando.org/aws-load-balancer-ssl-cert-domain, which we no longer
consider a feature since we have SNI-enabled ALBs.
<v0.6.0 to >=v0.6.0
Version v0.6.0 introduced support for Multiple TLS Certificates per ALB
(SNI). When upgrading, your ALBs will automatically be aggregated into a single
ALB with multiple certificates configured.
It also adds support for attaching single EC2 instances and multiple
AutoScalingGroups to the ALBs; therefore you must ensure you have the correct
instance filter defined before upgrading. The default filter is
tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node; see
How It Works for more information on how to configure this.
<v0.5.0 to >=v0.5.0
Version v0.5.0 introduced support for both internet-facing and internal
load balancers. For this change we had to change the naming of the
CloudFormation stacks created by the controller. Upgrading from v0.4.* to
v0.5.0 requires no changes, but because of the stack naming change,
migrating back down to a v0.4.* version will be disruptive, as the older
version will be unable to manage the stacks with the new naming scheme. Deleting the stacks
manually will allow for a working downgrade.
<v0.4.0 to >=v0.4.0
In versions before v0.4.0 we used AWS Tags that were set automatically by CloudFormation to find
some AWS resources.
This behavior has been changed to use custom, non-CloudFormation tags.
In order to update to v0.4.0, you have to add the following tags to your AWS load balancer
SecurityGroup before updating:
kubernetes.io/cluster/<cluster-id>=owned
kubernetes:application=kube-ingress-aws-controller
Additionally you must ensure that the instance where the ingress controller is
running has the cluster ID tag kubernetes.io/cluster/<cluster-id>=owned set
(it was ClusterID=<cluster-id> before v0.4.0).
Ingress annotations
Overview of configuration which can be set via Ingress annotations.
To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network
(--load-balancer-type="network") and the Custom Security Group (zalando.org/aws-load-balancer-security-group) or
Web Application Firewall (zalando.org/aws-waf-web-acl-id) annotation is present, the controller configures an Application Load Balancer instead.
If the zalando.org/aws-load-balancer-type: nlb annotation is also present, the controller ignores the configuration and logs an error.
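For example, with --load-balancer-type="network" set globally, an ingress carrying the security group annotation would still get an Application Load Balancer. A sketch (the SG ID is illustrative):

```yaml
metadata:
  annotations:
    zalando.org/aws-load-balancer-security-group: sg-0123456789abcdef0
    # adding zalando.org/aws-load-balancer-type: nlb here would be
    # ignored by the controller and logged as an error
```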
AWS Tags
SecurityGroup auto detection needs the following AWS Tags on the
SecurityGroup:
kubernetes.io/cluster/<cluster-id>=owned
kubernetes:application=<controller-id>, where controller-id defaults to
kube-ingress-aws-controller and can be set via the flag --controller-id=<my-ctrl-id>.
AutoScalingGroup auto detection needs the same AWS tags on the
AutoScalingGroup as defined for the SecurityGroup.
In case you want to attach/detach single EC2 instances to the ALB
TargetGroup, they have to have the same <cluster-id> tag set as the
running kube-ingress-aws-controller. Normally this would be
kubernetes.io/cluster/<cluster-id>=owned.
Development Status
This controller has been used in production since Q1 2017. It aims to be out-of-the-box useful for anyone
running Kubernetes. Jump down to the Quickstart to try it out—and please let us know if you have
trouble getting it running by filing an
Issue.
If you created your cluster with Kops, see our deployment guide for Kops.
As of this writing, it's being used in production at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can
make improvements.
The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.
We're using this ingress controller with Skipper, an HTTP router that Zalando
has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper's also open
source and has some outstanding features that we've
documented here. Feel
free to use it, or use another ingress controller of your choosing.
How It Works
This controller continuously polls the API server to check for ingress resources. It runs an infinite loop. For
each cycle it creates load balancers for new ingress resources, and deletes the load balancers for obsolete/removed
ingress resources.
The controller will not manage the security groups required to allow access from the Internet to the load balancers.
It assumes that their lifecycle is external to the controller itself.
During the startup phase, EC2 filters are constructed as follows:
If the CUSTOM_FILTERS environment variable is set, it is used to generate the filters that are later used
to fetch instances from EC2.
If the CUSTOM_FILTERS environment variable is not set or could not be parsed, the default
filters are tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node, where <cluster-id>
is determined from the EC2 tags of the instance on which the Ingress Controller pod is started.
CUSTOM_FILTERS is a list of filters separated by spaces. Each filter has the form name=value, where name can be tag-key or a tag:-prefixed expression, as would be recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.
For example:
tag-key=test will filter instances that have a tag named test, ignoring the value.
tag:foo=bar will filter instances that have a tag named foo with the value bar.
tag:abc=def,ghi will filter instances that have a tag named abc with the value def OR ghi.
The default filter tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node selects instances
that have the tag kubernetes.io/cluster/<cluster-id> with the value owned and have a tag named k8s.io/role/node (with any value).
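As a sketch, CUSTOM_FILTERS could be set on the controller's container like this (the cluster name is illustrative):

```yaml
env:
  - name: CUSTOM_FILTERS
    value: "tag:kubernetes.io/cluster/my-cluster=owned tag-key=k8s.io/role/node"
```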
Every poll cycle EC2 is queried with the filters that were constructed during startup.
Each newly discovered instance is scanned for an Auto Scaling Group tag. Each Target
Group created by this Ingress controller is then added to each known Auto Scaling Group.
The information for each Auto Scaling Group is fetched only once, when its first node is discovered for the first time.
If an instance does not belong to an Auto Scaling Group (it does not have the aws:autoscaling:groupName tag), it is stored in a separate list of
Single Instances. On each cycle, the instances on this list are registered as targets in all Target Groups managed by this controller.
If the call to get instances from EC2 does not return a previously known Single Instance, it is deregistered from the Target Groups and removed from the list of Single Instances.
Calls to deregister instances are aggregated so that at most one deregister call is issued per poll cycle.
For Auto Scaling Groups, the controller will always try to build a list of
owned Auto Scaling Groups based on the tag:
kubernetes.io/cluster/<cluster-id>=owned even if this tag is not specified in
the CUSTOM_FILTERS configuration. Tracking the owned Auto Scaling Groups is
done to automatically deregister any ASGs which are no longer targeted by the
CUSTOM_FILTERS.
Discovery
On startup, the controller discovers the AWS resources required for the controller operations:
The Security Group
Lookup of the kubernetes.io/cluster/<cluster-id> tag on the Security Group matching the clusterID for the controller node and the kubernetes:application tag matching the value kube-ingress-aws-controller, or, as a fallback for <v0.4.0,
the tag aws:cloudformation:logical-id matching the value IngressLoadBalancerSecurityGroup (only for clusters created by CloudFormation).
The Subnets
Subnets are discovered based on the VPC of the instance where the
controller is running. By default it will try to select all subnets of the
VPC but will limit the subnets to one per Availability Zone. If there are
many subnets within the VPC it's possible to tag the desired subnets with
the tags kubernetes.io/role/elb (for internet-facing ALBs) or
kubernetes.io/role/internal-elb (for internal ALBs). Subnets with these
tags will be favored when selecting subnets for the ALBs.
Additionally you can tag EC2 subnets with
kubernetes.io/cluster/<cluster-id>, which will be prioritized.
If there are two possible subnets for a single Availability Zone then the
first subnet, lexicographically sorted by ID, will be selected.
Creating Load Balancers
When the controller learns about new ingress resources, it uses the hostnames specified in them to automatically determine
the most specific, valid certificates to use. The certificates have to be valid for at least 7 days. An example ingress:
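A minimal sketch (names and the hostname are illustrative; the controller would look up a certificate valid for myapp.example.org):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.org   # used for certificate discovery
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```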
The Application Load Balancer created by the controller will have both an HTTP listener and an HTTPS listener. The
latter will use the automatically selected certificates.
By default the ingress-controller will aggregate all ingresses under as few
Application Load Balancers as possible (unless running with
--disable-sni-support). If you'd like to provision an Application Load Balancer
that is unique to an ingress, you can use the annotation
zalando.org/aws-load-balancer-shared: "false", as sketched below.
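A sketch of the annotation on an ingress:

```yaml
metadata:
  annotations:
    zalando.org/aws-load-balancer-shared: "false"   # provision a dedicated ALB for this ingress
```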
The new Application Load Balancers have a custom tag marking them as managed load balancers, to differentiate them
from other load balancers. The tag is the kubernetes:application=<controller-id> tag described under AWS Tags above.
For the load balancer scheme, you can only select from the internet-facing (default) and internal
options.
If you run the controller with --load-balancer-type=network and
create an internal load balancer, the controller will create an
Application Load Balancer instead of a Network Load Balancer, because
an internal Network Load Balancer can create hard-to-debug issues
that we want to prevent by default. If you know what you are doing, you
can enforce creating a Network Load Balancer by setting the annotation
zalando.org/aws-load-balancer-type: nlb.
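A sketch of enforcing an NLB despite the caveat above:

```yaml
metadata:
  annotations:
    zalando.org/aws-load-balancer-type: nlb   # knowingly opt in to an internal NLB
```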
Omitting Load Balancer creation for cluster-internal domains
Since >=v0.10.5, you can create Ingress objects with host rules
ending in .cluster.local, and the controller will not create an
ALB for them.
If you pass --cluster-local-domain=".cluster.local", you can change
which domain is considered cluster-internal. If you're using the deny
internal traffic feature, you might
want to keep this configuration in sync with the --internal-domains one.
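A sketch of the relevant container arg (using the value discussed above):

```yaml
args:
  - --cluster-local-domain=.cluster.local   # hosts under this suffix get no ALB
```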
Deny traffic for internal domains
Since >=v0.11.18 the controller supports the flag
--deny-internal-domains. It's a boolean config item that, when enabled,
configures the ALBs' CloudFormation templates with an
AWS::ElasticLoadBalancingV2::ListenerRule resource.
This rule is configured with the condition
values from the --internal-domains flag and a
fixed-response action set via the respective
--deny-internal-domains-response flags. This feature is not enabled by
default. The following are the default values of its config flags:
internal-domains: *.cluster.local
deny-internal-domains: false (same as explicitly passing
--no-deny-internal-domains)
Running the controller with --deny-internal-domains and
--internal-domains=*.cluster.local will generate a rule in the ALB
that matches any request to domains ending in .cluster.local and answers
the request with an HTTP 401 Unauthorized, as sketched below.
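A sketch of that configuration as container args:

```yaml
args:
  - --deny-internal-domains             # equivalent to deny-internal-domains: true
  - --internal-domains=*.cluster.local  # requests to *.cluster.local are answered with HTTP 401
```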
Create Load Balancer with SSL Policy
You can select the default
SSLPolicy
with the flag --ssl-policy=ELBSecurityPolicy-TLS-1-2-2017-01. This
choice can be overridden per ingress by the Kubernetes Ingress annotation
zalando.org/aws-load-balancer-ssl-policy, set to any valid value. Valid
values will be checked by the controller.
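A sketch of overriding the policy on a single ingress (reusing the policy name from the flag example above):

```yaml
metadata:
  annotations:
    zalando.org/aws-load-balancer-ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
```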
The controller will normally detect the SecurityGroup to
use automatically. Auto-detection is done by filtering all SecurityGroups by AWS
Tags: the kubernetes.io/cluster/<cluster-id> tag of the Security
Group should match the clusterID for the controller node with the value
owned, and the kubernetes:application tag should match the value
kube-ingress-aws-controller.
If you want to override the detected SecurityGroup, you can set a
SecurityGroup of your choice with the
zalando.org/aws-load-balancer-security-group annotation, as
shown here:
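A sketch (the SG ID is illustrative):

```yaml
metadata:
  annotations:
    zalando.org/aws-load-balancer-security-group: sg-0123456789abcdef0
```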
It is possible to define WAF associations for the created load balancers. The WAF Web ACLs need to be created
separately via CloudFormation or the AWS Console, and they can be referenced either as a global startup
configuration of the controller, or as ingress specific settings in the ingress object with an annotation. The
ingress annotation overrides the global setting, and the controller will create separate load balancers for
those ingresses using a separate WAF association.
The controller supports two versions of AWS WAF:
WAF (v1 or "classic"): the Web ACL is identified by a UUID
WAFv2: the Web ACL is identified by its ARN, prefixed with arn:aws:wafv2:
Only one WAF association can be used per load balancer, and the same command line flag and ingress annotation
are used for both versions; only the format of the value differs.
Starting the controller with global WAF association:
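A sketch, assuming a startup flag that mirrors the annotation name (consult the controller's --help output for the exact flag):

```yaml
args:
  # hypothetical flag name; the value is a WAFv2 ARN, or a UUID for WAF classic
  - --aws-waf-web-acl-id=arn:aws:wafv2:eu-central-1:123456789012:regional/webacl/my-acl/0aa00a0a-0a0a-0a0a-0a0a-0a0a0a0a0a0a
```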