Open source project: kubernetes-retired/kubeadm-dind-cluster
Repository: https://github.com/kubernetes-retired/kubeadm-dind-cluster
Language: Shell 98.1%

kubeadm-dind-cluster

NOTE: This project is deprecated in favor of kind. Try kind today, it's great!

A Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes. Based on kubeadm and DIND (Docker in Docker). Supports both local workflows and workflows utilizing powerful remote machines/cloud instances for building Kubernetes, starting test clusters and running e2e tests.

If you're an application developer, you may be better off with Minikube because it's more mature and less dependent on the local environment, but if you're feeling adventurous you may give kubeadm-dind-cluster a try.

Requirements

Docker 1.12+ is recommended. If you're not using one of the preconfigured scripts (see below) and not building from source, it's better to have a kubectl executable in your PATH that matches the version of the cluster you are starting.
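As a quick sanity check, you can inspect the Docker and kubectl versions already installed on your machine with the standard commands below (this is only an illustrative check, not something dind-cluster.sh requires you to run):

$ # server-side Docker version (should be 1.12+)
$ docker version --format '{{.Server.Version}}'
$ # client-side kubectl version (should match the cluster version you plan to start)
$ kubectl version --client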
As of now, running kubeadm-dind-cluster on Docker with the btrfs storage driver is not supported. The problems include the inability to properly clean up DIND volumes due to a docker bug which is not really fixed and, more importantly, a kubelet problem.
Mac OS X considerations

Building Kubernetes from source on Mac OS X should be possible as well.

NOTE: Docker on Mac OS X, at the time of this writing, does not support IPv6 and thus clusters cannot be formed using IPv6 addresses.

Using preconfigured scripts
The preconfigured scripts are convenient for use with projects that extend or use Kubernetes. For example, you can start Kubernetes 1.14 like this:

$ wget -O dind-cluster.sh https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.2.0/dind-cluster-v1.14.sh
$ chmod +x dind-cluster.sh
$ # start the cluster
$ ./dind-cluster.sh up
$ # add kubectl directory to PATH
$ export PATH="$HOME/.kubeadm-dind-cluster:$PATH"
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 4m v1.14.0
kube-node-1 Ready <none> 2m v1.14.0
kube-node-2 Ready <none> 2m v1.14.0
$ # k8s dashboard available at http://localhost:<DOCKER_EXPOSED_PORT>/api/v1/namespaces/kube-system/services/kubernetes-dashboard:/proxy. See your console for the URL.
$ # restart the cluster, this should happen much quicker than initial startup
$ ./dind-cluster.sh up
$ # stop the cluster
$ ./dind-cluster.sh down
$ # remove DIND containers and volumes
$ ./dind-cluster.sh clean

Replace 1.14 with 1.13 or 1.12 to use older Kubernetes versions.
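For example, assuming the same v0.2.0 release also ships a dind-cluster-v1.13.sh script (check the release page to confirm the exact file name), a Kubernetes 1.13 cluster could be started like this:

$ wget -O dind-cluster.sh https://github.com/kubernetes-sigs/kubeadm-dind-cluster/releases/download/v0.2.0/dind-cluster-v1.13.sh
$ chmod +x dind-cluster.sh
$ ./dind-cluster.sh up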
Important note: you need to do ./dind-cluster.sh clean when you switch between Kubernetes versions.

Using with Kubernetes source

$ git clone https://github.com/kubernetes-sigs/kubeadm-dind-cluster.git ~/dind
$ cd ~/work/kubernetes/src/k8s.io/kubernetes
$ export BUILD_KUBEADM=y
$ export BUILD_HYPERKUBE=y
$ # build binaries+images and start the cluster
$ ~/dind/dind-cluster.sh up
$ kubectl get nodes
NAME STATUS AGE
kube-master Ready,master 1m
kube-node-1 Ready 34s
kube-node-2 Ready 34s
$ # k8s dashboard available at http://localhost:8080/ui
$ # run conformance tests
$ ~/dind/dind-cluster.sh e2e
$ # restart the cluster rebuilding
$ ~/dind/dind-cluster.sh up
$ # run particular e2e test based on substring
$ ~/dind/dind-cluster.sh e2e "existing RC"
$ # shut down the cluster
$ ~/dind/dind-cluster.sh down

Controlling network usage

kubeadm-dind-cluster uses several networks for operation and allows the user to customize the networks used. Check the network assignments for your setup, and adjust them if there are conflicts. This section describes how to adjust the settings.

NOTE: Docker defines networks for its bridges, which kubeadm-dind-cluster tries to avoid by default, but depending on your setup you may need to choose different subnets. Typically, Docker uses 172.17.0.0/16, 172.18.0.0/16, and so on.

Management network

For the management network, the user can set MGMT_CIDRS to a string representing the CIDR to use for the network. This is used in conjunction with CLUSTER_ID when creating multiple clusters. For a single cluster, the cluster ID will be zero. For IPv4, this must be a /24 and the third octet is reserved for the multi-cluster number (0 in single-cluster mode). For example, with 10.192.0.0/24, an IPv4 cluster will have nodes 10.192.0.2, 10.192.0.3, etc. A cluster with ID "5" would have nodes 10.192.5.2, 10.192.5.3, etc. For IPv6, the CIDR must have room for a hextet to be reserved for the multi-cluster number. For example, fd00:10:20::/64 would be for an IPv6 cluster with ID "10" (interpreted as hex) with nodes fd00:10:20:10::2, fd00:10:20:10::3, etc. If the cluster ID were "0" (single-cluster mode), the nodes would be fd00:10:20:0::2, fd00:10:20:0::3, etc. The defaults are 10.192.0.0/24 for IPv4 and fd00:20::/64 for IPv6. For dual-stack mode, a comma-separated list with IPv4 and IPv6 CIDRs can be specified. Any omitted CIDR will use the default value above, based on the IP mode.

Service network

The service network CIDR can be specified by SERVICE_CIDR. For IPv4, the default is 10.96.0.0/12. For IPv6, the default is fd00:30::/110.

Pod network

For the pod network, the POD_NETWORK_CIDR environment variable can be set to specify the pod sub-networks. One subnet will be created for each node in the cluster. For IPv4, the value must be a /16, which will be split into multiple /24 subnets. The master node gets the third octet set to 2, and the minion nodes get the third octet set to 3 and up. For example, with 10.244.0.0/16, pods on the master node will be 10.244.2.X, on minion kube-node-1 they will be 10.244.3.Y, on minion kube-node-2 they will be 10.244.4.Z, etc. For IPv6, the CIDR will again be split into subnets, eight bits smaller. For example, with fd00:10:20:30::/72, the master node would have a CIDR of fd00:10:20:30:2::/80 with pods fd00:10:20:30:2::X. If POD_NETWORK_CIDR instead were fd00:10:20:30::/64, the master node would have a CIDR of fd00:10:20:30:0200::/72, and pods would be fd00:10:20:30:0200::X. The defaults are 10.244.0.0/16 for IPv4 and fd00:40::/72 for IPv6. For dual-stack mode, a comma-separated list with IPv4 and IPv6 CIDRs can be specified. Any omitted CIDR will use the default value above, based on the IP mode.
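For example, a single-cluster IPv4 setup that keeps all three ranges away from Docker's usual 172.17+/16 bridge subnets could be started as below. The values shown are simply the documented defaults and are only illustrative:

$ export MGMT_CIDRS=10.192.0.0/24
$ export SERVICE_CIDR=10.96.0.0/12
$ export POD_NETWORK_CIDR=10.244.0.0/16
$ ./dind-cluster.sh up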
Kube-router

Instead of using kube-proxy and static routes (with the bridge CNI plugin), kube-router can be used. Kube-router uses the bridge plugin, but uses the IPVS kernel module instead of iptables. This results in better performance and scalability. Kube-router also uses iBGP, so static routes are not required for pods to communicate across nodes. To use kube-router, set the CNI_PLUGIN environment variable to "kube-router".

NOTE: kube-router is currently pinned to v0.2.0, because of a cleanup issue seen when using newer (latest) kube-router.

NOTE: This has only been tested with Kubernetes 1.11, and currently fails when using Kubernetes 1.12+.

IPv6 Mode

To run Kubernetes in IPv6-only mode, set the environment variable IP_MODE to "ipv6". There are additional customizations that you can make for IPv6, to set the prefix used for DNS64, the subnet prefix to use for DinD, and the service subnet CIDR (among other settings - see dind-cluster.sh):

export EMBEDDED_CONFIG=y
export DNS64_PREFIX=fd00:77:64:ff9b::
export DIND_SUBNET=fd00:77::
export SERVICE_CIDR=fd00:77:30::/110
export NAT64_V4_SUBNET_PREFIX=172.20
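Putting it together, an IPv6-only cluster using settings like the ones above could then be brought up like this:

$ export IP_MODE=ipv6
$ ./dind-cluster.sh up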
NOTE: The DNS64 and NAT64 containers that are created on the host persist beyond cluster shutdown.

NOTE: In multi-cluster setups, there will be DNS64 and NAT64 containers for each cluster, with their names including the cluster suffix (e.g. bind9-cluster-50).

NOTE: At this time, there is no isolation between clusters. Nodes on one cluster can ping nodes on another cluster (the isolation appears to be done with iptables rules instead of ip6tables rules).

NOTE: The IPv4 mapping subnet used by NAT64 can be overridden from the default of 172.18.0.0/16 by specifying the first two octets in NAT64_V4_SUBNET_PREFIX (you cannot change the size). This prefix must be within the 10.0.0.0/8 or 172.16.0.0/12 private network ranges. Be aware that, in a multi-cluster setup, the cluster ID, which defaults to zero, will be added to the second octet of the prefix. You must ensure that the resulting prefix is still within the private network's range. For example, if CLUSTER_ID="10", the default NAT64_V4_SUBNET_PREFIX will become "172.28", forming the subnet 172.28.0.0/16.

Configuration

You can edit the version-appropriate kubeadm.conf.#.##.tmpl file in the image/ directory to customize how kubeadm works. This will require that you build a new image using build/build-local.sh and then set this environment variable:
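As a minimal sketch, assuming build/build-local.sh produces an image tagged mirantis/kubeadm-dind-cluster:local (the tag below is only a guess; use whatever image name the build script actually reports):

$ # hypothetical tag - substitute the image name printed by build/build-local.sh
$ export DIND_IMAGE=mirantis/kubeadm-dind-cluster:local
$ ./dind-cluster.sh up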
Note: the DIND_IMAGE environment variable will work only with dind-cluster.sh from the source repository. Just keep in mind that there are some parameters in double curly brackets in the template that are used to substitute settings based on other dind-cluster.sh config settings.

Remote Docker / GCE

It's possible to build Kubernetes on a remote machine running Docker.
kubeadm-dind-cluster can consume binaries directly from the build
data container without copying them back to the developer's machine.
An example utilizing a GCE instance is provided in gce-setup.sh.
You may try running it using:

$ . gce-setup.sh

The example is based on sample commands from build/README.md in the Kubernetes source. When using a remote machine, you need to use ssh port forwarding to forward the ports needed to reach the cluster, such as the apiserver port.
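For example, a hypothetical forwarding of local port 8080 (the port shown in the dashboard URL above; adjust the port, user, and host to your setup) might look like:

$ ssh -L 8080:localhost:8080 user@remote-build-machine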
Dumping cluster state

In case of a CI environment such as Travis CI or Circle CI, it's often desirable to get detailed cluster state for a failed job. Moreover, in the case of e.g. Travis CI there's no way to store the artefacts without using an external service such as Amazon S3. Because of this, kubeadm-dind-cluster supports dumping cluster state as a text block that can later be split into individual files. For cases where there are limits on the log size (e.g. the 4 MB log limit in Travis CI), it's also possible to dump the lzma-compressed text block using base64 encoding. The following commands can be used to work with cluster state dumps:
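The following is a sketch of typical dump workflows; apart from split-dump64, which is shown later in this README, the subcommand names here are assumptions - check dind-cluster.sh for the actual list:

$ # dump cluster state as a plain text block, or as a base64-encoded lzma block (assumed names)
$ ./dind-cluster.sh dump > dump.txt
$ ./dind-cluster.sh dump64 > dump64.txt
$ # split a previously captured dump back into individual files (assumed names)
$ ./dind-cluster.sh split-dump < dump.txt
$ ./dind-cluster.sh split-dump64 < dump64.txt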
All of the above commands work with 'fixed' scripts, too.
kubeadm-dind-cluster's own Travis CI jobs dump base64 blobs in case of
failure. Such blocks can then be extracted directly from the output of
travis logs NNN.N | ./dind-cluster.sh split-dump64

The dump contains detailed information about the state of the cluster.
Running multiple clusters in parallel
Normally, default names will be used for docker resources and the kubectl context.
For example, the kubectl context for the default cluster is "dind" and the node containers are named kube-master, kube-node-1, and kube-node-2.

For each additional cluster, the user can set a unique CLUSTER_ID to a string that represents a number from 1..254. The number will be used in all management network IP addresses. For IPv4, the cluster ID will be used as the third octet of the management address (whether default or user specified). For example, with cluster ID "10", the default management network CIDR will be 10.192.10.0/24. For IPv6, the cluster ID will be placed as the hextet before the double colon of the management CIDR. For example, a management network CIDR of fd00:20::/64 will become fd00:20:2::/64 for a cluster ID of "2".

NOTE: The cluster ID can be limited in some cases. For IPv6 mode, the cluster ID is also used in the NAT64 prefix, and that prefix must be within one of the RFC 1918 private network ranges. If the 172.16.0.0/12 private network is used, the cluster ID cannot be more than 15 (and less, if a higher base prefix is specified by NAT64_V4_SUBNET_PREFIX, like the default 172.18, which would allow cluster IDs up to 13).

Note: If the MGMT_CIDR (or legacy DIND_SUBNET/DIND_SUBNET_SIZE) environment variables are set for the management network, they must be able to accommodate the cluster ID injection.

In addition to the management network, the resource names will have the suffix "-cluster-#", where # is the CLUSTER_ID. The context for kubectl will be "dind-cluster-#". For legacy support (or if a user wants a custom cluster name), setting DIND_LABEL will create a resource suffix "-{DIND_LABEL}-#", where # is the cluster ID. If no cluster ID is specified (as would be the case for backwards compatibility), or if it is zero, the resource names will just use the DIND_LABEL, and a pseudo-random number from 1..13 will be used for the cluster ID applied to the management network and, in the case of IPv6, the NAT64 V4 mapping subnet prefix (hence the limitation).

Example usage:

$ # creates a 'default' cluster
$ ./dind-cluster.sh up
$ # creates a cluster with an ID of 10
$ CLUSTER_ID="10" ./dind-cluster.sh up
$ # creates an additional cluster with the label 'foo' and random cluster ID assigned
$ DIND_LABEL="foo" ./dind-cluster.sh up Example containers: $ docker ps --format '{{ .ID }} - {{ .Names }} -- {{ .Labels }}'
8178227e567c - kube-node-2 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_runtime=
6ea1822303bf - kube-node-1 -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_runtime=
7bc6b28be0b4 - kube-master -- mirantis.kubeadm_dind_cluster=1,mirantis.kubeadm_dind_cluster_runtime=
ce3fa6eaecfe - kube-node-2-cluster-10 -- cluster-10=,mirantis.kubeadm_dind_cluster=1
12c18cf3edb7 - kube-node-1-cluster-10 -- cluster-10=,mirantis.kubeadm_dind_cluster=1
963a6e7c1e40 - kube-master-cluster-10 -- cluster-10=,mirantis.kubeadm_dind_cluster=1
b05926f06642 - kube-node-2-foo -- mirantis.kubeadm_dind_cluster=1,foo=
ddb961f1cc95 - kube-node-1-foo -- mirantis.kubeadm_dind_cluster=1,foo=
2efc46f9dafd - kube-master-foo -- foo=,mirantis.kubeadm_dind_cluster=1

Example kubectl usage:

$ # to access the 'default' cluster
$ kubectl --context dind get all
$ # to access the additional clusters
$ kubectl --context dind-cluster-10 get all
$ kubectl --context dind-foo get all

Dual-stack Operation

By setting the IP_MODE environment variable, the cluster can be run in dual-stack mode, with both IPv4 and IPv6 addresses (see the sketch after this section). The MGMT_CIDRS and POD_NETWORK_CIDR environment variables can be used to customize the management and pod networks, respectively. For this mode, static routes will be created on each node, for both IPv4 and IPv6, to allow pods to communicate across nodes.

Limitations

Dual-stack mode for k-d-c is only available when using the bridge or PTP CNI plugins. The initial version does not use DNS64/NAT64, meaning that the cluster must have access to the outside via IPv6 (or use an external DNS64/NAT64). This implies that it will not work out of the box with GCE, which provides only IPv4 access to the outside world. The functionality of the cluster in dual-stack mode depends on the implementation of the dual-stack KEP. As of this commit, implementation of the KEP is only beginning, so some things will not work yet. Consider this commit as support for a WIP. One known current limitation is that the service network must use the IPv4 family, as currently IPv4 is preferred when both are available and the logic doesn't force the family to IPv6. As a result, endpoints are still IPv4 when the service network is IPv6 (and this doesn't work correctly).
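A minimal sketch of starting a dual-stack cluster, assuming the mode value is spelled "dual-stack" (verify the exact value against dind-cluster.sh) and using the comma-separated CIDR form described above with the documented default ranges:

$ export IP_MODE=dual-stack
$ export MGMT_CIDRS=10.192.0.0/24,fd00:20::/64
$ export POD_NETWORK_CIDR=10.244.0.0/16,fd00:40::/72
$ ./dind-cluster.sh up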
Motivation

There's also the k8s vagrant provider, but it's quite slow. Another widely suggested solution for development clusters is minikube, but currently it's not very well suited for development of Kubernetes itself. Besides, it currently supports only a single node, too, unless used with an additional DIND layer like nkube. kubernetes-dind-cluster is very nice and useful but uses a custom method of cluster setup (same as the 2nd problem with local-up-cluster).

There's also sometimes a need to use a powerful remote machine or a cloud instance to build and test Kubernetes. Having Docker as the only requirement for such a machine would be nice. Builds and unit tests are already covered by jbeda's work on dockerized builds, but being able to quickly start remote test clusters and run e2e tests is also important.

kubeadm-dind-cluster uses kubeadm to create a cluster consisting of docker containers instead of VMs. That's somewhat of a compromise, but it allows one to (re)start clusters quickly, which is quite important when making changes to k8s source. Moreover, some projects that extend Kubernetes, such as Virtlet, need a way to start a Kubernetes cluster quickly in a CI environment without involving nested virtualization. The current kubeadm-dind-cluster version provides the means to do this without the need to build Kubernetes locally.

Contributing to & Testing