A step-by-step guide to get kubernetes running inside an LXC container.
This guide is an alternative to minikube, which also offers a local kubernetes environment.
The advantage of the LXC approach is that everything runs natively on the host kernel, without the virtualization overhead of a virtual machine.
For example, minikube causes such high CPU usage on the host (see minikube issue #3207) that development is impaired.
The downside is more setup work to get the kubernetes environment running, higher administration effort, and weaker isolation of the kubernetes cluster.
Below, you find a step-by-step guide to set up an LXC container and install kubernetes in it.
LXC installation
LXC is similar to docker, but aims to provide full OS containers rather than just application containers.
To use it, install lxd and initialize it using lxd init. When prompted, answer the following questions:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes -> dir (any directory based provider should work)
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Starting your LXC container
Before you can fire up your lxc container, you have to make sure to create /etc/subuid and /etc/subgid with the following entries:
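For example (a sketch; the id range here is an example and may need adjusting to your system), a single large subordinate id mapping for root suffices. Add the same line to both files:

```
# /etc/subuid and /etc/subgid
root:1000000:1000000000
```

This gives root in the container a large range of uids/gids to map, which privileged nested workloads like docker need.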
Run systemctl restart lxd to have LXD detect your new maps.
As the base system for our kubernetes we will use Debian and call the lxc machine k8s-lxc. Now create your kubernetes host machine with
lxc launch images:debian/stretch k8s-lxc
Run lxc list to ensure that your machine k8s-lxc is up and running.
Note: To get an overview over the supported base images, run
lxc image list images:
By default, lxc containers are quite restricted in their capabilities.
Because we need to run docker and kubernetes in the lxc container, it must be given the capabilities to manage networking configuration and to create cgroups.
For that, run lxc config edit k8s-lxc and merge in the following settings:
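A configuration along these lines grants the needed capabilities (a sketch; the exact module list depends on your host kernel):

```yaml
config:
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.cap.drop=
    lxc.cgroup.devices.allow=a
    lxc.mount.auto=proc:rw sys:rw
  security.nesting: "true"
  security.privileged: "true"
```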
linux.kernel_modules: depending on the kernel of your host system, you may need to add further kernel modules here. The ones listed above are needed for networking and for Docker's overlay filesystem.
raw.lxc: this allows the lxc container to configure certain system resources.
security.privileged and security.nesting: for a privileged container which may create nested cgroups
Restart your lxc container. Unfortunately, lxc stop k8s-lxc does not work for me; I have to run lxc exec k8s-lxc reboot instead.
Using docker and kubernetes on zfs backed host systems
If your host system is backed by ZFS storage (e.g. an option on Proxmox), some adaptations need to be made. ZFS currently lacks full namespace support, and thus a dataset cannot be passed into an LXC container while retaining full control over its child datasets. The easiest solution is to create two volumes for /var/lib/docker and /var/lib/kubelet and format them with ext4.
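For example (a sketch; the pool name rpool and the volume sizes are assumptions, adjust them to your setup):

```shell
# On the host: create two ZFS zvols and format them with ext4
zfs create -V 50G rpool/k8s-docker
zfs create -V 20G rpool/k8s-kubelet
mkfs.ext4 /dev/zvol/rpool/k8s-docker
mkfs.ext4 /dev/zvol/rpool/k8s-kubelet

# Pass them into the container as disk devices mounted at the right paths
lxc config device add k8s-lxc docker-vol disk \
  source=/dev/zvol/rpool/k8s-docker path=/var/lib/docker
lxc config device add k8s-lxc kubelet-vol disk \
  source=/dev/zvol/rpool/k8s-kubelet path=/var/lib/kubelet
```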
Below, some commands will need to be executed inside the lxc container and others on the host.
$-prefix means to be executed on the host machine
@-prefix means to be executed inside the lxc container
no prefix means it does not matter where the command is executed
First ensure on your host system that $ cat /proc/sys/net/bridge/bridge-nf-call-iptables returns 1.
This is required by kubernetes but cannot be validated inside the lxc container.
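If it returns 0 or the file is missing, load the br_netfilter module and enable the sysctl on the host (a sketch):

```shell
# On the host: make bridged traffic visible to iptables
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

# Persist across reboots
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
```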
On your host, ensure that $ conntrack -L produces some output (this means it works).
If this requires additional kernel modules to be loaded, add those to the lxc container config.
For example, you might need to add the following in $ lxc config edit k8s-lxc:
config:
linux.kernel_modules: xt_conntrack,...
After that, verify that @ conntrack also works inside your lxc container.
To enter your container as root, do $ lxc exec k8s-lxc /bin/bash.
Recent kubernetes versions want to read from /dev/kmsg which is not present in the container.
You need to instruct systemd to always create a symlink to /dev/console instead:
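One way to do this is a systemd-tmpfiles rule inside the container:

```shell
# Inside the container: recreate /dev/kmsg as a symlink to /dev/console on every boot
echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf
```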
This solution can cause infinite CPU usage in some cases; some(?) versions of
systemd-journald read from /dev/kmsg and write to /dev/console, and if
they're symlinked together, this will cause an infinite loop. If this affects
you, link /dev/null to /dev/kmsg instead:
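In that case, point the symlink at /dev/null instead:

```shell
# Inside the container: discard kernel log writes instead of looping them back
echo 'L /dev/kmsg - - - - /dev/null' > /etc/tmpfiles.d/kmsg.conf
```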
Install docker and kubernetes runtime in the lxc container.
The following commands add the required repositories, install kubernetes with dependencies, and pin the kubernetes & docker version:
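For Debian, the steps look roughly like this (a sketch; the repository URLs and key locations were valid for the docker-ce and apt.kubernetes.io repos at the time, but may have moved since):

```shell
# Inside the container: prerequisites
apt-get update && apt-get install -y apt-transport-https ca-certificates curl gnupg2

# Add the Docker and Kubernetes apt repositories
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb https://download.docker.com/linux/debian stretch stable' \
  > /etc/apt/sources.list.d/docker.list
echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' \
  > /etc/apt/sources.list.d/kubernetes.list

# Install, then pin the versions so apt upgrades do not break the cluster
apt-get update
apt-get install -y docker-ce kubelet kubeadm kubectl
apt-mark hold docker-ce kubelet kubeadm kubectl
```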
@ kubeadm init --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables
@ kubeadm init phase addon all
For the first command, you need to ignore the bridge-nf-call-iptables check, which you already verified manually above.
In case you obtain an error like failed to parse kernel config in the preflight check, copy your host kernel config from /boot on the host to /boot in your lxc guest.
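Since the container shares the host kernel, the host's config file is the right one; for example:

```shell
# On the host: push the running kernel's config into the container
lxc file push /boot/config-$(uname -r) k8s-lxc/boot/config-$(uname -r)
```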
Disable the software container network infrastructure, because it is not needed for a dev environment:
Congratulations, if the last command worked, you now have kubernetes running in your lxc container.
Configure the host for working with k8s-lxc
On your host machine, add a host entry to access your k8s-lxc container by DNS name.
Find out the IP of your lxc container by running $ lxc list k8s-lxc.
Add its IP in /etc/hosts
<k8s-lxc-ip> k8s-lxc
After that, it should be possible to ping the container with ping k8s-lxc.
Make your docker daemon in the lxc cluster available from your host.
There are two options.
insecure without authentication
Open docker in the lxc container so that it can be accessed from outside. Run
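For example, via a systemd drop-in inside the container (a sketch; port 2375 is the conventional unencrypted docker port, so only do this on a trusted network):

```shell
# Inside the container: let dockerd listen on TCP in addition to the local socket
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/tcp.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
EOF
systemctl daemon-reload
systemctl restart docker
```

On the host, point your docker client at the container with export DOCKER_HOST=tcp://k8s-lxc:2375.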
To access the ingress via the default http/https ports, add hostPort directives to its deployment template.
Run kubectl edit -n ingress-nginx deployment nginx-ingress-controller and change the port definitions to
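The intended shape is presumably the following (a sketch; the container ports follow the standard nginx-ingress-controller deployment):

```yaml
ports:
- name: http
  containerPort: 80
  hostPort: 80
- name: https
  containerPort: 443
  hostPort: 443
```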
Because we are skipping the ingress-nginx service, you should also remove --publish-service=$(POD_NAMESPACE)/ingress-nginx
from the arguments to nginx-ingress-controller.
Disable leader election for control plane components, because it is unnecessary for a single-node deployment.
sed -i 's/--leader-elect=true/--leader-elect=false/' \
/etc/kubernetes/manifests/{kube-controller-manager.yaml,kube-scheduler.yaml}
(Optional) Create an SSL certificate for your lxc container to secure traffic
Follow the instructions from kubernetes.io.
Using the following commands:
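For example, with openssl (a sketch; requires openssl 1.1.1+ for -addext, and the certificate and secret names are illustrative):

```shell
# Create a self-signed certificate for the k8s-lxc host name
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout k8s-lxc.key -out k8s-lxc.crt -days 365 \
  -subj '/CN=k8s-lxc' -addext 'subjectAltName=DNS:k8s-lxc'

# Store it as a TLS secret so the ingress can serve it
kubectl create secret tls k8s-lxc-tls --key k8s-lxc.key --cert k8s-lxc.crt
```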
(Optional) Secure access to your cluster by creating a user with edit rights.
Use the script src/setup-default-user.sh to set up authentication by client certificate for a user with name default-user.
Check if everything is running correctly by deploying a small test application
kubectl apply -f src/test-k8s-lxc.yaml
You should be able to access the application from your browser at http://k8s-lxc.
If that does not work, try to access the test service from within your kubernetes network.
$ kubectl run --generator=run-pod/v1 -ti --image nicolaka/netshoot curl
> curl test  # should fetch a minimal greeting
Ctrl-D
$ kubectl delete pod curl
Useful commands for working with your LXC container
Start your lxc container with lxc start k8s-lxc
Show your running container with its IP lxc list
Open a privileged shell in your container with lxc exec k8s-lxc /bin/bash