kube-ansible is a set of Ansible playbooks and roles that allows
you to instantiate a vanilla Kubernetes cluster on (primarily) CentOS virtual
machines or bare metal.
Additionally, kube-ansible includes CNI pod networking (defaulting to Flannel,
with the ability to deploy Weave, Multus, and OVN Kubernetes).
The purpose of kube-ansible is to provide a simple lab environment for
prototyping and proofs of concept. For staging and production deployments, we
recommend that you use
OpenShift-Ansible
Playbooks
Playbooks are located in the playbooks/ directory.
Playbook               Inventory                        Purpose
virthost-setup.yml     ./inventory/virthost/            Provision a virtual machine host
bmhost-setup.yml       ./inventory/bmhost/              Provision a bare metal host and add it to the nodes group
allhost-setup.yml      ./inventory/allhosts/            Provision both a virtual machine host and a bare metal host
kube-install.yml       ./inventory/all.local.generated  Install and configure a Kubernetes cluster using all hosts in the nodes group
kube-install-ovn.yml   ./inventory/all.local.generated  Install and configure a Kubernetes cluster with an OVN network using all hosts in the nodes group
kube-ansible provides the means to install and setup KVM as a virtual host
platform on which virtual machines can be created, and used as the foundation
of a Kubernetes cluster installation.
There are generally two steps to this deployment:
Installation of KVM on the bare-metal system and instantiation of the virtual machines
Kubernetes environment installation and setup on the virtual machines
Start by configuring the virthost/ inventory to match the required working
environment, including the DNS name or IP address of the bare-metal system that
will be installed and configured as the KVM platform. The inventory also sets
up the network (the KVM network, whether a bridged interface or a NAT
interface) and defines the system topology to deploy (e.g., the number of
virtual machines to instantiate).
All of the above configuration is handled by the virthost-setup.yml playbook,
which performs the basic virtual host configuration, instantiates the virtual
machines, and creates extra virtual disks when configuring persistent storage
with GlusterFS.
During the virthost-setup.yml run, a vms.local.generated inventory file is
created containing the IP addresses and hostnames of the virtual machines. The
vms.local.generated file can then be used with Kubernetes installation
playbooks like kube-install.yml or kube-install-ovn.yml.
Usage
Step 0. Install dependent roles
Install role dependencies with ansible-galaxy. This step installs the main
dependencies (such as Go and Docker) and also pulls in the other roles required
for setting up the VMs.
ansible-galaxy install -r requirements.yml
Step 1. Create virtual host inventory
Copy the example virthost inventory into a new directory.
Modify ./inventory/virthost/virthost.inventory to set up a virtual
host (if the inventory is already present, skip this step).
Step 2. Override the default configuration if required
All the default configuration settings used by kube-ansible playbooks are present
in the all.yml file.
For instance, by default kube-ansible creates a setup with one master and two
worker nodes only (refer to the ordered list under virtual_machines in all.yml).
If an HA cluster deployment (stacked control plane nodes) is required,
edit the all.yml file and change the
configuration to something along the lines of the following.
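A sketch, assuming the name/node_type list format used by all.yml (the
node_type values here mirror the inventory groups referenced later in this
document; verify them against your copy of all.yml):

virtual_machines:
  - name: kube-lb
    node_type: lb
  - name: kube-master1
    node_type: master
  - name: kube-master2
    node_type: master
  - name: kube-master3
    node_type: master
  - name: kube-node-1
    node_type: nodes
  - name: kube-node-2
    node_type: nodes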
The above configuration change will create a 3-master HA cluster with 2 worker
nodes and an LB node.
You can also define separate vCPU and vRAM values for each of the virtual
machines with system_ram_mb and system_cpus. The default values are set via
system_default_ram_mb and system_default_cpus, which can also be overridden if
you want different default values. (The current defaults are 2048 MB and 4 vCPUs.)
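For instance, per-VM sizing might be expressed like this (placing
system_ram_mb and system_cpus alongside name/node_type is an assumption;
confirm the expected structure in all.yml):

virtual_machines:
  - name: kube-master1
    node_type: master
    system_ram_mb: 4096  # overrides system_default_ram_mb for this VM only
    system_cpus: 2       # overrides system_default_cpus for this VM only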
WARNING
If you're not going to be connecting to the virtual machines from the same
network as your source machine, make sure you set
ssh_proxy_enabled: true and the other related ssh_proxy_... variables so
that the kube-install.yml playbook works properly. See the next NOTE for
more information.
Step 3. Create the virtual machines defined in all.yml
Once the default configuration has been changed to match the setup requirements,
execute the virthost-setup.yml playbook to create the VMs and generate the final
inventory with all the details required for installing Kubernetes on those VMs.
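A minimal sketch of that invocation, assuming the inventory and playbook paths
shown in the Playbooks table above:

ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml

You can also override individual variables on the command line rather than in
the inventory; for example, enabling the SSH proxy described in the WARNING
above:

ansible-playbook -i inventory/virthost/ -e 'ssh_proxy_enabled=true' playbooks/virthost-setup.yml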
NOTE
There are a few extra variables you may wish to set against the virtual host,
which can be satisfied in the inventory/virthost/group_vars/virthost.yml
file of the local inventory configuration in inventory/virthost/ that you
just created.
Primarily, this is for overriding the default variables located in the
all.yml file, or the default values
associated with the roles.
Some common variables you may wish to override include:
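For example, a minimal inventory/virthost/group_vars/virthost.yml might look
like the following (ssh_proxy_enabled, system_default_ram_mb, and
system_default_cpus appear earlier in this document; the remaining
ssh_proxy_... names are illustrative members of that variable family, so
confirm them against all.yml):

ssh_proxy_enabled: true      # proxy SSH connections through the virtual host
ssh_proxy_user: root         # user on the virtual host to proxy through
ssh_proxy_host: virthost     # DNS name or IP of the virtual host
system_default_ram_mb: 4096  # default vRAM per VM
system_default_cpus: 2       # default vCPUs per VM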
Both of the commands above will generate a new inventory file, vms.local.generated,
in the inventory directory. This inventory file will be used by the Kubernetes
installation playbooks to install Kubernetes on the provisioned VMs. For instance,
the content below is an example of a vms.local.generated file for a 3-master HA Kubernetes cluster.
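The sketch below is illustrative only: the addresses are made up, and the exact
group layout may vary between versions (the group names mirror the
master/nodes/lb node types):

kube-lb ansible_host=192.168.122.10
kube-master1 ansible_host=192.168.122.11
kube-master2 ansible_host=192.168.122.12
kube-master3 ansible_host=192.168.122.13
kube-node-1 ansible_host=192.168.122.14
kube-node-2 ansible_host=192.168.122.15

[lb]
kube-lb

[master]
kube-master1
kube-master2
kube-master3

[nodes]
kube-node-1
kube-node-2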
Step 4. Install Kubernetes on the instantiated virtual machines
During the execution of Step 3, a local inventory file inventory/vms.local.generated
should have been generated. This inventory file contains the virtual machines and their
IP addresses. Alternatively, you can ignore the generated inventory, copy the example
inventory directory from inventory/examples/vms/, and modify it to your heart's
content.
This inventory file needs to be passed to the Kubernetes installation playbooks
(kube-install.yml or kube-install-ovn.yml).
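For example, assuming you use the generated inventory from Step 3:

ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml

or, for the OVN-based install:

ansible-playbook -i inventory/vms.local.generated playbooks/kube-install-ovn.yml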
If you're not running the Ansible playbooks from the virtual host itself,
it's possible to connect to the virtual machines via SSH proxy. You can do
this by setting up the ssh_proxy_... variables as noted in Step 3.
Options
kube-ansible supports the following options, which can be configured in all.yml:
network_type (optional, string): specify the network topology for the virthost; by default each master/worker has one interface (eth0):
2nics: each master/worker node has two interfaces: eth0 and eth1
bridge: add a Linux bridge (cni0) and move eth0 under cni0. This is useful for using the Linux bridge CNI for the Kubernetes pod network
crio_use_copr (optional, boolean): (cri-o only) set true if the copr cri-o RPM is used
ovn_image_repo (optional, string): set the container image to pull (e.g. docker.io/ovnkube/ovn-daemonset-u:latest); change the URL if the image needs to be pulled from another location
enable_endpointslice (optional, boolean): set true if EndpointSlice is used instead of Endpoints
enable_auditlog (optional, boolean): set true to enable audit logging
enable_ovn_raft (optional, boolean): (kube-install-ovn.yml only) set true to run OVN in raft mode
NOTE
If enable_ovn_raft=true, you need to build your own image from the upstream ovn-kubernetes
repo, push it to your own registry, and configure ovn_image_repo to point to that newly built
image, because the current official ovn-kubernetes image does not support raft.
Tip
You can override the all.yml configuration values from the command line
as well. Here's an example: install Kubernetes with the cri-o runtime, where
each host has two NICs (eth0, eth1).
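A sketch of such an invocation (crio_use_copr and network_type are documented
in the Options above; the container_runtime variable name is an assumption, so
verify the actual runtime switch in all.yml):

ansible-playbook -i inventory/vms.local.generated \
  -e 'container_runtime=crio crio_use_copr=true network_type=2nics' \
  playbooks/kube-install.yml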
Once ansible-playbook executes successfully, verify the installation by logging in to
the Kubernetes master virtual machine and running kubectl get nodes; check that all the
nodes are in the Ready state.
(It may take some time for everything to coalesce and for the nodes to report back to the Kubernetes master node.)
In order to log in to the nodes, you may need to ssh-add ~/.ssh/vmhost/id_vm_rsa. The private key created on the virtual host will be
automatically fetched to your local machine, allowing you to connect to the
nodes when proxying.
Pro Tip
You can create a ~/.bashrc alias to SSH into the virtual machines if you're
not executing the Ansible playbooks directly from your virtual host (i.e.
from your laptop or desktop). To SSH into the nodes via SSH proxy, add the
following alias:
alias ssh-virthost='ssh -o ProxyCommand="ssh -W %h:%p root@virthost"'
This assumes you're logging in to the virtual host as the root user at
hostname virthost. Change as required.
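With the alias in place, connecting might look like the following (the centos
login user is an assumption for CentOS cloud images; substitute whatever user
your VMs are configured with):

ssh-virthost centos@kube-master1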
Once you're logged into your Kubernetes master node, run the following command
to check the state of your cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master1 Ready master 18h v1.17.3
kube-master2 Ready master 18h v1.17.3
kube-master3 Ready master 18h v1.17.3
kube-node-1 Ready <none> 18h v1.17.3
kube-node-2 Ready <none> 18h v1.17.3
Everything should be marked as ready. If so, you're good to go!
Example Setup and configuration instructions
The following instructions create an HA Kubernetes cluster with two worker nodes and OVN-Kubernetes
in raft mode as the CNI. All of these instructions are executed from the physical server where the
virtual machines will be created to deploy the Kubernetes cluster.
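As a condensed, hedged recap of Steps 0-4 for this example, after editing
all.yml for the HA topology shown earlier (the ovn_image_repo value is a
placeholder for your own raft-capable image, per the NOTE under Options):

ansible-galaxy install -r requirements.yml
ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml
ansible-playbook -i inventory/vms.local.generated \
  -e 'enable_ovn_raft=true ovn_image_repo=docker.io/<your-account>/ovn-daemonset-u:latest' \
  playbooks/kube-install-ovn.yml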