What is kubernetes and why would I need it?
Simply put, kubernetes is a tool for managing computing resources. It does this very efficiently by abstracting your hardware into one (or more, if you like) big computing resource, which allows highly efficient use of your hardware with very little overhead, unlike virtual machines for example (that doesn’t mean there is no use case for VMs anymore; as with every technology, there are pros and cons).
Mostly, kubernetes manages containers (via runtimes like docker, rkt or podman), which allows you to build cost-efficient and highly available (microservice) architectures.
What’s this about?
In this post I’ll cover how to set up a kubernetes cluster (master and workers) and the most basic commands.
Prerequisites
- I’m using 3 VMs with Debian 9 and 4 GB RAM each; this is sufficient for a master with two workers.
- I’m using docker as the container runtime in this setup.
- Don’t format the filesystem on your VMs with xfs (or if you do, create it with d_type support via mkfs.xfs -n ftype=1); use ext4 for example. Docker needs an overlay filesystem even if you don’t plan to keep files locally. It used AUFS by default, but that isn’t supported anymore in kernels > 4.x, so we’ll use overlay2, which has said ftype/d_type requirement (see the sketch after this list).
- Disable swap on installation, or you will have to disable it later (kubelet refuses to run with swap enabled); see the sketch below.
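If you want to verify those last two points, here is a minimal sketch (the xfs_info check only matters if you went with xfs after all):
# ftype=1 must appear in the output if your filesystem is xfs, otherwise overlay2 won’t work
xfs_info / | grep ftype
# turn off swap now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab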
Installation
Let’s install the hosts (kubernetes-master1, kubernetes-node1):
apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
We need some basic packages, so that we can work with external repositories.
(If you are behind a proxy, see instructions for working behind a proxy in the tips section at the end of this post).
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
Now docker is installed on our master and our node(s).
vi /etc/default/grub : GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Edit your grub config and enable the memory control group (node only).
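The new kernel parameters only take effect after regenerating the grub config and rebooting:
# regenerate /boot/grub/grub.cfg with the new kernel command line, then reboot
update-grub
reboot
# afterwards, verify with: cat /proc/cmdline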
vi /etc/sysctl.conf
net.ipv4.conf.all.forwarding=1
net.ipv4.ip_nonlocal_bind=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
On your node(s) only, edit your sysctl to enable the docker firewall (iptables on bridges), forwarding and the binding of non-local IPs, so that we can expose our services via kubernetes later. I also found it convenient to disable IPv6 (I didn’t get it to work with IPv6 enabled). See the sketch below for applying the settings.
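Note that the net.bridge.* keys only exist once the br_netfilter module is loaded, so load it and apply the new settings (a small sketch; persisting the module is optional but avoids surprises after a reboot):
# the bridge sysctls require the br_netfilter kernel module
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# apply /etc/sysctl.conf without a reboot
sysctl -p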
vi /etc/hosts:
192.168.1.10 kubernetes-master1.your.domain kubernetes-master1
192.168.1.11 kubernetes-node1.your.domain kubernetes-node1
I also add the master and nodes to /etc/hosts here, but you may skip this if you trust the reliability of your local DNS (or use something like dnsmasq).
vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "dns": ["192.168.1.33","192.168.1.34"]
}
This part is important (at least at the moment, when working with debian).
We change the cgroup driver to systemd, so docker cgroups are handled by systemd.
We also set the storage driver to “overlay2” (docker needs an overlay filesystem to handle local filesystem access; we use overlay2 here because the old AUFS isn’t supported anymore by recent linux kernels).
You may also set things like dns servers here, but that’s not really necessary under normal circumstances (here it is though…).
mkdir -p /etc/systemd/system/docker.service.d
Because debian (at least at the moment) leaves it up to the user to handle the configuration of docker with systemd, we have to create this directory (also see: https://kubernetes.io/docs/setup/cri/).
systemctl daemon-reload
systemctl restart docker
docker should be running now on your master and nodes.
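To confirm that docker really picked up the cgroup and storage driver settings from daemon.json:
docker info | grep -E 'Cgroup Driver|Storage Driver'
# expected: Cgroup Driver: systemd and Storage Driver: overlay2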
Troubleshooting Tips:
rm -f /var/lib/docker/network/files/local-kv.db
If docker won’t start because of network/virtual IP problems, delete the above file; it was created during the installation but may contain stale information now because of our configuration changes. It will get recreated on restart with the correct information.
ip link add name docker0 type bridge && ip addr add dev docker0 192.168.5.1/24
If that doesn’t help, create a virtual IP address with the above command, delete the file mentioned before and then restart (the virtual IP will get re-created on reboot, because systemd handles docker and docker has the correct info by then). You may not run into this, but I encountered this problem several times.
Installing and configuring kubernetes
Like with the docker installation, everything here must be done on the master and the nodes (I’ll mark the few exceptions explicitly).
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
add-apt-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt-get update
Configure the kubernetes sources/repositories; at the moment these are the xenial sources (they work for debian too, of course).
deb [arch=amd64] https://download.docker.com/linux/debian stretch stable
deb http://apt.kubernetes.io/ kubernetes-xenial main
Your sources.list (or whatever you use) should now contain the above entries.
apt-get install ebtables socat conntrack
Install some dependencies first.
apt-get install -t kubernetes-xenial -y kubelet kubeadm kubectl
Now install kubernetes.
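It can be worth pinning these packages, so a routine apt-get upgrade doesn’t pull your node versions out of sync with the cluster:
apt-mark hold kubelet kubeadm kubectl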
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS
Here we go again: the cgroup driver has to be configured, and this configuration has to be added to the ExecStart parameters.
systemctl daemon-reload
systemctl restart kubelet
Reload systemd and restart kubelet (on the master and the nodes) to pick up the new configuration.
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.90 --kubernetes-version "1.14.0"
Now execute the above command on your master to initialize the control plane, passing the API IP to advertise (the IP of your master node) and a range for your pod network (the virtual network your services will use to communicate). Make sure the pod network CIDR is the same as written above; it won’t work with flannel otherwise.
One can pass a lot of parameters here (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/), but for now this should be sufficient.
This generates a token (and prints the entire join command) that can be used to join nodes to the cluster after deploying a pod network. Let’s use flannel because it’s simple:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now it’s time to join our nodes with the previously generated token:
kubeadm join 192.168.1.90:6443 --token j54q85.kxpvl5hwlssvcd3 --discovery-token-ca-cert-hash sha256:dde35c22122fa8fbc0c16ddd448dc98ce1e22341e50a1463ae0dda99974cfd9f
Kubernetes should now be up and running (for now this is kubelet, the kubernetes node agent that runs on every cluster node, be it master or worker; read more about it here: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/).
Your kubernetes cluster should be up and running!
(See the section below for some troubleshooting tips, e.g. if the join didn’t work, you are behind a proxy, or the pod network didn’t start.)
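A quick sanity check that the control plane and the pod network actually came up (all pods should reach the Running state):
kubectl get pods --all-namespaces -o wide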
Later we’ll also deploy something: the kubernetes dashboard, which gives you a graphical user interface and a general overview of your cluster (see the dashboard section below).
Tips and Troubleshooting
Check if your cluster is working as expected (do this to see if your join worked):
kubectl get nodes
root@kubmastertest:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubmastertest Ready master 17h v1.14.0
kubnodetest01 NotReady <none> 4m23s v1.14.0
Obviously, everything should be in state “Ready”. Here it is not, and in the end I fixed it by manually creating the flannel CNI config on the node:
mkdir -p /etc/cni/net.d
vi /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
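After creating the file, restart kubelet on the node; the node should switch to Ready after a moment:
systemctl restart kubelet
# then, back on the master, watch the node status change:
kubectl get nodes --watch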
If you “forget” the join command, just look it up on your master with:
kubeadm token create --print-join-command
If you are behind a proxy, configure the following to let docker and kubernetes communicate (though you might encounter several issues, as these things are not built to be run behind a proxy…):
vi /etc/environment
http_proxy="http://192.168.255.1:3128"
https_proxy="http://192.168.255.1:3128"
no_proxy="127.0.0.1, localhost, alltheserversyouwanttobeexcluded
Edit /etc/environment, which is OS default for using proxies.
One has to exclude servers by single IPs, because the no_proxy directive ignores netmasks.
vi /etc/apt/apt.conf.d/99HttpProxy
http_proxy="http://192.168.255.1:3128"
https_proxy="http://192.168.255.1:3128"
Acquire::http::Proxy::my.debianrepo.de DIRECT;
Acquire::http::Proxy::corporate.debian.repo.de DIRECT;
Excluding repositories is done via “DIRECT”, so one can use internal repos alongside the external ones mentioned above.
vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.255.1:3128"
Environment="NO_PROXY=127.0.0.1, localhost, alltheserversyouwanttoexclude"
vi /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://192.168.255.1:3128"
Environment="NO_PROXY=127.0.0.1, localhost, alltheserversyouwanttoexclude"
Proxy settings for docker.
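As with every systemd drop-in, the files only take effect after a reload; you can then check that docker actually sees the variables:
systemctl daemon-reload
systemctl restart docker
# show the environment the docker unit runs with
systemctl show --property=Environment docker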
kubectl complains about a missing kubeconfig or the like:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
useradd -U -G users,sudo -m -s /bin/bash kubeadm
vi /etc/sudoers
kubeadm ALL=(ALL) NOPASSWD:ALL
Try adding a kubeadm user with the kubeadm config, or just do it directly as root.
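A quick sketch of using that kubeadm user (assuming you repeat the admin.conf copy from above in its home directory):
su - kubeadm
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# should now list the cluster nodes without complaints
kubectl get nodes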
Kubernetes dashboard deployment
We use RBAC (Role-Based Access Control) with our kubernetes cluster; this allows us to manage access and rights on a granular level. First we need to create a user:
vi dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
In case the ServiceAccount “admin-user” already exists, just add a ClusterRoleBinding:
vi dashboard-adminuser.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl apply -f dashboard-adminuser.yaml
Either way, create it with the above command.
Now find out the auth token for the user/role; we will need it to log into the dashboard (unfortunately LDAP is not supported as of yet):
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Now deploy the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
http://192.168.1.90:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
The dashboard should be accessible now via your API server address (note that the v2 manifests deploy into the kubernetes-dashboard namespace, not kube-system, hence the path above).
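If you don’t want to expose the API server directly, kubectl proxy gives you a local tunnel instead; a small sketch (by default it listens on 127.0.0.1:8001):
kubectl proxy
# the dashboard is then reachable from your workstation at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/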
With newer dashboards you’ll need a client certificate (if you use the method described above and access it through the API); you may extract the information from your .kube/config.
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d >> admin.crt
grep 'client-key-data' $HOME/.kube/config | awk '{print $2}' | base64 -d >> admin.key
openssl pkcs12 -export -in admin.crt -inkey admin.key -out kub_client.p12
Extract the key and the cert and put them together as PKCS#12. You may now import the file into firefox and access the dashboard through the API (don’t forget to enter the token).
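To check that the extracted pair authenticates at all before fiddling with the browser import, you can hit the API directly (assuming the API server listens on 192.168.1.90:6443 as above; -k skips server cert verification):
curl --cert admin.crt --key admin.key -k https://192.168.1.90:6443/api
# a JSON answer (instead of 401/403) means the client cert works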
(Read everything about the dashboard here: https://github.com/kubernetes/dashboard)
You should have a fully working and manageable kubernetes cluster by now!
Some closing words and resources
This post covers the manual setup of a kubernetes cluster, which is good for understanding the basics. For a more production-ready setup, you may want to take a look at common automation tools (puppet and ansible):
- https://forge.puppet.com/puppetlabs/kubernetes
- https://kubespray.io/#/
Also, because the topic is huge, you might want to consider reading a book about it:
- https://www.hanser-fachbuch.de/buch/Kubernetes+in+Action/9783446455108
It’s good; I’m reading (parts of) it at the moment, though it uses minikube a lot for its examples.