Installing a Highly Available Kubernetes Cluster with Kubeadm
What is Kubernetes?
Kubernetes is an open-source platform for managing containerized application workloads, providing declarative configuration and automation. It sits at the centre of a large and fast-growing ecosystem, and Kubernetes services, support, and tools are widely available.
Kubernetes provides a container-centric management environment: it orchestrates the compute, networking, and storage infrastructure on behalf of your workloads. This gives much of the simplicity of Platform as a Service (PaaS) together with the flexibility of Infrastructure as a Service (IaaS).
Why Kubernetes High Availability?
The HA component here acts as a bridge and load balancer between the many nodes and the masters. Each node behaves as if there were only one master, pointing at the HA IP address, while behind that address three masters actually serve every API request coming from the nodes. By design, a single HA machine is itself a potential single point of failure (SPOF), so in this setup it is mainly needed during the initial cluster bootstrap; once the cluster has been successfully initiated, its role can be taken over internally, and the HA machine can be hibernated (powered off) but kept reserved (not released). The set of masters forms the control plane of this cluster.
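The "load divider" behaviour described above can be pictured as a simple round-robin: each incoming API request goes to the next master in turn. A minimal plain-shell sketch (using the example master IPs from this article; this is an illustration of what haproxy's `balance roundrobin` does, not part of the setup):

```shell
# Round-robin selection sketch (illustrative only).
MASTERS="10.61.61.11 10.61.61.12 10.61.61.13"

# pick_master N -> prints the master that the N-th request lands on
pick_master() {
  n=$(( ($1 - 1) % 3 + 1 ))
  echo "$MASTERS" | cut -d' ' -f"$n"
}

for request in 1 2 3 4; do
  # Request 4 wraps back around to the first master.
  echo "request $request -> $(pick_master $request)"
done
```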
In this article I will use 6 VMs:
● 1 load-balancer VM, acting as the single entry point (proxy) to the Kubernetes API.
● 3 master VMs, acting as the control plane of the Kubernetes cluster.
● 2 worker VMs, where the pods and services run.
How to set up the Kubernetes cluster:
- Set hosts on all nodes
vi /etc/hosts
...
10.61.61.10 k8s-LB
10.61.61.11 k8s-master1
10.61.61.12 k8s-master2
10.61.61.13 k8s-master3
10.61.61.14 k8s-worker1
10.61.61.15 k8s-worker2
...
- Set up passwordless SSH authentication from master1 to the other masters
sudo -i
ssh-keygen
grep master /etc/hosts | awk '{print $2}' > target.txt
for node in $(cat target.txt); do ssh-copy-id root@$node; done
- Test passwordless SSH authentication
for node in $(cat target.txt); do ssh root@$node hostname; done
''output
...
k8s-master1
k8s-master2
k8s-master3
...
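The pipeline used to build target.txt simply pulls the second column (the hostname) out of every /etc/hosts line that mentions "master". A self-contained sketch of the same technique, run against a sample hosts file under /tmp instead of the real /etc/hosts:

```shell
# Build a sample hosts file (same entries as in this article).
cat > /tmp/sample-hosts <<EOF
10.61.61.10 k8s-LB
10.61.61.11 k8s-master1
10.61.61.12 k8s-master2
10.61.61.13 k8s-master3
EOF

# Keep only the master entries, then print the hostname column.
grep master /tmp/sample-hosts | awk '{print $2}' > /tmp/target.txt
cat /tmp/target.txt
```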
- Update and upgrade packages on all nodes
sudo apt -y update; sudo apt -y upgrade
- Install docker.io on all nodes except the LB node
sudo apt install -y docker.io; sudo docker version
sudo systemctl enable docker
sudo systemctl start docker
sudo systemctl status docker
- Install kubelet, kubeadm, and kubectl on all nodes except the LB node
sudo apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF > kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo mv kubernetes.list /etc/apt/sources.list.d/kubernetes.list
sudo apt update; sudo apt install -y kubectl kubelet kubeadm
- Disable swap on all nodes except the LB node
sudo swapon -s
sudo swapoff /dev/xxx
sudo swapon -s
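Note that swapoff only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab must also be commented out. A sketch of that edit, run against a copy of fstab with an example swap device (assumes GNU sed for in-place editing, as on Ubuntu):

```shell
# Sample fstab with a swap entry (device names are illustrative).
cat > /tmp/sample-fstab <<EOF
/dev/sda1 / ext4 defaults 0 1
/dev/sda2 none swap sw 0 0
EOF

# Comment out every swap line so swap stays disabled after reboot.
sed -i '/ swap /s/^/#/' /tmp/sample-fstab
cat /tmp/sample-fstab
```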
- Install and configure haproxy on the LB node
sudo apt update; sudo apt upgrade -y; sudo apt install haproxy -y
sudo vim /etc/haproxy/haproxy.cfg
...
frontend kubernetes
    bind 10.61.61.10:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master1 10.61.61.11:6443 check fall 3 rise 2
    server k8s-master2 10.61.61.12:6443 check fall 3 rise 2
    server k8s-master3 10.61.61.13:6443 check fall 3 rise 2

frontend https_frontend_kubernetes
    bind 10.61.61.10:443
    option tcplog
    mode tcp
    default_backend backend_k8s_nodes

backend backend_k8s_nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master1 10.61.61.11:6443 check fall 3 rise 2
    server k8s-master2 10.61.61.12:6443 check fall 3 rise 2
    server k8s-master3 10.61.61.13:6443 check fall 3 rise 2
...
- Verify the haproxy configuration, then restart the service
haproxy -c -V -f /etc/haproxy/haproxy.cfg
''output
...
Configuration file is valid
...
sudo systemctl restart haproxy
- Verify the connection between the masters and the load-balancer node
nc -v 10.61.61.10 6443
''output
...
Connection to 10.61.61.10 6443 port [tcp/*] succeeded!
...
- Initialize the cluster on master1
kubeadm init --pod-network-cidr=10.244.X.0/16 --control-plane-endpoint "IP_LOADBALANCER:6443" --upload-certs
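The same init options can also be expressed as a kubeadm configuration file, which is easier to keep in version control. A sketch assuming the kubeadm.k8s.io/v1beta2 API (run `kubeadm config print init-defaults` to check the version your kubeadm ships):

```yaml
# kubeadm-config.yaml -- equivalent to the flags above (a sketch;
# IP_LOADBALANCER and the X in the pod CIDR are placeholders to fill in).
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# The load balancer is the stable endpoint for the whole control plane.
controlPlaneEndpoint: "IP_LOADBALANCER:6443"
networking:
  # Must match the CIDR expected by your CNI plugin.
  podSubnet: "10.244.X.0/16"
```

It would then be used as `kubeadm init --config kubeadm-config.yaml --upload-certs`.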
- Install the CNI on master1 and verify the installation
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pod -n kube-system -w
''note: just wait until all the pods are up.
- Join master2 & master3 to the cluster as control-plane nodes
kubeadm join IP_LOADBALANCER:6443 --token [TOKEN] \
--discovery-token-ca-cert-hash [TOKEN-ca-cert-hash] \
--control-plane --certificate-key [certificate-key]
- Verify on node master1
kubectl get nodes
- Execute on nodes master2 & master3 and verify
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
''note: this is done so that you can run kubectl commands, such as listing the nodes, on master2 & master3 as well
- Create three different deployments
git clone https://github.com/riowiraldhani/kubernetes
cd kubernetes/
ls
./createdeployments.sh
''output:
...
''list deployment
...
- Expose the deployments with type NodePort
./exposedeployments.sh
''output:
...
''list service with nodePort
...
- Verification
curl localhost:30606
...
Hello, world!
Version: 2.0.0
Hostname: helloapp-2-5ccf4846b5-pnwss
...
- Create an ingress for the three services we have created
kubectl apply -f helloapp-ingress.yaml
kubectl get ing --all-namespaces
- Create the ingress controller service
kubectl apply -f ingress-controller.yaml
kubectl get svc -n ingress-nginx
- Edit the ingress controller service
kubectl edit svc -n ingress-nginx ingress-nginx-controller
...
spec:
clusterIP: 10.101.62.144
externalIPs:
- IP_LOADBALANCER
...
''note: add the externalIPs field
kubectl get ingress helloapp-ingress
''note: the helloapp-ingress ingress now has an Address
- Verify the ingress service
kubectl get ingress --all-namespaces
kubectl get svc -n ingress-nginx
''note: you now have a service with a hostname, but you still need to map that hostname to one of your master IPs, for example:
...
10.61.61.11 k8s-master1 helloword-v1.info
...
curl helloword-v1.info:[NodePort]
''output
...
Hello, world!
Version: 1.0.0
Hostname: helloapp-1-759f7597c5-8sf2r
...
- Access the service via the load-balancer node's IP
''note: if you want to expose or access this service via the load-balancer node's IP, you need to add configuration to haproxy
''on node k8s-LB
sudo vi /etc/haproxy/haproxy.cfg
...
frontend nginx
    bind 10.61.61.10:5000
    mode http
    default_backend backend_nginx

backend backend_nginx
    balance roundrobin
    server k8s1 10.61.61.11:30606
    server k8s2 10.61.61.12:30606
    server k8s3 10.61.61.13:30606

frontend nginx-1
    bind 10.61.61.10:5001
    mode http
    default_backend backend_nginx-1

backend backend_nginx-1
    balance roundrobin
    server k8s1 10.61.61.11:30734
    server k8s2 10.61.61.12:30734
    server k8s3 10.61.61.13:30734
...
haproxy -c -V -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
''note: ports 30606 & 30734 are the NodePort ports; you can find them with the command kubectl get svc by looking at the PORT(S) column
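The PORT(S) column shows entries like 80:30606/TCP, where the number after the colon is the NodePort. A sketch extracting it from a sample output line (the line below is made up to resemble `kubectl get svc` output; in practice you would pipe the real command through the same awk filters):

```shell
# Sample service line as printed by `kubectl get svc` (illustrative).
line="helloapp-1   NodePort   10.101.62.144   <none>   80:30606/TCP   5m"

# PORT(S) is column 5; split it on ':' and '/' to isolate the NodePort.
nodeport=$(echo "$line" | awk '{print $5}' | awk -F'[:/]' '{print $2}')
echo "$nodeport"
```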
- Verification
''on node k8s-LB and the other nodes
curl 10.61.61.10:5000
...
Hello, world!
Version: 1.0.0
Hostname: helloapp-1-759f7597c5-8sf2r
...
curl 10.61.61.10:5001
...
<html><body><h1>It works!</h1></body></html>
...
If you have any questions about this topic, you can DM me.
Thanks!