Kubernetes (K8S) Setup On-Premise Baremetal — Using Calico and Flannel Network

Sathish Kumar
Nov 6, 2020


We are going to set up Kubernetes with 1 Control Node and 2 Worker Nodes on on-premise bare-metal machines/VMs.

Prerequisites

System Minimum Requirement

╔═══════════════╦═════╦════════╦════════╦═══════════════════╗
║ Machine       ║ CPU ║ Memory ║ Disk   ║ OS                ║
╠═══════════════╬═════╬════════╬════════╬═══════════════════╣
║ Control Node  ║ 4   ║ 8 GB   ║ 100 GB ║ Ubuntu 16.04+ LTS ║
║ Worker Node 1 ║ 4   ║ 8 GB   ║ 100 GB ║ Ubuntu 16.04+ LTS ║
║ Worker Node 2 ║ 4   ║ 8 GB   ║ 100 GB ║ Ubuntu 16.04+ LTS ║
╚═══════════════╩═════╩════════╩════════╩═══════════════════╝

Add Worker Nodes when required to scale horizontally. Increase replicas manually, or set up HPA (not included in the Gluu setup) for automatic horizontal scaling of Pods.
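
For example, assuming a metrics source such as metrics-server is installed (an assumption, it is not part of the steps below) and using the sample nginx Deployment created later in this guide, an HPA can be created like this:

$ kubectl autoscale deployment nginx --cpu-percent=70 --min=1 --max=5
$ kubectl get hpa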

Dependencies/Packages

  1. Docker CE (latest)
  2. kubelet, kubeadm, kubectl — Ubuntu packages
  3. Canal (Calico + Flannel overlay) — Kubernetes CNI plugins

Required Ports to Open

Control-Plane Node(s):

╔══════════╦═══════════╦════════════╦═════════════════════════╦══════════════════════╗
║ Protocol ║ Direction ║ Port Range ║ Purpose                 ║ Used By              ║
╠══════════╬═══════════╬════════════╬═════════════════════════╬══════════════════════╣
║ TCP      ║ Inbound   ║ 6443*      ║ Kubernetes API server   ║ All                  ║
║ TCP      ║ Inbound   ║ 2379-2380  ║ etcd server client API  ║ kube-apiserver, etcd ║
║ TCP      ║ Inbound   ║ 10250      ║ Kubelet API             ║ Self, Control plane  ║
║ TCP      ║ Inbound   ║ 10251      ║ kube-scheduler          ║ Self                 ║
║ TCP      ║ Inbound   ║ 10252      ║ kube-controller-manager ║ Self                 ║
╚══════════╩═══════════╩════════════╩═════════════════════════╩══════════════════════╝
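
If your nodes run ufw (an assumption; adapt to whatever firewall you use), the Control-Plane ports above can be opened like this:

$ sudo ufw allow 6443/tcp
$ sudo ufw allow 2379:2380/tcp
$ sudo ufw allow 10250:10252/tcp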

Worker Node(s):

╔══════════╦═══════════╦═════════════╦═══════════════════╦═════════════════════╗
║ Protocol ║ Direction ║ Port Range  ║ Purpose           ║ Used By             ║
╠══════════╬═══════════╬═════════════╬═══════════════════╬═════════════════════╣
║ TCP      ║ Inbound   ║ 10250       ║ Kubelet API       ║ Self, Control plane ║
║ TCP      ║ Inbound   ║ 30000-32767 ║ NodePort Services ║ All                 ║
╚══════════╩═══════════╩═════════════╩═══════════════════╩═════════════════════╝
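
And on the Worker Nodes, again assuming ufw:

$ sudo ufw allow 10250/tcp
$ sudo ufw allow 30000:32767/tcp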

Kubernetes Setup

Docker Installation (All Nodes)

Install Docker using the convenience script:

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ chmod +x get-docker.sh
$ sudo sh get-docker.sh

If you would like to use Docker as a non-root user, you should now consider adding your user to the “docker” group with something like:

$ sudo usermod -aG docker your-user
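
The group change takes effect on your next login; to pick it up in the current shell and verify that Docker works as the non-root user:

$ newgrp docker
$ docker run --rm hello-world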

Setup Docker Daemon

$ sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo systemctl daemon-reload && sudo systemctl restart docker
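
Verify that Docker picked up the systemd cgroup driver (kubeadm expects the container runtime and the kubelet to use the same driver):

$ docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd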

Install kubelet, kubeadm, kubectl (All Nodes)

$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl && sudo apt-mark hold kubelet kubeadm kubectl

Install kubeadm (Only in Control Node)

$ sudo apt-get update && sudo apt-get install -y kubeadm && sudo apt-mark hold kubeadm

Note: the previous step already installs and holds kubeadm on all nodes (Worker Nodes need it for kubeadm join), so this is only required if you skipped it there.
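
Verify the installed versions:

$ kubeadm version -o short
$ kubectl version --client --short
$ kubelet --version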

Cluster Init — Kubeadm

$ sudo swapoff -a && sudo kubeadm init --service-cidr=172.16.0.0/17 --pod-network-cidr=172.16.128.0/17 --apiserver-advertise-address=<CONTROL_NODE_IP>

The output ends with the commands to init the .kube config and the Join Command for the Worker Nodes.

  1. --service-cidr and --pod-network-cidr can be any CIDRs of your choice, as long as they do not overlap
  2. Copy the Join command from the output
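
Note that swapoff -a only disables swap until the next reboot, and the kubelet will not run with swap enabled. To make it permanent, comment out the swap entries in /etc/fstab (review the file before editing it like this):

$ sudo sed -i '/ swap / s/^/#/' /etc/fstab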

Init .kube config

$ mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config && \
sudo swapoff -a
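
Confirm that kubectl can reach the new cluster:

$ kubectl cluster-info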

Join to Cluster (On Each Worker Node)

$ sudo kubeadm join <CONTROL_NODE_IP>:6443 --token zjtna2.zv60qj8l4thdktp9 --discovery-token-ca-cert-hash sha256:3d2bfa53b5bc3ca276afcdbd51d6c65992d9eb64131829831f1a63c78a306fb5
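
The bootstrap token from the init output expires after 24 hours by default. If you need a fresh join command later, generate one on the Control Node:

$ sudo kubeadm token create --print-join-command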

Check Nodes

$ kubectl get nodes
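
Until the CNI plugin is installed in the next step, the nodes will typically show NotReady; the output should look roughly like this (hostnames and versions are illustrative):

NAME       STATUS     ROLES    AGE   VERSION
control    NotReady   master   5m    v1.19.3
worker-1   NotReady   <none>   2m    v1.19.3
worker-2   NotReady   <none>   2m    v1.19.3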

Install CNI — Cluster Networking

Install Calico for policy and Flannel (together known as Canal) for networking, a widely used combination of cluster networking plugins; kube-router is applied alongside it below. Reference: https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel

$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/canal.yaml
$ sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

Check Pods

Make sure the CoreDNS, Canal, and kube-router pods are running.

$ kubectl get pods --all-namespaces --watch -o wide

Control Plane Node Isolation (Optional)

Pods will not be scheduled on the control-plane node by default. If you want to be able to schedule Pods on the control-plane node, remove its taint:

$ kubectl taint nodes <MASTER_NODE> node-role.kubernetes.io/master-
Or
$ kubectl taint nodes --all node-role.kubernetes.io/master-
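
Confirm the taint is gone (the Taints line should no longer list node-role.kubernetes.io/master:NoSchedule):

$ kubectl describe node <MASTER_NODE> | grep Taints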

Deploy Sample Nginx App

Create a file nginx-app.yaml with the content below and deploy the Nginx app + Service (NodePort) with “kubectl apply -f nginx-app.yaml”:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30600
    name: http
  selector:
    app: nginx

Access the Nginx Service using the NodePort: http://<ANY_CLUSTER_NODE_IP>:30600
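
A quick check from any machine that can reach the nodes; this should return the default Nginx welcome page:

$ curl http://<ANY_CLUSTER_NODE_IP>:30600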

Access Service using Nginx Ingress Controller

Reference: https://kubernetes.github.io/ingress-nginx/deploy/

NodePort vs LoadBalancer vs Ingress

Here is a good article on the differences between NodePort, LoadBalancer, and Ingress: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

We have seen NodePort; now let's try Ingress, which lets us expose multiple Services through a single NodePort.

Install Nginx Ingress Controller For Baremetal — NodePort

Take a Look: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml

Watch until the pods of the Nginx Ingress Controller are up:

$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch -o wide

Declare an Ingress

Create a file nginx-ingress.yaml with the content given below and deploy it with “kubectl apply -f nginx-ingress.yaml”. Note that other-service is a placeholder for a second backend of your own:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 8080
      - path: /api
        backend:
          serviceName: other-service
          servicePort: 8080

If you get this error:

Error from server (InternalError): error when creating “nginx-ingress.yaml”: Internal error occurred: failed calling webhook “validate.nginx.ingress.kubernetes.io”: Post “https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": dial tcp <IP>:443: connect: connection refused

Use the solution given below, and then deploy again with “kubectl apply -f nginx-ingress.yaml”:

$ kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Get Service Endpoint Exposed through Ingress

$ kubectl get svc -n ingress-nginx -o go-template='{{range .items}}{{range .spec.ports}}{{if .nodePort}}{{.name}}: {{.nodePort}}{{"\n"}}{{end}}{{end}}{{end}}'
http: 31478
https: 30282

Access the Nginx Service through the Ingress NodePorts: http://<ANY_CLUSTER_NODE_IP>:31478 or https://<ANY_CLUSTER_NODE_IP>:30282
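
The controller ships with a self-signed default certificate, so pass -k for the HTTPS check:

$ curl http://<ANY_CLUSTER_NODE_IP>:31478/
$ curl -k https://<ANY_CLUSTER_NODE_IP>:30282/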

Configure a Load Balancer/Proxy with the following IPs and ports (HTTPS). You can use a cloud load balancer or a proxy like HAProxy/Nginx. This gives us a single endpoint to configure in DNS; a minimal HAProxy sketch follows the list.

<CONTROL_NODE_IP>:31478
<WORKER_1_NODE_IP>:31478
<WORKER_2_NODE_IP>:31478
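
Here is a minimal sketch of the HAProxy variant, assuming HAProxy is installed on the load-balancer host and using the example HTTPS NodePort 30282 from above (yours will differ). It appends a TCP passthrough front end and restarts HAProxy:

$ sudo tee -a /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'

# TCP passthrough to the ingress controller's HTTPS NodePort
frontend k8s_ingress_https
    bind *:443
    mode tcp
    default_backend k8s_ingress_nodes

backend k8s_ingress_nodes
    mode tcp
    balance roundrobin
    server control <CONTROL_NODE_IP>:30282 check
    server worker1 <WORKER_1_NODE_IP>:30282 check
    server worker2 <WORKER_2_NODE_IP>:30282 check
EOF
$ sudo systemctl restart haproxy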

Fault tolerance and disaster recovery are not taken care of in this setup; use it for a development environment.
