Creating a single-node development Kubernetes server on CentOS 8

Ashley Hines
May 29, 2021


Introduction

As a hobby web developer, I use AWS to host applications, but for development I use Docker. As I move more applications to Kubernetes, I need a Kubernetes environment to test against, and I've found it much cheaper to run Kubernetes at home than to test in the cloud. I also prefer running native Kubernetes over tools such as Minikube and kind, since it lets me change more of how the system works.

For this setup, I've chosen flannel as the networking stack for its ease of use, and CRI-O as the container runtime instead of Docker as a personal preference. Docker can be used as an alternative, but some of the steps below will need changes.

Prerequisites

  • A single CentOS/RHEL 8 server with 4 GB of RAM and 2 CPUs (the more the merrier, though).
  • Root/Sudo access.
  • Either DNS or hostnames need to be created; for this guide I am using a machine named “k8”. Alternatively, you can modify the hosts file to reflect this, as shown after this list.
  • The server should be fully up to date.
sudo dnf -y update
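
If you don't have DNS available, a minimal hosts entry does the job. This sketch assumes the server's IP is 192.168.1.7 (the address used later in this guide); substitute your own.

#Map the hostname "k8" to the server's LAN IP
echo '192.168.1.7 k8' | sudo tee -a /etc/hosts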

Part 1: Setting up K8

Step 1: Set up the server
In order for our server to run K8s, we need to disable certain functionality, including SELinux and swap. Enter the following commands to disable these.

#Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
#Disable Swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
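
To confirm both changes took effect, check the SELinux mode and the swap summary:

#Should print "Permissive"
getenforce
#The Swap line should show 0B
free -h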

Next we load the required kernel modules for CRI-O.

# Load modules necessary for cri-o:
sudo bash -c 'cat >> /etc/modules-load.d/cri-o.conf <<EOF
overlay
br_netfilter
EOF'
sudo systemctl restart systemd-modules-load.service
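
You can verify that both modules loaded:

#Both modules should appear in the output
lsmod | grep -E 'overlay|br_netfilter'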

Next we need to configure sysctl so that bridged traffic is visible to iptables, and enable IP forwarding so virtual networks can pass traffic.

sudo bash -c 'cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF'
sudo sysctl --system
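
To double-check the values applied:

#All three settings should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward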

Next we need to set up our firewall to allow remote traffic. As this server is only used for development, I will simply trust all traffic from my local network.

sudo firewall-cmd --set-default-zone=trusted
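
If you'd rather not trust the entire local network, a more restrictive sketch opens only the usual single-node ports instead (this list follows the upstream kubeadm and flannel defaults; adjust it for your versions and components):

#API server, etcd, kubelet, and flannel VXLAN
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload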

Step 2: Install dependencies (kubeadm, kubectl, kubelet, etcd, cri-o)
First, we will install CRI-O and disable CentOS's built-in container management tools.

#Disable the CentOS built-in container tools module
sudo dnf -y module disable container-tools
#Set the required Kubernetes and CRI-O version
export VERSION=1.19
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo

sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/CentOS_8/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
sudo dnf -y install cri-o cri-tools
#Enable crio in systemd.
sudo systemctl enable --now crio

Now verify that CRI-O is running.

systemctl status crio
sudo crictl info

Next we add the Kubernetes Repo.

#Add Kubernetes repo
sudo bash -c 'cat > /etc/yum.repos.d/kubernetes.repo' << EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Now we install kubelet, kubeadm, and kubectl, then start the kubelet service.

sudo dnf install -y kubelet kubeadm kubectl iproute-tc --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
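
Note that kubelet will sit in a restart loop until the cluster is initialised in the next step; that is expected. You can still confirm the service is enabled:

#"activating (auto-restart)" is normal before kubeadm init
systemctl status kubelet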

Step 3: Creating the cluster

Now we can initialise the cluster using kubeadm.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

After initialisation, we copy over the admin config so that we can use kubectl as a non-privileged user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
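
Alternatively, if you are operating as root, you can point kubectl straight at the admin config instead of copying it:

#Root-only alternative to copying the config
export KUBECONFIG=/etc/kubernetes/admin.conf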

Step 4: Verifying Kubernetes initialisation

Now that the environment has been created, we can verify everything is running smoothly.

#View running nodes.
kubectl get nodes
#view running pods
kubectl get pods -A

Once the master is set up, we need to remove its taint so that it can schedule pods (by default, the control plane node doesn't run workloads).

kubectl taint nodes --all node-role.kubernetes.io/master-
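
Note that newer Kubernetes releases rename this taint, so if the command above reports nothing was untainted, try the control-plane variant:

#Used on newer releases in place of the master taint
kubectl taint nodes --all node-role.kubernetes.io/control-plane-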

Step 5: Install network provider

To allow pod networking, we need to install a network provider. We will install flannel, a simple networking provider.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Next, verify that flannel has been installed on the node and is running.

kubectl get pods -A | grep flannel
(Screenshot: the flannel pod shows as Running.)

Part 2: Set up K8 for remote access

To enable remote access to the cluster, the file ~/.kube/config needs to be copied to another machine running kubectl. This can be done by running the following on the remote machine.

#Run these on the remote machine
mkdir -p ~/.kube
scp user@k8:.kube/config ~/.kube/config
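
From the remote machine, verify the connection works. This assumes the API server address embedded in the config (the k8 host) is reachable from that machine:

#Run on the remote machine; the k8 node should show as Ready
kubectl get nodes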

Part 3: Using Kubernetes

We now have a functional single-node Kubernetes cluster. To make it easier to deploy applications into, we will configure storage, load balancing, and ingress. All of the steps below are optional depending on your use case.

Step 1: Creating an NFS storage provisioner
Here we will create an NFS server locally to automatically provision storage.
First we need to set up an NFS server running on the local machine.

#Install helm to install NFS-Client-Provisioner
wget -4 https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
tar -zxvf helm-v3.6.0-linux-amd64.tar.gz
sudo cp linux-amd64/helm /usr/local/bin/helm
#Install NFS Server - this might already be installed
sudo dnf install -y nfs-utils
#Enable service.
sudo systemctl enable --now nfs-server
#Create directory and set permissions
sudo mkdir /var/nfs
sudo chown nobody:nobody /var/nfs
sudo chmod 755 /var/nfs
#Export the folder; replace 192.168.1.7 with your server's IP address
echo '/var/nfs 192.168.1.7(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
#Update exports
sudo exportfs -a
#Install the NFS provisioner from helm
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.1.7 \
--set nfs.path=/var/nfs
#Set NFS as default storage class
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
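
To confirm dynamic provisioning works, you can create a throwaway PersistentVolumeClaim ("test-claim" is just a hypothetical name for this check) and watch it bind:

#Create a small test claim against the default storage class
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
EOF
#STATUS should change to Bound
kubectl get pvc test-claim
#Clean up
kubectl delete pvc test-claim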

Step 2: Set up MetalLB for load balancing
First, install MetalLB into our cluster. Installation instructions are also provided at https://metallb.universe.tf/installation/

#Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Next we need to configure MetalLB to provide IP addresses for the cluster.

# Create the config and push into a yaml file
cat > metallb-config.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: local
      protocol: layer2
      addresses:
      - 192.168.1.240/28
EOF
kubectl apply -f metallb-config.yaml
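
A quick way to confirm MetalLB is handing out addresses is to expose a throwaway nginx deployment ("lb-test" is just a name for this check):

#Deploy and expose a test service
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
#EXTERNAL-IP should come from the 192.168.1.240/28 pool
kubectl get svc lb-test
#Clean up
kubectl delete service,deployment lb-test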

Conclusion

With all these steps done, you now have an environment to test workflows before pushing them to production cloud or hosted environments. Personally, I've linked this environment into my GitLab to manage my deployment workflows.
