Yet Another DevOps Blog. For folks with a system admin background (needless to say, Linux/UNIX) moving toward a DevOps Engineer role, this blog helps you find a pathway from basic to advanced concepts, with projects to master the concepts and tools. This site is under construction and projects will keep being added on a regular basis; an update can be expected roughly every 15 days. Please be patient: this serves a purely educational purpose, so I don't guarantee the absence of bugs or security vulnerabilities, which should always be addressed when designing production-grade systems. Last but not least, it's never a single-person show; it's all a collective effort, and I owe huge respect to the many folks out there who relentlessly help with the DevOps transition. Huge respect and thanks to all of them (LinkedIn: Alex Xu, Bibin Wilson, Ann, etc.); I will create a separate page in the future to acknowledge these folks !!!
Thanks & Regards,
Srini
!!!! Happy Learning !!!!

Project #1: Designing a Vagrant K8s Cluster

This project is based on the Vagrant platform and serves as the foundation for creating a K8s cluster that can easily be scaled and that runs on a local machine instead of being hosted on a cloud (which would involve cost). This is very useful for those who want to practice K8s hands-on before spinning up a cluster on a public cloud.

Pre-Requisites::

  • Core i5 or above processor with 16GB RAM and 100GB free disk space.
  • Base machine: Ubuntu 20.04.6 LTS.
  • VirtualBox 6.1, Extension Pack 6.1.38r.
  • Vagrant 2.2.9.
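
As a quick sanity check, the tool versions on the base machine can be confirmed with something like the following (exact output will of course vary with your installation)::

lsb_release -d          # Ubuntu 20.04.6 LTS
vboxmanage --version    # 6.1.x
vagrant --version       # Vagrant 2.2.9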

Ensure to create a working directory and create the Vagrantfile in it as shown below:

Depending on your requirements, feel free to change the NUM_WORKER_NODES / IP_NW / IP_START variables in the Vagrantfile.
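
For example, assuming a working directory named vagrant-k8 (the name itself is up to you), the directory and its scripts sub-directory can be created with::

mkdir -p vagrant-k8/scripts
cd vagrant-k8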

There are three files (a fourth, scripts/node.sh for the worker nodes, is referenced by the Vagrantfile and sketched further below) :: 1) Vagrantfile 2) scripts/common.sh 3) scripts/master.sh

1) Vagrantfile contains the configuration which will be used to provision the master and worker nodes.

2) scripts/common.sh contains the configuration common to both the master and worker nodes.

3) scripts/master.sh contains the configuration specific to the master alone.
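
The expected layout of the working directory (the directory name is just an example) looks like this::

vagrant-k8/
├── Vagrantfile
└── scripts/
    ├── common.sh
    ├── master.sh
    └── node.sh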

Vagrantfile contents below::

NUM_WORKER_NODES=3
IP_NW="10.0.0."
IP_START=10

Vagrant.configure("2") do |config|
    config.vm.provision "shell",env: {"IP_NW" => IP_NW, "IP_START" => IP_START}, inline: <<-SHELL
        apt-get update -y
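        # NOTE: keep these host entries in sync with the NUM_WORKER_NODES / IP_NW / IP_START variables above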
        echo "10.0.0.10 k8-master" >> /etc/hosts
        echo "10.0.0.11 k8-node01" >> /etc/hosts
        echo "10.0.0.12 k8-node02" >> /etc/hosts
        echo "10.0.0.13 k8-node03" >> /etc/hosts
   
    SHELL
    
    config.vm.box = "bento/ubuntu-21.10"
    config.vm.box_check_update = true

    config.vm.define "master" do |master|
      # master.vm.box = "bento/ubuntu-20.04"
      master.vm.hostname = "k8-master"
      master.vm.network "private_network", ip: IP_NW + "#{IP_START}" 
      master.vm.boot_timeout = 1000 
      master.vm.provider "virtualbox" do |vb|
          vb.memory = 4096
          vb.cpus = 2
      end
      master.vm.provision "shell", path: "scripts/common.sh"
      master.vm.provision "shell", path: "scripts/master.sh"
    end

    (1..NUM_WORKER_NODES).each do |i|
      config.vm.define "node0#{i}" do |node|
        # node.vm.box = "bento/ubuntu-20.04"
        node.vm.hostname = "k8-node0#{i}"
        node.vm.network "private_network", ip: IP_NW + "#{IP_START + i}"
        node.vm.boot_timeout = 1000
        node.vm.provider "virtualbox" do |vb|
          vb.memory = 2048
          vb.cpus = 1
        end
        node.vm.provision "shell", path: "scripts/common.sh"
        node.vm.provision "shell", path: "scripts/node.sh"
      end
    end
  end
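
The Vagrantfile above also references scripts/node.sh for the worker nodes, which is not listed in this post. A minimal sketch, assuming it only needs to run the join command that scripts/master.sh saves to /vagrant/configs/join.sh, could look like this::

#!/bin/bash
#
# Setup for worker nodes: join the cluster using the command generated on the master
set -euxo pipefail

/bin/bash /vagrant/configs/join.sh
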
scripts/common.sh Content Below::
#!/bin/bash
#
# Common setup for all servers (Control Plane and Nodes)

set -euxo pipefail

# Variable Declaration

KUBERNETES_VERSION="1.24.1-00"
# disable swap
sudo swapoff -a

# keeps the swapoff during reboot
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
# backup your sources file
cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Ubuntu 21.10 has reached end-of-life, so point apt at old-releases.ubuntu.com instead of the regular archive/security mirrors
sudo sed -i -re 's/([a-z]{2}.)?archive.ubuntu.com|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list

# run update again
sudo apt-get update -y #&& sudo apt-get dist-upgrade

#sudo apt-get update -y
# Install CRI-O Runtime

OS="xUbuntu_20.04"

VERSION="1.24"
#VERSION="1.23"

# Create the .conf file to load the modules at bootup
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set up required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system

cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /
EOF
cat <<EOF | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /
EOF

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add -

sudo apt-get update
sudo apt-get install cri-o cri-o-runc -y

sudo systemctl daemon-reload
sudo systemctl enable crio --now

echo "CRI runtime installed susccessfully"

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet="$KUBERNETES_VERSION" kubectl="$KUBERNETES_VERSION" kubeadm="$KUBERNETES_VERSION"
sudo apt-get update -y
sudo apt-get install -y jq

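# Tell the kubelet to advertise the VirtualBox private-network interface (eth1) address,
# so the nodes talk to each other over the 10.0.0.x network rather than the NAT interface.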
local_ip="$(ip --json a s | jq -r '.[] | if .ifname == "eth1" then .addr_info[] | if .family == "inet" then .local else empty end else empty end')"
cat > /etc/default/kubelet << EOF
KUBELET_EXTRA_ARGS=--node-ip=$local_ip
EOF

scripts/master.sh Content Below::
#!/bin/bash
#
# Setup for Control Plane (Master) servers

set -euxo pipefail

MASTER_IP="10.0.0.10"
NODENAME=$(hostname -s)
POD_CIDR="192.168.0.0/16"

sudo kubeadm config images pull

echo "Preflight Check Passed: Downloaded All Required Images"

sudo kubeadm init --apiserver-advertise-address=$MASTER_IP --apiserver-cert-extra-sans=$MASTER_IP --pod-network-cidr=$POD_CIDR --node-name "$NODENAME" --ignore-preflight-errors Swap

mkdir -p "$HOME"/.kube
sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config

# Save Configs to shared /Vagrant location

# For Vagrant re-runs, check if there is existing configs in the location and delete it for saving new configuration.

config_path="/vagrant/configs"

if [ -d $config_path ]; then
  rm -f $config_path/*
else
  mkdir -p $config_path
fi

cp -i /etc/kubernetes/admin.conf /vagrant/configs/config
touch /vagrant/configs/join.sh
chmod +x /vagrant/configs/join.sh

kubeadm token create --print-join-command > /vagrant/configs/join.sh

# Install Calico Network Plugin

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml

# Install Metrics Server

kubectl apply -f https://raw.githubusercontent.com/scriptcamp/kubeadm-scripts/main/manifests/metrics-server.yaml

# Install Kubernetes Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# Create Dashboard User

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Kubernetes 1.24 no longer auto-creates ServiceAccount token Secrets,
# so request a token for the dashboard user explicitly.
kubectl -n kubernetes-dashboard create token admin-user >> /vagrant/configs/token

sudo -i -u vagrant bash << EOF
whoami
mkdir -p /home/vagrant/.kube
sudo cp -i /vagrant/configs/config /home/vagrant/.kube/
sudo chown 1000:1000 /home/vagrant/.kube/config
EOF

Once all the files are in place, bring up the cluster; a screenshot of the run is shown below.
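
Assuming you are in the working directory that contains the Vagrantfile, a minimal sequence is::

vagrant up        # provisions k8-master first, then node01..node03
vagrant status    # all machines should report 'running'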





Export the kubeconfig saved under the working directory (configs/config) so that kubectl works from the base machine, as shown below.
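
For example, assuming kubectl is installed on the base machine and you are in the working directory (scripts/master.sh copies the admin kubeconfig to configs/config)::

export KUBECONFIG=$PWD/configs/config
kubectl get nodes -o wide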



The Kubernetes Dashboard is already deployed by scripts/master.sh. For details on the dashboard and how to access the web UI, please check the official documentation:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Before accessing the dashboard, ensure kubectl proxy is running on your base machine and fetch a login token for the admin-user service account (a token is also saved to configs/token under the working directory during provisioning).
kubectl proxy &
kubectl -n kubernetes-dashboard create token admin-user





Open a browser on the base machine with the URL below:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

and paste the token to gain access.






Reference ::
For a detailed understanding of the various K8s components, please refer to :: https://devopscube.com/
Thanks, Bibin Wilson.