Create a Kubernetes cluster with kubeadm

In this blog, we will see how to create a Kubernetes cluster using kubeadm.

First, log in to Play with Kubernetes or use any cloud machines:

https://labs.play-with-k8s.com/

Create an account if required; if you already have one, start a session.

In this demo, I will create three machines: the first will be used for the control plane and the remaining two will be worker nodes.

Finally, we will see how to switch the control plane from one machine to another.

Check whether kubeadm is installed:

[node1 ~]$ kubeadm
This is a sandbox environment. Using personal credentials
 is HIGHLY! discouraged. Any consequences of doing so, are
 completely the user's responsibilites.

 You can bootstrap a cluster as follows:

 1. Initializes cluster master node:

 kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16


 2. Initialize cluster networking:

 kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml


 3. (Optional) Create an nginx deployment:

 kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
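Running kubeadm with no arguments prints this sandbox banner, which confirms the binary is on the PATH. If you also want to see the exact versions shipped with the environment, the standard version subcommands are a quick check (shown here as a sketch; output will vary by environment):

kubeadm version
kubectl version --client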

Initialize the cluster's control-plane (master) node:

[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
W0707 22:02:24.755543    4428 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/docker/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.27.3
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0707 22:02:25.227595    4428 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.3, falling back to the nearest etcd version (3.5.7-0)
W0707 22:02:32.855018    4428 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.0.23]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0707 22:02:43.713971    4428 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.3, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.503348 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 4gjrs6.t4kozrkni5aceeu9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.23:6443 --token 4gjrs6.t4kozrkni5aceeu9 \
        --discovery-token-ca-cert-hash sha256:26c48a25b5e8dae0764fe86c6498ebadb724d6d12c9e7ff282a5b7744aa9888e 
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found

To avoid network issues, run the commands below.
2. Initialize cluster networking:

 kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml


 3. (Optional) Create an nginx deployment:

 kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
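After applying the kube-router manifest, it is worth confirming that the CNI and CoreDNS pods come up and that the control-plane node reports Ready. Once the kubeconfig is exported (next step), a quick check could look like this:

kubectl get pods -n kube-system -o wide
kubectl get nodes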

Export KUBECONFIG and view the kubeconfig file:

export KUBECONFIG=/etc/kubernetes/admin.conf
cat ~/.kube/config

You will get a config file containing information about the cluster's API server, certificates, contexts, and the admin user.

[node1 ~]$ export KUBECONFIG=/etc/kubernetes/admin.conf
[node1 ~]$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1EY3dOekl5TURJME1Gb1hEVE16TURjd05ESXlNREkwTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTS9tCis4cEo2UzZiN0tGRXhwY2hBWVJrNDNnWEgyTTNFUE8vdko0dXpVM3N0M0Z5bUdJQ1Z3NmlGUXQ0MXNld2VncEgKQTZ0SVFockpMVVpkZkMzdUZsRTRra2pvTXNsczJDMU84S09UNTJuaFBId1NtUitiZFFnQWRTTU53b0hCUEdLMAppTlJvTDhyc21ISzdkcjFrWi9PdWtmdkhZb0Mxd0hJc0NwSW9vMzdYSkNUblRyQUpyOSt6WVFEVXVBSjhWWnpqCmRSdDZtcnNZeTFISGtpN3BpRE50UTdYNUQraFBlc2haVWdGamNOQ05XWVZ2RWwrUjRPK24rR0lwdmNxUFB1VHYKQlZjeC9STDBZU0lsTWFhU1JFREp3UmtwMWpFcjhScnhPMG5ZazN5ZXJ5OFJJUlpZWW0vRVE5UG50ZlNubFYvMgp0NFhnVU5acGVpd0plK1RNb1Y4Q0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNVVp1Z3doNkVnUVV5dDdIVHV4UTRBamI2aEdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSjJ0R1JqMjNxSkhhTzZnTGpmSgozanpMODZCc3ZuL3AzNWZyN1JEZ1R4WU5vRnJtM1VlRjEwWWJCcnhhMG03ZEZJbmRFejdSeE0yKzVMajE2Z0VHCm16aG84MzVNUU0rSGhqaW9XdHdEY0NzS0pubDFUQk9qbVNvUWtGdjFGNE5NTjdhYzlmMG1GWEpQK0RQK2VGL0UKbnJJZDJjeGdOTUVLTXduZ3VFbHd4eGZqNzh2R1NPSmlHZVF5VG92U2wvRi9pVnJGc0FGU0VBRFcxTXUyQ3VSUQpQcmpsYUM0ekJRNUtUeEk0S0taVVBEZnUvMUdOUFoxK2xvcnVIbVBVd2ZZaUE5QWh1SVllcHNDZ1doQ3VCaVpiCnQyNEFxMFNlZ2d0WEJmYk5JYlZZWmo0ekpiYjBHa0o5YXQwTllkemxvdktJQklINFVZeGNSMHhxT2pLbzNMTGoKUHlFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.0.23:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJVnJFU2szaHMrT2t3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBM01EY3lNakF5TkRCYUZ3MHlOREEzTURZeU1qQXlOREphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5xaUVFbzJ1cHUwRDNURDkKV0F1dFdJa1drV2o3b2JCb3QzOWJHN0syb0FyTm1FQmI4V1daVk8wb2xLa2J3eTAvMENGZTJYK3lxL2lIQk5jbwppcjA2QUlYT0FXbTlOZVByYjdpam12OHZmSnllUzdLT0RGbE5odFRIVFRJaG52YVI0RGprQzQ5amFMUGdoNXlJCnd6ZHZmUFllb3o0OU85RXEvVkl5VjRlQm9VbWsrWHZOaEhTTDdldVpVVzdNWnlTM1lPRTlzNFZJelZGYThuR1oKQkM1THlkTmIvOWtpalNyL2lZMXQwb3JWV0t2cW41QVh4M1NYNk0rWmNLVmROYmNYQ21oeWpxYjExU1hxRGZ4dApaS2hwYTVFQ3JSZlV2akRGVXJRYXJGMktvS2NBc2pXcllmOVFjTE52VWR5bVcwTUlDdy9WRlVHcEN3Nm1uQUV3CjJEeHhld0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JURkdib01JZWhJRUZNcmV4MDdzVU9BSTIrbwpSakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBalJXUkp2OXZZQm5zL3NpdVVNOEQ0RTlmSE94a3VCc3JUdyszClFFZFpIRjliRWJOalI1cHNnMmtscnI5cnFFMEd5RUk3d0VlWW0yNWk4Q2dqQndnWFRrLzY1cDNCWjh2OFJZUWsKc1laN3IxU0R3VEhtbVRMM3FJU3RUalBwUURoNVc1VWdWMWpSbXZmcEt4Y2VzRGgxUDc5aW9XYzg5cGFKcVduRwo0WjNwOTZVUjFnNWl1WGhSZFVNMzgvR2NTbnc5ZHBsQXV3N2JjckV2cEd6RDZ3K2QzZ0RQQndXS3FaaThIRzUrCk5OK1ZSZzVFdkN5cnF0Z3lCeW5LaDVWUHc0ZVQvdmJVdDlyMmJoLzZoUUpYUmpiN29Bd3dmR3hDR0d1QnVYdFIKN1VRS1Y2dVYybXViOFBSRmtoMjJZaEhlUGg4WWdWOGdHRlM1SlFtVW1mRVg1QVdhdlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBbnFpRUVvMnVwdTBEM1REOVdBdXRXSWtXa1dqN29iQm90MzliRzdLMm9Bck5tRUJiCjhXV1pWTzBvbEtrYnd5MC8wQ0ZlMlgreXEvaUhCTmNvaXIwNkFJWE9BV205TmVQcmI3aWptdjh2Zkp5ZVM3S08KREZsTmh0VEhUVElobnZhUjREamtDNDlqYUxQZ2g1eUl3emR2ZlBZZW96NDlPOUVxL1ZJeVY0ZUJvVW1rK1h2TgpoSFNMN2V1WlVXN01aeVMzWU9FOXM0Vkl6VkZhOG5HWkJDNUx5ZE5iLzlraWpTci9pWTF0MG9yVldLdnFuNUFYCngzU1g2TStaY0tWZE5iY1hDbWh5anFiMTFTWHFEZnh0WktocGE1RUNyUmZVdmpERlVyUWFyRjJLb0tjQXNqV3IKWWY5UWNMTnZVZHltVzBNSUN3L1ZGVUdwQ3c2bW5BRXcyRHh4ZXdJREFRQUJBb0lCQVFDZVVTR1pNZS81ZWNERgpVMEU2UGt5M2IvUXBIVTBheHVGM1dZb2NWWFNPdHJqNUdCK20vZTdISSsrK2lCQy83Y01qZUdraE41K2VvdHg0CkpBcThocDMrTDRhbE9sSW9HRXF5ck5mMHJuZEFMVGgzNkxCOStnNjJZRlNQMzFwVk9VM1BKSFhLWTBhYkVBTVkKejBaWkpsUUZxY0pndXBaM3ZmemIwczJSTWhKVVVZbmRkcG9RY2pXTHlNZEIxYjF2L0JKd0VuclVnWGxjLzJnawo3bll0by8zSllHZElrVEtNcEwvOTJCOU53Yms4TXcyV3lBZXVadWRqRVdONlRjNGhzQmhDeHl3NDFqK3NGYkQzCkl1ZnBoNTBtNzN1S0NOd3VVeERHYVROeTUrTXBNR1pTYVJQL3M4dG85UnlPeE1aNkdjODNuQUYzVVR0Y2Vmc20KOFZRZUlvYmhBb0dCQU1IUWkvK1lxOTRPYUZaSDZuMXhyaU1aQkJtUXdwbHdWTEdTUkM5ZG5jV3ltUmlmaXN1RwpVd0JpNGI2T3VFQTlVU09xRVBqNzZQRXlqOTliUlhnbDVpYm9hVWVNVmFVWkNqWVVaT2JtWTNSQlBXbU12SkwxCjc2eUI4ZTdjaTVnUWdrTkZmODJVMzc5c0h2VlZiWmJYWVpET2lhcG5zbzA0M29qVHczdmFkRmpQQW9HQkFOR1EKVDk3dUNhTm55M2Q3bGJFeGRlRjU3NXRVOWRWM1drOXlxT29Vc3BWS0srN2hJRTY4ZUFZSDFyV2xidStlN25IdApLUEUwTGtuZGdHcnFZeHBoZTE5ajk4LzhoNjMwd1ZGKzZJL0M1dmNVMHpSdnhrb1pubGkyT2E5UG9JRjFObk9aCmh2bS9vR1JiWjIvWEFVSTR2QlY1cEY0ZVhocFhxOFhITUtiS2pXK1ZBb0dBZGdxNUtZdm5xVS9uRmgybzRJd2IKUGY4ZmN4NnFsdDlHaGZ3S2tUcVlPKzlodFJCK2JTUzdhckhPd2N1VXhuTlI5c0crb3BaeXNteFVHZm94M2xKZQowWFdkb1ZrRVZKQmltcnRqRlFwZXFsQ053YnBZbzcwc0kwbmxldEJTS09SdEllR3pUQmVQQ3J3a1F0R3I5RUhyCmgxRnpvUmlWTTlQZUhVRzBmcnQzUHhzQ2dZRUFsWW1oeE5VYzRaSldPUnRoMU1BVGV1S1UzTVdDYW1HeGV0RzUKd05jbUc2dUNzQUhMR1FRWnJVdjRwVU80WnBxRlVaeEd3OTlWVEhZWGhiTmRKbHo3T1RWUGh3V1BGODE0Q1J4QwovUnE1endQNE5nbXdkLzNSNVVHYTVnTXU2RkhvbWhLcW94cGZiRjFnOFFoK0tHL3RubkZmbloyVHpyNVNuMTJrCjFNL2lud1VDZ1lBb2hBTXBXUlhPSS8yOTlQU0MzcitJYzkvVlgxeTdqYjRTWTk1YUNSRTV3b2lBSHZWYXQrM3QKTnRaY1pwbGJjYVB2NVczUjB4M1l1NjAyMW40MkZTZllWc0Z4ZVBQRGJXZzRwM2RTTCtLbkU0SGxZMWkzWHFFOApiUCtMV1FiY0F0clhsdGpsU2k4cUJzbzl5UkQzRG80MjY5OUdmcmp6dFpxaHo5N1ZrampXTnc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
[node1 ~]$
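If you only need a summary of the cluster, context, and user without the raw certificate data, kubectl can print a redacted view of the same file:

kubectl config view --minify
kubectl config current-context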

Join the worker nodes to the cluster

On each worker node, run the kubeadm join command printed at the end of your own kubeadm init output, for example:

kubeadm join 192.168.0.23:6443 --token doa35i.oiev4uprdkqwd8rl \
        --discovery-token-ca-cert-hash sha256:1cec5d9c4b3f4eacd24eaae018abd594df752d8cad5f8bf69f07a6f65a076fa7
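Note that bootstrap tokens expire (24 hours by default), so the token from your kubeadm init output may no longer work if you join a node later. In that case, you can generate a fresh join command on the control-plane node:

kubeadm token create --print-join-command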



[node3 ~]$ kubeadm join 192.168.0.23:6443 --token 4gjrs6.t4kozrkni5aceeu9 \
>         --discovery-token-ca-cert-hash sha256:26c48a25b5e8dae0764fe86c6498ebadb724d6d12c9e7ff282a5b7744aa9888e 
Initializing machine ID from random generator.
W0707 22:14:32.609259    7009 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/docker/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[node3 ~]$

kubectl get nodes -o wide

This shows information about each node's OS image, kernel version, and container runtime.

[node1 ~]$ kubectl get nodes -o wide
NAME    STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node1   Ready    control-plane   18m     v1.27.2   192.168.0.23   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   containerd://1.6.21
node2   Ready    <none>          7m30s   v1.27.2   192.168.0.22   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   containerd://1.6.21
node3   Ready    <none>          7m10s   v1.27.2   192.168.0.21   <none>        CentOS Linux 7 (Core)   4.4.0-210-generic   containerd://1.6.21
[node1 ~]$
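The worker nodes show <none> under ROLES because kubeadm only labels the control plane. If you prefer to see a role name for the workers, you can add the conventional label yourself (the "worker" role name here is just a convention, not something kubeadm requires):

kubectl label node node2 node-role.kubernetes.io/worker=
kubectl label node node3 node-role.kubernetes.io/worker=
kubectl get nodes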