
Deploy Web App in Azure Kubernetes Service (AKS)

Aim:

Publish a website in a new AKS (Azure Kubernetes Service) cluster, initially following the tutorial at https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-acr?tabs=azure-cli

Create the Azure Resources:

"This tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run 'az --version' to find the version."

Use ‘az login’ to log into the Azure account:

az login

The CLI will open your default browser and load an Azure sign-in page. I have two-factor authentication enabled, so I need my phone to confirm access.
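To confirm the login worked and that the expected subscription is active, the account details can be checked, for example:

az account show --output table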

Create a new Resource Group:

az group create --name AKS4BIM02 --location northeurope

Create an Azure Container Registry:

I will not use this initially, but I will create it for future use.

az acr create --resource-group AKS4BIM02 --name acr4BIM --sku Basic

## List images in registry (None yet) 

az acr repository list --name acr4BIM --output table
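For future reference, the registry can later be attached to the AKS cluster so it can pull images from it; roughly (using the cluster created below):

az aks update --resource-group AKS4BIM02 --name AKS4BIM02 --attach-acr acr4BIM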

Create the AKS cluster:

See the options described in the az aks create reference documentation:

istacey@DUB004043:~$ az aks create \
>     --resource-group AKS4BIM02 \
>     --name AKS4BIM02 \
>     --node-count 2 \
>     --generate-ssh-keys \
>     --zones 1 2
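Cluster creation takes several minutes; the provisioning state can be checked with, for example:

az aks show --resource-group AKS4BIM02 --name AKS4BIM02 --query provisioningState -o tsv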

Connect to the cluster:

If necessary install the Kubernetes CLI (az aks install-cli) and then connect:

$ az aks get-credentials --resource-group AKS4BIM02 --name AKS4BIM02
Merged "AKS4BIM02" as current context in /home/istacey/.kube/config

$ kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-88432225-vmss000000   Ready    agent   7m22s   v1.20.9   10.240.0.4    <none>        Ubuntu 18.04.6 LTS   5.4.0-1059-azure   containerd://1.4.9+azure
aks-nodepool1-88432225-vmss000001   Ready    agent   7m17s   v1.20.9   10.240.0.5    <none>        Ubuntu 18.04.6 LTS   5.4.0-1059-azure   containerd://1.4.9+azure

Create Storage Account and Azure File share

Following https://docs.microsoft.com/en-us/azure/aks/azure-files-volume

# Set variables:
AKS_PERS_STORAGE_ACCOUNT_NAME=istacestorageacct01
AKS_PERS_RESOURCE_GROUP=AKS4BIM02
AKS_PERS_LOCATION=northeurope
AKS_PERS_SHARE_NAME=aksshare4bim02

az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS

export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)

echo $AZURE_STORAGE_CONNECTION_STRING

az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING

STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)

echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME

echo $STORAGE_KEY
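As a quick sanity check, the new share can be listed before moving on, for example:

az storage share list --connection-string $AZURE_STORAGE_CONNECTION_STRING -o table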

Create a Kubernetes secret:

kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY

Upload index.html and the images folder to the file share via the Azure Portal.

index.html is based on https://www.w3schools.com/howto/howto_css_coming_soon.asp and relates to one of my favorite subjects, Luton Town FC! 🙂
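The upload can also be done from the CLI instead of the Portal; a rough sketch, assuming index.html and an images/ directory sit in the current folder:

az storage file upload --share-name $AKS_PERS_SHARE_NAME --source index.html --connection-string $AZURE_STORAGE_CONNECTION_STRING
az storage file upload-batch --destination $AKS_PERS_SHARE_NAME --source ./images --destination-path images --connection-string $AZURE_STORAGE_CONNECTION_STRING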

Create the Kubernetes Deployment

Create new namespace:

kubectl create ns isnginx

Create new deployment yaml manifest file:

$ vi nginx-deployment.yaml

$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isnginx-deployment
  namespace: isnginx
  labels:
    app: isnginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: isnginx-deployment
  template:
    metadata:
      labels:
        app: isnginx-deployment
    spec:
      containers:
      - name: isnginx-deployment
        image: nginx
        volumeMounts:
        - name: nginxtest01
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginxtest01
        azureFile:
          secretName: azure-secret
          shareName: aksshare4bim02
          readOnly: false

Create the deployment:

$ kubectl create -f nginx-deployment.yaml
deployment.apps/isnginx-deployment created

$ kubectl rollout status deployment isnginx-deployment -n isnginx
Waiting for deployment "isnginx-deployment" rollout to finish: 0 of 2 updated replicas are available...

$ kubectl -n isnginx describe pod isnginx-deployment-5ff78ff678-7dphq  | tail -5
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    2m32s                default-scheduler  Successfully assigned isnginx/isnginx-deployment-5ff78ff678-7dphq to aks-nodepool1-88432225-vmss000000
  Warning  FailedMount  29s                  kubelet            Unable to attach or mount volumes: unmounted volumes=[nginxtest01], unattached volumes=[default-token-lh6xf nginxtest01]: timed out waiting for the condition
  Warning  FailedMount  24s (x9 over 2m32s)  kubelet            MountVolume.SetUp failed for volume "aksshare4bim02" : Couldn't get secret isnginx/azure-secret

The mount is failing because the secret was created in the default namespace, while the pods run in the isnginx namespace (secrets are namespaced).

Create a secret in the isnginx namespace:

kubectl -n isnginx create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
$ kubectl -n isnginx get secret
NAME                  TYPE                                  DATA   AGE
azure-secret          Opaque                                2      97m
default-token-lh6xf   kubernetes.io/service-account-token   3      104m

Remove the deployment and recreate:

$ kubectl delete -f nginx-deployment.yaml
deployment.apps "isnginx-deployment" deleted

$ kubectl create -f nginx-deployment.yaml
deployment.apps/isnginx-deployment created

$ kubectl rollout status deployment isnginx-deployment -n isnginx
Waiting for deployment "isnginx-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "isnginx-deployment" rollout to finish: 1 of 2 updated replicas are available...
deployment "isnginx-deployment" successfully rolled out
$ kubectl -n isnginx get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/isnginx-deployment-6b8d9db99c-2kj5l   1/1     Running   0          80s
pod/isnginx-deployment-6b8d9db99c-w65gp   1/1     Running   0          80s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/isnginx-deployment   2/2     2            2           80s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/isnginx-deployment-6b8d9db99c   2         2         2       80s

Create Service / Expose Deployment

Expose the deployment and get the external IP:

$ kubectl -n isnginx expose deployment isnginx-deployment --port=80 --type=LoadBalancer
service/isnginx-deployment exposed

$ kubectl -n isnginx get svc
NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
isnginx-deployment   LoadBalancer   10.0.110.58   20.93.54.110   80:30451/TCP   95m

Test with curl and web browser:

$ curl http://20.93.54.110 | grep -i luton
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2139  100  2139    0     0  44562      0 --:--:-- --:--:-- --:--:-- 45510
    <p>Luton Town's New Stadium</p>

Create HTTPS ingress:

AIM: Create an HTTPS ingress with a signed certificate and redirect HTTP requests to HTTPS, following https://docs.microsoft.com/en-us/azure/aks/ingress-tls

Create Ingress controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
$ kubectl -n ingress-nginx get all
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-twnd7        0/1     Completed   0          8m3s
pod/ingress-nginx-admission-patch-vnsj4         0/1     Completed   1          8m3s
pod/ingress-nginx-controller-5d4b6f79c4-mknxc   1/1     Running     0          8m4s

NAME                                         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.0.77.11   20.105.96.112   80:31078/TCP,443:31889/TCP   8m4s
service/ingress-nginx-controller-admission   ClusterIP      10.0.2.37    <none>          443/TCP                      8m4s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           8m4s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-5d4b6f79c4   1         1         1       8m4s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           1s         8m3s
job.batch/ingress-nginx-admission-patch    1/1           3s         8m3s

Configure an FQDN for the ingress controller's public IP

$ IP=20.105.96.112
$ DNSNAME="istac-aks-ingress"
$ PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)
$ az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME

$ az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv
istac-aks-ingress.northeurope.cloudapp.azure.com

$ az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv | nslookup
Server:         89.101.160.4
Address:        89.101.160.4#53

Non-authoritative answer:
Name:   istac-aks-ingress.northeurope.cloudapp.azure.com
Address: 20.105.96.112

Install cert-manager with helm:

$ kubectl label namespace ingress-nginx cert-manager.io/disable-validation=true
namespace/ingress-nginx labeled

$ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm install cert-manager jetstack/cert-manager \
>   --namespace ingress-nginx \
>   --set installCRDs=true \
>   --set nodeSelector."kubernetes\.io/os"=linux

NAME: cert-manager
LAST DEPLOYED: Mon Oct 25 10:56:51 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.5.4 has been deployed successfully!
$ kubectl -n ingress-nginx get all
NAME                                            READY   STATUS      RESTARTS   AGE
pod/cert-manager-88ddc7f8d-ltz9z                1/1     Running     0          158m
pod/cert-manager-cainjector-748dc889c5-kcdx7    1/1     Running     0          158m
pod/cert-manager-webhook-55dfcc5474-tbrz2       1/1     Running     0          158m
pod/ingress-nginx-admission-create-twnd7        0/1     Completed   0          3h45m
pod/ingress-nginx-admission-patch-vnsj4         0/1     Completed   1          3h45m
pod/ingress-nginx-controller-5d4b6f79c4-mknxc   1/1     Running     0          3h45m

NAME                                         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
service/cert-manager                         ClusterIP      10.0.7.50      <none>          9402/TCP                     158m
service/cert-manager-webhook                 ClusterIP      10.0.132.151   <none>          443/TCP                      158m
service/ingress-nginx-controller             LoadBalancer   10.0.77.11     20.105.96.112   80:31078/TCP,443:31889/TCP   3h45m
service/ingress-nginx-controller-admission   ClusterIP      10.0.2.37      <none>          443/TCP                      3h45m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager               1/1     1            1           158m
deployment.apps/cert-manager-cainjector    1/1     1            1           158m
deployment.apps/cert-manager-webhook       1/1     1            1           158m
deployment.apps/ingress-nginx-controller   1/1     1            1           3h45m

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/cert-manager-88ddc7f8d                1         1         1       158m
replicaset.apps/cert-manager-cainjector-748dc889c5    1         1         1       158m
replicaset.apps/cert-manager-webhook-55dfcc5474       1         1         1       158m
replicaset.apps/ingress-nginx-controller-5d4b6f79c4   1         1         1       3h45m

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           1s         3h45m
job.batch/ingress-nginx-admission-patch    1/1           3s         3h45m

Create a CA cluster issuer:

$ vi cluster-issuer.yaml
$ cat cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ian@ianstacey.net
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
$ kubectl apply -f cluster-issuer.yaml
clusterissuer.cert-manager.io/letsencrypt created
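The issuer should report Ready before continuing, which can be checked with:

kubectl get clusterissuer letsencrypt -o wide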

Create an ingress route:
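The ingress below sends traffic to a ClusterIP Service named isnginx-clusterip, which is not shown being created above; it could be created from the existing deployment with something like:

kubectl -n isnginx expose deployment isnginx-deployment --name=isnginx-clusterip --port=80 --type=ClusterIP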

$ cat http7-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: powercourt-ingress
  namespace: isnginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - istac-aks-ingress.northeurope.cloudapp.azure.com
    secretName: tls-secret
  defaultBackend:
    service:
      name: isnginx-clusterip
      port:
        number: 80


$ kubectl apply -f http7-ingress.yaml
ingress.networking.k8s.io/powercourt-ingress configured

Check resources:

$ kubectl -n isnginx describe ingress powercourt-ingress
Name:             powercourt-ingress
Namespace:        isnginx
Address:          20.105.96.112
Default backend:  isnginx-clusterip:80 (10.244.0.7:80,10.244.1.4:80)
TLS:
  tls-secret terminates istac-aks-ingress.northeurope.cloudapp.azure.com
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           *     isnginx-clusterip:80 (10.244.0.7:80,10.244.1.4:80)
Annotations:  cert-manager.io/cluster-issuer: letsencrypt
              kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /$2
              nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason             Age                   From                      Message
  ----    ------             ----                  ----                      -------
  Normal  Sync               13m (x10 over 3h51m)  nginx-ingress-controller  Scheduled for sync
  Normal  UpdateCertificate  13m (x3 over 25m)     cert-manager              Successfully updated Certificate "tls-secret"


$ kubectl get  certificate tls-secret --namespace isnginx
NAME         READY   SECRET       AGE
tls-secret   True    tls-secret   144m

$ kubectl get  certificate tls-secret --namespace isnginx -o yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  creationTimestamp: "2021-10-25T10:17:02Z"
  generation: 4
  name: tls-secret
  namespace: isnginx
  ownerReferences:
  - apiVersion: networking.k8s.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Ingress
    name: powercourt-ingress
    uid: e93a8939-1ed6-444b-b2db-b26aedf01dd8
  resourceVersion: "122626"
  uid: 79631ea4-541b-4251-9c7b-ad5294e6bbc0
spec:
  dnsNames:
  - istac-aks-ingress.northeurope.cloudapp.azure.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt
  secretName: tls-secret
  usages:
  - digital signature
  - key encipherment
status:
  conditions:
  - lastTransitionTime: "2021-10-25T12:28:08Z"
    message: Certificate is up to date and has not expired
    observedGeneration: 4
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2022-01-23T11:28:06Z"
  notBefore: "2021-10-25T11:28:07Z"
  renewalTime: "2021-12-24T11:28:06Z"
  revision: 4

Test:

HTTP requests to http://istac-aks-ingress.northeurope.cloudapp.azure.com are successfully redirected to HTTPS:
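The redirect can also be verified from the command line by inspecting the response headers (output omitted):

curl -sI http://istac-aks-ingress.northeurope.cloudapp.azure.com | head -1
curl -sIL http://istac-aks-ingress.northeurope.cloudapp.azure.com | grep -i location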

The finished architecture:


Kubernetes Home Lab (part 2)

Bootstrap the cluster with kubeadm:

Following on from part one, we will create our new Kubernetes cluster.

Our Kubernetes Cluster Topology:

A single master/control plane node and two worker nodes:

Deployment Steps

To deploy the cluster, we will first take care of the prerequisites, install Docker, and install kubeadm, following the Kubernetes documentation:

Order of deployment steps

Installing kubeadm, kubelet and kubectl

We will install these packages on all the machines:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

For specific versions:

sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
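For completeness, the repository setup beforehand looked roughly like this (per the upstream kubeadm install docs at the time; the repo URL and keyring path are as documented then), finishing by holding the packages so unattended upgrades don't move them:

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-mark hold kubelet kubeadm kubectl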

Initialize / Create the Cluster:

We can then create our cluster with 'kubeadm init'.

Bootstrap – Attempt 1

My first attempt failed due to a cgroups_memory issue:

root@k8s-master01:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.185
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
root@k8s-master01:~# docker info | head
Client:
Context:    default
Debug Mode: false
Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
Containers: 2
  Running: 0
root@k8s-master01:~# docker info | grep -i cgroup
Cgroup Driver: systemd
Cgroup Version: 1
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support

cgroups Fix

The fix is described here https://phabricator.wikimedia.org/T122734.

As there is no GRUB with Ubuntu on the Raspberry Pi (https://unix.stackexchange.com/questions/475973/cant-find-etc-default-grub), I simply had to edit /boot/firmware/cmdline.txt, append cgroup_enable=memory swapaccount=1, and reboot.

root@k8s-master01:~# cat /boot/firmware/cmdline.txt
net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=memory swapaccount=1

After reboot:

istacey@k8s-master01:~$ uptime

20:54:27 up 1 min,  1 user,  load average: 1.36, 0.52, 0.19
istacey@k8s-master01:~$ cat /proc/cmdline
coherent_pool=1M 8250.nr_uarts=1 snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1 bcm2708_fb.fbwidth=0 bcm2708_fb.fbheight=0 bcm2708_fb.fbswap=1 smsc95xx.macaddr=DC:A6:32:02:F0:6E vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  net.ifnames=0 dwc_otg.lpm_enable=0 console=ttyS0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=memory swapaccount=1 quiet splash

Bootstrap – Attempt 2

The second attempt is successful:

root@k8s-master01:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.185
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.185]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.185 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.185 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.511339 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: oq7hb9.vtmiw210ozvi2grh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.185:6443 --token oq7hb9.vtmiw210ozvi2grh \
        --discovery-token-ca-cert-hash sha256:c87681fc7fec18f015f974e558d8436113019fefbf91123bb5c5190466b5854d
root@k8s-master01:~#

Install the pod network add-on (CNI)

We will use Weave for this cluster. See:

https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model

https://www.weave.works/docs/net/latest/kubernetes/
https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

istacey@k8s-master01:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
istacey@k8s-master01:~$

Check that the nodes are ready and the pods are running:

istacey@k8s-master01:~$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   13m   v1.22.2

istacey@k8s-master01:~$ kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS      AGE
kube-system   coredns-78fcd69978-cnql6               1/1     Running   0             14m
kube-system   coredns-78fcd69978-k4bnk               1/1     Running   0             14m
kube-system   etcd-k8s-master01                      1/1     Running   0             14m
kube-system   kube-apiserver-k8s-master01            1/1     Running   0             14m
kube-system   kube-controller-manager-k8s-master01   1/1     Running   0             14m
kube-system   kube-proxy-lx8bj                       1/1     Running   0             14m
kube-system   kube-scheduler-k8s-master01            1/1     Running   0             14m
kube-system   weave-net-f7f7h                        2/2     Running   1 (99s ago)   2m2s

Join the two Worker Nodes:

To get the join token:

kubeadm token create --help
kubeadm token create --print-join-command
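The printed command is then run as root on each worker, in the form (token and hash as generated above):

sudo kubeadm join 192.168.0.185:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>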

Check once the two nodes have joined:

istacey@k8s-master01:~$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
k8s-master01   Ready    control-plane,master   18m    v1.22.2
k8s-worker01   Ready    <none>                 2m4s   v1.22.2
k8s-worker02   Ready    <none>                 51s    v1.22.2

istacey@k8s-master01:~$ kubectl get ds -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-proxy   3         3         3       3            3           kubernetes.io/os=linux   18m
weave-net    3         3         3       3            3           <none>                   6m10s

Quick Test:

istacey@k8s-master01:~$ kubectl run nginx --image=nginx
pod/nginx created
istacey@k8s-master01:~$ kubectl get pods -o wide
NAME    READY   STATUS              RESTARTS   AGE   IP       NODE           NOMINATED NODE   READINESS GATES
nginx   0/1     ContainerCreating   0          23s   <none>   k8s-worker01   <none>           <none>
istacey@k8s-master01:~$ kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          27s   10.44.0.1   k8s-worker01   <none>           <none>
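Before deleting the test pod, a slightly fuller smoke test could expose it as a Service and fetch the page from inside the cluster; a rough sketch (busybox is just a convenient image that ships wget, not something used elsewhere in this post):

kubectl expose pod nginx --port=80 --name=nginx-test
kubectl run curltest --image=busybox --rm -it --restart=Never -- wget -qO- http://nginx-test
kubectl delete svc nginx-test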

istacey@k8s-master01:~$ kubectl delete po nginx
pod "nginx" deleted

Part 1 here: