Certified Kubernetes Administrator (CKA) 2024
CKA topics⌗
- Cluster Architecture, Installation & Configuration: How to set up and configure a Kubernetes cluster using kubeadm, how to upgrade your cluster version, how to back up and restore an etcd cluster, and how to configure a pod to use secrets
- Workloads & Scheduling: How to deploy a Kubernetes application, create daemonsets, scale the application, configure health checks, use multi-container pods, and use config maps and secrets in a pod. You'll also need to know how to expose your application using services
- Services & Networking: How to expose applications within the cluster or outside the cluster, how to manage networking policies, and how to configure ingress controllers
- Storage: How to create and configure persistent volumes, how to create and configure persistent volume claims, and how to expand persistent volumes
- Troubleshooting: How to troubleshoot common issues in a Kubernetes environment, including how to diagnose and resolve issues with pods, nodes, and network traffic
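For example, the etcd backup objective mostly boils down to a single etcdctl command; a minimal sketch, assuming the kubeadm default certificate paths:
# snapshot etcd to a file (run on a control plane node)
sudo ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key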
Kubernetes in a nutshell⌗
Control plane: the management components that mother-hen nodes and pods. Key components:
- API server: the frontend API that ties everything together (port 6443)
- Scheduler: determines which nodes to run pods on
- etcd: distributed key-value store used as backing store for all cluster meta-data (ports 2379 and 2380)
Node: a worker machine (VM) that hosts pods:
- Kubelet: a systemd managed daemon that acts as the node agent the control plane uses to manage and monitor worker nodes. It exposes an HTTP API that provides metrics about the node (port 10250); a read-only API is also provided on port 10255.
- Kube Proxy: manages network rules to enable communication between pods and external entities (port 10256)
- supervisord: monitors the kubelet and pods
- Container Network Interface (CNI): a software defined network (SDN) plugin such as Calico, Flannel or Weave Net.
- fluentd: unified logging agent
- containerd: a container runtime; any CRI-compliant runtime works (the lab below uses CRI-O)
Pod: a set of containers (spacesuit)
ReplicaSet: manages replicas of a pod (ex: 3 nginx pods)
Deployment: the standard way to run containers; takes care of starting pods in a scalable way and leverages ReplicaSets to do so. Its killer feature is RollingUpdate, which enables zero downtime application updates.
Service: exposes an application, which may be running on many pods, within or outside the Kubernetes cluster
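For instance, putting the last few pieces together (names here are illustrative):
# run nginx as a deployment, then expose it; NodePort makes it reachable from outside the cluster
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort
kubectl get service web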
Lab environment⌗
A minimal 3 VM setup, all running Ubuntu 22.04 Server, each with 2 vCPUs, 2 GB RAM and 32 GB disk (these requirements will be validated by the kubeadm init pre-flight checks):
     worker           control            worker
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│ 192.168.1.21 │  │ 192.168.1.20 │  │ 192.168.1.22 │
└──────────────┘  └──────────────┘  └──────────────┘
Setting up the cluster play by play:
- Clone the lab repo:
git clone git@github.com:bm4cs/cka.git
- Install CRI-O:
sudo ~/cka/setup-container.sh
- Install kubetools:
sudo ~/cka/setup-kubetools.sh
- On the control node, set up the cluster:
sudo kubeadm init
- Set up the kubectl client:
mkdir ~/.kube
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
- Set up a network plugin - Calico in this case:
  - Install the Calico operator:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
  - Install the Calico custom resource definitions:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
  - Confirm the Calico pods are running:
watch kubectl get pods -n calico-system
- Join each worker node to the cluster with:
sudo kubeadm join <JOIN-TOKEN>
kubeadm init sample output⌗
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.20:6443 --token b92xdr.6kv3hmbvkql7jm5p \
--discovery-token-ca-cert-hash sha256:716f8aa49972954896bb9b128eee16c8585bd8b10e37ac05e296fc27eb1c0e00
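Back on the control node, it's worth confirming the workers registered (node names depend on your VM hostnames):
# both workers should appear and eventually report Ready
kubectl get nodes -o wide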
Building Kubernetes clusters⌗
Networking⌗
Kubernetes primitives need to communicate internally and with the outside world:
- Node: physical network
- External to service: kubernetes service resources
- Pod to service: kubernetes services
- Pod to pod: kubernetes CNI network plugin
- Container to container: pod
Kubernetes uses Container Network Interface (CNI) plugins for cluster networking. Out of the box it is not opinionated about which plugin to use; it simply provides the CNI interface and lets you choose:
- Calico: Calico is a powerful networking and network security solution for containers, virtual machines, and native host-based workloads. It provides both networking and network policy enforcement, and it works with a broad range of platforms including Kubernetes, Docker, and OpenStack
- Flannel: Flannel is a simple and easy-to-use CNI plugin that satisfies Kubernetes requirements. It creates a virtual network among the various nodes in a Kubernetes cluster, providing a subnet to each node from which pods can be assigned IP addresses
- Weave Net: Weave Net creates a virtual network that connects containers deployed across multiple hosts. It uses a simple, encrypted peer-to-peer communication protocol to establish a routed network between the containers, allowing them to discover each other and communicate securely
- Cilium: Cilium leverages eBPF technology to provide networking and security for microservices in Kubernetes. It provides network visibility, load balancing, and network policy enforcement
- Hybridnet: Designed for hybrid clouds, it provides both overlay and underlay networking for containers in one or more clusters
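Most of these plugins also enforce NetworkPolicy resources; as a minimal sketch, a default-deny ingress policy for the default namespace (only enforced when the installed CNI supports policy, as Calico and Cilium do):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}     # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress         # no ingress rules listed, so all inbound pod traffic is denied
EOF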
kubeadm⌗
kubeadm init out of the box has nice defaults, but ships with lots of options:
- --apiserver-advertise-address: the IP address the API server will advertise it's listening on
- --config: feeds the CLI with a pre-prepared configuration file
- --dry-run: walk through the steps without changing anything
- --pod-network-cidr: the IP range for the pod network
- --service-cidr: the IP range for service VIPs (default is 10.96.0.0/12)
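A worked example for this lab, using the control node IP from above; the pod CIDR is illustrative, and must match whatever the CNI plugin expects (Calico's custom-resources.yaml defaults to 192.168.0.0/16, which would clash with this lab's 192.168.1.x node network, so override both):
# preview what init would do without touching the node
sudo kubeadm init --dry-run
# bind the API server to the control node and pick a non-overlapping pod CIDR
sudo kubeadm init \
  --apiserver-advertise-address=192.168.1.20 \
  --pod-network-cidr=10.244.0.0/16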
Tips:
- If a kubeadm init went pear shaped, after RCA, run kubeadm reset to best effort baseline any remnant side-effects
- Join tokens are temporary; you will often need to reprovision them:
sudo kubeadm token create --print-join-command
kubectl⌗
The kubectl client needs to be set up with some config (~/.kube/config) for the cluster - one way to get this is by copying /etc/kubernetes/admin.conf from a control plane node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are the root user, you can simply:
export KUBECONFIG=/etc/kubernetes/admin.conf
Contexts⌗
For convenience, client configuration supports contexts, basically a set of pre-canned configurations assigned to a friendly label, such as bens-dev-cluster. Each context needs to define 3 elements: the cluster to connect to, the default namespace, and the user identity.
- kubectl config view to view contexts
- kubectl config set-context to set up a new context
- kubectl config use-context to change contexts
To define a new context and all its components (cluster, namespace and user):
First the cluster:
kubectl config --kubeconfig=~/.kube/config \
set-cluster bens-cluster --server=https://192.168.29.120 --certificate-authority=bens-ca.crt
Second the namespace:
kubectl create ns bens-ns
Third the user:
kubectl config --kubeconfig=~/.kube/config \
set-credentials ben --client-certificate=ben.crt --client-key=ben.key
Let's put a big bow around it all and define the context itself:
kubectl config set-context bens-k8s --cluster=bens-cluster --namespace=bens-ns --user=ben
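Then switch to it and confirm it stuck:
kubectl config use-context bens-k8s
kubectl config current-context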
Deploying Applications⌗
Deployments⌗
The standard way to run containers on Kubernetes. Deployments leverage ReplicaSets under the hood, and support the RollingUpdate strategy, which incrementally replaces pods to enable zero downtime application updates.
kubectl create deploy -h
kubectl create deployment my-dep --image=nginx --replicas=3
kubectl get all
You will see this results in 3 pods, 1 replicaset and 1 deployment:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-dep-7674c564c-jttdp 1/1 Running 0 5m59s
pod/my-dep-7674c564c-mn8sg 1/1 Running 0 5m59s
pod/my-dep-7674c564c-n87xf 1/1 Running 0 5m59s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 96d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-dep 3/3 3 3 5m59s
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-dep-7674c564c 3 3 3 5m59s
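From here, the RollingUpdate feature mentioned earlier is a one-liner; a quick sketch (the container is named nginx because kubectl create deployment derives it from the image, and the 1.25 tag is illustrative):
# roll out a new image version with zero downtime
kubectl set image deployment/my-dep nginx=nginx:1.25
kubectl rollout status deployment/my-dep
# if the new version goes pear shaped, roll back
kubectl rollout undo deployment/my-dep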
DaemonSet⌗
A resource type that starts one application instance on each cluster node, and therefore does not make use of ReplicaSets. An example DaemonSet is the kube-proxy service, which must run on each node/server in the cluster.
- Not normally put to use in typical user workloads
- If a DaemonSet must run on control plane nodes, a toleration is needed to override the default control plane taints (see the sketch at the end of this section)
$ kubectl get ds -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system calico-node 3 3 3 3 3 kubernetes.io/os=linux 96d
kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 96d
DaemonSet specifications are super similar to Deployments, minus concepts like replicas:
kubectl create deploy littledaemon --image=nginx --dry-run=client -o yaml > littledaemon.yaml
# edit littledaemon.yaml: change kind from Deployment to DaemonSet, remove the replicas and strategy keys
kubectl apply -f littledaemon.yaml
kubectl get ds
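The edited littledaemon.yaml ends up looking roughly like this (the control plane toleration is optional, per the note above):
cat > littledaemon.yaml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: littledaemon
  name: littledaemon
spec:
  selector:
    matchLabels:
      app: littledaemon
  template:
    metadata:
      labels:
        app: littledaemon
    spec:
      tolerations:
        # optional: allow scheduling onto tainted control plane nodes
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - image: nginx
          name: nginx
EOF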