KubeWeek #1
Kubernetes is a container orchestration platform whose loosely coupled architecture provides service discovery across a cluster. A Kubernetes cluster has one or more control plane nodes and one or more compute nodes.
The main components of a Kubernetes cluster include:
Nodes: Nodes are VMs or physical servers that host containerized applications. Each node in a cluster can run one or more application instances. There can be as few as one node, but a typical Kubernetes cluster has several nodes (and deployments with hundreds or more nodes are not uncommon).
Image Registry: Container images are kept in the registry; when the control plane schedules a Pod onto a node, the node pulls the required images from the registry and runs them inside Pods.
Pods: Pods are where containerized applications run. They can include one or more containers and are the smallest unit of deployment for applications in a Kubernetes cluster (a minimal Pod manifest is sketched just below).
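For a concrete picture of a Pod, here is a minimal sketch of a single-container Pod manifest; the name, label, and image are placeholder values rather than anything prescribed in this article:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # placeholder name
  labels:
    app: demo           # label that Services and ReplicaSets can use to select this Pod
spec:
  containers:
  - name: web           # a single container; a Pod may hold several tightly coupled ones
    image: nginx:1.25   # placeholder image pulled from a registry by the node
    ports:
    - containerPort: 80

Applying a manifest like this (for example, kubectl apply -f pod.yaml) asks the control plane to schedule the Pod onto a node, where the kubelet and container runtime start it.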
Kubernetes Control Plane
The main components of the control plane:
kube-apiserver: As its name suggests, the API server exposes the Kubernetes API and is the cluster's communications hub. External communications via a command line interface (CLI) or other user interfaces (UIs) pass through the kube-apiserver, and all control plane to node communication also goes through the API server.
etcd: The key-value store where all data relating to the cluster is stored. etcd is highly available and consistent, and all access to it goes through the API server. The objects users submit to the cluster are generally written in human-readable YAML (which stands for the recursive "YAML Ain't Markup Language").
kube-scheduler: When a new Pod is created, this component assigns it to a node for execution based on resource requirements, policies, and 'affinity' specifications regarding placement (for example, location or zone constraints) and interference with other workloads; a sketch of such a scheduling constraint appears after this list.
kube-controller-manager: Although a Kubernetes cluster has several controller functions, they are all compiled into a single binary known as kube-controller-manager.
Controller functions include:
Replication controller: Ensures the correct number of Pods exists for each replicated workload running in the cluster
Node controller: Monitors the health of each node and notifies the cluster when nodes come online or become unresponsive
Endpoints controller: Connects Pods and Services to populate the Endpoints object
Service Account and Token controllers: Allocates API access tokens and default accounts to new namespaces in the cluster
cloud-controller-manager: If the cluster is partly or entirely cloud-based, the cloud controller manager links the cluster to the cloud provider's API. Only the controllers specific to that cloud provider will run. The cloud controller manager does not exist on clusters that are entirely on-premises. More than one cloud controller manager can run in a cluster for fault tolerance or to improve overall cloud performance.
Elements of the cloud controller manager include:
Node controller: Determines the status of a cloud-based node that has stopped responding, for example, whether it has been deleted from the cloud provider
Route controller: Establishes routes in the cloud provider infrastructure
Service controller: Manages cloud provider’s load balancers
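To make the scheduler's inputs mentioned above more concrete, here is a hedged sketch of a Pod spec fragment with a resource request and a node-affinity rule; the image and zone value are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-demo                         # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.25                          # placeholder image
    resources:
      requests:
        cpu: "250m"                            # the scheduler only picks nodes with this much free CPU
        memory: "128Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone   # standard zone label; constrains placement by location
            operator: In
            values:
            - us-east-1a                       # placeholder zone

kube-scheduler filters nodes against the resource request and the affinity rule, then records its choice through the API server so the chosen node's kubelet can start the Pod.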
Kubernetes node architecture
kubelet: Every node has an agent called the kubelet. It ensures that the containers described in PodSpecs are up and running properly.
kube-proxy: A network proxy that runs on each node and maintains network rules on the node. These rules allow network communication to Pods from sessions inside or outside the cluster, using operating system (OS) packet filtering if it is available (a Service, the abstraction kube-proxy implements, is sketched below).
container runtime: Software responsible for running the containerized applications. Although Docker is the most popular, Kubernetes supports any runtime that adheres to the Kubernetes CRI (Container Runtime Interface).
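To illustrate what kube-proxy (and the cluster DNS described below) actually expose, here is a minimal sketch of a Service that routes traffic to Pods carrying the placeholder label app: demo used in the Pod sketch above:

apiVersion: v1
kind: Service
metadata:
  name: demo-service      # placeholder name; cluster DNS serves it as demo-service.<namespace>.svc
spec:
  selector:
    app: demo             # forwards to Pods carrying this label
  ports:
  - port: 80              # port exposed on the Service's cluster IP
    targetPort: 80        # container port the traffic is sent to

kube-proxy programs packet-filtering rules on every node so that connections to the Service's cluster IP are forwarded to one of the matching Pods.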
Kubernetes infrastructure components
Pods: By encapsulating one (or more) application containers, Pods are the most basic execution unit of a Kubernetes application. Each Pod contains the code and storage resources required for execution and has its own IP address. Pods include configuration options as well. Typically, a Pod contains a single container or a few containers that are coupled into an application or business function and that share a set of resources and data.
Deployments: A method of deploying containerized application Pods. A Deployment describes a desired state, and controllers change the actual state of the cluster to match it in an orderly manner (see the Deployment sketch after this list).
ReplicaSet: Ensures that a specified number of identical Pods are running at any given point in time.
Cluster DNS: Serves DNS records needed to operate Kubernetes Services.
Container Resource Monitoring: Captures and records container metrics in a central database.
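As a sketch of how these objects fit together (names and image are placeholders, not values from this article), a Deployment that requests three replicas creates a ReplicaSet, which in turn keeps three copies of the Pod template running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment      # placeholder name
spec:
  replicas: 3                # the ReplicaSet created by this Deployment maintains 3 identical Pods
  selector:
    matchLabels:
      app: demo
  template:                  # Pod template stamped out for each replica
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image
        ports:
        - containerPort: 80

Editing the template (for example, bumping the image tag) and re-applying the manifest triggers an orderly rollout toward the new desired state.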
Best practices for architecting Kubernetes clusters:
Ensure you have updated to the latest Kubernetes version (1.18 as of this writing).
Invest up-front in training for developer and operations teams.
Establish governance enterprise-wide. Ensure tools and vendors are aligned and integrated with Kubernetes orchestration.
Enhance security by integrating image-scanning processes as part of your CI/CD process, scanning during both the build and run phases. Open-source code pulled from a GitHub repository should always be considered suspect.
Adopt role-based access control (RBAC) across the cluster. Least privilege, zero-trust models should be the standard.
Further secure containers by using only non-root users and making the file system read-only (both are shown in the sketch after this list).
Avoid relying on default values; simple, explicit declarative configuration is less error-prone and demonstrates intent more clearly.
Be careful when using basic Docker Hub images, which can contain malware or be bloated with unnecessary code. Start with lean, clean code and build packages up from there. Small images build faster, are smaller on disk, and image pulls are faster as well.
Keep containers simple. One process per container will let the orchestrator report if that one process is healthy or not.
When in doubt, crash. Kubernetes will restart a failed container, so exit on fatal errors instead of trying to recover inside the container.
Be verbose. Descriptive labels help current developers and will be invaluable to the developers who follow in their footsteps.
Don’t get too granular with microservices. Not every function within a logical code component need be its own microservice.
Automate where it makes sense. Automating the CI/CD pipeline lets you avoid manual Kubernetes deployments entirely.
Use livenessProbe and readinessProbe to help manage Pod lifecycles, or Pods may be terminated while still initializing or start receiving user requests before they are ready (see the sketch below).
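Here is a hedged sketch pulling together the hardening and lifecycle practices above: a non-root container with a read-only root file system plus liveness and readiness probes. The image, health endpoint, and port are placeholders and would need to match your application:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo                          # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:1.0   # placeholder; the image must run as a non-root user and tolerate a read-only root file system
    securityContext:
      runAsNonRoot: true                       # refuse to start if the image would run as root
      runAsUser: 1000                          # run as an unprivileged user
      readOnlyRootFilesystem: true             # make the container's root file system read-only
    ports:
    - containerPort: 8080                      # placeholder application port
    readinessProbe:                            # traffic is withheld until this succeeds
      httpGet:
        path: /healthz                         # placeholder health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                             # the kubelet restarts the container if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20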
Steps to follow to set up a Kubernetes cluster:
On the master node:
Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
Download the Google Cloud public signing key (create the /etc/apt/keyrings directory first if it does not already exist):
sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] apt.kubernetes.io kubernetes-xenial main" | sudo tee
/etc/apt/sources.list.d/kubernetes.list
Update apt package index, install kubelet, kubeadm and kubectl, and pin their version:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo kubeadm init
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
Or follow these steps (run these prerequisite commands on every node, master and workers alike):
sudo apt update -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] apt.kubernetes.io kubernetes-xenial main" | sudo tee
/etc/apt/sources.list.d/kubernetes.list
sudo apt update -y
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
Master
sudo su
kubeadm init (kubeadm must be run as the root user)
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
kubeadm token create --print-join-command
Worker
sudo su
kubeadm reset (clears any state left by a previous kubeadm run; it performs its own pre-flight checks)
kubeadm join 172.31.5.53:6443 --token 945e9f.02wgu2rr5i6ltjv5 --discovery-token-ca-cert-hash sha256:c8010ae947a24b02a34fc3a46f0d263dd10e63760516aac67f839d7d75daabb7 --v=5