What's up, Kubernetes!!!
Updated: Jul 16
There is an old saying, "Necessity is the mother of invention," and it applies perfectly to technology. Technology has evolved so much that it seems nothing less than a miracle, and it all happened because of the need for change. Change is inevitable.
Think back a few decades, when small amounts of data were saved on large disks and applications were confined within the boundaries of a physical server, constantly running into resource crunches. Look at us now: evolving technology has turned the world upside down. Huge amounts of data can be saved on nail-sized chips, and the concept of virtualization has set a wonderful new direction.
In this article, I will introduce Kubernetes and cover its basics. Kubernetes is in high demand in the IT industry today. To know "what" Kubernetes is, let's first understand "why" Kubernetes!
In the traditional IT setup, applications ran on physical racks of servers. There was no way to estimate the resources an application would need, which led to under-commitment and sometimes over-commitment of resources, resulting in resource crunches and wastage respectively. One way to fix this problem is to run each application on its own server, but servers are not cheap, my friend, and scaling out that way would be an expensive idea.
As a solution, virtualization was introduced, and it proved to be a remedy for these traditional problems. Virtualization allows multiple applications to run on a single physical server: resources from the physical server are shared among different applications, yet the applications remain isolated from each other. Each application is installed on a VM, which is a machine with its own operating system running all the components of a physical machine, but installed on virtual hardware. This provides a fair level of security, since the information in one application cannot be accessed by another. The resource problem was solved.
With those problems solved, it's time to think about the next level of virtualization: containers. Containers are similar to VMs, but they have a more optimized and relaxed isolation model that lets applications share the operating system. Containers come in different sizes and shapes, but they are generally considered lightweight. A container has its own filesystem, share of CPU and memory, and process space, much like a VM, but with extra benefits:
They are loosely coupled and distributed, which makes them a good fit for microservices.
High efficiency and density.
Well suited for agile application creation and deployment of cloud-native applications.
Continuous development, integration, and delivery.
A natural fit for the DevOps model, where building and releasing is easy and quick.
Application-centric management.
and many more.
Now the actual question comes up: What is Kubernetes?
I read this definition on the Kubernetes website. They define it as:
"Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available. "
Fun fact: Kubernetes is also called K8s. Know why? K, then 8 letters (u b e r n e t e), then s.
What's up with K8s!
Containers provide a good way to bundle and run an application, but to run and manage the containers themselves, we need a tool. K8s is an ideal platform for hosting cloud-native applications that require rapid scaling. That said, K8s is not a container; it is the tool that manages the containers running your application and ensures there is no downtime. For example, when one container goes down, someone needs to deploy another one in its place, and this is where K8s comes into the picture. It provides a platform for running distributed applications and takes care of scaling and failover.
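To make that "no downtime" idea concrete, here is a minimal sketch of a Deployment, the K8s object that declares how many copies of a container should always be running. The name and image are hypothetical placeholders:

```yaml
# Hypothetical Deployment: K8s keeps 3 replicas of this pod running.
# If a container or its node fails, a replacement is created automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical name
spec:
  replicas: 3              # desired number of identical pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image would do here
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to K8s; from then on, K8s continuously reconciles reality against it.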
Below are some features of K8s as described on the K8s website:
Self-healing: K8s kills, restarts, or replaces containers that fail or don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready.
Network traffic control and load balancing: K8s can expose a container using its own IP address. If traffic to a container is high, K8s can load balance and distribute the network traffic so that the deployment stays stable.
Storage orchestration: K8s allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using K8s, and it can change the actual state to the desired state at a controlled rate. For example, you can automate K8s to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
Resource optimization: You provide K8s with a cluster of nodes that it can use to run containerized tasks and tell K8s how much CPU and memory each container needs; K8s then fits containers onto your nodes to make the best use of their resources.
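Several of these features map directly onto fields in a pod spec. Below is a sketch combining a user-defined health check (self-healing), resource declarations (optimization), and a Service for load balancing; all names, paths, and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app         # hypothetical name
  labels:
    app: probed-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:            # what the scheduler reserves for this container
        cpu: "250m"
        memory: "128Mi"
      limits:              # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"
    livenessProbe:         # user-defined health check; failures trigger a restart
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:        # traffic is withheld until this check passes
      httpGet:
        path: /
        port: 80
---
apiVersion: v1
kind: Service              # stable virtual IP, load balances across matching pods
metadata:
  name: probed-app
spec:
  selector:
    app: probed-app
  ports:
  - port: 80
    targetPort: 80
```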
Let's talk about its components now
As I mentioned, K8s is an open-source platform for managing and orchestrating containerized workloads and services. It facilitates both declarative configuration and automation, and it has a large, rapidly growing ecosystem. It automates Linux container operations and eliminates many of the manual processes involved in deploying and scaling containerized applications. You can cluster together groups of hosts running Linux containers, and K8s helps you manage those clusters easily and efficiently.
Whenever you deploy K8s, you are basically deploying a cluster. K8s follows a server-client architecture. The fundamental component of K8s is the node, and a bunch of nodes makes a cluster. A node takes one of two roles: master, or slave/worker. Nodes with the master role are called master nodes, and those with the worker role are called worker nodes. In a basic setup, a K8s cluster must have one or more master nodes and can have one or more worker nodes.
A K8s cluster consists of two main parts:
The K8s control plane, which has one or more master nodes
K8s nodes (worker nodes)
Worker nodes basically run the containerized applications: the worker node(s) host the Pods that are the components of the application workload. The master node(s) manage the worker nodes and the Pods in the cluster.
Note: A pod is a group of one or more containers that are deployed together on the same node. When pods contain just a single container, the word "pod" is often used interchangeably with "container".
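To make that note concrete, here is about the smallest pod manifest you can write; it wraps a single container (the name and image are just examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # hypothetical name
spec:
  containers:              # a pod may list several containers here;
  - name: hello            # this one wraps just a single container
    image: busybox:1.36
    command: ["sh", "-c", "echo Hello from the pod && sleep 3600"]
```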
Let's understand the components of the nodes now. Master and worker nodes each have different components, and it is essential to differentiate them. The image below shows the K8s control plane and K8s nodes.
Courtesy : Kubernetes
K8s Control Plane Components
Below are the components that should be running on all the master nodes of a K8s cluster:
Kube-apiserver: This component is the heart of the control plane, or master node. It exposes the Kubernetes API and acts as the front end of the cluster's control plane. Any interaction within the master node, and between master and worker nodes, goes through it.
Kube-controller-manager: This component is the decision maker, or you could say the brain, of the master node. The controller manager makes sure the Pods are healthy; if not, it takes the call to deploy a new one. For example, say you have 5 pods running and one goes down. Since the controller manager keeps a record of the desired number of Pods, it will remove the unhealthy pod and deploy a new one, possibly on a different node. It is also responsible for creating default accounts and API access tokens for new namespaces.
Kube-scheduler: As the name suggests, the scheduler watches for new Pods that are not yet assigned to any node and selects a node for them to run on. It is effectively a distributor of pods to nodes based on the pods' resource requirements.
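The scheduler's inputs live in the pod spec itself. Here is a sketch of the two most common ones, resource requests and a node selector; the label `disktype: ssd` is hypothetical and would have to actually exist on a node for this pod to be scheduled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod         # hypothetical name
spec:
  nodeSelector:
    disktype: ssd          # only nodes carrying this label are candidates
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:            # the scheduler only picks nodes with this much free
        cpu: "500m"
        memory: "256Mi"
```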
etcd: etcd is one of the most important components of the master node: a distributed database that stores the data used to manage the cluster. It is a consistent and highly available key-value store used as the backing store for all K8s cluster data.
Cloud-controller-manager: As you can see in the diagram, this component helps you connect your cluster to a cloud provider through its APIs. It is an optional component, yet an important one.
Below are the components that should be running on all the worker nodes of a K8s cluster:
Kubelet: This is an agent that runs on each worker node and makes sure the containers in Pods are running as they should. The point to note is that the kubelet doesn't manage containers that were not created by Kubernetes.
Kube-proxy: This component manages the network rules on nodes. It is a network proxy that runs on each worker node in the cluster and helps handle network traffic and load balancing.
Container runtime: Since K8s is a tool to manage containers, you need a container runtime engine to actually run them. The container runtime is the software responsible for running containers.
With this small piece of information, I will wrap up this article. I will post further "What is" and "How to" articles related to K8s soon. Stay tuned for more!
Happy Reading !