Kubernetes is a popular open-source platform for container orchestration, that is, for the management of applications built out of multiple, largely self-contained runtimes called containers. Containers became popular when the Docker containerization project launched in 2013.
Docker was very successful, but it lacked tooling for managing its containers: doing so manually is error-prone and takes a lot of human effort to manage and run services on container technology.
What is container orchestration?
Containers support a VM-like setup but with far less overhead and far greater flexibility. They have changed how people think about developing, deploying, and maintaining software. In a container architecture, the different services that make up a project are packaged into separate containers and deployed across a cluster of physical or virtual machines.
This gives rise to the need for container orchestration: a tool that automates the deployment, scaling, management, networking, and availability of container-based applications.
What is Kubernetes?
Kubernetes is an open-source container-orchestration tool that automates the deployment, scaling, and management of containerized applications. It was originally designed and developed by Google and is now maintained by the Cloud Native Computing Foundation.
Kubernetes is implemented in the Go language (its Google-internal predecessor, Borg, was written in C++). The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.
The Kubernetes master is the main controlling unit of the cluster, managing its workload and directing communication across the system. The control plane, or master node, contains the following components:
- etcd – A consistent key-value store that holds all the configuration data of the cluster, representing the overall state of the cluster at any given point in time.
- API server – A key component that serves the Kubernetes API using JSON over HTTP, providing both the internal and external interface to Kubernetes.
- Scheduler – The component that decides which node each newly created pod will be launched on, based on resource availability and constraints.
- Controller manager – Runs controllers that communicate with the API server to create, update, and delete the resources they manage, driving the cluster toward its desired state.
A node, also known as a worker node, is a machine where containers are deployed. Every node in the cluster must run a container runtime such as Docker, along with the kubelet agent and kube-proxy.
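As a concrete illustration of what the scheduler places onto a worker node, a minimal Pod manifest might look like the following sketch (the pod name, labels, and image are illustrative assumptions, not taken from the text):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # hypothetical name
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25 # any container image would do here
      ports:
        - containerPort: 80
```

Submitting this manifest to the API server (e.g. with `kubectl apply -f pod.yaml`) causes the scheduler to pick a node, after which that node's kubelet asks the container runtime to start the container.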
Key features of Kubernetes
- Automated rollouts and rollbacks – Kubernetes rolls out changes to an application or its configuration while monitoring application health, ensuring it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes rolls the changes back for you.
- Service discovery and load balancing – Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance traffic across them.
- Storage orchestration – Through PersistentVolumes (PV) and PersistentVolumeClaims (PVC), Kubernetes mounts the persistent storage of your choice, whether local storage, a public cloud provider such as AWS or GCP, or a network filesystem such as NFS.
- Horizontal scaling – Applications can be scaled up or down, manually or automatically on the basis of CPU usage.
- Self-healing – Kubernetes relaunches a pod if the old pod goes down for some reason, and it also provides health checks for the pods.
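Several of the features above come together in a single Deployment manifest. The following is a minimal sketch, with hypothetical names and an assumed HTTP health-check endpoint, showing rolling updates (automated rollouts), a replica count (self-healing), and a liveness probe (health checks):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy          # hypothetical name
spec:
  replicas: 3                # self-healing keeps 3 pods running
  strategy:
    type: RollingUpdate      # automated rollout, replacing pods gradually
    rollingUpdate:
      maxUnavailable: 1      # never take down all instances at once
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25
          livenessProbe:     # health check used for self-healing
            httpGet:
              path: /        # assumed endpoint for this image
              port: 80
```

If a rollout misbehaves, `kubectl rollout undo deployment/demo-deploy` reverts to the previous revision, which is the rollback facility described above.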
Case Study of Pinterest
Pinterest is an image-sharing and social media service designed to enable saving and discovery of information on the internet using images and, on a smaller scale, animated GIFs and videos, in the form of pinboards. It had over 400 million monthly active users as of August 2020.
After eight years in existence, Pinterest had grown to some 1,000 microservices, multiple layers of infrastructure, and a diverse set of tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
With this vision in mind, the first phase involved moving services to Docker containers. As these services went into production in early 2017, the team shifted its focus to orchestration, to help create efficiencies and manage the services in a decentralized way. After testing a lot of tools, Pinterest went with Kubernetes as its orchestration tool.
By moving to Kubernetes, the Pinterest team was able to build on-demand scaling and new failover policies, in addition to simplifying the overall deployment and management of complicated pieces of infrastructure such as Jenkins.
In early 2018, the team began onboarding its first use case onto Kubernetes: Jenkins workloads. By the end of Q1 2018, the team had successfully migrated the Jenkins master to run natively on Kubernetes, and had also collaborated on the Jenkins Kubernetes Plugin to manage the lifecycle of workers. During Pinterest's peak hours, the platform runs thousands of pods on a few hundred nodes.
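On-demand scaling of the kind described in this case study is typically expressed in Kubernetes with a HorizontalPodAutoscaler. The sketch below uses hypothetical names and thresholds purely for illustration; it is not Pinterest's actual configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa         # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deploy    # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10        # ceiling for peak-hour load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU exceeds 70%
```

The controller then grows or shrinks the Deployment between the two replica bounds as CPU utilization crosses the target, which is what "on-demand scaling" amounts to in practice.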