
Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation.
Read this blog post to understand the basics of Kubernetes architecture.

Content
In this blog post, I will cover the basic components of Kubernetes architecture and show how these different components work together to create a well-functioning Kubernetes deployment.
Things covered today –
- Nodes
- Clusters
- Persistent Volume
- Container
- Pods
- Deployments

Nodes
In Kubernetes, a node is a single machine in a cluster. It can be a physical machine or a virtual machine running in the cloud. Each node runs the components needed to host containers, including a container runtime and the kubelet, an agent that communicates with the Kubernetes control plane.
The primary role of a node is to host and manage containers. When a deployment is created in Kubernetes, the control plane schedules pods (the smallest deployable units in Kubernetes) to run on nodes. The node is responsible for pulling the required images, creating containers, and ensuring that the containers are running as specified in the deployment. In addition to hosting containers, nodes also perform network communication with other nodes and the control plane, and they run services such as the Kubernetes DNS server.
Each node in a Kubernetes cluster is assigned a unique identifier, and nodes can be labeled and annotated to provide information about the node’s properties and capabilities, which can be used to control the scheduling and placement of pods.
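The label-based scheduling described above can be sketched with a minimal manifest. The example below is illustrative: it assumes a node has already been labeled `disktype=ssd` (for instance with `kubectl label nodes <node-name> disktype=ssd`), and the pod name and image are hypothetical.

```yaml
# Hypothetical pod that may only be scheduled on nodes
# carrying the label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: ssd-demo        # illustrative name
spec:
  nodeSelector:
    disktype: ssd       # must match a label set on a node
  containers:
    - name: app
      image: nginx:1.25 # example image
```

If no node carries a matching label, the pod stays in the Pending state until one does.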

Clusters
A Kubernetes cluster is a set of nodes (physical or virtual machines) that run containerized applications, managed by Kubernetes. The cluster is designed to be highly available and scalable, and it provides the necessary infrastructure to deploy, manage, and scale containers.
At the center of a Kubernetes cluster is the control plane, which consists of a set of components that are responsible for managing the state of the cluster. The control plane components include the API server, etcd (the cluster's key-value store), the controller manager, and the scheduler. The API server is the central component of the control plane and exposes the Kubernetes API, which is used by other components, such as kubectl and other client tools, to interact with the cluster. The controller manager and scheduler work together to manage the state of the cluster and ensure that the desired state is maintained, such as ensuring that the desired number of replicas of a deployment are running on nodes.
Each node in the cluster runs an agent called kubelet, which communicates with the control plane to receive instructions and report the status of containers running on the node. The node also runs a container runtime, such as containerd or CRI-O, to manage containers.
In a Kubernetes cluster, containers are organized into higher-level abstractions called pods, which are the smallest deployable units. Pods provide a way to group one or more containers that should be deployed together on the same node. Pods can be managed using higher-level abstractions, such as deployments, services, and stateful sets, which provide more advanced features such as scaling, self-healing, and network communication between pods.
In summary, a Kubernetes cluster is a set of nodes that run containers, managed by the control plane. The control plane, nodes, and containers work together to provide a highly available, scalable infrastructure for running applications.

Persistent Volume
Applications running on your cluster are not guaranteed to stay on a particular node, so data cannot simply be written to an arbitrary location in the local file system. If an application stores data in a local file for later use and is then moved to a new node, the file will not be where the application expects it. As a result, the traditional local storage on each node is treated as a temporary cache, and any data saved locally cannot be assumed to persist.
In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been dynamically provisioned or statically provisioned by an administrator. The PV provides a way for pods to access and persist data beyond the lifetime of a single pod.
A Persistent Volume Claim (PVC) is used to request a specific amount of storage from a PV. When a PVC is created, Kubernetes binds it to a PV that satisfies the requested storage capacity and access modes. Once a PVC is bound to a PV, the PVC can be mounted as a volume in a pod. The data in the volume is then accessible from within the pod and persists even if the pod is deleted.
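The PVC-to-pod flow described above can be sketched with a minimal manifest. The claim name, storage size, image, and mount path below are illustrative assumptions, not a prescribed setup.

```yaml
# A claim requesting 1Gi of storage with ReadWriteOnce access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod that mounts the bound volume at /var/data.
apiVersion: v1
kind: Pod
metadata:
  name: data-demo
spec:
  containers:
    - name: app
      image: nginx:1.25       # example image
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim # must match the PVC above
```

Anything the container writes under /var/data survives deletion of the pod, because it lives on the bound PV rather than on the node's local disk.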
PVs can be backed by a variety of storage systems, including local disk, network attached storage (NAS), and cloud-based storage solutions. The PV provides an abstraction for the underlying storage, so that the pods do not need to know the specifics of how the storage is implemented.
PVs can be statically provisioned by an administrator, which means that the storage is created ahead of time and is available for use by PVCs. Alternatively, PVs can be dynamically provisioned by the cluster, using a storage class, when a PVC requests storage that cannot be satisfied by a statically provisioned PV. The storage class defines the parameters for dynamically provisioning storage, such as the type of storage and the capacity.
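A storage class that enables dynamic provisioning might look like the sketch below. The provisioner shown is the AWS EBS CSI driver, used here only as an example; the class name and parameters are illustrative and vary by cloud provider.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                  # illustrative name
provisioner: ebs.csi.aws.com      # example CSI provisioner
parameters:
  type: gp3                       # provider-specific parameter
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

A PVC can then request this class by setting `storageClassName: fast-ssd`, and the cluster will create a matching PV on demand.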
In summary, Persistent Volumes and Persistent Volume Claims provide a way for pods to access and persist data in a Kubernetes cluster, backed by a variety of storage systems. PVs can be statically provisioned or dynamically provisioned using a storage class.

Container
A container in Kubernetes is a standalone executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, environment variables, and system tools. Containers provide a way to package and isolate applications, making them portable and easier to manage.
In Kubernetes, containers are typically packaged as images and stored in a container registry, such as Docker Hub or Google Container Registry. The images can be pulled from the registry and run on a node in a Kubernetes cluster.
Containers are the fundamental building blocks for deploying applications in Kubernetes. Applications are deployed as pods, which are the smallest deployable units in Kubernetes. Pods can contain one or more containers, and they provide a way to group containers that should be deployed together on the same node.

Pods
In Kubernetes, a pod is the smallest and simplest unit in the object model for deploying applications. A pod represents a single instance of a running process in a cluster.
A pod can contain one or more containers, and all containers in a pod run on the same node and share the same network namespace. This means that the containers can communicate with each other using localhost and share the same file system.
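A minimal sketch of a two-container pod follows. Because both containers share the pod's network namespace, the helper container can reach the main container over localhost; the names, images, and command are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar    # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25     # example image, listens on port 80
    - name: sidecar
      image: busybox:1.36   # example helper container
      # The sidecar reaches the web container at localhost:80,
      # since both containers share the pod's network namespace.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```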
Pods provide a way to group containers that should be deployed together on the same node. For example, you might deploy a web server alongside a helper container, such as a log shipper or proxy sidecar, within the same pod so that they can communicate with each other over localhost. (Tightly coupled helpers belong in one pod; independent services such as a database are usually better deployed as separate pods so they can scale independently.)
Pods are created and managed by higher-level abstractions, such as Deployments and ReplicaSets, which provide features such as scaling, rolling updates, and self-healing. The Kubernetes control plane is responsible for ensuring that the desired state of the cluster, as specified by the higher-level abstractions, is maintained.
Pods can also be exposed to the network using services, which provide network communication and load balancing for pods. Services can be used to expose pods to the network, so that they can be accessed by external clients.
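The service-based exposure described above can be sketched as follows. The selector and port values are illustrative and assume pods labeled `app: web` that listen on port 80.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service   # illustrative name
spec:
  selector:
    app: web          # matches pods carrying this label
  ports:
    - port: 80        # port exposed by the service
      targetPort: 80  # container port on the pods
  type: ClusterIP     # internal by default; NodePort or LoadBalancer expose it externally
```

The service load-balances traffic across all healthy pods whose labels match the selector.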
In summary, pods in Kubernetes are the smallest and simplest unit in the object model for deploying applications. Pods represent a single instance of a running process in a cluster and provide a way to group containers that should be deployed together on the same node. Pods are created and managed by higher-level abstractions and can be exposed to the network using services.

Deployments
In Kubernetes, a deployment is a higher-level abstraction that provides a declarative way to manage the desired state of a set of pod replicas. A deployment manages a group of identical replicas of a pod, ensuring that a specified number of replicas are running and available at all times.
A deployment is defined using a YAML or JSON file that specifies the desired state of the deployment, such as the number of replicas and the image to be used for the containers in the pods. The deployment can be created using the Kubernetes API or a command-line tool like kubectl.
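A minimal deployment manifest matching that description might look like the sketch below; the names, labels, and image are illustrative. It could be applied with `kubectl apply -f deployment.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment      # illustrative name
spec:
  replicas: 3               # desired number of pod replicas
  selector:
    matchLabels:
      app: web              # must match the pod template labels below
  template:                 # pod template used for every replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # example image
          ports:
            - containerPort: 80
```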
Once a deployment is created, the Kubernetes control plane ensures that the desired state of the deployment is maintained. If a node fails or becomes unavailable, the control plane will automatically reschedule the replicas on another node to ensure high availability.
The deployment also provides a way to perform rolling updates, which allows you to update the image for the containers in the pods without downtime. Rolling updates can be performed by updating the image in the deployment definition and applying the changes using kubectl. The Kubernetes control plane will then gradually roll out the update to the replicas, ensuring that the desired number of replicas are always running and available.
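The pace of such a rollout can be tuned in the deployment spec. The fragment below sketches the standard `RollingUpdate` strategy fields; the values chosen are illustrative.

```yaml
# Fragment of a Deployment spec controlling rollout pace.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1 # at most one replica down during the update
      maxSurge: 1       # at most one extra replica above the desired count
```

With these settings, the control plane replaces replicas one at a time, keeping the application available throughout the update.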
In summary, a deployment in Kubernetes is a higher-level abstraction that provides a declarative way to manage the desired state of a set of replicas of a pod. Deployments provide features such as scaling, rolling updates, and self-healing, and they ensure that a specified number of replicas are running and available at all times.
Conclusion
Kubernetes provides a highly scalable and flexible platform for deploying and managing containerized applications. By abstracting away the underlying infrastructure, it allows developers to focus on writing code and deploying applications, while the Kubernetes system handles the management and scaling of the underlying resources.
In conclusion, understanding the basic architecture of Kubernetes is essential for anyone looking to deploy and manage containerized applications at scale. By knowing the key components and their responsibilities, you can more effectively use Kubernetes to automate and manage your application infrastructure.
Disclaimer
This article is not endorsed by Salesforce, Google, or any other company in any way. I shared my knowledge on this topic in this blog post. Please always refer to Official Documentation for the latest information.