Pods are arguably the most fundamental concept in Kubernetes. They serve as the smallest deployable units of computing that you can create and manage within your cluster. If you are working with containers in a Kubernetes environment, understanding how Pods function is essential to effective application deployment and scaling.
What Exactly Is a Kubernetes Pod?

A Pod (like a pod of whales or a pea pod) is defined as a group of one or more containers, along with shared storage resources, network resources, and a specification detailing how the containers should run.
In essence, a Pod models an application-specific "logical host". Its contents are always co-located and co-scheduled, running within a shared context. This shared context is created using a set of Linux namespaces, cgroups, and potentially other facets of isolation: the same mechanisms that isolate a container.
It is important to remember that a Pod is not a process. Instead, it is the environment for running containers. Kubernetes manages Pods rather than managing containers directly. To ensure Pods can run, a container runtime must be installed on every node in the cluster.
The Two Main Uses of Pods

Pods are typically employed in two primary ways within a Kubernetes cluster:
- Pods that run a single container: This is the most common Kubernetes use case, known as the “one-container-per-Pod” model. Here, you can envision the Pod simply as a wrapper around that single container.
- Pods that run multiple containers that need to work together: This is a more advanced pattern that should only be used when containers are tightly coupled. A single Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These containers form a single cohesive unit.
An example of the multi-container approach is having one container act as a web server for files stored in a shared volume, while a separate sidecar container handles refreshing or updating those files from a remote source.
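The web-server-plus-sidecar pattern described above can be sketched as a Pod manifest. This is an illustrative example, not from the source: the names, images, and the update loop are assumptions, and the `emptyDir` volume is one common way to give both containers shared scratch storage.

```yaml
# Hypothetical two-container Pod: a web server plus a helper that
# refreshes the served files. Names and images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sync
spec:
  volumes:
  - name: shared-content
    emptyDir: {}               # scratch volume shared by both containers
  containers:
  - name: web-server
    image: nginx:1.14.2
    volumeMounts:
    - name: shared-content
      mountPath: /usr/share/nginx/html
  - name: content-sync         # helper container that rewrites the files
    image: alpine:3.19
    command: ["sh", "-c", "while true; do date > /content/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-content
      mountPath: /content
```

Because both containers run in the same Pod, they see the same volume contents and share the same network namespace, so the helper could also reach the web server via localhost.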
Shared Resources: Networking and Storage

A core feature of Pods is the native provision of two kinds of shared resources for their constituent containers: networking and storage.
- Pod Networking: Each Pod is assigned a unique IP address. Every container within that Pod shares the network namespace, including that IP address and the associated network ports. Inside the Pod, containers can communicate with one another using localhost. When communicating with entities outside the Pod, containers must coordinate their use of the shared network resources.

- Storage in Pods: A Pod can specify a set of shared storage volumes. All containers in the Pod can access these shared volumes, allowing them to share data. Volumes also ensure that persistent data in the Pod survives if one of the containers within the Pod needs to be restarted.
Managing Pods: Workload Resources and Templates
You will rarely create individual Pods directly in Kubernetes. Pods are designed to be ephemeral and disposable entities.
Instead of direct management, Pods are typically created and managed using workload resources (and their associated controllers). These controllers handle crucial functions like replication, rollout, and automatic healing in the case of Pod failure.
Key workload resources used to manage one or more Pods include:
- Deployment
- StatefulSet (if your Pods need to track state)
- Job
- DaemonSet
If you need to scale your application horizontally (providing more overall resources by running more instances), you should use multiple Pods, one for each instance. This is typically referred to as replication and is managed by a workload resource and its controller. For example, if a Node fails, a controller will notice the stopped Pods and create a replacement Pod, which the scheduler places onto a healthy Node.
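The replication described above is usually expressed as a Deployment. A minimal sketch, with illustrative names and replica count:

```yaml
# Hypothetical Deployment managing three replica Pods.
# The controller recreates Pods if a Node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # horizontal scaling: one Pod per instance
  selector:
    matchLabels:
      app: nginx
  template:                    # Pod template used to stamp out each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```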
Pod Templates
Controllers for workload resources (like Deployments) create and manage Pods based on a pod template. The PodTemplate is a specification for creating Pods and is included within the definition of the workload resource.
Here is an example of a simple Pod specification, which could equally be embedded within a workload resource template (such as a Job):
pods/simple-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```
Modifying the Pod template of a workload resource does not directly affect existing Pods. Instead, the resource needs to create replacement Pods that utilize the updated template. For instance, if you edit a StatefulSet’s Pod template, the controller starts creating new Pods based on the updated template, eventually replacing all the old Pods.
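To make the relationship concrete, here is a sketch of how a Pod specification sits inside a workload resource's `.spec.template` field, using a Job as an example (the name, image, and command are illustrative):

```yaml
# Hypothetical Job whose .spec.template embeds a Pod specification.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:                    # this is the Pod template
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: ["sh", "-c", "echo Hello"]
      restartPolicy: OnFailure # Job Pods must not use restartPolicy: Always
```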
Advanced Pod Features and Configuration
Init and Sidecar Containers
In addition to the main application containers, a Pod can contain init containers that run during Pod startup. By default, init containers must run and complete successfully before the application containers begin.
Furthermore, Kubernetes allows the use of sidecar containers that provide auxiliary services to the main application. Since Kubernetes v1.33 [stable], the SidecarContainers feature enables you to specify restartPolicy: Always for init containers, treating them as sidecars that remain running for the entire lifetime of the Pod, starting up before the main application containers. You can also inject ephemeral containers for debugging a running Pod.
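A sketch combining both helper types, with illustrative names, images, and commands: an ordinary init container that must complete before the app starts, and a sidecar-style init container with restartPolicy: Always that keeps running alongside it.

```yaml
# Hypothetical Pod with an init container and a sidecar (v1.33+ stable).
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helpers
spec:
  initContainers:
  - name: wait-for-db          # ordinary init container: runs to completion first
    image: busybox:1.28
    command: ["sh", "-c", "echo waiting; sleep 2"]
  - name: log-shipper          # sidecar: keeps running for the Pod's lifetime
    image: busybox:1.28
    command: ["sh", "-c", "tail -f /dev/null"]
    restartPolicy: Always      # this marks the init container as a sidecar
  containers:
  - name: app
    image: nginx:1.14.2
```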
Pod Security and Diagnostics
You can set security constraints on Pods and their containers using the securityContext field in the Pod specification. This provides granular control over actions, such as:
- Dropping specific Linux capabilities.
- Forcing processes to run as a non-root user or a specific user ID.
- Setting a specific seccomp profile.
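The constraints listed above map onto the securityContext field like this (a minimal sketch; the Pod name, user ID, and image are illustrative):

```yaml
# Hypothetical hardened Pod illustrating Pod- and container-level securityContext.
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:             # Pod-level settings apply to all containers
    runAsNonRoot: true
    runAsUser: 1000            # force a specific non-root user ID
    seccompProfile:
      type: RuntimeDefault     # runtime's default seccomp profile
  containers:
  - name: app
    image: nginx:1.14.2
    securityContext:           # container-level settings
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]          # drop all Linux capabilities
```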
The kubelet periodically performs diagnostics on a container using a probe. These probes can involve actions like executing a command (ExecAction), checking a TCP socket (TCPSocketAction), or making an HTTP request (HTTPGetAction).
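The three probe action types can be sketched in a container spec as follows; the paths, ports, and timings are illustrative assumptions, and in practice you would pick whichever probe and action fit your application.

```yaml
# Hypothetical container snippet showing the three probe action types.
containers:
- name: app
  image: nginx:1.14.2
  livenessProbe:               # HTTPGetAction: HTTP request against the container
    httpGet:
      path: /healthz
      port: 80
    periodSeconds: 10
  readinessProbe:              # TCPSocketAction: check that a TCP port accepts connections
    tcpSocket:
      port: 80
  startupProbe:                # ExecAction: run a command inside the container
    exec:
      command: ["cat", "/tmp/ready"]
    failureThreshold: 30
```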
Static Pods
Static Pods are unique because they are managed directly by the kubelet daemon on a specific node, bypassing the API server for observation. While most Pods are managed by the control plane (e.g., via a Deployment), the kubelet directly supervises each Static Pod and restarts it if it fails. Their main purpose is running a self-hosted control plane: using the kubelet to supervise individual control plane components.
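A static Pod is just an ordinary Pod manifest placed in the kubelet's manifest directory (commonly /etc/kubernetes/manifests, configured via the kubelet's staticPodPath setting) rather than applied through the API server. A minimal illustrative sketch:

```yaml
# Hypothetical static Pod manifest, saved on the node itself, e.g. as
# /etc/kubernetes/manifests/static-web.yaml; the kubelet picks it up directly.
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```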
Pods are the essential building blocks for deploying applications in Kubernetes. By wrapping tightly coupled containers, sharing network and storage resources, and leveraging controllers for replication and self-healing, they provide the necessary environment for resilient cloud-native applications.
Understanding Pods is like understanding the passenger cabin on an airplane: it’s the fundamental unit that houses and protects the essential components (containers) and provides the shared resources (air, light, connectivity) necessary for the journey (application execution).