Kubernetes Controllers Comparison Table
In Kubernetes, controllers manage Pod lifecycles and are the core workload abstractions. Each controller type has its own use cases and behavior; the table below compares the main ones in detail.
Controller Comparison Table
| Dimension | Deployment | StatefulSet | DaemonSet | Job |
|---|---|---|---|---|
| Primary use | Manage stateless apps (web services, APIs) | Manage stateful apps (databases, message queues) | Run a daemon on every node (monitoring, logging) | Run one‑off batch tasks |
| App state | Stateless | Stateful | Usually stateless | Usually stateless |
| Pod identity | No fixed identity, replaceable | Each Pod keeps a stable, permanent ID | Bound to node | Temporary identity |
| Storage | No persistent storage needed, can use ephemeral volumes | Requires stable persistent storage (PV/PVC, dynamic or static) | Optional, usually HostPath or ephemeral | Usually not needed, mount volumes if required |
| Deploy & scale | Unordered deploy, supports horizontal scaling | Ordered, one‑by‑one deploy/scale/delete (0,1,2…) | Auto‑deploys to new nodes, scales with node count | Controls parallelism and completions, runs to completion |
| Update strategy | RollingUpdate, Recreate; configurable maxSurge/maxUnavailable | RollingUpdate, supports partition or OnDelete | RollingUpdate or OnDelete | Not applicable; Pod template immutable after creation |
| Rollback | Supported via `kubectl rollout undo`; revision history kept per revisionHistoryLimit | Supported via ControllerRevisions (`kubectl rollout undo`) | Supported via ControllerRevisions (`kubectl rollout undo`) | Not applicable |
| Failure handling | Auto restart/recreate failed Pods | At most one Pod with same identity runs; manual recovery needed | Node failure removes Pod; recreated on node recovery/new node | Retry on failure (backoffLimit) |
| Key features | Declarative updates, versioning/rollback, pause/resume | Ordered operations, stable network/storage, Headless Service | Node affinity, auto‑deploy on new nodes, tolerates taints | Retries, parallel execution, optional TTL cleanup |
| Examples | Nginx, Tomcat, NodeJS apps | MySQL, Redis, Kafka, ETCD | Fluentd, Prometheus Node Exporter, network plugins | Data migration, batch compute, backups, ETL |
| Best practice | Default choice for most stateless apps | Use only when you need stable identity or ordering | For infra and node‑level services | For tasks that terminate (not daemons) |
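To make the table concrete, here is a minimal sketch (in Python, since the comparison itself is tool-agnostic) that assembles a hypothetical Deployment manifest as a plain dict, mirroring the "Update strategy" row: RollingUpdate with configurable maxSurge/maxUnavailable. The names and image are placeholder assumptions, not from the text above.

```python
# Illustrative sketch: build a minimal Deployment manifest as a plain
# Python dict. The app name and image are hypothetical placeholders.
def make_deployment(name, image, replicas=3):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            # Mirrors the "Update strategy" row: RollingUpdate with
            # configurable surge/unavailability limits.
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0},
            },
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = make_deployment("web", "nginx:1.27")
print(manifest["spec"]["strategy"]["type"])  # RollingUpdate
```

Serialized to YAML, a dict like this could be applied with `kubectl apply -f`; the point here is only the shape of the spec.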
Controller Hierarchy
Deployment → ReplicaSet → Pod: This is the most common pattern for managing stateless apps. Users operate on the Deployment object to declare desired state (image version, replica count, etc.). The Deployment doesn't manage Pods directly; it creates and manages ReplicaSets. When the Deployment is updated, it creates a new ReplicaSet and migrates Pods from the old one to the new one in a controlled manner (a rolling update). This enables versioning and rollback, while each ReplicaSet focuses on maintaining a specific number of Pod replicas. In other words, Deployment layers declarative updates and history management on top of ReplicaSet.
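The old-to-new migration can be sketched as a toy simulation (this is not the real controller logic, just the surge-then-retire idea): on each step one new-version Pod is added before one old-version Pod is removed, so capacity never drops below the desired count.

```python
# Toy simulation of a Deployment rolling update: a new ReplicaSet is
# created and scaled up while the old one is scaled down, one Pod at a
# time. Image names are hypothetical.
class ReplicaSet:
    def __init__(self, image, replicas):
        self.image, self.replicas = image, replicas

def rolling_update(old_rs, new_image, desired):
    new_rs = ReplicaSet(new_image, 0)
    steps = []
    while new_rs.replicas < desired:
        new_rs.replicas += 1          # surge: add one new-version Pod
        if old_rs.replicas > 0:
            old_rs.replicas -= 1      # then retire one old-version Pod
        steps.append((old_rs.replicas, new_rs.replicas))
    return new_rs, steps

old = ReplicaSet("nginx:1.26", 3)
new, steps = rolling_update(old, "nginx:1.27", 3)
print(steps)  # [(2, 1), (1, 2), (0, 3)]
```

Note that the old ReplicaSet ends at zero replicas but is not deleted; keeping it around (up to revisionHistoryLimit) is what makes rollback possible.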
CronJob → Job → Pod: CronJob schedules Job objects according to a cron expression. Job executes one‑off tasks by creating one or more Pods and ensuring they complete successfully. CronJob doesn’t manage Pods directly; it only triggers Jobs at the right time. This layering separates scheduling from execution.
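The scheduling/execution split can be illustrated with a small sketch of the execution half: a Job-like runner that retries a task up to a backoff limit, which is all the "scheduler" side needs to trigger. The flaky task below is a hypothetical example, not anything from the text.

```python
# Sketch of Job-style retry semantics: the Job (not the CronJob) owns
# retries, bounded by a backoff limit, and ends in Complete or Failed.
def run_job(task, backoff_limit=3):
    for _attempt in range(backoff_limit + 1):
        try:
            return task()
        except Exception:
            continue  # Pod failed; Job creates a replacement attempt
    return "Failed"

# Hypothetical task that fails twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "Complete"

print(run_job(flaky))  # Complete
```

A CronJob analogue would do nothing but call `run_job` when the cron expression fires, which is exactly the layering described above.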
StatefulSet and DaemonSet manage Pods directly — they don’t have an intermediate controller because their Pod management logic (stable identity, node binding) is their core capability.
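That core capability, stable ordinal identity with ordered operations, can be sketched as follows (a simplification of StatefulSet behavior; the `mysql` name is a placeholder): Pods are named `name-0`, `name-1`, …, created in ascending order and removed in descending order.

```python
# Sketch of StatefulSet-style identity: Pods get stable ordinal names,
# are created in order (0, 1, 2, ...) and deleted in reverse order.
def scale(name, current, desired):
    ops = []
    while current < desired:
        ops.append(("create", f"{name}-{current}"))
        current += 1
    while current > desired:
        current -= 1
        ops.append(("delete", f"{name}-{current}"))
    return ops

print(scale("mysql", 0, 3))
# [('create', 'mysql-0'), ('create', 'mysql-1'), ('create', 'mysql-2')]
print(scale("mysql", 3, 1))
# [('delete', 'mysql-2'), ('delete', 'mysql-1')]
```

Because `mysql-0` always comes back under the same name (and the same PVC), peers and clients can address it stably, which is the property a Deployment's interchangeable Pods cannot provide.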