# Kubernetes vs. Docker: When You Need an Orchestrator and When You Don't
Let’s clear something up: Docker and Kubernetes are not competitors. They solve fundamentally different problems. Docker packages and runs containers. Kubernetes orchestrates them at scale. Comparing them is like comparing a single server to a data center — one is a building block, the other is a system for managing many of those blocks.
Yet we keep seeing teams adopt Kubernetes for a two-service app running on a single VM. The result is predictable: months of YAML wrangling, a bloated infrastructure bill, and engineers spending more time on the platform than on the product.
Here’s how to make the right call.
```mermaid
graph TD
    A[New Project] --> B{How many services?}
    B -->|1-10| C{Need HA / auto-scaling?}
    B -->|10+| F[Kubernetes]
    C -->|No| D{Team size < 8?}
    C -->|Yes| F
    D -->|Yes| E[Docker Compose]
    D -->|No| F
```
## Docker: The Container Runtime
Docker gives you a portable, reproducible unit of deployment. You define your app’s environment in a Dockerfile, build an image, and run it anywhere Docker is installed. That’s the core value proposition — consistency from laptop to production.
Docker Compose extends this to multi-container setups on a single host. A `docker-compose.yml` file lets you define your app, its database, a cache layer, and a reverse proxy in one place:
```yaml
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme  # required by the postgres image
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
volumes:
  pgdata:
```
That’s it. Run `docker compose up` and your entire stack is running. Deployments are `docker compose pull && docker compose up -d`. Rollbacks are switching an image tag. For a surprising number of workloads, this is all you need.
## Kubernetes: The Orchestrator
Kubernetes manages containers across multiple nodes. It handles scheduling, scaling, networking between services, health checks, rolling deployments, and automatic restarts. It’s a distributed system for running distributed systems.
The same app deployed to Kubernetes looks quite different:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:1.2.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```
And that’s just the Deployment and Service. You still need an Ingress, a Secret, probably a HorizontalPodAutoscaler, and equivalent manifests for your database and cache — or more likely, managed services for those. The operational surface area is an order of magnitude larger.
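To make that extra surface area concrete, here's a sketch of the Ingress and HorizontalPodAutoscaler you'd typically add next. The hostname, ingress class, and scaling thresholds are illustrative placeholders, not values from the example above:

```yaml
# ingress.yaml — hostname and class name are illustrative
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
---
# hpa.yaml — scale on CPU utilization, between 3 and 10 replicas
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Each of these objects has its own failure modes and its own documentation to learn — that's the complexity cost in miniature.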
## When Docker Compose Is Perfectly Fine
Docker Compose is the right choice more often than people admit. It works well when:
- You have fewer than ~10 services. If your architecture fits on one or two machines, Compose handles it cleanly.
- Your team is small. A team of 3-5 engineers doesn’t need a dedicated platform. They need to ship features.
- You’re running internal tools or dev environments. Staging environments, CI runners, internal dashboards — these don’t need five-nines uptime.
- Your SLA is “reasonable.” If you can tolerate 5 minutes of downtime during a deployment, Compose with a simple blue-green script is enough.
- You’re building an MVP or early-stage product. Validated learning matters more than perfect infrastructure. Spend your time on the product.
A well-structured Compose setup with proper health checks, volume management, and a basic deployment script will take you further than most people think.
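As a sketch of what "proper health checks" looks like in Compose: the snippet below gates the app's startup on the database passing its check. The `pg_isready` command ships with the official postgres image; interval and retry values are illustrative:

```yaml
# compose healthcheck sketch — app waits for a healthy database
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy  # don't start app until db is ready
```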
## When You Actually Need Kubernetes
Kubernetes earns its complexity when your requirements demand it:
- Horizontal scaling under variable load. If your traffic spikes 10x during peak hours and you need automatic scaling, Kubernetes does this natively.
- Multi-team, multi-service architectures. When you have 20+ microservices owned by different teams, you need standardized deployment, networking, and observability. Kubernetes provides the control plane for that.
- High availability is non-negotiable. Self-healing, rolling deployments with zero downtime, pod disruption budgets, multi-zone scheduling — Kubernetes was built for workloads where downtime costs real money.
- You need advanced traffic management. Canary deployments, traffic splitting, service mesh integration — these patterns are first-class in the Kubernetes ecosystem.
- Compliance or multi-tenancy requirements. Namespace isolation, RBAC, network policies, and resource quotas give you the guardrails for running multiple workloads securely on shared infrastructure.
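As one small example of those multi-tenancy guardrails, a ResourceQuota caps what a single team's namespace can consume on shared infrastructure. The namespace name and limits here are hypothetical:

```yaml
# quota.yaml — illustrative per-namespace resource caps
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"       # total CPU requests across the namespace
    requests.memory: 20Gi    # total memory requests
    pods: "50"               # hard cap on pod count
```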
If you’ve hit these requirements and want to go deeper, we wrote about what running Kubernetes in production actually looks like in our post on Kubernetes production lessons — covering the operational patterns that matter once you’ve committed to the platform.
## The Complexity Cost
Every abstraction has a price. For Kubernetes, that price is steep:
- Operational overhead. Someone on your team needs to understand networking (CNI, Services, Ingress), storage (PV/PVC, CSI drivers), RBAC, and the control plane components. This is a full-time job, not a side task.
- Debugging is harder. When a pod won’t schedule, you’re reading events across Nodes, Deployments, ReplicaSets, and Pods. When networking breaks, you’re tracing through kube-proxy, CoreDNS, and your CNI plugin.
- YAML proliferation. A moderately complex app can easily generate thousands of lines of Kubernetes manifests. Helm charts and Kustomize help, but they add their own complexity layers.
- Cost. The control plane, extra nodes for overhead, monitoring stack (Prometheus, Grafana, Loki), and the engineering hours to maintain it all. For a small workload, this overhead can exceed the cost of the workload itself.
Don’t adopt Kubernetes because it’s the industry default. Adopt it because your specific requirements justify the investment. As we discussed in our infrastructure automation post, the right level of automation depends on what you’re actually managing — over-engineering your platform is just as costly as under-engineering it.
## The Decision Framework
Here’s a practical way to think about it:
| Factor | Docker Compose | Kubernetes |
|---|---|---|
| Number of services | 1-10 | 10+ |
| Team size | 1-8 engineers | 8+ or dedicated platform team |
| Scaling needs | Predictable, mostly vertical | Variable, horizontal |
| Uptime SLA | 99.5% or lower | 99.9%+ |
| Deployment frequency | Daily or less | Multiple times per day |
| Traffic pattern | Steady or low | Spiky, unpredictable |
| Budget for infra ops | Minimal | Dedicated headcount |
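To make the table's logic explicit, here is a toy encoding of those heuristics as a small function. The thresholds mirror the table above; treat it as a rough sketch for discussion, not a real capacity-planning tool:

```python
def recommend(services: int, team_size: int,
              needs_autoscaling: bool, sla: float) -> str:
    """Toy decision helper mirroring the table's heuristics."""
    # Many services, horizontal autoscaling, or a strict SLA push you
    # toward an orchestrator.
    if services > 10 or needs_autoscaling or sla >= 99.9:
        return "kubernetes"
    # Larger orgs usually need standardized deployment and a platform team.
    if team_size > 8:
        return "kubernetes"
    return "docker-compose"

# A small steady app vs. a large spiky one:
print(recommend(3, 4, False, 99.5))
print(recommend(25, 30, True, 99.95))
```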
If you land in the middle, consider K3s — a lightweight, certified Kubernetes distribution that strips out the heavy components. It runs on a single node, uses SQLite instead of etcd by default, and gives you the Kubernetes API without the full operational weight. It’s a solid stepping stone when you’re outgrowing Compose but aren’t ready for a full cluster.
Managed Kubernetes (EKS, GKE, AKS) is another middle path. It offloads control plane management, but you’re still responsible for node management, networking configuration, and the full application layer. It reduces operational burden — it doesn’t eliminate it.
## The Bottom Line
Docker Compose is not a stepping stone. It’s a legitimate production tool for the right workloads. Kubernetes is not a best practice. It’s a powerful system for a specific class of problems.
Start with the simplest thing that meets your requirements. Move to Kubernetes when — and only when — your scaling needs, availability requirements, or organizational complexity demand it.
Not sure where your infrastructure lands on this spectrum? At robto, we help teams choose the right architecture for their actual requirements — not the one that looks best on a conference slide. Get in touch and let’s figure it out together.