Kubernetes vs Docker Compose: we deploy both ways
K8s is not always the answer. We use Docker Compose for 60% of client deployments and Kubernetes for the rest. Here is the decision framework.
A logistics startup asked us to deploy their fleet management platform. Three services, one database, predictable traffic, team of four. They wanted Kubernetes because "everyone uses it."
We deployed on Docker Compose with a single VM, Caddy for TLS, and a cron-based backup script. Total infrastructure cost: $40/month. Deployment time: one afternoon. The system has served 12,000 daily requests with 99.97% uptime for 8 months.
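The cron-based backup in that setup can be as simple as a nightly `pg_dump`. A sketch of the crontab entry, with illustrative paths and database name:

```
# Illustrative crontab entry: nightly dump at 03:00, prune archives older than 7 days
0 3 * * * pg_dump -U app fleet > /backups/fleet-$(date +\%F).sql && find /backups -name 'fleet-*.sql' -mtime +7 -delete
```

Ship the dumps off-host (object storage, another VM) so a dead disk doesn't take the backups with it.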
A healthcare platform with 14 microservices, HIPAA requirements, auto-scaling needs, and a team of twelve? That's Kubernetes. We run it across 3 nodes with ArgoCD, Prometheus, and network policies.
The difference isn't technical preference -- it's requirements.
The decision framework
We evaluate five factors:
1. Service count
- Under 8 services: Docker Compose handles this comfortably
- 8-15 services: Either works; lean toward Compose if traffic is predictable
- 15+ services: Kubernetes earns its complexity
2. Scaling requirements
- Fixed load, predictable traffic: Docker Compose + manual scaling
- Spiky traffic, auto-scaling needed: Kubernetes HPA
- Multi-region: Kubernetes (no practical alternative)
3. Team ops capacity
- No dedicated DevOps: Docker Compose (Kubernetes without ops expertise is a liability)
- Part-time ops: Docker Compose or managed Kubernetes
- Dedicated SRE/DevOps: Kubernetes
4. Availability requirements
- 99.9% (8.7 hours downtime/year): Docker Compose with health checks and restart policies
- 99.95%+ (4.4 hours/year): Kubernetes with rolling deployments and pod disruption budgets
- 99.99%+ (52 minutes/year): Kubernetes multi-zone with proper redundancy
5. Budget
- Small: Docker Compose on a $40-80/month VM
- Medium: Managed Kubernetes (EKS/GKE) at $200-500/month
- Large: Self-managed Kubernetes at whatever the infrastructure costs
Docker Compose in production
This is not a development-only tool. With the right setup, Docker Compose runs production workloads reliably:
```yaml
services:
  api:
    image: registry.internal/fleet-api:v1.8.2
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      retries: 3
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app"]
      interval: 10s
      retries: 5

volumes:
  pgdata:
```
Key production patterns we follow with Compose:
- Always set resource limits. A memory leak in one container shouldn't kill the host.
- Health checks on every service, plus `restart: unless-stopped`. Restart policies bring back crashed containers; note that plain Docker does not restart a container that merely reports unhealthy, so alert on health status or add a watchdog such as autoheal.
- Named volumes for data. Never bind-mount production data directories.
- Image tags, never `:latest`. Reproducible deployments require pinned versions.
- Caddy or Traefik for TLS. Automatic Let's Encrypt certificates with minimal configuration.
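For reference, the Caddy setup mentioned above needs little more than a reverse-proxy directive; Caddy provisions and renews the Let's Encrypt certificate on its own. The domain here is a placeholder:

```
api.example.com {
    reverse_proxy api:3000
}
```

That Caddyfile, plus ports 80 and 443 open, is the entire TLS story for a single-VM deployment.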
When we move to Kubernetes
The migration trigger is usually one of these:
- The team adds a 9th or 10th service and Compose orchestration becomes unwieldy
- Traffic patterns require horizontal pod auto-scaling
- A compliance requirement mandates network policies, pod security standards, or audit logging
- Multi-node deployment becomes necessary for availability
We never migrate preemptively. The migration happens when Docker Compose is actively creating friction, not when someone reads a blog post about Kubernetes.
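When auto-scaling is the trigger, the Kubernetes side is a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `fleet-api` and a metrics server installed in the cluster:

```yaml
# Scale fleet-api between 2 and 10 replicas, targeting 70% average CPU
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fleet-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fleet-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

This is the capability Compose simply doesn't have; everything else in the manifest is boilerplate around that fact.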
The cost reality
| Setup | Monthly cost | Ops effort |
|---|---|---|
| Docker Compose, single VM (Hetzner) | $40-80 | 2 hours/month |
| Docker Compose, single VM (AWS) | $80-150 | 2 hours/month |
| Managed K8s (EKS, 3 nodes) | $250-500 | 8 hours/month |
| Self-managed K8s (3 nodes, bare metal) | $150-300 | 15 hours/month |
The numbers include compute, storage, and egress. They don't include the engineering time to set up the initial infrastructure, which is 1-2 days for Compose and 1-2 weeks for Kubernetes.
The tradeoffs
Docker Compose tradeoffs:
- No auto-scaling. You scale by resizing the VM or adding container replicas manually.
- No self-healing across nodes. If the host dies, everything dies. Mitigate with automated backups and documented recovery procedures.
- No native service mesh, network policies, or pod security. If you need these, you need Kubernetes.
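Manual scaling in Compose, for what it's worth, is a one-liner, assuming the service doesn't pin a `container_name` or a host port:

```
# Run three replicas of the api service on the existing Compose network
docker compose up -d --scale api=3
```

A reverse proxy like Caddy or Traefik will round-robin across the replicas by service name.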
Kubernetes tradeoffs:
- Operational complexity is significant. RBAC, network policies, storage classes, ingress controllers, cert-manager -- each one is a subsystem to understand and maintain.
- Cost floor is high. Even a minimal cluster costs more than a well-configured VM.
- The abstraction layer adds latency to debugging. "Why is this pod not scheduling?" is a harder question than "why is this container not starting?"
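To illustrate the complexity floor: even the simplest useful network policy, a default-deny ingress baseline, is its own manifest to write, apply, and maintain (namespace and naming here are illustrative):

```yaml
# Deny all ingress to every pod in the namespace; allow traffic back in per-service
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

Each allowed path then needs its own explicit policy, which is exactly the kind of work that justifies Kubernetes under compliance mandates and weighs against it everywhere else.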
Our recommendation
Start with Docker Compose. Deploy to a single VM with Caddy, proper health checks, and automated backups. This setup handles more traffic than most teams expect -- we've run 15,000 RPM on a $60 Hetzner VM without issues.
Move to Kubernetes when you have specific requirements that Docker Compose cannot meet: auto-scaling, multi-node availability, network isolation, or compliance mandates. And when you do move, use managed Kubernetes unless you have a dedicated platform team.
Running a marketing site on Kubernetes is like driving a semi-truck to the grocery store. Most applications never need it.