Why we run Traefik at the edge and NGINX inside containers
We stopped choosing between them. Traefik handles dynamic routing and TLS at the edge. NGINX handles static serving and caching inside containers.
We used to pick one reverse proxy per project. NGINX for traditional deployments, Traefik for Docker setups. Then we hit a project that needed both: a healthcare platform with 8 services running on Docker Compose, each service needing custom cache headers, gzip rules, and security headers, while the edge needed automatic TLS, service discovery, and dynamic routing as services scaled.
We stopped choosing. Traefik runs at the edge. NGINX runs inside containers. Each does what it's best at.
The problem
Reverse proxies serve two distinct roles that are often conflated:
- Edge routing -- TLS termination, service discovery, dynamic routing, rate limiting, load balancing across services
- Application-level serving -- Static file serving, response caching, custom headers, request rewriting, gzip compression
Traefik is exceptional at edge routing and mediocre at application-level serving. NGINX is exceptional at application-level serving but requires manual config reloads to track dynamic services. Forcing one tool to do both leads either to complex Traefik middleware chains or to NGINX configs that must be regenerated and reloaded every time a container starts.
Traefik at the edge
Traefik handles everything that's dynamic:
- Service discovery via Docker labels. When a container starts with the right labels, Traefik routes to it automatically. No config file to update, no reload.
- Automatic TLS via Let's Encrypt. Certificates are provisioned and renewed without intervention. We've run Traefik on 30+ domains without a single certificate expiry incident.
- Middleware chains. Rate limiting, basic auth, IP whitelisting, and redirect rules -- all composable and applied per-route.
- Dashboard. Real-time visibility into routes, services, and middleware. During deploys, we can watch traffic shift in real time.
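As a sketch of how middleware composes, the labels below chain an IP allow-list and a rate limit into one reusable unit (the names `office-only`, `dash-limit`, `ops-chain`, and the CIDR range are illustrative, not from our stack):

```yaml
# Hypothetical labels on an internal dashboard service.
labels:
  - "traefik.http.middlewares.office-only.ipallowlist.sourcerange=10.0.0.0/8"
  - "traefik.http.middlewares.dash-limit.ratelimit.average=20"
  # Compose both into a named chain, then attach the chain to a router
  - "traefik.http.middlewares.ops-chain.chain.middlewares=office-only,dash-limit"
  - "traefik.http.routers.dashboard.middlewares=ops-chain"
```

Because the chain is just another named middleware, the same `ops-chain` can be attached to any other router with one label.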
```yaml
# docker-compose.yml -- Traefik at the edge
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.letsencrypt.acme.email=ops@commitx.dev"
      - "--certificatesresolvers.letsencrypt.acme.storage=/certs/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/certs

  api:
    image: registry.internal/patient-api:v2.1
    labels:
      - "traefik.http.routers.api.rule=Host(`api.client.health`) && PathPrefix(`/v2`)"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"
      - "traefik.http.routers.api.middlewares=rate-limit"
      - "traefik.http.middlewares.rate-limit.ratelimit.average=100"

volumes:
  certs:
```
The API service gets HTTPS, rate limiting, and routing -- all from Docker labels. No Traefik config file to edit.
NGINX inside containers
NGINX handles everything that's static or application-specific:
- Static file serving with proper cache headers
- Gzip compression for API responses
- Security headers (CSP, HSTS, X-Frame-Options)
- SPA routing (rewrite all routes to index.html)
- Upstream buffering for slow backends
```nginx
# nginx.conf inside a SvelteKit container
server {
    listen 3000;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    add_header X-Content-Type-Options nosniff always;
    add_header X-Frame-Options DENY always;
    add_header Referrer-Policy strict-origin-when-cross-origin always;

    location /assets/ {
        root /app/build/client;
        expires 1y;
        # Caution: an add_header in a location block stops inheritance,
        # so the server-level security headers above do not apply here.
        add_header Cache-Control "public, immutable";
    }

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
This NGINX instance runs inside the container alongside the Node.js process. Traefik routes traffic to the container; NGINX handles the application-level concerns inside it.
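One way to build such an image is a single Dockerfile that installs NGINX next to the Node runtime (a sketch only -- the base image, paths, and the adapter-node entrypoint `build/index.js` are assumptions, not our exact build):

```dockerfile
# Sketch: NGINX and a SvelteKit Node server in one container.
FROM node:20-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY build/ ./build/
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Node listens on 3001 (internal); NGINX listens on 3000 (what Traefik targets)
ENV PORT=3001
EXPOSE 3000
# nginx daemonizes into the background; node stays in the foreground
CMD nginx && exec node build/index.js
```

A bare `nginx && exec node` keeps the image simple, but neither process supervises the other; for production you may prefer a lightweight supervisor (s6, supervisord) so a crashed NGINX doesn't go unnoticed.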
The decision matrix
| Concern | Traefik | NGINX |
|---|---|---|
| TLS termination | Automatic, zero-config | Manual cert management |
| Service discovery | Docker labels, K8s Ingress | Config reload required |
| Dynamic routing | Built-in | Requires reload or Lua |
| Static file serving | Basic | Exceptional |
| Response caching | Limited | Full featured |
| Custom headers/compression | Middleware (verbose) | Native (concise) |
| Request rewriting | Middleware chain | try_files, rewrite |
| Performance ceiling | ~50k concurrent connections | 100k+ concurrent connections |
What we learned
- Traefik's Docker provider exposes all containers by default. If you run Traefik alongside containers that shouldn't be reachable, set `--providers.docker.exposedByDefault=false` and explicitly opt in with labels. We learned this when a Redis container became publicly accessible for 40 minutes.
- NGINX inside containers has near-zero overhead. It uses 5-10 MB of memory and negligible CPU when proxying to a local upstream. The real cost is complexity: one config file per service.
- Don't put NGINX at the edge for Docker deployments. We tried. The NGINX config needs to be regenerated every time a container changes. Docker-gen works but it's fragile. Traefik solves this natively.
- Traefik v3 fixed the middleware verbosity. Earlier versions required separate label sets for every middleware. v3's middleware chains are cleaner.
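The opt-in setup from the first lesson looks like this (a sketch; the service names are illustrative):

```yaml
# Nothing is routed unless a container carries traefik.enable=true.
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedByDefault=false"  # default-deny

  api:
    image: registry.internal/patient-api:v2.1
    labels:
      - "traefik.enable=true"  # explicit opt-in
      - "traefik.http.routers.api.rule=Host(`api.client.health`)"

  redis:
    image: redis:7  # no labels, so Traefik never routes to it
```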
The tradeoffs
- Two proxies means two things to understand. Junior engineers need to know which config to edit for which problem. We document this clearly: "routing and TLS = Traefik labels; caching, headers, and static files = NGINX config."
- Request path is longer. Client -> Traefik -> NGINX -> App. The extra hop costs roughly 1 ms of latency; for our workloads, that's irrelevant.
- Traefik's Docker socket access is a security surface. Traefik reads the Docker socket to discover containers. This gives it root-equivalent access to the Docker daemon. We mount it read-only and run Traefik in its own network.
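The containment measures above can be sketched in Compose (network name is illustrative):

```yaml
# Limit Traefik's blast radius: read-only socket bind, dedicated network.
services:
  traefik:
    image: traefik:v3.0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # read-only bind mount
    networks:
      - edge  # Traefik joins only this network

networks:
  edge:
```

Note that a read-only bind only prevents replacing the socket file; the Docker API behind it still accepts writes. A stricter setup puts a filtering proxy (e.g. tecnativa/docker-socket-proxy) between Traefik and the socket so Traefik can only read container metadata.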
Our recommendation
Run Traefik at the edge for TLS, routing, and service discovery. Run NGINX inside containers for static serving, caching, and application-level HTTP concerns. This separation maps cleanly to responsibilities: the platform team manages Traefik; application teams manage their NGINX configs.
If you're on a single VM with Docker Compose, Caddy is also an excellent edge proxy with automatic TLS and simpler configuration than Traefik. We use Caddy for smaller deployments (under 5 services) and Traefik for larger ones.
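For the small-deployment case, the equivalent Caddy edge config is a few lines (the hostname and upstream are placeholders, not a real deployment):

```caddyfile
# Caddyfile -- HTTPS is automatic for public hostnames
api.example.com {
    # "api" is the Compose service name; 3000 is the in-container NGINX port
    reverse_proxy api:3000
}
```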