Celery on ECS: Separate Container Architecture
Running Celery in production on ECS Fargate means splitting your Django app, Celery worker, and Beat scheduler into separate containers that share one Docker image but start with different CMD entrypoints. This diagram covers the full architecture — from task publication through the Redis broker to worker execution and result storage.
Key Takeaways
Three ECS services, one image. The Django API container, Celery worker, and Beat scheduler all use the identical Docker image. Only the CMD in the ECS task definition changes — gunicorn vs celery worker vs celery beat.
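A minimal sketch of the three command overrides you'd put in each ECS task definition's container definition. `config.wsgi` and `myproject` are placeholder module names, not from the original:

```python
# One image, three entrypoints: only the container "command" differs per service.
# "config.wsgi" and "myproject" are placeholder module paths for illustration.
COMMANDS = {
    "api": ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"],
    "worker": ["celery", "-A", "myproject", "worker", "--loglevel=info"],
    "beat": ["celery", "-A", "myproject", "beat", "--loglevel=info"],
}
```

Keeping one image means a single build-and-push step in CI; a deploy just updates all three services to the new image tag.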
Beat must be exactly 1 replica. If you scale Beat to 2 instances, every scheduled task runs twice. Keep it as a separate ECS service with desiredCount: 1. Use django-celery-beat to store the schedule in Postgres instead of an ephemeral file.
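A sketch of the Django settings that point Beat at django-celery-beat's database-backed scheduler, so the schedule survives container replacement instead of living in a local `celerybeat-schedule` file:

```python
# Django settings sketch: store the Beat schedule in the database
# (Postgres here) via django-celery-beat instead of an ephemeral file.
INSTALLED_APPS = [
    # ... your other apps ...
    "django_celery_beat",
]

CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"
```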
Prefork is the default for good reason. Each forked subprocess gets its own Python interpreter (no GIL sharing) and its own Django DB connection. Set CONN_MAX_AGE=0 or handle the worker_process_init signal to close inherited connections.
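A sketch of the signal-handler approach: close any DB connections the prefork child inherited from the parent so each child lazily opens its own fresh connection on first query.

```python
# Sketch: on prefork child startup, discard DB connections inherited
# from the parent worker process; Django reconnects lazily as needed.
from celery.signals import worker_process_init
from django.db import connections


@worker_process_init.connect
def reset_db_connections(**kwargs):
    # close_all() drops the inherited sockets; sharing one socket across
    # forked processes corrupts the Postgres wire protocol.
    connections.close_all()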
No built-in health check for workers. ECS doesn’t know if your Celery worker is healthy. Use celery inspect ping in a custom health check command, or run a sidecar probe.
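A sketch of what the `healthCheck` block in the worker's ECS container definition could look like, expressed here as a Python dict. The `-d celery@$HOSTNAME` target assumes the default Celery node-name format, and `myproject` is a placeholder:

```python
# Sketch of an ECS container healthCheck for the worker service.
# ECS runs the command inside the container; non-zero exit = unhealthy.
WORKER_HEALTH_CHECK = {
    "command": [
        "CMD-SHELL",
        # ping only this container's worker node, not the whole cluster
        "celery -A myproject inspect ping -d celery@$HOSTNAME || exit 1",
    ],
    "interval": 30,    # seconds between probes
    "timeout": 10,     # inspect ping round-trips through the broker, so allow slack
    "retries": 3,
    "startPeriod": 60, # grace period while the worker boots and connects to Redis
}
```

Note that `inspect ping` round-trips through the broker, so a broker outage also flips the worker to unhealthy — which may or may not be the restart behavior you want.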
acks_late=True matters. Without it, the worker acknowledges the task to the broker as soon as it's delivered, not when it completes. If your worker crashes mid-task, the task is gone. With acks_late=True, the message stays reserved in the queue until the worker confirms it finished, so a crashed worker's task gets redelivered — which means your tasks must be idempotent.
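A settings sketch for late acknowledgment, assuming the standard Django integration where Celery reads `CELERY_`-prefixed settings (`app.config_from_object("django.conf:settings", namespace="CELERY")`):

```python
# Django settings sketch: acknowledge tasks only after they finish,
# and requeue tasks whose worker process died mid-execution.
CELERY_TASK_ACKS_LATE = True
# Without this, a hard-killed child (e.g. OOM) still loses the task
# even with acks_late, because the parent rejects it without requeue.
CELERY_TASK_REJECT_ON_WORKER_LOST = True
```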
ECS Task Definitions at a Glance
| Service | CPU | Memory | Scaling | Ports |
|---|---|---|---|---|
| api | 512 | 1024 | 2+ (auto-scale) | 8000 (ALB) |
| worker | 1024 | 2048 | 2+ (auto-scale) | none |
| beat | 256 | 512 | 1 only | none |
All three services need CELERY_BROKER_URL and DATABASE_URL in their environment. Only the API service needs port mappings and an ALB target group.
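The shared environment block might look like this in each task definition (expressed as a Python dict; hostnames and credentials are placeholders, and in practice secrets belong in the `secrets` block backed by SSM or Secrets Manager, not plain `environment`):

```python
# Sketch of the environment entries shared by all three ECS task definitions.
# Hostnames and credentials below are placeholders for illustration only.
SHARED_ENV = [
    {"name": "CELERY_BROKER_URL", "value": "redis://redis.internal:6379/0"},
    {"name": "DATABASE_URL", "value": "postgres://app:changeme@db.internal:5432/app"},
]
```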