Docker Compose v2

Last updated: 2026-03-13 · Related task: P1-OPS-001

This page explains how to run the FastAPI API, Celery worker and beat, PostgreSQL, Redis, and the React SPA locally with Docker Compose.

If you only want to run the product rather than develop against the repository, prefer docker-compose.ghcr.yml, which pulls prebuilt images from GHCR instead of building them locally.
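In that mode, the usual invocation looks like this; pulling explicitly first ensures up never falls back to a local build:

```shell
# Run the product from prebuilt GHCR images instead of building locally.
docker compose -f docker-compose.ghcr.yml pull
docker compose -f docker-compose.ghcr.yml up -d
```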

Layout

.
├── docker/
│   ├── backend.Dockerfile
│   ├── celery.Dockerfile
│   └── frontend.Dockerfile
├── docker-compose.yml
└── .env.example

Environment Variables

Compose ships with predictable local defaults, so docker compose up --build can work even before you create .env.

Variable           Local demo default
POSTGRES_PASSWORD  eval752db
DATABASE_URL       postgresql+psycopg://eval752:eval752db@postgres:5432/eval752
ENCRYPTION_KEY     0123456789abcdef repeated to fill 32 bytes

These defaults are fine for a disposable demo, but they are not appropriate once you start storing real provider keys.

Recommended setup:

cp .env.example .env
openssl rand -hex 32  # paste into ENCRYPTION_KEY
# then keep POSTGRES_PASSWORD and DATABASE_URL aligned
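Scripted, generating aligned values might look like the sketch below. It prints the lines to paste into .env; the user and database names are taken from the demo DATABASE_URL above, and the KEY=value layout of .env is an assumption:

```shell
# Generate a fresh 32-byte encryption key and a random database password,
# then print aligned .env values (DATABASE_URL must embed the same password
# that POSTGRES_PASSWORD sets).
key=$(openssl rand -hex 32)
pw=$(openssl rand -hex 16)
echo "ENCRYPTION_KEY=$key"
echo "POSTGRES_PASSWORD=$pw"
echo "DATABASE_URL=postgresql+psycopg://eval752:$pw@postgres:5432/eval752"
```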

Other common defaults:

  • REDIS_URL=redis://redis:6379/0
  • CELERY_BROKER_URL and CELERY_RESULT_BACKEND fall back to Redis
  • provider credentials are managed in the application UI, not injected through environment variables
  • docker-compose.build.yml can be used when CI only needs image builds

For integration tests, scripts/tests/run_docker_integration.sh bootstraps .env.integration from .env.integration.example when needed.

Starting the Full Stack

docker compose up --build

This starts:

  • FastAPI API on port 8000
  • Celery worker and beat
  • PostgreSQL and Redis
  • the React SPA on port 5173
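Once the containers settle, a quick smoke check against the published ports can confirm both services answer. The /docs path is FastAPI's default interactive docs and an assumption here; adjust if it is disabled:

```shell
# Probe both published ports; curl prints the HTTP status code,
# or 000 when nothing is listening yet.
api_status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8000/docs || true)
spa_status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:5173/ || true)
echo "API: $api_status  SPA: $spa_status"
```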

Backend-only mode

If you only need the backend services:

docker compose up -d backend celery-worker celery-beat postgres redis

Local hot-reload development

For frontend hot reload, copy the override template:

cp docker-compose.override.example.yml docker-compose.override.yml

That override mounts the local frontend/ directory into the container. It is intended for local development only and stays ignored by git.
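The committed example file is the source of truth, but an override of roughly this shape is typical; the container-side path /app is an assumption here:

```yaml
# docker-compose.override.yml (sketch): mount the local frontend source
# into the container so Vite can hot-reload on file changes.
services:
  frontend:
    volumes:
      - ./frontend:/app
```

Compose merges this file on top of docker-compose.yml automatically whenever it exists in the project directory.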

Persistence

  • PostgreSQL data is stored in the pgdata volume
  • Redis data is stored in the redisdata volume

Common Commands

Command                                                            Use
docker compose logs -f backend                                     follow API logs
docker compose exec backend uv run alembic upgrade head            run migrations
docker compose -f docker-compose.build.yml build backend frontend  build images without bringing up infrastructure
docker compose down -v                                             stop containers and remove volumes

Operational Notes

  • Images build with uv sync --frozen so dependency resolution stays aligned with uv.lock.
  • Replace any demo passwords before storing real provider credentials.
  • Provider keys should come from the app UI, Compose .env, or an external secret manager, never from baked image layers.
  • Containers run as the non-root eval752 user by default.
  • ENCRYPTION_KEY is mandatory. The demo value is only for first boot convenience.
  • On startup, the backend entrypoint runs python -m eval_752.scripts.check_database. If it reports password authentication failed, your existing pgdata volume was likely initialized with a different password. Restore the old password or reset with docker compose down -v (this deletes all stored data).
  • The frontend defaults to /api, and the frontend container proxies /api/* to backend:8000, which keeps same-origin deployment simple.
  • If the frontend must call another backend origin, set VITE_API_BASE_URL=https://api.example.com and rebuild the frontend image.
  • On slow networks or ARM hosts, large wheel downloads may need a higher timeout: UV_HTTP_TIMEOUT=600 docker compose build backend
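Because the entrypoint only fails at boot if the key is malformed, it can be worth validating .env up front. A minimal check, assuming ENCRYPTION_KEY is stored as 64 hex characters (the openssl rand -hex 32 format recommended above):

```shell
# Read ENCRYPTION_KEY from .env; fall back to 0123456789abcdef repeated
# to 64 hex characters as a stand-in for the demo default, then verify
# the value is exactly 64 hex characters (32 bytes).
key=$(grep -E '^ENCRYPTION_KEY=' .env 2>/dev/null | cut -d= -f2-)
key=${key:-0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef}
if printf '%s' "$key" | grep -Eq '^[0-9a-fA-F]{64}$'; then
  echo "ENCRYPTION_KEY: ok"
else
  echo "ENCRYPTION_KEY: expected 64 hex characters" >&2
fi
```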

Prometheus Scraping

The backend exposes /metrics. A minimal Prometheus scrape config looks like this:

scrape_configs:
  - job_name: "eval752-api"
    metrics_path: /metrics
    static_configs:
      - targets:
          - "backend:8000"

If you expose the service through a reverse proxy or Kubernetes ingress, adjust the scrape target accordingly.
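To run Prometheus next to the stack during local work, a Compose service of roughly this shape works; the service name and the host-side config path are assumptions:

```yaml
# Addition to docker-compose.yml (sketch): a Prometheus container on the
# same Compose network, so it can scrape the backend by service name.
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"
```

Because the container joins the default Compose network, the backend:8000 target in the scrape config above resolves without any extra wiring.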

Related docs: