# Backend Container Deployment
The backend now runs as a small compose stack:
- `db`: Postgres 16
- `api`: Fastify HTTP server
- `worker`: forecast sync and route compilation background worker
- `caddy` (in production): reverse proxy and TLS edge
The api and worker services share the same Docker image and differ only by command. This keeps the deploy lightweight while separating request handling from background work.
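Concretely, the shared-image pattern looks roughly like this in compose terms (a sketch only: the build context, commands, and service wiring below are illustrative, not the repo's actual compose file):

```yaml
# Sketch: one backend image, two services that differ only by command.
services:
  db:
    image: postgres:16
  api:
    build: .                          # single backend build artifact
    command: ["node", "dist/server.js"]
    depends_on: [db]
  worker:
    build: .                          # same image as api
    command: ["node", "dist/worker.js"]
    depends_on: [db]
```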
## Why this shape
- One build artifact for the backend
- One `docker compose` command to boot the full stack
- Simpler scaling later if API and worker need different resources
- Safer runtime posture than running background jobs inside the API container
## Local
- Copy or fill in `/Users/kyle/Developer/projects/shared/aspectavy-next/apps/backend/.env.local`
- Start the stack:
```sh
docker compose up -d --build
```
Useful checks:
```sh
docker compose ps
docker compose logs -f api
docker compose logs -f worker

curl http://127.0.0.1:3001/health
curl http://127.0.0.1:3001/livez
curl http://127.0.0.1:3001/readyz
```
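The curl checks above run once; right after a boot or deploy it can help to poll until the API actually answers. A small POSIX-sh sketch (the helper name, retry budget, and target URL are illustrative, not part of the repo scripts):

```sh
# Retry an arbitrary probe command until it succeeds or attempts run out.
wait_ready() {
  attempts=$1
  shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait_ready 30 curl -fsS http://127.0.0.1:3001/readyz
```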
Backup local bundled Postgres:
```sh
./scripts/backup-postgres.sh local
```
## Staging / Shared Edge VPS
Use this path when another VPS edge already owns :80 and :443 and should proxy staging domains such as staging-app.example.com or staging-api.example.com.
Files:
- `/Users/kyle/Developer/projects/shared/aspectavy-next/apps/backend/.env.staging.example`
- `/Users/kyle/Developer/projects/shared/aspectavy-next/docker-compose.staging.edge.yml`
- `/Users/kyle/Developer/projects/shared/aspectavy-next/docker-compose.staging.bundled-db.yml`
- Copy `/Users/kyle/Developer/projects/shared/aspectavy-next/apps/backend/.env.staging.example` to `.env.staging.local`
- Fill in:
  - APP_UNIVERSAL_LINK_HOST
  - APP_ALLOWED_ORIGINS
  - APPLE_APP_SITE_ASSOCIATION_APP_IDS
  - APTABASE_APP_KEY for the staging analytics app
  - SESSION_SECRET
  - SESSION_COOKIE_DOMAIN
  - SMTP credentials
  - optional iOS version gating fields:
    - IOS_APP_STORE_URL
    - IOS_MINIMUM_SUPPORTED_VERSION
    - IOS_RECOMMENDED_VERSION
- Start the staging stack behind the shared edge with bundled Postgres:
```sh
docker compose \
  -f docker-compose.production.yml \
  -f docker-compose.staging.edge.yml \
  -f docker-compose.staging.bundled-db.yml \
  up -d --build api worker db
```
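A missing variable in `.env.staging.local` usually only surfaces at runtime; a tiny pre-flight helper can fail the deploy earlier. A sketch assuming POSIX sh (the helper name and the variables chosen in the example are illustrative, not part of the repo scripts):

```sh
# Fail fast when any named environment variable is unset or empty.
check_env() {
  missing=""
  for name in "$@"; do
    eval "value=\${$name:-}"
    [ -n "$value" ] || missing="$missing $name"
  done
  if [ -n "$missing" ]; then
    echo "missing required env vars:$missing" >&2
    return 1
  fi
  return 0
}

# Example: check_env SESSION_SECRET SESSION_COOKIE_DOMAIN APP_ALLOWED_ORIGINS
```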
The API binds to loopback only, on `127.0.0.1:${EDGE_PROXY_API_PORT:-3401}`, so a separate VPS edge proxy can forward `staging-app.*` or `staging-api.*` traffic into the backend without exposing the container directly on the public interface.

For split-host browser staging such as staging-app.example.com plus staging-api.example.com, set `SESSION_COOKIE_DOMAIN=example.com` so hosted auth on `app.*` and API calls to `api.*` can share the same browser session cookie.
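For illustration, a parent-domain session cookie set by the API might look like this (the cookie name and attribute values here are illustrative, not the backend's actual settings):

```
Set-Cookie: session=<opaque>; Domain=example.com; Path=/; Secure; HttpOnly; SameSite=Lax
```

Because staging-app.example.com and staging-api.example.com share the registrable domain example.com, the browser treats them as same-site and sends this cookie to both hosts.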
Useful checks:
```sh
docker compose \
  -f docker-compose.production.yml \
  -f docker-compose.staging.edge.yml \
  -f docker-compose.staging.bundled-db.yml \
  ps

curl http://127.0.0.1:${EDGE_PROXY_API_PORT:-3401}/health
curl http://127.0.0.1:${EDGE_PROXY_API_PORT:-3401}/readyz
```
GitHub Actions staging deploy:
- branch: `dev`
- workflow job: `deploy-staging` in `/.github/workflows/deploy.yml`
- required repository secrets:
  - DEPLOY_STAGING_SSH_HOST
  - DEPLOY_STAGING_SSH_PORT
  - DEPLOY_STAGING_SSH_USER
  - DEPLOY_STAGING_SSH_PRIVATE_KEY
  - DEPLOY_STAGING_PATH
The workflow runs the backend's `npm run typecheck` and `npm test`, rsyncs the deploy bundle to the VPS, and then runs `scripts/deploy-stack.sh` with the staging target.
## Production / VPS
There are now three production deployment paths:
- managed/external Postgres:
  `/Users/kyle/Developer/projects/shared/aspectavy-next/docker-compose.production.yml`
- bundled Postgres in the same VPS stack:
  `/Users/kyle/Developer/projects/shared/aspectavy-next/docker-compose.production.bundled-db.yml`
- shared-edge production on a VPS that already has a separate edge/Caddy stack:
  `/Users/kyle/Developer/projects/shared/aspectavy-next/docker-compose.production.edge.yml`
- Copy `/Users/kyle/Developer/projects/shared/aspectavy-next/apps/backend/.env.production.example` to `.env.production.local`
- Fill in:
  - APTABASE_APP_KEY for the production analytics app
  - SESSION_SECRET
  - DATABASE_URL
  - production cookie/session values
- Fill in or export:
  - CADDY_HOST when production owns :80 and :443
- Optionally export top-level compose vars such as:
  - EDGE_PROXY_API_PORT for shared-edge production
  - API_PORT
  - POSTGRES_DB
  - POSTGRES_USER
  - POSTGRES_PASSWORD
  - POSTGRES_VOLUME_NAME
- Start the stack with external/managed Postgres:
```sh
docker compose -f docker-compose.production.yml up -d --build
```
Or start the stack with bundled Postgres on the VPS:
```sh
docker compose -f docker-compose.production.yml -f docker-compose.production.bundled-db.yml up -d --build
```
Or start production behind an existing shared edge on the same VPS:
```sh
docker compose \
  -f docker-compose.production.yml \
  -f docker-compose.production.edge.yml \
  -f docker-compose.production.bundled-db.yml \
  up -d --build api worker db
```
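The three invocations above differ only in which overlay files are passed to compose. One way to picture how a wrapper could select them from the CI variables `ASPECTAVY_PRODUCTION_DEPLOY_MODE` and `ASPECTAVY_PRODUCTION_USE_BUNDLED_DB` (a sketch; the actual logic in `scripts/deploy-stack.sh` may differ):

```sh
# Build the compose -f file list from the production deploy-mode variables.
compose_files() {
  files="-f docker-compose.production.yml"
  if [ "${ASPECTAVY_PRODUCTION_DEPLOY_MODE:-standalone}" = "edge" ]; then
    files="$files -f docker-compose.production.edge.yml"
  fi
  if [ "${ASPECTAVY_PRODUCTION_USE_BUNDLED_DB:-0}" = "1" ]; then
    files="$files -f docker-compose.production.bundled-db.yml"
  fi
  printf '%s\n' "$files"
}

# Example: docker compose $(compose_files) up -d --build api worker
```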
Useful checks:
```sh
docker compose -f docker-compose.production.yml ps
docker compose -f docker-compose.production.yml logs -f api
docker compose -f docker-compose.production.yml logs -f worker

curl http://127.0.0.1:${API_PORT:-3001}/health
curl http://127.0.0.1:${API_PORT:-3001}/livez
curl http://127.0.0.1:${API_PORT:-3001}/readyz
```
GitHub Actions production deploy:
- branch: `master`
- workflow job: `deploy-production` in `/.github/workflows/deploy.yml`
- required repository secrets:
  - DEPLOY_PRODUCTION_SSH_HOST
  - DEPLOY_PRODUCTION_SSH_PORT
  - DEPLOY_PRODUCTION_SSH_USER
  - DEPLOY_PRODUCTION_SSH_PRIVATE_KEY
  - DEPLOY_PRODUCTION_PATH
- optional repository secrets:
  - ASPECTAVY_PRODUCTION_DEPLOY_MODE with `edge` or `standalone`
  - ASPECTAVY_PRODUCTION_USE_BUNDLED_DB
The production workflow is intentionally guarded: if the production SSH secrets are not configured yet, the job exits cleanly without deploying. Once configured, it uses the same rsync-plus-remote-compose flow as staging and runs `scripts/deploy-stack.sh` with the production target.
Backup and restore options:
```sh
./scripts/backup-postgres.sh production-bundled
DATABASE_URL=postgres://... ./scripts/backup-postgres.sh production-external

FORCE=1 ./scripts/restore-postgres.sh production-bundled ./backups/production-bundled-YYYYMMDD-HHMMSS.dump
FORCE=1 DATABASE_URL=postgres://... ./scripts/restore-postgres.sh production-external ./backups/production-external-YYYYMMDD-HHMMSS.dump
```
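Because the timestamped dump names (YYYYMMDD-HHMMSS) sort chronologically, a simple retention helper can prune old backups by name alone. A sketch (the helper name and the keep count in the example are illustrative, not part of the repo scripts):

```sh
# Keep only the newest $2 *.dump files in directory $1; delete the rest.
prune_backups() {
  dir=$1
  keep=$2
  # Timestamped names sort chronologically, so reverse-sorted output is
  # newest-first; skip the first $keep entries and remove the remainder.
  ls "$dir"/*.dump 2>/dev/null | sort -r | tail -n +$((keep + 1)) | while read -r f; do
    rm -- "$f"
  done
}

# Example: prune_backups ./backups 7
```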
## Notes
- `db` uses a named Docker volume now rather than a repo-local bind mount.
- `api` has an internal liveness check against `/livez`.
- `worker` owns periodic forecast syncs and compilation queue processing.
- `worker` now has its own container health check based on the persisted heartbeat in Postgres.
- production traffic should terminate at the existing shared edge or the bundled `caddy`, depending on `ASPECTAVY_PRODUCTION_DEPLOY_MODE`.
- production defaults are now managed-Postgres-friendly; bundled Postgres is an override layer.
- `LOG_LEVEL` controls API logging verbosity.
- `worker` writes a heartbeat into Postgres so `/readyz` can confirm background processing is still alive.
- staging and production should use separate Aptabase app keys so QA traffic never pollutes production analytics.
- public `/health` is intentionally minimal now; detailed readiness checks belong on `/readyz`.
- `api` still has lightweight inline fallback behavior for forecast cache misses and session finalization, which keeps local development resilient even if the worker is temporarily down.
- legacy Firebase exports are no longer part of the backend package or deploy path; they live in `/Users/kyle/Developer/projects/shared/aspectavy-next/tools/legacy-firebase-export` as an optional migration-only utility.
- the shared-edge staging path is useful for real SMTP, hosted auth, and universal-link QA before the final public domain is live.
- if you want hosted universal links to work for both dev and production-signed QA builds on the same staging host, include both app IDs in `APPLE_APP_SITE_ASSOCIATION_APP_IDS`.
## Later Hardening
Good future upgrades, but not required for the initial VPS deploy:
- managed Postgres instead of containerized Postgres
- object storage container or managed blob storage
- metrics/monitoring container