
Docker Guide

Deploy Archon on a server with Docker. Includes automatic HTTPS, PostgreSQL, and the Web UI.


The fastest way to deploy. Paste the cloud-init config into your VPS provider’s User Data field when creating a server — it installs everything automatically.

File: deploy/cloud-init.yml

  1. Create a VPS (Ubuntu 22.04+ recommended) at DigitalOcean, AWS, Linode, Hetzner, etc.
  2. Paste the contents of deploy/cloud-init.yml into the “User Data” / “Cloud-Init” field
  3. Add your SSH key via the provider’s UI
  4. Create the server and wait ~5-8 minutes for setup to complete

What cloud-init sets up:

  • Installs Docker + Docker Compose
  • Configures a UFW firewall (ports 22, 80, 443)
  • Clones the repo to /opt/archon
  • Copies .env.example -> .env and Caddyfile.example -> Caddyfile
  • Pre-pulls the PostgreSQL and Caddy images
  • Builds the Archon Docker image

SSH into the server and finish configuration:

Terminal window
# Check setup completed
cat /opt/archon/SETUP_COMPLETE
# Edit credentials and domain
nano /opt/archon/.env
# Set at minimum:
# CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-...
# DOMAIN=archon.example.com
# DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent
# (Optional) Set up basic auth to protect Web UI:
# docker run caddy caddy hash-password --plaintext 'YOUR_PASSWORD'
# Add to .env: CADDY_BASIC_AUTH=basicauth @protected { admin $$2a$$14$$<hash> }
# Start
cd /opt/archon
docker compose --profile with-db --profile cloud up -d

Don’t forget DNS: Before starting, point your domain’s A record to the server’s IP.

Where to paste the cloud-init config, by provider:

  • DigitalOcean: Create Droplet -> Advanced Options -> User Data
  • AWS EC2: Launch Instance -> Advanced Details -> User Data
  • Linode: Create Linode -> Add Tags -> Metadata (User Data)
  • Hetzner: Create Server -> Cloud config -> User Data
  • Vultr: Deploy -> Additional Features -> Cloud-Init User-Data

Run Archon locally with Docker Desktop — no domain, no VPS required. Uses SQLite and the Web UI only.

Terminal window
git clone https://github.com/coleam00/Archon.git
cd Archon
cp .env.example .env
# Edit .env: set CLAUDE_CODE_OAUTH_TOKEN or CLAUDE_API_KEY
docker compose up -d

Access the Web UI at http://localhost:3000.
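Before bringing the stack up, it can help to confirm a Claude credential actually made it into .env. A small pre-flight sketch (it greps a temp stand-in here; point ENV_FILE at your real .env):

```shell
# pre-flight: confirm a Claude credential is present before `docker compose up`
ENV_FILE=$(mktemp)                          # stand-in for your real .env
echo 'CLAUDE_API_KEY=sk-ant-xxxxx' > "$ENV_FILE"

if grep -qE '^(CLAUDE_CODE_OAUTH_TOKEN|CLAUDE_API_KEY)=' "$ENV_FILE"; then
  RESULT="credential present"
else
  RESULT="no Claude credential set"
fi
echo "$RESULT"
```

If nothing matches, the containers will start but the app exits with no_ai_credentials (see Troubleshooting below).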

Build from WSL, not PowerShell. Docker Desktop on Windows cannot follow Bun workspace symlinks during the build context transfer. If you see “The file cannot be accessed by the system”, open a WSL terminal:

Terminal window
cd /mnt/c/Users/YourName/path/to/Archon
docker compose up -d

Line endings: The repo uses .gitattributes to force LF endings for shell scripts. If you cloned before this was added and see “exec docker-entrypoint.sh: no such file or directory”, re-clone or run:

Terminal window
git rm --cached -r .
git reset --hard

Local run at a glance:

  • Web UI: http://localhost:3000
  • Database: SQLite (automatic, zero setup)
  • HTTPS / Caddy: not needed locally
  • Auth: none (single-user, localhost only)
  • Platform adapters: optional (Telegram, Slack, etc.)
To run PostgreSQL locally instead of SQLite, start with the with-db profile:

Terminal window
docker compose --profile with-db up -d

Then add to .env:

DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent

Step-by-step alternative if you prefer not to use cloud-init, or need more control.

Terminal window
# On Ubuntu/Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for group change to take effect
exit
# ssh back in
# Verify
docker --version
docker compose version
Terminal window
git clone https://github.com/coleam00/Archon.git
cd Archon
Terminal window
cp .env.example .env
cp Caddyfile.example Caddyfile
nano .env

Set these values in .env:

# AI Assistant — at least one is required
# Option A: Claude OAuth token (run `claude setup-token` on your local machine to get one)
CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxxxx
# Option B: Claude API key (from console.anthropic.com/settings/keys)
# CLAUDE_API_KEY=sk-ant-xxxxx
# Domain — your domain or subdomain pointing to this server
DOMAIN=archon.example.com
# Database — connect to the Docker PostgreSQL container
# Without this, the app uses SQLite (fine for getting started, but PostgreSQL recommended)
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent
# Basic Auth (optional) — protects Web UI when exposed to the internet
# Skip if using IP-based firewall rules instead.
# Generate hash: docker run caddy caddy hash-password --plaintext 'YOUR_PASSWORD'
# CADDY_BASIC_AUTH=basicauth @protected { admin $$2a$$14$$... }
# Platform tokens (set the ones you use)
# TELEGRAM_BOT_TOKEN=123456789:ABCdef...
# SLACK_BOT_TOKEN=xoxb-...
# SLACK_APP_TOKEN=xapp-...
# GH_TOKEN=ghp_...
# GITHUB_TOKEN=ghp_...

Docker does not support CLAUDE_USE_GLOBAL_AUTH=true — there is no local claude CLI inside the container. You must provide either CLAUDE_CODE_OAUTH_TOKEN or CLAUDE_API_KEY explicitly.

If you use --profile with-db without setting DATABASE_URL, the app will fall back to SQLite and log a warning. The PostgreSQL container runs but is unused.
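To sanity-check which host your DATABASE_URL points at, a quick shell sketch (the sed pattern is illustrative, not part of Archon):

```shell
# extract the host between the "@" and the port/path of a connection URL
DATABASE_URL='postgresql://postgres:postgres@postgres:5432/remote_coding_agent'
DB_HOST=$(printf '%s' "$DATABASE_URL" | sed -E 's|^[a-z]+://[^@]+@([^:/]+).*|\1|')
echo "database host: $DB_HOST"
```

Inside Compose the host must be the service name postgres; localhost would resolve to the app container itself.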

Create a DNS A record at your domain registrar:

  • Type: A
  • Name: archon (or @ for the root domain)
  • Value: your server’s public IP

Wait for DNS propagation (usually 5-60 minutes). Verify with dig archon.example.com.

Terminal window
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443
sudo ufw --force enable
Terminal window
docker compose --profile with-db --profile cloud up -d

This starts three containers:

  • app — Archon server + Web UI
  • postgres — PostgreSQL 17 database (auto-initialized)
  • caddy — Reverse proxy with automatic HTTPS (Let’s Encrypt)
Terminal window
# Check all containers are running
docker compose --profile with-db --profile cloud ps
# Watch logs
docker compose logs -f app
docker compose logs -f caddy
# Test HTTPS (from your local machine)
curl https://archon.example.com/api/health

Open https://archon.example.com in your browser — you should see the Archon Web UI.


Archon uses Docker Compose profiles to optionally add PostgreSQL and/or HTTPS. Mix and match:

Profiles and what they run:

  • docker compose up -d: app with SQLite
  • docker compose --profile with-db up -d: app + PostgreSQL
  • docker compose --profile cloud up -d: app + Caddy (HTTPS)
  • docker compose --profile with-db --profile cloud up -d: app + PostgreSQL + Caddy
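Typing two --profile flags on every command gets tedious; Docker Compose also reads the COMPOSE_PROFILES environment variable, so you can set the profiles once per shell session:

```shell
# set once per shell; Compose then applies these profiles to every command
export COMPOSE_PROFILES=with-db,cloud
echo "$COMPOSE_PROFILES"
```

After this, a plain docker compose up -d behaves like docker compose --profile with-db --profile cloud up -d, and the same applies to ps, logs, and down.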

Default (no profiles): zero configuration. No database container needed; the SQLite file is stored in the archon_data volume.

with-db: starts a PostgreSQL 17 container. Set the connection URL in .env:

DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent

The schema is auto-initialized on first startup. PostgreSQL is exposed on ${POSTGRES_PORT:-5432} for external tools.

cloud: adds a Caddy reverse proxy with automatic TLS certificates from Let’s Encrypt.

Required before starting:

  1. Caddyfile created: cp Caddyfile.example Caddyfile
  2. DOMAIN set in .env
  3. DNS A record pointing to your server’s IP
  4. Ports 80 and 443 open

Caddy handles HTTPS certificates, HTTP->HTTPS redirect, HTTP/3, and SSE streaming.

Caddy can enforce HTTP Basic Auth on all routes except webhooks (/webhooks/*) and the health check (/api/health). This is optional — skip it if you use IP-based firewall rules or other network-level access control.

To enable:

  1. Generate a bcrypt password hash:

    Terminal window
    docker run caddy caddy hash-password --plaintext 'YOUR_PASSWORD'
  2. Set CADDY_BASIC_AUTH in .env (use $$ to escape $ in bcrypt hashes):

    CADDY_BASIC_AUTH=basicauth @protected { admin $$2a$$14$$abc123... }
  3. Restart: docker compose --profile cloud restart caddy

Your browser will prompt for username/password when accessing the Archon URL. Webhook endpoints bypass auth since they use HMAC signature verification.

To disable, leave CADDY_BASIC_AUTH empty or unset — the Caddyfile expands it to nothing.

Important: Always use the docker run caddy caddy hash-password command to generate hashes — never put plaintext passwords in .env.
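Because Compose interprets $ when expanding .env values, every $ in the bcrypt hash must be doubled. A small sketch of the escaping (the hash below is a placeholder):

```shell
# double each "$" so Compose does not treat it as variable interpolation
HASH='$2a$14$abc123'                               # placeholder bcrypt hash
ESCAPED=$(printf '%s' "$HASH" | sed 's/\$/$$/g')
printf 'CADDY_BASIC_AUTH=basicauth @protected { admin %s }\n' "$ESCAPED"
```

The printed line is what goes into .env as CADDY_BASIC_AUTH.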

Form-Based Authentication (HTML Login Page)


An alternative to basic auth that serves a styled HTML login form instead of the browser’s credential popup. Uses a lightweight auth-service sidecar and Caddy’s forward_auth directive.

When to use form auth vs basic auth:

  • Form auth: Styled dark-mode login page, 24h session cookie, logout support. Requires an extra container.
  • Basic auth: Zero extra containers, simpler setup. Browser shows a native credential dialog.

Setup:

  1. Generate a bcrypt password hash:

    Terminal window
    docker compose --profile auth run --rm auth-service \
    node -e "require('bcryptjs').hash('YOUR_PASSWORD', 12).then(h => console.log(h))"

    First run builds the auth-service image. Save the output hash (starts with $2b$12$...).

  2. Generate a random cookie signing secret:

    Terminal window
    docker run --rm node:22-alpine \
    node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
  3. Set the following in .env:

    AUTH_USERNAME=admin
    AUTH_PASSWORD_HASH=$2b$12$REPLACE_WITH_YOUR_HASH
    COOKIE_SECRET=REPLACE_WITH_64_HEX_CHARS
  4. Update Caddyfile (copy from Caddyfile.example if not done yet):

    • Uncomment the “Option A” form auth block (the handle /login, handle /logout, and handle { forward_auth ... } blocks)
    • Comment out the “No auth” default handle block (the last handle { ... } block near the bottom of the site block)
  5. Start with both cloud and auth profiles:

    Terminal window
    docker compose --profile with-db --profile cloud --profile auth up -d
  6. Visit your domain — you should be redirected to /login.
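The cookie signing secret from step 2 can also be generated without pulling a Node image, assuming a Unix host with /dev/urandom and od:

```shell
# 32 random bytes rendered as 64 hex characters
COOKIE_SECRET=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$COOKIE_SECRET"
```

Where OpenSSL is installed, openssl rand -hex 32 is an equivalent one-liner.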

Logout: Navigate to /logout to clear the session cookie and return to the login form.

Session duration: Defaults to 24 hours (COOKIE_MAX_AGE=86400). Override in .env:

COOKIE_MAX_AGE=3600 # 1 hour

Note: Do not use form auth and basic auth simultaneously. Choose one method and leave the other disabled (either empty CADDY_BASIC_AUTH or remove the basic auth @protected block from your Caddyfile).


The Docker healthcheck uses /api/health (not /health):

Terminal window
# Inside Docker
curl http://localhost:3000/api/health
# Local development (both work)
curl http://localhost:3090/health
curl http://localhost:3090/api/health

Docker containers cannot use CLAUDE_USE_GLOBAL_AUTH=true — there is no local claude CLI inside the container. You must set credentials explicitly in .env:

Claude (choose one):

# OAuth token — run `claude setup-token` on your local machine, copy the token
CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-xxxxx
# Or API key — from console.anthropic.com/settings/keys
CLAUDE_API_KEY=sk-ant-xxxxx

Codex (alternative):

CODEX_ID_TOKEN=eyJhbGc...
CODEX_ACCESS_TOKEN=eyJhbGc...
CODEX_REFRESH_TOKEN=rt_...
CODEX_ACCOUNT_ID=6a6a7ba6-...

Platform tokens and server settings (set the ones you use):
TELEGRAM_BOT_TOKEN=123456789:ABCdef...
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
DISCORD_BOT_TOKEN=...
GH_TOKEN=ghp_...
GITHUB_TOKEN=ghp_...
WEBHOOK_SECRET=...
PORT=3000 # Default: 3000
DOMAIN=archon.example.com # Required for --profile cloud
LOG_LEVEL=info # fatal|error|warn|info|debug|trace
MAX_CONCURRENT_CONVERSATIONS=10

See .env.example for the full list with documentation.

The container stores all data at /.archon/ (workspaces, worktrees, artifacts, logs, SQLite DB).

By default this is a Docker-managed volume. To store data at a specific location on the host, set ARCHON_DATA in .env:

# Store Archon data at a specific host path
ARCHON_DATA=/opt/archon-data

The directory is created automatically. Make sure the path is writable by UID 1001 (the container user):

Terminal window
mkdir -p /opt/archon-data
sudo chown -R 1001:1001 /opt/archon-data
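To double-check ownership before starting the container, a rough sketch (shown against a temp directory; substitute your real ARCHON_DATA path, where the expected owner is UID 1001):

```shell
# inspect a directory's owning UID (GNU stat first, BSD stat as fallback)
DIR=$(mktemp -d)                                    # stand-in for /opt/archon-data
OWNER_UID=$(stat -c '%u' "$DIR" 2>/dev/null || stat -f '%u' "$DIR")
echo "owner uid: $OWNER_UID"
```

On the real path, any owner other than 1001 means the chown above still needs to run.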

If ARCHON_DATA is not set, Docker manages the volume automatically (archon_data) — data persists across restarts and rebuilds but lives inside Docker’s storage.

GitHub CLI auth: GH_TOKEN from .env is picked up automatically. Alternatively:

Terminal window
docker compose exec app gh auth login

To receive GitHub webhooks once the server is reachable via HTTPS:

  1. Go to https://github.com/<owner>/<repo>/settings/hooks
  2. Add webhook:
    • Payload URL: https://archon.example.com/webhooks/github
    • Content type: application/json
    • Secret: Your WEBHOOK_SECRET from .env
    • Events: Issues, Issue comments, Pull requests

For users who don’t need to build from source:

Terminal window
mkdir archon && cd archon
curl -O https://raw.githubusercontent.com/coleam00/Archon/main/deploy/docker-compose.yml
curl -O https://raw.githubusercontent.com/coleam00/Archon/main/.env.example
cp .env.example .env
# Edit .env — set AI credentials, DOMAIN, etc.
docker compose up -d

Uses ghcr.io/coleam00/archon:latest. To add PostgreSQL, uncomment the postgres service in the compose file and set DATABASE_URL in .env.

To layer custom tools on top of the pre-built image, see Customizing the Image.


The Dockerfile uses three stages:

  1. deps — Installs all dependencies (including devDependencies for the web build)
  2. web-build — Builds the React web UI with Vite
  3. production — Production image with only production dependencies + pre-built web assets
Terminal window
docker build -t archon .
docker run --env-file .env -p 3000:3000 archon

What’s in the image:

  • Runtime: Bun 1.2 (runs TypeScript directly, no compile step)
  • System deps: git, curl, gh (GitHub CLI), postgresql-client, Chromium
  • Browser tooling: agent-browser (Vercel Labs) — enables E2E testing workflows via CDP. Uses system Chromium (AGENT_BROWSER_EXECUTABLE_PATH=/usr/bin/chromium)
  • App: All 10 workspace packages (source), pre-built web UI
  • User: Non-root appuser (UID 1001) — required by Claude Code SDK
  • Archon dirs: /.archon/workspaces, /.archon/worktrees

The multi-stage build keeps the image lean — no devDependencies, test files, docs, or .git/.

To add extra tools without modifying the tracked Dockerfile:

  1. Copy the example:
    • Local/dev: cp Dockerfile.user.example Dockerfile.user
    • Server/deploy: cp deploy/Dockerfile.user.example Dockerfile.user
  2. Edit Dockerfile.user — uncomment and extend the examples as needed.
  3. Copy the override file:
    • Local/dev: cp docker-compose.override.example.yml docker-compose.override.yml
    • Server/deploy: cp deploy/docker-compose.override.example.yml docker-compose.override.yml
  4. Run docker compose up -d — Compose merges the override automatically.

Dockerfile.user and docker-compose.override.yml are gitignored so your customizations stay local.
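For orientation, a minimal override might look like the sketch below. This is an assumption about the layout (the app service name comes from the compose file); the repo’s docker-compose.override.example.yml is the authoritative template:

```yaml
# docker-compose.override.yml -- minimal sketch, not the shipped template
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.user
```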


Terminal window
docker compose logs -f # All services
docker compose logs -f app # App only
docker compose logs --tail=100 app # Last 100 lines
Terminal window
git pull
docker compose --profile with-db --profile cloud up -d --build
Terminal window
docker compose restart # All
docker compose restart app # App only
Terminal window
docker compose down # Stop containers (data preserved)
docker compose down -v # Stop + delete volumes (destructive!)

Migrations run automatically on first startup via 000_combined.sql. When upgrading to a newer version that adds database tables, you need to apply incremental migrations manually:

Terminal window
# Example: apply the env vars migration (required when upgrading to v0.3.x)
docker compose exec postgres psql -U postgres -d remote_coding_agent -f /migrations/020_codebase_env_vars.sql

The migrations/ directory is mounted read-only into the postgres container. Check for any new migration files after pulling updates.
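One way to spot incremental migrations is to list everything in migrations/ other than the combined baseline. Sketched here against a temp fixture so it is self-contained; in practice, just run ls migrations/ in the repo:

```shell
# list migration files beyond the 000_combined.sql baseline
MIG_DIR=$(mktemp -d)                                # stand-in for migrations/
touch "$MIG_DIR/000_combined.sql" "$MIG_DIR/020_codebase_env_vars.sql"

PENDING=$(ls "$MIG_DIR" | grep -v '^000_combined.sql$')
echo "pending: $PENDING"
```

Each file listed this way can then be applied with the docker compose exec postgres psql ... -f /migrations/<file> pattern shown above.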

Terminal window
docker system prune -a # Remove unused images/containers
docker volume prune # Remove unused volumes (caution!)
docker system df # Check disk usage

App won’t start: “no_ai_credentials”


No AI assistant configured. Docker does not support CLAUDE_USE_GLOBAL_AUTH=true. Set one of these in .env:

  • CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-... (run claude setup-token locally to get one)
  • CLAUDE_API_KEY=sk-ant-... (from console.anthropic.com)
  • Or Codex credentials (CODEX_ID_TOKEN, CODEX_ACCESS_TOKEN, etc.)

Caddy fails to start: “not a directory”

error mounting "Caddyfile": not a directory

The Caddyfile doesn’t exist — Docker created a directory in its place. Fix:

Terminal window
rm -rf Caddyfile
cp Caddyfile.example Caddyfile
docker compose --profile cloud up -d
Terminal window
# Check DNS propagation
dig archon.example.com
# Should return your server IP
# Check Caddy logs
docker compose logs caddy
# Check firewall
sudo ufw status
# Ports 80 and 443 must be open

Common causes: DNS not propagated (wait 5-60min), firewall blocking 80/443, domain typo in .env.

The Docker healthcheck uses /api/health (not /health):

Terminal window
curl http://localhost:3000/api/health

When using --profile with-db, ensure:

  1. DATABASE_URL uses postgres as hostname (Docker service name), not localhost:
    DATABASE_URL=postgresql://postgres:postgres@postgres:5432/remote_coding_agent
  2. The postgres container is healthy: docker compose ps postgres
  3. Migrations ran: check docker compose logs postgres for init script output

The container runs as appuser (UID 1001). If using bind mounts instead of Docker volumes:

Terminal window
sudo chown -R 1001:1001 /path/to/archon-data

Default Docker port is 3000 (local dev is 3090). Change in .env:

PORT=3001
Terminal window
docker compose ps
docker compose logs --tail=50 app

Common causes: missing .env file, invalid credentials, database unreachable.