# Docker Compose Deployment
Deploy Artifact Keeper using Docker Compose for a complete, containerized setup.
## Prerequisites
- Docker 20.10+ and Docker Compose 2.0+
- 4 GB RAM minimum (8 GB recommended)
- 100 GB disk space for artifacts
## Quick Start

### 1. Download and Start
```bash
mkdir artifact-keeper && cd artifact-keeper
curl -fsSLO https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker-compose.yml
mkdir docker
curl -fsSLo docker/Caddyfile https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/Caddyfile
curl -fsSLo docker/init-db.sql https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-db.sql
curl -fsSLo docker/init-pg-ssl.sh https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-pg-ssl.sh
curl -fsSLo docker/init-dtrack.sh https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-dtrack.sh
docker compose up -d
```

### 2. Complete First-Boot Setup
```bash
# Check service status
docker compose ps

# Health check
curl http://localhost:30080/health

# Read the generated admin password
docker exec artifact-keeper-backend cat /data/storage/admin.password && echo

# Login, change password, and unlock the API (see Quickstart for full steps)
```

The API is locked until the admin password is changed. See the Quickstart Guide for step-by-step instructions.
To skip the setup lock, set `ADMIN_PASSWORD` in your `.env` file before starting.
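If you want both secrets in place before the first `docker compose up -d`, a small bootstrap sketch helps. The variable names match the Environment Variables section below; the generation commands are plain OpenSSL, and the whole snippet is an illustration, not part of the shipped tooling:

```shell
# Sketch: pre-seed .env so the first boot starts unlocked.
# umask keeps the file private — it holds live credentials.
umask 077
cat > .env <<EOF
JWT_SECRET=$(openssl rand -base64 64 | tr -d '\n')
ADMIN_PASSWORD=$(openssl rand -base64 32 | tr -d '\n')
EOF
```

Record the generated admin password somewhere safe before starting the stack; it is not shown again.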
## Architecture

The default `docker-compose.yml` runs these services:
| Service | Image | Port | Description |
|---|---|---|---|
| Backend | ghcr.io/artifact-keeper/artifact-keeper-backend | 8080 (internal) | Rust API server |
| Web UI | ghcr.io/artifact-keeper/artifact-keeper-web | 3000 (internal) | Next.js frontend |
| Caddy | caddy:2-alpine | 30080 (HTTP), 30443 (HTTPS) | Reverse proxy |
| PostgreSQL | postgres:16-alpine | 30432 | Metadata database |
| OpenSearch | opensearchproject/opensearch:2.19.1 | 9200 | Full-text search |
| Trivy | aquasec/trivy:latest | 8090 | Vulnerability scanning |
Caddy routes:
- `/api/*` and `/health` → Backend (port 8080)
- Everything else → Web UI (port 3000)
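As a rough sketch, this routing looks roughly like the following in Caddyfile terms. The shipped `docker/Caddyfile` is authoritative and does more than this; the service names `backend` and `web` are assumed from the compose setup:

```caddyfile
:30080 {
    # API and health endpoints go to the Rust backend
    @backend path /api/* /health
    reverse_proxy @backend backend:8080

    # Everything else goes to the Next.js frontend
    reverse_proxy web:3000
}
```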
## Alternative Registry: Docker Hub

All images are also published to Docker Hub under the `artifactkeeper` organization. If your environment blocks `ghcr.io`, replace the image references:
| Service | ghcr.io (default) | Docker Hub |
|---|---|---|
| Backend | ghcr.io/artifact-keeper/artifact-keeper-backend | artifactkeeper/backend |
| Web UI | ghcr.io/artifact-keeper/artifact-keeper-web | artifactkeeper/web |
| OpenSCAP | ghcr.io/artifact-keeper/artifact-keeper-openscap | artifactkeeper/openscap |
Tags are identical on both registries. To switch, update the image names in your `docker-compose.yml`:
```yaml
services:
  backend:
    image: artifactkeeper/backend:latest  # was ghcr.io/artifact-keeper/artifact-keeper-backend:latest
  web:
    image: artifactkeeper/web:latest      # was ghcr.io/artifact-keeper/artifact-keeper-web:latest
```

## Production Configuration
### Environment Variables

Create a `.env` file in the project root:
```bash
# Security (REQUIRED for production)
JWT_SECRET=your-secure-64-char-secret-here
# ADMIN_PASSWORD=your-secure-admin-password  # Set to skip first-boot setup lock

# Caddy — set your domain for automatic Let's Encrypt TLS
SITE_ADDRESS=registry.example.com

# Custom ports (optional)
HTTP_PORT=80
HTTPS_PORT=443

# Logging
RUST_LOG=info

# CORS
CORS_ORIGINS=https://registry.example.com
```

Generate secure secrets:

```bash
openssl rand -base64 64  # JWT secret
openssl rand -base64 32  # Admin password
```

### With S3 Storage
Add S3-compatible storage to the backend environment in `docker-compose.yml`:
```yaml
services:
  backend:
    environment:
      STORAGE_BACKEND: s3
      S3_ENDPOINT: https://s3.amazonaws.com
      S3_BUCKET: artifact-keeper
      S3_REGION: us-east-1
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
```

Works with AWS S3, MinIO, DigitalOcean Spaces, and other S3-compatible services.
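For local or air-gapped setups, a MinIO sidecar is one way to get an S3 endpoint without leaving the compose file. This is a sketch, not part of the shipped compose file — the service name, image tag, and volume name are illustrative:

```yaml
services:
  minio:
    image: minio/minio:latest
    command: server /data
    environment:
      MINIO_ROOT_USER: ${AWS_ACCESS_KEY_ID}
      MINIO_ROOT_PASSWORD: ${AWS_SECRET_ACCESS_KEY}
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```

With this, `S3_ENDPOINT` would point at `http://minio:9000`, and the bucket still has to be created once (for example with MinIO's `mc` client).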
### With OpenSCAP Scanning
Enable OpenSCAP compliance scanning (requires local build):
```bash
docker compose --profile scanning up -d
```

## Managing Services
### Start / Stop
```bash
# Start all services
docker compose up -d

# Stop all services
docker compose down

# Stop and remove volumes (WARNING: deletes all data)
docker compose down -v
```

### View Logs
```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f backend

# Last 100 lines
docker compose logs --tail=100 backend
```

### Update to Latest
```bash
docker compose pull
docker compose up -d
```

## Data Persistence
All data is stored in Docker named volumes:
| Volume | Contents |
|---|---|
| `postgres_data` | Database (users, repositories, metadata) |
| `artifact_storage` | Uploaded artifacts |
| `backup_storage` | Automated backups |
| `opensearch_data` | Search indexes |
| `trivy_cache` | Vulnerability database cache |
| `caddy_data` | TLS certificates |
### Backup Database
```bash
docker compose exec postgres pg_dump -U registry artifact_registry | gzip > backup.sql.gz
```

### Restore Database
```bash
gunzip < backup.sql.gz | docker compose exec -T postgres psql -U registry artifact_registry
```

### Backup Artifacts
```bash
docker run --rm \
  -v artifact-keeper_artifact_storage:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/artifacts-backup.tar.gz /data
```

## Health Checks
```bash
# All services
docker compose ps

# Backend health (detailed)
curl -s http://localhost:30080/health | jq

# Database
docker compose exec postgres pg_isready -U registry

# OpenSearch
curl -s -u admin:admin -k https://localhost:9200/_cluster/health | jq
```

## Troubleshooting
### Services Won’t Start
```bash
docker compose logs
docker compose config   # Verify environment variables
```

### Port Conflicts
Change ports in `.env`:
```bash
HTTP_PORT=8080
HTTPS_PORT=8443
```

### Login Works but API Returns 401
If you can submit the login form without errors but the UI redirects back to login (or the network tab shows 401 “Missing authorization header” on API calls), auth cookies are likely being dropped by the browser. This happens when the backend runs in production mode (`ENVIRONMENT=production`, the default) but the browser connects over HTTP instead of HTTPS. Production cookies have the `Secure` flag, which browsers enforce strictly.
Fix: configure TLS on your reverse proxy, or set `ENVIRONMENT=development` on the backend for HTTP-only setups. See Reverse Proxy & TLS for details.
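For an HTTP-only lab, the override can live in a compose override file so the shipped `docker-compose.yml` stays untouched. A sketch using Docker Compose's standard `docker-compose.override.yml` merging:

```yaml
# docker-compose.override.yml — for HTTP-only labs, never production
services:
  backend:
    environment:
      ENVIRONMENT: development
```

Compose merges this file automatically on `docker compose up`, so no extra flags are needed.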
### Out of Disk Space
```bash
docker system df -v     # Check Docker disk usage
docker system prune -a  # Clean unused images/containers
```

### OpenSearch CrashLoopBackOff or Start Failures
OpenSearch stores its data in `/usr/share/opensearch/data`. When upgrading OpenSearch across major versions, an existing index directory written by an older release may be incompatible. Symptoms include:
- Container starts, logs a bootstrap check failure, then exits
- CrashLoopBackOff in Kubernetes with repeated restarts
- Log errors mentioning `vm.max_map_count`, cluster state corruption, or incompatible index metadata
The most common first-run failure on Linux hosts is `vm.max_map_count` being too low. Raise it on the host:
```bash
sudo sysctl -w vm.max_map_count=262144
```

Add `vm.max_map_count=262144` to `/etc/sysctl.conf` to make it permanent.
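A quick pre-flight check saves a crash loop. This sketch (Linux only, not part of the shipped tooling) just reads the live value and tells you whether it needs raising:

```shell
# Warn if the host's vm.max_map_count is below what OpenSearch needs
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -lt "$required" ]; then
    echo "vm.max_map_count=$current is too low; run: sudo sysctl -w vm.max_map_count=$required"
else
    echo "vm.max_map_count=$current is sufficient"
fi
```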
Fix for data corruption (Docker Compose):
```bash
# 1. Search indexes are derived from PostgreSQL, so deleting them is safe.
#    The backend re-indexes automatically on startup.

# 2. Stop OpenSearch and delete the volume
docker compose down opensearch
docker volume rm artifact-keeper_opensearch_data

# 3. Restart. OpenSearch creates a fresh cluster; the backend re-indexes automatically
docker compose up -d
```

Fix for data corruption (Kubernetes):
```bash
# 1. Delete the OpenSearch PVC (adjust namespace and name)
kubectl delete pvc opensearch-data -n <namespace>

# 2. Delete the pod to trigger recreation with a fresh volume
kubectl delete pod -n <namespace> -l app.kubernetes.io/component=opensearch

# 3. The backend re-indexes all artifacts into OpenSearch on reconnection
```

Note: OpenSearch search indexes are derived from PostgreSQL data. Deleting and recreating the OpenSearch volume is safe; the backend rebuilds the search index automatically, and no artifact data is lost.
### OpenSearch High Memory Usage
OpenSearch defaults to a 1 GB JVM heap. On small hosts this can be too much; on large hosts it may be too little to avoid frequent GC. Tune the heap via `OPENSEARCH_JAVA_OPTS`:
```yaml
services:
  opensearch:
    environment:
      OPENSEARCH_JAVA_OPTS: "-Xms2g -Xmx2g"
```

Set `-Xms` and `-Xmx` to the same value, and do not exceed 50% of host RAM. For Kubernetes, set the same env var in your Helm values or StatefulSet spec.
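Picking the number is the hard part. A common heuristic is half of host RAM, capped around 31 GB to stay under the JVM's compressed-oops threshold, and never below the 1 GB default. A sketch of that calculation (Linux, reads `/proc/meminfo`; the heuristic itself is general advice, not a project requirement):

```shell
# Suggest a heap size: half of RAM, clamped to [1 GB, 31 GB]
total_mb=$(( $(awk '/^MemTotal:/ {print $2}' /proc/meminfo) / 1024 ))
heap_mb=$(( total_mb / 2 ))
[ "$heap_mb" -gt 31744 ] && heap_mb=31744
[ "$heap_mb" -lt 1024 ] && heap_mb=1024
echo "OPENSEARCH_JAVA_OPTS=\"-Xms${heap_mb}m -Xmx${heap_mb}m\""
```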
## Production Checklist
- Set strong `JWT_SECRET` (64+ characters)
- Change the admin password on first login (or set `ADMIN_PASSWORD` env var)
- Set `SITE_ADDRESS` to your domain for auto-TLS
- Configure S3 or persistent volume for artifact storage
- Set up automated database backups
- Configure log rotation
- Set resource limits (`deploy.resources` in compose)
- Configure firewall rules
- Set up monitoring and alerting
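For the resource-limits item, this is roughly what `deploy.resources` looks like in a compose file — the numbers are placeholders to size for your workload, and the OpenSearch limit must leave headroom above the JVM heap you configured:

```yaml
services:
  backend:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 2G
  opensearch:
    deploy:
      resources:
        limits:
          memory: 4G
```

Modern `docker compose` (v2) applies these limits without Swarm mode.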