Docker Compose Deployment

Deploy Artifact Keeper using Docker Compose for a complete, containerized setup.

Prerequisites

  • Docker 20.10+ and Docker Compose 2.0+
  • 4 GB RAM minimum (8 GB recommended)
  • 100 GB disk space for artifacts

Quick Start

1. Download and Start

mkdir artifact-keeper && cd artifact-keeper
curl -fsSLO https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker-compose.yml
mkdir docker
curl -fsSLo docker/Caddyfile https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/Caddyfile
curl -fsSLo docker/init-db.sql https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-db.sql
curl -fsSLo docker/init-pg-ssl.sh https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-pg-ssl.sh
curl -fsSLo docker/init-dtrack.sh https://raw.githubusercontent.com/artifact-keeper/artifact-keeper/main/docker/init-dtrack.sh
docker compose up -d

2. Complete First-Boot Setup

# Check service status
docker compose ps
# Health check
curl http://localhost:30080/health
# Read the generated admin password
docker exec artifact-keeper-backend cat /data/storage/admin.password && echo
# Login, change password, and unlock the API (see Quickstart for full steps)

The API is locked until the admin password is changed. See the Quickstart Guide for step-by-step instructions.

To skip the setup lock, set ADMIN_PASSWORD in your .env file before starting.

Architecture

The default docker-compose.yml runs these services:

| Service    | Image                                           | Port                        | Description            |
|------------|-------------------------------------------------|-----------------------------|------------------------|
| Backend    | ghcr.io/artifact-keeper/artifact-keeper-backend | 8080 (internal)             | Rust API server        |
| Web UI     | ghcr.io/artifact-keeper/artifact-keeper-web     | 3000 (internal)             | Next.js frontend       |
| Caddy      | caddy:2-alpine                                  | 30080 (HTTP), 30443 (HTTPS) | Reverse proxy          |
| PostgreSQL | postgres:16-alpine                              | 30432                       | Metadata database      |
| OpenSearch | opensearchproject/opensearch:2.19.1             | 9200                        | Full-text search       |
| Trivy      | aquasec/trivy:latest                            | 8090                        | Vulnerability scanning |

Caddy routes:

  • /api/* and /health → Backend (port 8080)
  • Everything else → Web UI (port 3000)
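This routing corresponds to a Caddyfile along these lines, shown here as a sketch for orientation, not the shipped docker/Caddyfile (which Quick Start downloads):

```
{$SITE_ADDRESS} {
	# API and health checks go to the Rust backend
	handle /api/* {
		reverse_proxy backend:8080
	}
	handle /health {
		reverse_proxy backend:8080
	}
	# Everything else goes to the Next.js web UI
	handle {
		reverse_proxy web:3000
	}
}
```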

Alternative Registry: Docker Hub

All images are also published to Docker Hub under the artifactkeeper organization. If your environment blocks ghcr.io, replace the image references:

| Service  | ghcr.io (default)                                | Docker Hub              |
|----------|--------------------------------------------------|-------------------------|
| Backend  | ghcr.io/artifact-keeper/artifact-keeper-backend  | artifactkeeper/backend  |
| Web UI   | ghcr.io/artifact-keeper/artifact-keeper-web      | artifactkeeper/web      |
| OpenSCAP | ghcr.io/artifact-keeper/artifact-keeper-openscap | artifactkeeper/openscap |

Tags are identical on both registries. To switch, update the image names in your docker-compose.yml:

services:
  backend:
    image: artifactkeeper/backend:latest # was ghcr.io/artifact-keeper/artifact-keeper-backend:latest
  web:
    image: artifactkeeper/web:latest # was ghcr.io/artifact-keeper/artifact-keeper-web:latest
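If you prefer not to edit the file by hand, the substitutions can be scripted. A sketch (the function name is ours; the mappings follow the table above):

```shell
# Print a compose file with ghcr.io image references rewritten to Docker Hub.
switch_to_dockerhub() {
  sed -e 's#ghcr.io/artifact-keeper/artifact-keeper-backend#artifactkeeper/backend#g' \
      -e 's#ghcr.io/artifact-keeper/artifact-keeper-web#artifactkeeper/web#g' \
      -e 's#ghcr.io/artifact-keeper/artifact-keeper-openscap#artifactkeeper/openscap#g' \
      "$1"
}
```

Usage: `switch_to_dockerhub docker-compose.yml > docker-compose.hub.yml`, then review the result before swapping it in.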

Production Configuration

Environment Variables

Create a .env file in the project root:

# Security (REQUIRED for production)
JWT_SECRET=your-secure-64-char-secret-here
# ADMIN_PASSWORD=your-secure-admin-password # Set to skip first-boot setup lock
# Caddy — set your domain for automatic Let's Encrypt TLS
SITE_ADDRESS=registry.example.com
# Custom ports (optional)
HTTP_PORT=80
HTTPS_PORT=443
# Logging
RUST_LOG=info
# CORS
CORS_ORIGINS=https://registry.example.com

Generate secure secrets:

openssl rand -base64 64 | tr -d '\n' # JWT secret (raw base64 output wraps across lines)
openssl rand -base64 32 # Admin password
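To drop generated values straight into .env, something like the following works. Note that base64 output can wrap across lines, so strip newlines before writing an env file (the byte counts here are illustrative):

```shell
# Generate single-line secrets and append them to .env.
jwt_secret=$(head -c 48 /dev/urandom | base64 | tr -d '\n')      # 64 chars
admin_password=$(head -c 24 /dev/urandom | base64 | tr -d '\n')  # 32 chars
printf 'JWT_SECRET=%s\nADMIN_PASSWORD=%s\n' "$jwt_secret" "$admin_password" >> .env
```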

With S3 Storage

Add S3-compatible storage to the backend environment in docker-compose.yml:

services:
  backend:
    environment:
      STORAGE_BACKEND: s3
      S3_ENDPOINT: https://s3.amazonaws.com
      S3_BUCKET: artifact-keeper
      S3_REGION: us-east-1
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

Works with AWS S3, MinIO, DigitalOcean Spaces, and other S3-compatible services.
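For local testing without AWS, a MinIO container can stand in as the S3 endpoint. A sketch to add to docker-compose.yml (service name, ports, and credentials are illustrative; point S3_ENDPOINT at http://minio:9000):

```yaml
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin      # use real secrets outside local testing
      MINIO_ROOT_PASSWORD: minioadmin
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```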

With OpenSCAP Scanning

Enable OpenSCAP compliance scanning (requires local build):

docker compose --profile scanning up -d
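The --profile flag works because optional services are gated behind Compose profiles: a service that declares profiles is skipped by a plain docker compose up. A sketch of the pattern (the build context path is hypothetical):

```yaml
services:
  openscap:
    build: ./docker/openscap   # hypothetical local build context
    profiles:
      - scanning               # started only with: docker compose --profile scanning up -d
```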

Managing Services

Start / Stop

# Start all services
docker compose up -d
# Stop all services
docker compose down
# Stop and remove volumes (WARNING: deletes all data)
docker compose down -v

View Logs

# All services
docker compose logs -f
# Specific service
docker compose logs -f backend
# Last 100 lines
docker compose logs --tail=100 backend

Update to Latest

docker compose pull
docker compose up -d
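After pulling new images it is worth waiting for the backend to report healthy before moving on. A small helper, assuming the default port 30080 (the function name is ours):

```shell
# Poll a health endpoint until it answers, or give up after N tries.
wait_healthy() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 2
  done
  return 1
}
# Usage after an update:
#   docker compose pull && docker compose up -d
#   wait_healthy http://localhost:30080/health && echo "backend healthy"
```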

Data Persistence

All data is stored in Docker named volumes:

| Volume           | Contents                                 |
|------------------|------------------------------------------|
| postgres_data    | Database (users, repositories, metadata) |
| artifact_storage | Uploaded artifacts                       |
| backup_storage   | Automated backups                        |
| opensearch_data  | Search indexes                           |
| trivy_cache      | Vulnerability database cache             |
| caddy_data       | TLS certificates                         |

Backup Database

docker compose exec postgres pg_dump -U registry artifact_registry | gzip > backup.sql.gz
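For scheduled backups, the one-liner above can be wrapped with timestamped filenames and simple retention. A sketch (function names are ours; service and database names assume the default compose file):

```shell
# Dump the database to a timestamped, compressed file.
backup_db() {
  dir=${1:-./backups}
  mkdir -p "$dir"
  docker compose exec -T postgres pg_dump -U registry artifact_registry \
    | gzip > "$dir/db-$(date +%Y%m%d-%H%M%S).sql.gz"
}

# Remove dumps older than N days (default 7).
prune_backups() {
  dir=${1:-./backups}
  days=${2:-7}
  find "$dir" -name 'db-*.sql.gz' -mtime "+$days" -delete
}
# e.g. from cron: backup_db /srv/backups && prune_backups /srv/backups 7
```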

Restore Database

gunzip < backup.sql.gz | docker compose exec -T postgres psql -U registry artifact_registry

Backup Artifacts

docker run --rm \
  -v artifact-keeper_artifact_storage:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/artifacts-backup.tar.gz /data
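The matching restore reverses the operation: mount the same volume and unpack the archive at / so the paths land back under /data (a sketch; the volume name assumes the default project name):

```
docker run --rm \
  -v artifact-keeper_artifact_storage:/data \
  -v $(pwd):/backup \
  alpine sh -c "cd / && tar xzf /backup/artifacts-backup.tar.gz"
```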

Health Checks

# All services
docker compose ps
# Backend health (detailed)
curl -s http://localhost:30080/health | jq
# Database
docker compose exec postgres pg_isready -U registry
# OpenSearch
curl -s -u admin:admin -k https://localhost:9200/_cluster/health | jq

Troubleshooting

Services Won’t Start

docker compose logs
docker compose config # Verify environment variables

Port Conflicts

Change ports in .env:

HTTP_PORT=8080
HTTPS_PORT=8443

Login Works but API Returns 401

If you can submit the login form without errors but the UI redirects back to login (or the network tab shows 401 “Missing authorization header” on API calls), auth cookies are likely being dropped by the browser. This happens when the backend runs in production mode (ENVIRONMENT=production, the default) but the browser connects over HTTP instead of HTTPS. Production cookies have the Secure flag, which browsers enforce strictly.

Fix: configure TLS on your reverse proxy, or set ENVIRONMENT=development on the backend for HTTP-only setups. See Reverse Proxy & TLS for details.
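For an HTTP-only setup, the backend override looks like this in docker-compose.yml (per the note above, development mode lets the browser send auth cookies over plain HTTP; do not use it in production):

```yaml
services:
  backend:
    environment:
      ENVIRONMENT: development   # allows auth cookies over plain HTTP
```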

Out of Disk Space

docker system df -v # Check Docker disk usage
docker system prune -a # Clean unused images/containers

OpenSearch CrashLoopBackOff or Start Failures

OpenSearch stores its data in /usr/share/opensearch/data. When upgrading OpenSearch across major versions, an existing index directory written by an older release may be incompatible. Symptoms include:

  • Container starts, logs a bootstrap check failure, then exits
  • CrashLoopBackOff in Kubernetes with repeated restarts
  • Log errors mentioning vm.max_map_count, cluster state corruption, or incompatible index metadata

The most common first-run failure on Linux hosts is vm.max_map_count being too low. Raise it on the host:

sudo sysctl -w vm.max_map_count=262144

Add vm.max_map_count=262144 to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) to make the setting permanent.

Fix for data corruption, Docker Compose:

# Search indexes are derived from PostgreSQL, so deleting them is safe;
# the backend re-indexes automatically on startup.
# 1. Stop OpenSearch and remove its container
docker compose down opensearch
# 2. Delete the volume
docker volume rm artifact-keeper_opensearch_data
# 3. Restart; OpenSearch creates a fresh cluster and the backend re-indexes
docker compose up -d

Fix for data corruption, Kubernetes:

# 1. Delete the OpenSearch PVC (adjust namespace and name)
kubectl delete pvc opensearch-data -n <namespace>
# 2. Delete the pod to trigger recreation with a fresh volume
kubectl delete pod -n <namespace> -l app.kubernetes.io/component=opensearch
# 3. The backend re-indexes all artifacts into OpenSearch on reconnection

Note: OpenSearch search indexes are derived from PostgreSQL data. Deleting and recreating the OpenSearch volume is safe. The backend rebuilds the search index automatically. No artifact data is lost.

OpenSearch High Memory Usage

OpenSearch defaults to a 1 GB JVM heap. On small hosts this can be too much; on large hosts it may be too little to avoid frequent GC. Tune the heap via OPENSEARCH_JAVA_OPTS:

docker-compose.yml
services:
  opensearch:
    environment:
      OPENSEARCH_JAVA_OPTS: "-Xms2g -Xmx2g"

Set -Xms and -Xmx to the same value, and do not exceed 50% of host RAM. For Kubernetes, set the same env var in your Helm values or StatefulSet spec.
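A quick way to pick a starting value on Linux: take half of host RAM, capped at roughly 31 GB so the JVM can keep using compressed object pointers. A sketch (the function name and the cap are ours):

```shell
# Print a suggested OpenSearch heap size in MB: half of host RAM, capped at 31 GB.
suggest_heap_mb() {
  total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  half_mb=$(( total_kb / 1024 / 2 ))
  cap_mb=$(( 31 * 1024 ))
  if [ "$half_mb" -gt "$cap_mb" ]; then
    echo "$cap_mb"
  else
    echo "$half_mb"
  fi
}
# e.g. OPENSEARCH_JAVA_OPTS="-Xms$(suggest_heap_mb)m -Xmx$(suggest_heap_mb)m"
```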

Production Checklist

  • Set strong JWT_SECRET (64+ characters)
  • Change the admin password on first login (or set ADMIN_PASSWORD env var)
  • Set SITE_ADDRESS to your domain for auto-TLS
  • Configure S3 or persistent volume for artifact storage
  • Set up automated database backups
  • Configure log rotation
  • Set resource limits (deploy.resources in compose)
  • Configure firewall rules
  • Set up monitoring and alerting
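For the resource-limits item, Compose supports deploy.resources per service. A sketch with illustrative values; size the limits to your workload:

```yaml
services:
  backend:
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 2G
  opensearch:
    deploy:
      resources:
        limits:
          memory: 3G   # JVM heap (see above) plus off-heap overhead
```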