Deployment
Docker
zimgx ships as a multi-arch Docker image (linux/amd64, linux/arm64) built on Alpine Linux.
Pull from GHCR
```bash
docker pull ghcr.io/officialunofficial/zimgx:latest
```

HTTP origin
```bash
docker run -p 8080:8080 \
  -e ZIMGX_ORIGIN_TYPE=http \
  -e ZIMGX_ORIGIN_BASE_URL=https://images.example.com \
  ghcr.io/officialunofficial/zimgx:latest
```

R2 origin
```bash
docker run -p 8080:8080 \
  -e ZIMGX_ORIGIN_TYPE=r2 \
  -e ZIMGX_R2_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com \
  -e ZIMGX_R2_ACCESS_KEY_ID=your-access-key \
  -e ZIMGX_R2_SECRET_ACCESS_KEY=your-secret-key \
  -e ZIMGX_R2_BUCKET_ORIGINALS=originals \
  -e ZIMGX_R2_BUCKET_VARIANTS=variants \
  ghcr.io/officialunofficial/zimgx:latest
```

Environment file
```bash
docker run -p 8080:8080 --env-file .env ghcr.io/officialunofficial/zimgx:latest
```

Build locally
```bash
docker build -t zimgx .
docker run -p 8080:8080 -e ZIMGX_ORIGIN_BASE_URL=https://images.example.com zimgx
```

The Dockerfile uses a two-stage build:
- Build stage — Alpine with Zig and vips-dev, compiles with ReleaseSafe
- Runtime stage — Alpine with only the vips runtime library (final image is approximately 50 MiB)
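The repository's actual Dockerfile is not reproduced here, but the two stages described above follow a familiar pattern. A sketch, with illustrative package names and paths (not copied from the repo):

```dockerfile
# Build stage: Zig toolchain plus libvips development headers
FROM alpine:3.20 AS build
RUN apk add --no-cache zig vips-dev
WORKDIR /src
COPY . .
RUN zig build -Doptimize=ReleaseSafe

# Runtime stage: only the vips shared library, which keeps the final image small
FROM alpine:3.20
RUN apk add --no-cache vips
COPY --from=build /src/zig-out/bin/zimgx /usr/local/bin/zimgx
EXPOSE 8080
CMD ["zimgx"]
```

The payoff of the split is that the Zig compiler and header packages never reach the runtime layer.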
Docker Compose
```yaml
services:
  zimgx:
    image: ghcr.io/officialunofficial/zimgx:latest
    ports:
      - "8080:8080"
    environment:
      ZIMGX_ORIGIN_TYPE: http
      ZIMGX_ORIGIN_BASE_URL: https://images.example.com
      ZIMGX_CACHE_ENABLED: "true"
      ZIMGX_CACHE_MAX_SIZE_BYTES: "536870912"
      ZIMGX_SERVER_PORT: "8080"
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    restart: unless-stopped
```

Build from source
Requirements:
- Zig 0.15.0 or later
- libvips 8.18.0 or later (with development headers)
macOS
```bash
brew install vips
zig build -Doptimize=ReleaseSafe
./zig-out/bin/zimgx
```

Alpine Linux
```bash
apk add vips-dev
zig build -Doptimize=ReleaseSafe
./zig-out/bin/zimgx
```

Health checks and probes
zimgx exposes three endpoints for monitoring.
/health
Returns 200 with {"status":"ok"} when the server is running. Use this for Docker HEALTHCHECK, load balancer health checks, and uptime monitors.
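When building a custom image rather than using Compose, the same probe can be baked in as a Dockerfile `HEALTHCHECK`. A sketch mirroring the Compose healthcheck shown in this page:

```dockerfile
# wget --spider fetches headers only and exits non-zero if /health is unreachable.
HEALTHCHECK --interval=10s --timeout=5s --retries=3 --start-period=5s \
  CMD wget --spider -q http://localhost:8080/health || exit 1
```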
```bash
curl http://localhost:8080/health
# {"status":"ok"}
```

/ready
Returns 200 with {"ready":true} when the server is ready to accept requests. Use this for Kubernetes readiness probes.
```bash
curl http://localhost:8080/ready
# {"ready":true}
```

/metrics
Returns 200 with JSON statistics:
```json
{
  "requests_total": 1042,
  "cache_hits": 891,
  "cache_misses": 151,
  "cache_entries": 148,
  "uptime_seconds": 3600
}
```

| Field | Description |
|---|---|
| `requests_total` | Total HTTP requests served since startup |
| `cache_hits` | Requests served from cache |
| `cache_misses` | Requests that required an origin fetch |
| `cache_entries` | Current entries in the L1 memory cache |
| `uptime_seconds` | Seconds since server startup |
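The counters can be turned into a cache hit rate with POSIX tools alone. A sketch using the sample payload above (in production, replace the hard-coded string with `metrics=$(curl -s http://localhost:8080/metrics)`):

```bash
# Sample /metrics payload from above; substitute a live curl in production.
metrics='{"requests_total":1042,"cache_hits":891,"cache_misses":151,"cache_entries":148,"uptime_seconds":3600}'

# Split the JSON on commas and braces, then pick out the two cache counters.
hit_rate=$(printf '%s\n' "$metrics" | tr ',{}' '\n\n\n' |
  awk -F: '/cache_hits/ {h=$2} /cache_misses/ {m=$2} END {printf "%d", 100 * h / (h + m)}')

echo "cache hit rate: ${hit_rate}%"
```

With the sample counters this prints `cache hit rate: 85%`. If `jq` is available, `jq '.cache_hits / (.cache_hits + .cache_misses)'` is tidier.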
Kubernetes
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zimgx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: zimgx
  template:
    metadata:
      labels:
        app: zimgx
    spec:
      containers:
        - name: zimgx
          image: ghcr.io/officialunofficial/zimgx:latest
          ports:
            - containerPort: 8080
          env:
            - name: ZIMGX_ORIGIN_TYPE
              value: "http"
            - name: ZIMGX_ORIGIN_BASE_URL
              value: "https://images.example.com"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 3
            periodSeconds: 5
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
```

CDN
zimgx is designed to sit behind a CDN. Image responses include:
- `Cache-Control: public, max-age=<ttl>` — controlled by `ZIMGX_CACHE_DEFAULT_TTL_SECONDS`
- `ETag` — content-based hash for conditional requests (304 Not Modified)
- `Vary: Accept` — tells the CDN to cache separate variants per format negotiation
Configure your CDN to forward the `Accept` header and respect `Vary: Accept` for correct content negotiation.
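As one concrete illustration (an nginx reverse-proxy sketch, not part of zimgx itself; upstream address and cache sizes are placeholders), forwarding `Accept` and letting the cache honor `Vary` looks like:

```nginx
proxy_cache_path /var/cache/nginx/zimgx keys_zone=zimgx:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache zimgx;
        # Pass the client's Accept header through so zimgx can negotiate
        # the output format; nginx stores separate cache variants when the
        # upstream response carries Vary: Accept.
        proxy_set_header Accept $http_accept;
    }
}
```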