

Overview

Reactive Resume can be self-hosted using Docker in a matter of minutes, and this guide will walk you through the process. Here are some of the services you’ll need to get started:

PostgreSQL

Stores accounts, resumes, and application data.

Printer

Generates PDFs and screenshots using a headless Chromium browser.

Email (optional)

SMTP for verification emails, password reset, etc. If not configured, emails are logged to the server console.

Storage (optional)

Use S3-compatible storage, or local persistent storage via /app/data.
You can pull the latest app image from:
  • Docker Hub: amruthpillai/reactive-resume:latest
  • GitHub Container Registry: ghcr.io/amruthpillai/reactive-resume:latest

Minimum requirements

Docker + Docker Compose

Docker Engine + Docker Compose plugin (or Docker Desktop).

Compute

2 vCPU / 2 GB RAM minimum (4 GB recommended if Postgres + Printer run on the same host).

Storage

Enough for Postgres + uploads (start with 10-20 GB and scale as needed).

Quickstart using Docker Compose

Create a new folder (for example reactive-resume/) with:
  • compose.yml
  • .env
  • a persistent data directory for uploads (for example ./data)
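As a quick sketch (the directory name is just a suggestion), the layout above can be created with:

```shell
# Create the project skeleton: a folder holding the compose file,
# the environment file, and a persistent uploads directory.
mkdir -p reactive-resume/data
cd reactive-resume
touch compose.yml .env
```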

Step 1: Create your .env

Start by creating a .env file next to your compose.yml.
.env
# --- Server ---
TZ="Etc/UTC"
APP_URL="http://localhost:3000"

# Optional, uses APP_URL by default
# This can be set to a different URL (like http://host.docker.internal:3000 or http://{docker_service}:3000)
# to let the browser navigate to a non-public instance of Reactive Resume
PRINTER_APP_URL="http://host.docker.internal:3000"

# --- Printer ---
# Keep this token in sync with the Browserless TOKEN value.
BROWSERLESS_TOKEN="change-me"
PRINTER_ENDPOINT="ws://printer:3000?token=change-me"

# --- Database (PostgreSQL) ---
DATABASE_URL="postgresql://postgres:postgres@postgres:5432/postgres"

# --- Authentication ---
# Generated using `openssl rand -hex 32`
AUTH_SECRET=""
# Better Auth dashboard API key (optional)
BETTER_AUTH_API_KEY=""

# Social Auth (Google, optional)
GOOGLE_CLIENT_ID=""
GOOGLE_CLIENT_SECRET=""

# Social Auth (GitHub, optional)
GITHUB_CLIENT_ID=""
GITHUB_CLIENT_SECRET=""

# Social Auth (LinkedIn, optional)
LINKEDIN_CLIENT_ID=""
LINKEDIN_CLIENT_SECRET=""

# Custom OAuth Provider
OAUTH_PROVIDER_NAME=""
OAUTH_CLIENT_ID=""
OAUTH_CLIENT_SECRET=""
# Use EITHER discovery URL (preferred for OIDC-compliant providers):
OAUTH_DISCOVERY_URL=""
# OR manual URLs (all three required if not using discovery):
OAUTH_AUTHORIZATION_URL=""
OAUTH_TOKEN_URL=""
OAUTH_USER_INFO_URL=""
OAUTH_DYNAMIC_CLIENT_REDIRECT_HOSTS=""
# Custom scopes (space-separated, defaults to "openid profile email")
OAUTH_SCOPES=""

# Optional Better Auth runtime overrides for advanced deployments:
# BETTER_AUTH_URL="https://auth.example.com"
# BETTER_AUTH_SECRET=""

# --- AI (optional) ---
# Comma-separated hostnames/origins for custom AI base URLs
# Example: api.openai.com,https://gateway.ai.vercel.com
AI_ALLOWED_BASE_URLS=""

# --- Email (optional) ---
# If these keys are left empty, the app logs outgoing emails to the server console instead.
SMTP_HOST=""
SMTP_PORT="587"
SMTP_USER=""
SMTP_PASS=""
SMTP_FROM="Reactive Resume <[email protected]>"
SMTP_SECURE="false"

# --- Storage (optional) ---
# If these keys are left empty, the app stores uploads on the local filesystem (usually /app/data) instead.
# Make sure to mount this directory to a volume or the host filesystem so uploads persist across container recreations.
S3_ACCESS_KEY_ID=""
S3_SECRET_ACCESS_KEY=""
S3_REGION="us-east-1"
S3_ENDPOINT=""
S3_BUCKET=""
# Set to "true" for path-style URLs (https://endpoint/bucket), common with MinIO, SeaweedFS, etc.
# Set to "false" for virtual-hosted-style URLs (https://bucket.endpoint), common with AWS S3, Cloudflare R2, etc.
S3_FORCE_PATH_STYLE="false"

# --- Feature Flags ---
FLAG_DEBUG_PRINTER="false"
FLAG_DISABLE_SIGNUPS="false"
FLAG_DISABLE_EMAIL_AUTH="false"
FLAG_DISABLE_IMAGE_PROCESSING="false"

Step 2: Generate AUTH_SECRET

Generate a strong secret and paste it into AUTH_SECRET.
openssl rand -hex 32
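To generate the secret and write it into .env in one step, a small sketch (GNU sed is assumed for the in-place edit; the .env layout matches the example above):

```shell
# Generate a 64-character hex secret and write it into .env
# (creates the AUTH_SECRET line if the file does not have one yet).
secret="$(openssl rand -hex 32)"
touch .env
if grep -q '^AUTH_SECRET=' .env; then
  sed -i "s/^AUTH_SECRET=.*/AUTH_SECRET=\"$secret\"/" .env
else
  printf 'AUTH_SECRET="%s"\n' "$secret" >> .env
fi
grep '^AUTH_SECRET=' .env
```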

Step 3: Create compose.yml

This setup runs Postgres + Printer + Reactive Resume on a private Docker network.
services:
  postgres:
    image: postgres:latest
    restart: unless-stopped
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres_data:/var/lib/postgresql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
      interval: 10s
      timeout: 5s
      retries: 10

  printer:
    image: ghcr.io/browserless/chromium:latest
    restart: unless-stopped
    ports:
      - "4000:3000"
    environment:
      - HEALTH=true
      - CONCURRENT=20
      - QUEUED=10
      - TOKEN=${BROWSERLESS_TOKEN}
    healthcheck:
      test: ["CMD-SHELL", 'curl -fsS "http://localhost:3000/pressure?token=${BROWSERLESS_TOKEN}" > /dev/null']
      interval: 10s
      timeout: 5s
      retries: 10

  reactive-resume:
    image: amruthpillai/reactive-resume:latest
    # image: ghcr.io/amruthpillai/reactive-resume:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    env_file:
      - .env
    volumes:
      # Used when S3 is not configured; keeps uploads persistent
      - ./data:/app/data
    depends_on:
      postgres:
        condition: service_healthy
      printer:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  postgres_data:
Alternative printer options: If you don’t want to use Browserless, you can also use a lightweight headless Chrome Docker image such as chromedp/headless-shell:
chrome:
  image: chromedp/headless-shell:latest
  restart: unless-stopped
  ports:
    - "9222:9222"
Then set PRINTER_ENDPOINT to http://chrome:9222 (or http://localhost:9222 if running outside Docker Compose). This provides the same PDF/screenshot generation functionality with a smaller image footprint.
Prefer pulling from Docker Hub? Keep amruthpillai/reactive-resume:latest. Prefer GHCR? Swap it to ghcr.io/amruthpillai/reactive-resume:latest.
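On Linux, the PRINTER_APP_URL="http://host.docker.internal:3000" example from the .env above only works if the printer container can resolve host.docker.internal; a sketch of the extra mapping for the printer service (Docker Desktop on macOS/Windows provides this alias automatically):

```yaml
# Compose fragment (Linux only): make host.docker.internal resolve
# inside the printer container by mapping it to the Docker host gateway.
printer:
  extra_hosts:
    - "host.docker.internal:host-gateway"
```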

Step 4: Start the stack

docker compose up -d
Reactive Resume should now be available at your APP_URL (for the example above: http://localhost:3000).

How startup works (database migrations)

On every start, the server automatically runs database migrations before serving traffic. If migrations fail (usually due to a DB connection issue), the container will exit with an error.

Environment variables

Required

  • APP_URL
  • DATABASE_URL
  • PRINTER_ENDPOINT
  • AUTH_SECRET

Optional

  • SMTP (SMTP_*)
  • Social auth (GOOGLE_*, GITHUB_*, LINKEDIN_*, OAUTH_*)
  • S3 storage (S3_*)
  • AI URL allowlist (AI_ALLOWED_BASE_URLS)
  • Feature flags (FLAG_*)

Details

  • TZ: Sets the container timezone (affects logs and server-side timestamps). Recommended: Etc/UTC.
  • APP_URL: Canonical/public URL for your instance (used for absolute URLs, redirects, and auth flows). If behind a reverse proxy, set this to your public HTTPS URL (for example, https://resume.example.com).
  • PRINTER_APP_URL (optional): Overrides the base URL used when rendering the print route for the printer. Defaults to APP_URL. Useful when the printer must access the app via a different internal URL (for example, http://host.docker.internal:3000).
  • PRINTER_ENDPOINT: Endpoint where Reactive Resume connects to the printer browser.
    • Recommended (Browserless): ws://printer:3000?token=..., keeping the token value in sync with the Browserless TOKEN.
    • Also supported: http://chrome:9222 for Chrome DevTools Protocol endpoints.
Alternative to Browserless: use a chromedp/headless-shell service as shown under “Create compose.yml” above, and point PRINTER_ENDPOINT at it.
  • DATABASE_URL: Postgres connection string in the format postgresql://USER:PASSWORD@HOST:PORT/DATABASE.
    • In Docker Compose, set HOST to the Postgres service name (e.g. postgres), not localhost.
    • If your password contains special characters (@, #, :), URL-encode it.
    • For managed Postgres, add provider-specific parameters (for example ?sslmode=require) when needed.
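One way to URL-encode a password before embedding it in DATABASE_URL (python3 is assumed to be available on the host; the password here is a made-up example):

```shell
# URL-encode a password containing @, :, and # so it is safe to embed
# in the postgresql:// connection string.
python3 -c 'import urllib.parse; print(urllib.parse.quote("p@ss:w#rd", safe=""))'
# prints: p%40ss%3Aw%23rd
```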
  • AUTH_SECRET: Secret used to secure authentication. Changing it invalidates existing sessions. Generate with openssl rand -hex 32.
  • GOOGLE_CLIENT_ID / GOOGLE_CLIENT_SECRET (optional): Enables Google sign-in.
  • GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET (optional): Enables GitHub sign-in.
  • LINKEDIN_CLIENT_ID / LINKEDIN_CLIENT_SECRET (optional): Enables LinkedIn sign-in.
  • BETTER_AUTH_API_KEY (optional): Enables Better Auth dashboard integrations.
  • BETTER_AUTH_URL (optional, advanced): Overrides the auth base URL if it must differ from APP_URL (for split-host deployments).
  • BETTER_AUTH_SECRET (optional, advanced): Overrides AUTH_SECRET for Better Auth internals.
  • Custom OAuth provider (optional):
  • OAUTH_PROVIDER_NAME: Display name in the UI
  • OAUTH_CLIENT_ID / OAUTH_CLIENT_SECRET: Required for any custom OAuth provider
  • OAUTH_DYNAMIC_CLIENT_REDIRECT_HOSTS: Comma-separated allowlist for extra dynamic OAuth redirect hosts/origins (HTTPS only, non-private hosts).
  • OAUTH_SCOPES: Space-separated scopes (defaults to openid profile email)
Configure endpoints using one of these methods:
  • Option A — OIDC Discovery (preferred): Set OAUTH_DISCOVERY_URL to your provider’s .well-known/openid-configuration URL
  • Option B — Manual URLs: Set all three: OAUTH_AUTHORIZATION_URL, OAUTH_TOKEN_URL, and OAUTH_USER_INFO_URL
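As a sketch of Option A, a custom OIDC provider wired up via discovery might look like this in .env (all values here are hypothetical placeholders):

```shell
# .env fragment: custom OAuth provider using OIDC discovery.
OAUTH_PROVIDER_NAME="Example SSO"
OAUTH_CLIENT_ID="my-client-id"
OAUTH_CLIENT_SECRET="my-client-secret"
OAUTH_DISCOVERY_URL="https://sso.example.com/.well-known/openid-configuration"
```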
If SMTP is not configured, the app logs emails to the server console instead of sending them.
  • Email delivery is enabled only when all of SMTP_HOST, SMTP_USER, SMTP_PASS, and SMTP_FROM are set.
  • SMTP_HOST: SMTP host (if empty, email sending is disabled).
  • SMTP_PORT: Defaults to 587 in the app.
  • SMTP_USER / SMTP_PASS: SMTP credentials.
  • SMTP_FROM: Default from address (for example, Reactive Resume <[email protected]>).
  • SMTP_SECURE: "true" or "false" (string). Match your provider settings.
  • Default (local): If all S3_* values are empty, uploads are stored under <cwd>/data (usually /app/data in the official image).
  • Mount the local uploads directory to persistent storage (for example ./data:/app/data); otherwise uploads can be lost when the container is recreated.
  • S3/S3-compatible: Configure S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY, S3_REGION, S3_ENDPOINT, and S3_BUCKET.
  • S3_FORCE_PATH_STYLE controls bucket addressing (defaults to "false"):
    • "true" for path-style URLs (https://endpoint/bucket) common with MinIO/SeaweedFS.
    • "false" for virtual-hosted-style URLs (https://bucket.endpoint) common with AWS S3 / Cloudflare R2.
  • AI_ALLOWED_BASE_URLS: Comma-separated hosts or origins allowed as custom AI API base URLs.
    • Use this when routing AI requests through your own gateway/proxy.
    • Example: api.openai.com,https://gateway.ai.vercel.com
  • FLAG_DEBUG_PRINTER: Bypasses the printer-only access restriction (useful when debugging /printer/{resumeId}). Recommended: keep "false" in production.
  • FLAG_DISABLE_SIGNUPS: Disables new signups (web app and server). Useful for private instances.
  • FLAG_DISABLE_EMAIL_AUTH: Disables email/password login entirely. Also disables email verification, forgot password, and reset password flows. Users can still sign up via social auth (Google/GitHub/LinkedIn/Custom OAuth), unless FLAG_DISABLE_SIGNUPS is also set to true. Useful when only SSO is required.
  • FLAG_DISABLE_IMAGE_PROCESSING: Disables image processing. This is useful if you are using a machine with limited resources, like a Raspberry Pi.
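For example, a private, SSO-only instance could combine the flags like this (a .env sketch based on the flag descriptions above):

```shell
# .env fragment: disable email/password login entirely, but keep
# signups open so users can still register via social/OAuth providers.
FLAG_DISABLE_EMAIL_AUTH="true"
FLAG_DISABLE_SIGNUPS="false"
```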

Updating your installation

To update your Reactive Resume installation to the latest available version, follow these steps:
  1. Back up your database and uploads first (highly recommended before every update).
  2. Pull the latest images for all services defined in your Docker Compose file.
    docker compose pull
    
  3. Restart the containers to run the new images.
    docker compose up -d
    
  4. Check migration/startup logs after deploy.
    docker compose logs -f reactive-resume
    
  5. (Optional) Remove old, unused Docker images to free up disk space.
    docker image prune -f
    
This process updates the app services and automatically runs DB migrations on startup. If a migration fails, restore from backup and fix the configuration before retrying.

Backups

Regular backups are essential to protect your data. Reactive Resume stores data in two places: the PostgreSQL database and file uploads (either local storage or S3).

Database backups

Your PostgreSQL database contains all user accounts, resumes, and application data. For self-hosted deployments, you can use pg_dump to create periodic backups of your database and store them in a secure location. Many hosting providers also offer automated backup solutions for managed PostgreSQL instances, which handle scheduling, retention, and restoration for you.

Upload backups

If you’re using local storage (the ./data directory), include this directory in your regular backup routine. A simple approach is to use rsync or a similar tool to copy the directory to a remote server or cloud storage. If you’re using S3-compatible storage, consider enabling versioning on your bucket to protect against accidental deletions. Most S3 providers also support lifecycle rules for automatic cleanup of old versions and cross-region replication for disaster recovery.
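As one minimal option for local uploads (paths are examples; rsync or any similar tool works equally well), a dated tarball can be created like this:

```shell
# Demo: snapshot the local uploads directory into a dated tarball.
mkdir -p data   # ./data is the uploads directory from the compose example
tar czf "uploads-backup-$(date +%F).tar.gz" data/
tar tzf "uploads-backup-$(date +%F).tar.gz"
```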

Health Checks

Reactive Resume exposes a health check endpoint at /api/health that verifies the application and its dependencies. It checks database, printer, and storage; if any one is unhealthy, the endpoint returns HTTP 503.

How it works

The Docker Compose configuration includes a health check that periodically calls the /api/health endpoint:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
  interval: 30s
  timeout: 10s
  retries: 3
When the health check fails, Docker marks the container as unhealthy. This status is visible when running docker compose ps or docker ps.

Reverse proxy integration

Most reverse proxies (such as Traefik, Caddy, or nginx with upstream health checks) can use Docker’s health status to make routing decisions:
  • Healthy containers receive traffic as normal
  • Unhealthy containers are automatically removed from the load balancer pool
This is particularly useful in high-availability setups where you have multiple instances of Reactive Resume. If one instance becomes unhealthy (for example, it loses database, printer, or storage connectivity), the reverse proxy will stop routing traffic to it until it recovers.
If you’re using Traefik, it automatically respects Docker health checks when using the Docker provider. Unhealthy containers are excluded from routing without any additional configuration.
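As a sketch (hostname and router name are placeholders), exposing the app through Traefik’s Docker provider could look like this on the reactive-resume service:

```yaml
# Compose fragment: route resume.example.com to the app via Traefik.
# Traefik excludes this container from routing while Docker reports it unhealthy.
reactive-resume:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.rxresume.rule=Host(`resume.example.com`)"
    - "traefik.http.services.rxresume.loadbalancer.server.port=3000"
```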

Manually checking health

You can manually verify the health of your Reactive Resume instance:
# From outside the container
curl -f http://localhost:3000/api/health

# Check Docker's health status
docker compose ps
A healthy response returns HTTP 200. Any other response (or a connection failure) indicates a problem; investigate the JSON response body and the container logs.

Troubleshooting

The app container exits or restarts on startup

  • Common cause: database migrations failed (often a bad DATABASE_URL).
  • What to do: check the logs for migration errors and database connectivity details:
    docker compose logs -f reactive-resume

Login, redirects, or links point to the wrong URL

  • Common cause: APP_URL doesn’t match the URL you’re actually using (especially behind a reverse proxy), or you’re serving HTTPS while APP_URL is http://....
  • Fix: set APP_URL to your canonical public HTTPS URL and restart the container.

PDF or screenshot generation fails

  • Common cause: Reactive Resume can’t reach the printer, the tokens don’t match, or the printer can’t reach your app.
  • Checks:
    • PRINTER_ENDPOINT should usually be ws://printer:3000?token=... in Compose.
    • The Browserless TOKEN and the token in PRINTER_ENDPOINT must match.
    • If you use PRINTER_APP_URL="http://host.docker.internal:3000" on Linux, set extra_hosts: ["host.docker.internal:host-gateway"] for the printer service.

/api/health returns 503

  • Common cause: the printer or storage health check failed (not only the database).
  • Fix: inspect the endpoint response payload and check the printer / storage fields:
    curl -s http://localhost:3000/api/health

Uploads disappear after recreating the container

  • Cause: local upload storage wasn’t mounted to a persistent volume.
  • Fix: add a volume mount like ./data:/app/data and redeploy.

Emails are not being sent

  • Expected behavior: if SMTP isn’t fully configured, the app logs emails to the console instead.
  • Fix: set SMTP_HOST, SMTP_USER, SMTP_PASS, and SMTP_FROM, then verify SMTP_PORT and SMTP_SECURE.

Custom OAuth redirect is rejected

  • Common cause: the redirect host is not trusted for dynamic client registration.
  • Fix: add trusted HTTPS hosts/origins to OAUTH_DYNAMIC_CLIENT_REDIRECT_HOSTS.

S3 uploads fail with a DNS error

  • Common cause: the S3 client is using virtual-hosted-style addressing (prepending the bucket name to the endpoint), but your S3-compatible storage expects path-style addressing.
  • Symptom: an error like getaddrinfo ENOTFOUND mybucket.s3-server.com when your endpoint is s3-server.com.
  • Fix: set S3_FORCE_PATH_STYLE="true" in your environment. This is required for most self-hosted S3-compatible services like MinIO, SeaweedFS, etc.