
Deployment Guide

TRW runs entirely offline by default. No server, no account, no internet connection required. Deploy the platform only when your team wants shared learnings, telemetry dashboards, CI/CD integration, or centralized management; solo developers using TRW locally don't need to deploy anything.

Local-Only (Default)

When you install TRW, everything runs locally via MCP. Your AI agent communicates with the TRW MCP server over stdio. All data stays in your project directory.

terminal
# Install TRW in your project
pip install trw-mcp
trw-mcp init-project .

# That's it. Start a Claude Code session and call:
# trw_session_start()

All state lives in the .trw/ directory at your project root:

Path                Contents
.trw/config.yaml    Project configuration.
.trw/learnings/     YAML learning entries, accumulated across sessions.
.trw/context/       Session state, analytics, ceremony feedback.
.trw/logs/          JSONL event logs, LLM usage, crash reports.

Tip

Commit .trw/ to version control. Learnings are designed to be shared with your team via git.

When You Need the Platform

Deploy the TRW platform when your team needs:

  • Cross-project learning sync — Share discoveries across repos without committing to each one.
  • Telemetry dashboards — See tool usage, session patterns, and learning trends across your organization.
  • Team management — Organizations, API keys, user roles, and access control.
  • Centralized installer — Host your own release endpoint so teammates install with a single curl.
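With a centralized installer, teammates can bootstrap with a single command. A sketch only: the URL below is a hypothetical placeholder for your self-hosted release endpoint, not a real TRW address.

```shell
# Hypothetical self-hosted release endpoint -- substitute your own host
curl -fsSL https://trw.internal.example.com/install.sh | sh

# Then initialize a project as usual
trw-mcp init-project .
```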

Deployment Decision Guide

Mode             Use case                                      Setup time   Requirements
Local-Only       Solo dev, single machine, privacy-first       < 2 min      Python 3.10+, pip
Docker Compose   Team eval, shared dashboards, local network   ~10 min      Docker, 4 GB RAM
Production       Org-wide rollout, SSO, telemetry at scale     ~1 hr        Cloud host, PostgreSQL, TLS, DNS

Docker Compose

The fastest way to run the full platform stack. This starts three services: the FastAPI backend, PostgreSQL database, and Next.js frontend.

1. Create an environment file:

.env
# Required
DATABASE_URL=postgresql://trw:secret@db:5432/trw
JWT_SECRET=your-256-bit-secret-here
AUTH_SECRET=your-nextauth-secret-here
NEXT_PUBLIC_API_URL=http://localhost:5002/v1

# Optional
CORS_ORIGINS=http://localhost:5000
RATE_LIMIT_PER_MINUTE=500
LOG_LEVEL=INFO
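JWT_SECRET and AUTH_SECRET should each be an independent random 256-bit value. One way to generate them, assuming openssl is available:

```shell
# 32 random bytes = 256 bits, hex-encoded (64 characters)
JWT_SECRET=$(openssl rand -hex 32)
AUTH_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=$JWT_SECRET"
echo "AUTH_SECRET=$AUTH_SECRET"
```

Paste the printed values into .env; don't reuse the same value for both secrets.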
2. Create the compose file:

docker-compose.yml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: trw
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: trw
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U trw"]
      interval: 5s
      timeout: 3s
      retries: 5

  backend:
    build: ./backend
    env_file: .env
    ports:
      - "5002:8000"
    depends_on:
      db:
        condition: service_healthy

  platform:
    build: ./platform
    env_file: .env
    ports:
      - "5000:3000"
    depends_on:
      - backend

volumes:
  pgdata:
3. Start the stack:

terminal
docker compose up -d

# Verify all services are running:
docker compose ps

# Services:
#   db        → postgres://localhost:5432/trw
#   backend   → http://localhost:5002 (API docs at /api-docs)
#   platform  → http://localhost:5000

Tip

Verify the backend is healthy before connecting the platform: curl http://localhost:5002/v1/health. A 200 response with {"status":"ok"} means the API and database are ready.
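In scripts (CI, provisioning), it is more reliable to poll the health endpoint than to sleep for a fixed time. A minimal sketch, assuming the compose stack above with the backend exposed on host port 5002:

```shell
# Poll /v1/health every 2 s, for up to 60 s
for i in $(seq 1 30); do
  if curl -fsS http://localhost:5002/v1/health > /dev/null 2>&1; then
    echo "backend healthy"
    break
  fi
  echo "waiting for backend ($i/30)..."
  sleep 2
done
```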

Environment Variables

Backend

Variable                Required   Description
DATABASE_URL            required   PostgreSQL connection string.
JWT_SECRET              required   Secret key for signing JWT tokens. Use a random 256-bit value.
CORS_ORIGINS            optional   Comma-separated allowed origins for CORS.
RATE_LIMIT_PER_MINUTE   optional   Max requests per minute per IP. Default: 500.
SMTP_HOST               optional   SMTP server for transactional email (verification, resets).
SMTP_PORT               optional   SMTP port. Default: 587.
SMTP_USER               optional   SMTP username.
SMTP_PASSWORD           optional   SMTP password or API key.
LOG_LEVEL               optional   Logging level. Default: INFO.

Platform (Next.js)

Variable               Required   Description
NEXT_PUBLIC_API_URL    required   Backend API URL (with /v1 suffix).
NEXT_PUBLIC_SITE_URL   optional   Public site URL for SEO. Default: https://trwframework.com.
AUTH_SECRET            required   NextAuth session secret. Use a random 256-bit value.

Warning

Never commit .env to version control. Add it to .gitignore.
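For example (git commands shown for illustration; the second line only matters if .env was already committed at some point):

```shell
# Ignore .env going forward
echo ".env" >> .gitignore

# If .env was committed earlier, untrack it without deleting the local file
git rm --cached .env 2>/dev/null || true
```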

CI/CD Integration

Add TRW ceremony checks to your CI pipeline. This verifies that agents followed the build-check and review process before merging.

.github/workflows/trw-checks.yml
name: TRW Ceremony Checks
on:
  pull_request:
    branches: [main]

jobs:
  trw-verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install TRW
        run: pip install trw-mcp

      - name: Verify build check was run
        run: |
          # Check that .trw/context/build-status.yaml exists
          # and reports a passing build check
          python -c "
          import yaml, sys
          with open('.trw/context/build-status.yaml') as f:
              status = yaml.safe_load(f)
          if not status.get('passed'):
              print('Build check not passing. Run trw_build_check() before pushing.')
              sys.exit(1)
          print('Build check: PASSED')
          "

      - name: Verify tests pass
        run: |
          pip install -e ".[test]"
          pytest --tb=short

Note

This is a minimal example. For stricter enforcement, add ceremony score checks and review confidence thresholds.

Production Considerations

HTTPS

Always terminate TLS before the backend. Use a reverse proxy (nginx, Caddy, or a cloud load balancer) in front of the services. The backend does not handle TLS directly.

Rate Limiting

The backend has built-in per-IP rate limiting (default: 500 requests/minute). For production, add a second layer at the reverse proxy level. Adjust RATE_LIMIT_PER_MINUTE based on your team size.

Database Backups

Schedule regular PostgreSQL backups. Use pg_dump for logical backups or enable continuous archiving with WAL-G for point-in-time recovery.

terminal
# Daily backup via cron
pg_dump -h localhost -U trw trw | gzip > backup-$(date +%Y%m%d).sql.gz
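To restore, stream a dump back into the db service. A sketch assuming the compose file above (service name db, user and database both trw) and a dump named the way the backup command produces; the date in the filename is illustrative:

```shell
# Restore a compressed logical backup into the running compose stack
gunzip -c backup-20250101.sql.gz | docker compose exec -T db psql -U trw trw
```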

Monitoring

The backend exposes GET /v1/health for health checks. Point your uptime monitor at this endpoint. For deeper observability, the Docker Compose stack includes an optional Grafana service on port 3001.

Troubleshooting

Backend returns 500 on startup

Cause: DATABASE_URL is missing or PostgreSQL is not running.

Fix: Verify postgres is healthy: docker compose ps. Check DATABASE_URL in your .env file.
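The backend's startup logs usually name the exact failure (bad credentials, unreachable host, missing database). To inspect them:

```shell
# Show the most recent backend log lines
docker compose logs --tail 50 backend

# Restart and follow logs live
docker compose restart backend && docker compose logs -f backend
```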

Platform shows "Failed to fetch" errors

Cause: NEXT_PUBLIC_API_URL points to the wrong host or the backend is not running.

Fix: Set NEXT_PUBLIC_API_URL=http://localhost:5002/v1 for local dev. Use the Docker service name (http://backend:8000/v1) inside compose.

Port 5432 already in use

Cause: A local PostgreSQL instance is running on the same port.

Fix: Stop local postgres (sudo systemctl stop postgresql) or remap the port in docker-compose.yml.

JWT tokens rejected after restart

Cause: JWT_SECRET changed between restarts, invalidating existing tokens.

Fix: Pin JWT_SECRET in your .env file so it persists across container restarts.
