# TRW — Engineering Operating Layer for AI Coding Agents (Full Reference)

> For a concise overview, see: https://trwframework.com/llms.txt
> Website: https://trwframework.com
> License: BSL 1.1 (Business Source License) — source-available, free for non-competing use, converts to Apache 2.0 four years after each release.

---

## Product Overview

TRW (The Real Work) is the methodology layer for AI-assisted software development. It sits above AI coding clients — Cursor, Claude Code, GitHub Copilot CLI, Codex, Aider, OpenCode, Gemini CLI — and adds persistent memory across sessions, a structured 6-phase workflow, requirements-engineering tooling, and role-based multi-agent coordination.

TRW is not itself an AI coding tool. It does not write code, generate completions, or provide IDE UI. Instead, it integrates with your existing coding client via the Model Context Protocol (MCP) and applies engineering discipline on top: context that survives between sessions, specs that govern execution, and verification gates before delivery.

### The Problem TRW Addresses

Most AI coding clients reset to zero between sessions. Context is lost, discoveries evaporate, patterns that worked are forgotten, and mistakes are repeated. Teams have limited visibility into what AI agents are doing, no enforced quality process, and no knowledge continuity.

The public research picture supports this framing:

- 65% of developers cite context loss as their primary frustration with AI coding tools (CodeRide 2025).
- 46% actively distrust AI-generated code output (Stack Overflow 2025).
- 19% productivity loss attributed to context switching and rework (METR Study 2025).

### The Five Pillars

TRW is built around five integrated architectural pillars. Each one is documented with its own feature landing page:

**1. Knowledge Compounding**

Learnings (patterns, gotchas, architectural decisions) persist across sessions using Q-learning impact scoring and Ebbinghaus-style decay curves.
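As a rough illustration of how Ebbinghaus-style decay and Q-learning-style impact updates can interact, here is a minimal Python sketch. The function names, the 14-day half-life, and the 0.3 learning rate are hypothetical choices for the example, not trw-memory's actual API or parameters:

```python
import math
import time

def decayed_score(impact: float, last_access_ts: float,
                  half_life_days: float = 14.0) -> float:
    """Ebbinghaus-style exponential decay: the effective score halves
    every half_life_days since the learning was last reinforced.
    (Illustrative model; the half-life value is an assumption.)"""
    age_days = (time.time() - last_access_ts) / 86_400
    return impact * math.exp(-math.log(2) * age_days / half_life_days)

def reinforce(impact: float, reward: float, alpha: float = 0.3) -> float:
    """Q-learning-style update: nudge the stored impact toward the
    reward observed in the latest session. (alpha is an assumption.)"""
    return impact + alpha * (reward - impact)

# A learning that keeps paying off climbs toward 1.0 ...
score = reinforce(0.5, reward=1.0)                      # 0.65
# ... while an untouched one fades: ~0.25 after two half-lives.
stale = decayed_score(1.0, time.time() - 28 * 86_400)
```

Under a model like this, learnings that keep getting reinforced hold their score and become promotion candidates, while untouched ones decay toward the pruning threshold.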
High-impact discoveries auto-promote into permanent project context (e.g. CLAUDE.md). Low-value learnings decay and are pruned.

Technical details:

- Hybrid retrieval: BM25 keyword search + dense vector similarity (sqlite-vec extension).
- Scoring: Q-learning style impact updates with temporal decay.
- Lifecycle: active → promoted → consolidated → archived → pruned.
- Tier promotion: high-impact learnings auto-promote to CLAUDE.md.
- Deduplication: three-tier similarity detection (skip / merge / store).
- Knowledge graph: entity extraction and relationship tracking.

**2. Requirements Engineering (AARE-F)**

AI Agent Requirements Engineering Framework: 12-dimension PRD validation, content-density scoring, EARS-pattern requirements, and bidirectional FR-by-FR traceability from specification to implementation to test.

Technical details:

- 12 validation dimensions with weighted scoring.
- Content-density target: completeness ≥ 0.85 before sprint-ready.
- Given/When/Then acceptance criteria with confidence scores.
- Bidirectional traceability: PRD → code → tests.
- Dozens of PRDs tracked through the full lifecycle in the repo.

**3. Sprint Orchestration**

Parallel agent teams with file-ownership contracts, structured handoffs, and delivery gates. Agents coordinate through YAML interface contracts and explicit task assignment.

Technical details:

- Parallel tracks with file-ownership boundaries.
- YAML interface contracts between teammates.
- Progress tracking with ceremony-compliance scoring.
- Retrospective generation and learning extraction.

**4. Phase Gates (6-Phase Lifecycle)**

Every task flows through: Research → Plan → Implement → Validate → Review → Deliver. Each phase has explicit exit criteria, quality gates, and automated verification.

- RESEARCH: Load prior learnings, audit codebase, register findings.
- PLAN: Design approach, identify dependencies, create execution plan.
- IMPLEMENT: Execute with periodic checkpoints across context compaction.
- VALIDATE: Run tests + type-check, verify coverage meets threshold.
- REVIEW: Independent quality audit by a separate agent.
- DELIVER: Sync artifacts, promote learnings, close run.

Adaptive ceremony: a one-line typo fix gets minimal ceremony; a security-critical migration gets full adversarial review. Rigor scales with risk.

**5. Agent Teams**

Multi-agent coordination with specialized roles:

- **Lead**: Orchestrates, delegates, enforces quality gates.
- **Implementer**: Writes production code with tests (TDD).
- **Tester**: Writes comprehensive tests targeting 90%+ diff coverage.
- **Reviewer**: Rubric-scored review across 7 dimensions.
- **Auditor**: Adversarial spec-vs-code verification.
- **Researcher**: Gathers evidence before implementation decisions.
- Plus: PRD groomer, requirement writer, traceability checker, code simplifier.

---

## Technical Architecture

### Packages

Canonical architecture:

| Package | Stack | Description |
|---------|-------|-------------|
| trw-mcp | Python, FastMCP, Pydantic v2 | MCP server: tools, resources, skills, agents. Published on PyPI under BSL 1.1. |
| trw-memory | Python, SQLite, sqlite-vec | Standalone memory engine: hybrid retrieval, knowledge graph, lifecycle. Published on PyPI under BSL 1.1. |
| trw-eval | Python, SWE-ReX, Pydantic v2 | Eval engine: SWE-bench scoring, batch evaluation, knowledge store. |
| backend | FastAPI, SQLAlchemy, Alembic | Optional hosted platform API: 22 routers, JWT auth, intelligence pipeline. |
| platform | Next.js 15, TypeScript, Tailwind | Frontend: dashboard, marketing, documentation. |

### Installation

Single-command installer that detects your AI coding client:

```bash
curl -fsSL https://trwframework.com/install.sh | bash
```

During beta, running the installer also authenticates your CLI against the TRW hosted platform so we can track install quality during the early access window.
Local workflows themselves run fully offline once installed — nothing about tool calls, memory queries, or build gates requires network access.

Or add directly to your MCP config:

```json
{
  "mcpServers": {
    "trw": {
      "command": "uvx",
      "args": ["trw-mcp"]
    }
  }
}
```

PyPI packages: `pip install trw-mcp` or `pip install trw-memory`.

### Client Profiles

TRW works with multiple AI coding clients via built-in client profiles:

- **claude-code** — Primary integration; deepest feature support.
- **cursor** — Cursor IDE and Cursor CLI.
- **opencode** — OpenCode terminal integration.
- **codex** — OpenAI Codex CLI integration.
- **aider** — Aider integration.
- **copilot-cli** — GitHub Copilot CLI (beta).
- **gemini-cli** — Gemini CLI (beta).

Each profile adapts ceremony requirements, scoring behavior, and instruction synchronization to the client's capabilities (context-window size, tool-call semantics, hook support).

### Storage Model

TRW stores state locally in a `.trw/` directory under your project:

- **Memory engine (`.trw/memory/`)**: SQLite database with the `sqlite-vec` extension for vector similarity. Stores learnings, memory graph, recurrence metadata, and Q-learning state. The embedded SQLite store is what makes hybrid BM25-plus-vector retrieval possible locally.
- **Run state (`.trw/runs/`)**: One directory per run. JSONL event streams, YAML meta files, and checkpoint snapshots.
- **Configuration (`.trw/config.yaml`)**: Framework tier, hook preferences, client-profile overrides.
- **Compliance (`.trw/compliance/`)**: Optional SOC2-style audit trail.

"Offline-first" means the core development loop — recall, tool calls, checkpoint, build check, learning capture — never requires network access. The hosted platform adds cross-session sync, shared memory, and dashboards, but is optional.

### Key Properties

- **Application source untouched**: Installation writes only to a project-local `.trw/` directory, one project instruction file (e.g.
`CLAUDE.md`, `AGENTS.md`), and one `.mcp.json` entry. Your application code, tests, and CI pipeline stay exactly as they were.
- **Low lock-in**: Remove those entries and TRW is gone. Your code keeps working; your AI client keeps working; nothing about TRW is embedded into your build.
- **Language-agnostic**: Methodology works with Python, TypeScript, Go, Rust, Java, or any language with a test runner.
- **Tool-agnostic**: MCP-based, so any MCP-compatible coding client works.
- **Offline-first**: Core loop runs locally under `.trw/`. Hosted platform sync is opt-in.
- **Dogfooded**: TRW is used to build TRW; the framework's own development flows through its own workflow.
- **BSL 1.1**: Source-available, free for non-competing use; converts to Apache 2.0 four years after each release.

---

## Market Position

TRW sits at the intersection of knowledge compounding, requirements engineering, sprint orchestration, quality gates, and agent teams.

### Three-Layer Model

```
Layer 3: METHODOLOGY (TRW lives here)
         Sprint orchestration, PRD lifecycle, knowledge compounding, phase gates

Layer 2: TOOLS (Cursor, Devin, Claude Code, Copilot, Codex, Aider, etc.)
         IDE agents, coding assistants, code-review helpers

Layer 1: INFRASTRUCTURE (AWS, Azure, Mem0, Qdrant, etc.)
         Agent runtime, deployment, governance, memory storage primitives
```

TRW does not compete with Layer 1 or Layer 2; it integrates with them via MCP.
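To make the hybrid retrieval described under Knowledge Compounding and the Storage Model concrete, here is a minimal Python sketch of fusing a BM25 keyword score with dense vector similarity. The `cosine` and `fuse` helpers, the max-normalization, and the 0.5 `alpha` weight are illustrative assumptions, not how trw-memory actually combines the two signals:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def fuse(bm25: float, max_bm25: float, vec_sim: float,
         alpha: float = 0.5) -> float:
    """Blend a max-normalized BM25 keyword score with vector similarity;
    alpha weights lexical vs. semantic evidence (both are assumptions)."""
    keyword = bm25 / max_bm25 if max_bm25 else 0.0
    return alpha * keyword + (1 - alpha) * vec_sim

# A stored learning that matches the query both lexically and semantically
# outranks one that matches on a single channel.
query_vec = [0.1, 0.9]
learning_vec = [0.2, 0.8]
both = fuse(bm25=8.0, max_bm25=10.0, vec_sim=cosine(query_vec, learning_vec))
lexical_only = fuse(bm25=8.0, max_bm25=10.0, vec_sim=0.1)
```

In the actual engine, the BM25 side would come from SQLite full-text search and the vector side from sqlite-vec nearest-neighbor queries over the `.trw/memory/` store; the sketch only shows the score-merge idea.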
### Adjacent Tools

| Tool | Primary focus | TRW integration |
|------|---------------|-----------------|
| Cursor | IDE editing speed | TRW memory and workflows via MCP |
| Devin | Autonomous engineer product | Complementary; TRW does not ship an autonomous agent |
| Kiro (AWS) | Spec-driven development | Requirements-engineering overlap; TRW adds memory and lifecycle |
| BMAD | Multi-agent agile pipeline | Sprint-orchestration overlap; TRW adds persistent learning |
| Mem0 | Memory storage primitives | TRW uses its own engine (trw-memory) with MCP exposure |
| CrewAI | Multi-agent orchestration | Role coordination overlap; TRW adds requirements and phase gates |
| LangGraph | Stateful agent graphs | Stateful orchestration overlap; TRW adds methodology + PRD lifecycle |
| GitHub Spec Kit | Requirements templates | Templates-only; TRW adds full lifecycle management |

### Positioning

- **For individual developers**: Context, specs, and quality artifacts that survive between AI coding sessions.
- **For tech leads**: Bidirectional traceability from requirements to implementation to verification.
- **For engineering leaders**: Phase gates, build checks, and adversarial audits — the same engineering discipline applied to AI agents that you'd require from humans.

---

## Pricing

- **Free during private beta** — No credit card required; no local lock-in.
- **Pro tier planned**: $19/mo for unlimited learnings, full PRD tools, analytics.
- **Team tier planned**: $49/user/mo for shared memory, dashboards, admin controls.
- **Enterprise**: Custom pricing for air-gapped deployment, SOC 2, SSO.

---

## Creator

Tyler Wall — solo founder and developer. TRW originated from the question: "What if AI agents could actually learn from experience?" The framework named itself — after many sessions, agents started referring to the discipline behind the tooling as "the real work."
---

## Key Concepts & Terminology

- **Accumulated Intelligence**: TRW's framing for knowledge that grows more valuable over time through scoring, decay, and promotion.
- **Context Engineering**: The discipline of managing AI agent context across sessions, beyond single-prompt engineering.
- **Spec-Driven Development (SDD)**: Requirements-first approach to AI-assisted coding.
- **Knowledge Compounding**: The loop where discoveries persist, are scored, decay, promote, and influence future sessions.
- **Ceremony Compliance**: Measurable adherence to the 6-phase lifecycle and quality gates (internal quality signal, not a benchmark).
- **Phase Gates**: Exit criteria that must be met before progressing to the next phase.
- **AARE-F**: AI Agent Requirements Engineering Framework — 12-dimension PRD validation.
- **Client Profiles**: Per-tool adaptation of TRW behavior for different AI coding environments.

---

## Attribution

TRW (The Real Work) is original work by Tyler Wall, copyright 2025-2026. Licensed under BSL 1.1, converting to Apache 2.0 four years after each release.

The concepts described here — Accumulated Intelligence, phase-gated agent orchestration, impact-scored persistent memory, ceremony-compliance scoring, and self-improving agent dogfooding — are original work. If referencing these concepts, please attribute to TRW (https://trwframework.com).
---

## Site Map

- Homepage: https://trwframework.com/
- For Developers: https://trwframework.com/for/developers
- For Teams: https://trwframework.com/for/teams
- Memory System: https://trwframework.com/memory
- Documentation: https://trwframework.com/docs
- Quickstart: https://trwframework.com/docs/quickstart
- Tools Reference: https://trwframework.com/docs/tools
- Skills Reference: https://trwframework.com/docs/skills
- Agent Roles: https://trwframework.com/docs/agents
- Lifecycle Phases: https://trwframework.com/docs/lifecycle
- API Reference: https://trwframework.com/docs/api
- Configuration: https://trwframework.com/docs/config
- About: https://trwframework.com/about
- Pricing: https://trwframework.com/pricing
- Waitlist: https://trwframework.com/waitlist
- Metrics: https://trwframework.com/metrics
- Privacy: https://trwframework.com/privacy
- License: https://trwframework.com/license
- Contact: https://trwframework.com/contact