
# MCP Tools Reference

MCP tools are functions your AI agent calls automatically — like an API for intelligence. You don't invoke them; they appear in your agent's activity as it works. TRW exposes 24 MCP tools across six categories: session management, learning, quality verification, requirements engineering, ceremony gates, and reporting.

## Quick reference

All 24 tools at a glance. The Phase column shows where each tool typically fires in the six-phase lifecycle.

| Tool | Description | Phase |
| --- | --- | --- |
| `trw_session_start` | Load prior learnings and recover any active run. | Research |
| `trw_init` | Create a run directory for progress tracking. | Plan |
| `trw_status` | Show current phase, progress, and next steps. | Any |
| `trw_checkpoint` | Save an atomic progress snapshot. | Implement |
| `trw_pre_compact_checkpoint` | Emergency checkpoint before context compaction. | Any |
| `trw_progressive_expand` | Incrementally expand detail on a topic. | Research |
| `trw_learn` | Record a discovery for all future sessions. | Review |
| `trw_learn_update` | Mark learnings as resolved or obsolete. | Any |
| `trw_recall` | Search past learnings by keyword, tags, or impact. | Research |
| `trw_knowledge_sync` | Sync knowledge topology across projects. | Deliver |
| `trw_claude_md_sync` | Promote high-impact learnings into CLAUDE.md. | Deliver |
| `trw_build_check` | Run pytest + mypy, verify coverage thresholds. | Validate |
| `trw_review` | Independent code review with rubric scoring. | Review |
| `trw_trust_level` | Check or update the trust level for the current session. | Any |
| `trw_quality_dashboard` | Show aggregated quality metrics. | Any |
| `trw_deliver` | Persist learnings, sync artifacts, close run. | Deliver |
| `trw_prd_create` | Generate an AARE-F requirements document. | Plan |
| `trw_prd_validate` | Check requirements quality and completeness. | Plan |
| `trw_ceremony_status` | Show current ceremony compliance state. | Any |
| `trw_ceremony_approve` | Approve a ceremony gate. | Any |
| `trw_ceremony_revert` | Revert a ceremony state change. | Any |
| `trw_run_report` | Phase timing, event counts, and learning yield for a single run. | Deliver |
| `trw_analytics_report` | Build pass rate, ceremony compliance, and trends across runs. | Any |
| `trw_usage_report` | LLM API token usage and cost breakdowns by model. | Any |

## Session & Workflow

These tools manage the lifecycle of a single work session. They ensure your agent never starts from scratch — prior learnings load automatically, and checkpoints survive context compaction so interrupted work resumes where it left off.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_session_start` | Load prior learnings and recover any active run. | Start of every session |
| `trw_init` | Create a run directory for progress tracking. | New tasks beyond quick fixes |
| `trw_status` | Show current phase, progress, and next steps. | Resuming after interruption |
| `trw_checkpoint` | Save an atomic progress snapshot. | After each milestone |
| `trw_pre_compact_checkpoint` | Emergency checkpoint before context compaction. | Automatically before context window fills |
| `trw_progressive_expand` | Incrementally expand detail on a topic. | Drilling into complex areas without loading everything at once |

> **Tip**
>
> Always call `trw_session_start` first. Without it, your agent has no memory of prior sessions — it will rediscover gotchas that were already solved.

## Learning & Knowledge

The knowledge layer is what makes TRW compound. These tools capture discoveries, search prior learnings, and promote high-impact knowledge into your project's permanent instructions. Session 50 surfaces the gotcha that session 3 discovered.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_learn` | Record a discovery for all future sessions. | On errors, gotchas, or patterns |
| `trw_learn_update` | Mark learnings as resolved or obsolete. | When issues are fixed |
| `trw_recall` | Search past learnings by keyword, tags, or impact. | Before starting unfamiliar work |
| `trw_knowledge_sync` | Sync knowledge topology across projects. | After cross-project learnings accumulate |
| `trw_claude_md_sync` | Promote high-impact learnings into CLAUDE.md. | During delivery or after major discoveries |

> **Info**
>
> Learnings are scored with Q-learning and Ebbinghaus decay curves. High-impact entries auto-promote to CLAUDE.md via `trw_claude_md_sync`. Low-impact entries decay and are eventually pruned.
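TRW's actual scoring internals aren't shown here, but the decay side can be pictured as exponential forgetting. A minimal sketch, assuming a 14-day half-life (the real half-life and reinforcement rules are TRW's own, and `decayed_impact` is a hypothetical helper, not part of the API):

```python
import math

def decayed_impact(base_impact: float, days_since_reinforced: float,
                   half_life_days: float = 14.0) -> float:
    """Ebbinghaus-style decay: impact halves every half_life_days
    unless the learning is reinforced (recalled or updated)."""
    return base_impact * math.exp(-math.log(2) * days_since_reinforced / half_life_days)

# A high-impact learning untouched for four weeks decays to quarter strength:
print(round(decayed_impact(1.0, 28.0), 2))  # 0.25
```

Under a model like this, every `trw_recall` hit that resets the clock keeps a learning alive, while unused entries drift below the pruning threshold on their own.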

## Quality & Verification

Quality gates prevent rework. Build checks catch failures before code review, independent reviews score against DRY/KISS/SOLID rubrics, and delivery gates ensure learnings persist. Without these gates, AI-generated code ships untested.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_build_check` | Run pytest + mypy, verify coverage thresholds. | After implementation |
| `trw_review` | Independent code review with rubric scoring. | Before committing changes |
| `trw_trust_level` | Check or update the trust level for the current session. | When adjusting verification strictness |
| `trw_quality_dashboard` | Show aggregated quality metrics. | For a health overview of build, tests, and ceremony |
| `trw_deliver` | Persist learnings, sync artifacts, close run. | End of every task |

> **Warning**
>
> `trw_build_check` runs the full test suite and type-checker. On large projects this can take several minutes. Use `scope="fast"` for quick validation during implementation, then `scope="full"` before delivery.
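One way to think about the two scopes is as a policy keyed to the current phase. This is a hypothetical sketch of that policy, not TRW's actual selection logic:

```python
def pick_scope(phase: str, about_to_deliver: bool = False) -> str:
    """Illustrative scope policy: cheap checks while iterating,
    the full suite at validate/deliver boundaries."""
    if about_to_deliver or phase in ("validate", "deliver"):
        return "full"
    return "fast"

print(pick_scope("implement"))        # fast
print(pick_scope("implement", True))  # full
print(pick_scope("validate"))         # full
```

The design point is simply that the expensive check runs once per phase exit rather than on every edit.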

## Requirements

Requirements are the bridge between what you want and what your agent builds. These tools create AARE-F compliant PRDs with EARS-format requirements, then validate them for completeness before implementation begins.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_prd_create` | Generate an AARE-F requirements document. | Defining new features |
| `trw_prd_validate` | Check requirements quality and completeness. | Before implementation begins |
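EARS constrains each requirement to a small set of sentence templates (ubiquitous, event-driven, state-driven, unwanted-behavior, optional-feature). A hypothetical event-driven requirement, with all specifics invented for illustration, might read:

```text
FR-1 (event-driven):
  When a learning event is recorded, the TRW server shall deliver a
  webhook notification to each registered endpoint within 5 seconds.
```

The fixed "When <trigger>, the <system> shall <response>" shape is what makes these requirements easy to validate and trace automatically.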

> **Tip**
>
> For a full pipeline — create, groom, review, and plan in one step — use the `/trw-prd-new` skill instead. It orchestrates both tools automatically.

## Ceremony

Ceremony enforces process compliance at phase transitions. It adds overhead — but that overhead prevents rework that would cost roughly 3x more. These tools let you inspect gate status, approve transitions, and revert mistakes.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_ceremony_status` | Show current ceremony compliance state. | Checking whether gates are satisfied |
| `trw_ceremony_approve` | Approve a ceremony gate. | After manual review of a gate requirement |
| `trw_ceremony_revert` | Revert a ceremony state change. | Undoing an incorrect approval |

> **Info**
>
> Ceremony adapts to task complexity. Quick fixes skip most gates automatically. Complex features enforce all six phases. Use `trw_ceremony_status` to see which gates apply to your current task.
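That adaptation can be sketched as a mapping from task complexity to required gates. The gate names come from the six-phase lifecycle above; the mapping itself is a guess for illustration, not TRW's real policy:

```python
ALL_GATES = ["research", "plan", "implement", "validate", "review", "deliver"]

def required_gates(complexity: str) -> list[str]:
    """Hypothetical complexity-to-gates mapping, for illustration only."""
    if complexity == "quick-fix":
        # Mirrors the quick-fix pattern: do the work, then deliver.
        return ["implement", "deliver"]
    return list(ALL_GATES)  # features enforce all six phases

print(required_gates("quick-fix"))  # ['implement', 'deliver']
print(len(required_gates("feature")))  # 6
```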

## Reporting

Visibility into what your agent actually did. Run reports show single-session metrics, analytics reports track trends across sessions, and usage reports break down token costs by model so you can optimize spend.

| Tool | What it does | When to use |
| --- | --- | --- |
| `trw_run_report` | Phase timing, event counts, and learning yield for a single run. | After completing a run |
| `trw_analytics_report` | Build pass rate, ceremony compliance, and trends across runs. | Reviewing project health over time |
| `trw_usage_report` | LLM API token usage and cost breakdowns by model. | Monitoring AI spend |

> **Tip**
>
> Run `trw_analytics_report` weekly to spot declining build pass rates or ceremony compliance before they become systemic issues.
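The kind of trend the report surfaces can be sketched as a simple check over recent pass rates. This is illustrative only; `trw_analytics_report` computes its own trends:

```python
def declining(pass_rates: list[float], window: int = 3) -> bool:
    """True if the metric dropped on each of the last `window` run-to-run steps."""
    tail = pass_rates[-(window + 1):]
    return len(tail) == window + 1 and all(b < a for a, b in zip(tail, tail[1:]))

print(declining([0.98, 0.97, 0.95, 0.92]))  # True: three consecutive drops
print(declining([0.98, 0.97, 0.99, 0.92]))  # False: the streak was broken
```

Consecutive drops matter more than any single dip, which is why a weekly look at the trend catches systemic decline that one bad run would not.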

## Usage examples

Your AI calls these tools automatically. Here is what typical activity looks like in your agent's output.

### `trw_session_start` (always first)

```text
# Your AI loads prior learnings on startup.
# If a previous run was interrupted, it resumes automatically.
→ Loaded 47 learnings (12 high-impact)
→ Recovered run: sprint-42 (phase: implement)
→ Last checkpoint: "Auth middleware done, starting rate limiter"
```

### `trw_recall` (searches knowledge)

```text
# Before working on unfamiliar code, your AI searches learnings:
trw_recall(query="rate limiter", tags=["backend", "security"])
→ 3 results (ranked by impact × recency):
  1. [high] Rate limiter is per-Lambda instance, not global
  2. [med]  RateLimiter.check_with_headers must delete X-RateLimit-*
  3. [low]  500/500 config wired to ingestion routes
```

### `trw_learn` (captures discoveries)

```text
# When your AI hits a gotcha or finds a pattern:
trw_learn(
  summary="SQLite WAL mode required for concurrent reads",
  detail="Without WAL, parallel test runners deadlock on write",
  tags=["sqlite", "testing", "concurrency"]
)
→ Learning saved (impact: pending, will score after session)
```

### `trw_checkpoint` (saves progress)

```text
# After completing a milestone:
trw_checkpoint("Implemented auth middleware, 14 tests passing")
→ Checkpoint saved at phase: implement
# If context compacts, the AI resumes from this point
# instead of re-implementing from scratch.
```

### `trw_build_check` (verifies quality)

```text
# Runs the full verification suite:
trw_build_check(scope="full")
→ pytest: 847 passed, 0 failed (2m 14s)
→ mypy:   0 errors in 78 modules
→ coverage: 94% (threshold: 90%) ✓
→ Result: PASS
```

### `trw_prd_create` (creates requirements)

```text
# Generates a structured requirements document:
trw_prd_create(title="Webhook notifications for learning events")
→ Created: docs/requirements-aare-f/PRD-CORE-082.md
→ 6 functional requirements (EARS format)
→ Traceability matrix: 6 FRs → 4 files
→ Status: draft (run /trw-prd-groom to refine)
```

### `trw_ceremony_status` (checks gates)

```text
# Shows which ceremony gates apply and their state:
trw_ceremony_status()
→ Task complexity: feature (6 phases required)
→ Gates:
  ✓ research   — learnings loaded
  ✓ plan       — PRD created, execution plan ready
  ✓ implement  — 3 checkpoints saved
  ○ validate   — build check not yet run
  ○ review     — pending
  ○ deliver    — pending
```

### `trw_deliver` (always last)

```text
# Persists everything at session end:
→ 3 new learnings saved
→ CLAUDE.md synced (1 promotion)
→ Run closed: sprint-42 (6 phases, 2h 14m)
→ Analytics: build pass rate 96%, ceremony compliance 91%
```

## Common patterns

Tools combine into patterns depending on task complexity. Your agent selects the right pattern automatically, but understanding these helps you predict what your agent will do.

### Quick fix

Bug fixes, typo corrections, config changes. Minimal ceremony — three tools, under 10 minutes.

```text
trw_session_start  →  (implement fix)  →  trw_deliver
     ↓                                          ↓
 Load learnings                          Persist & close
```

### Feature implementation

New capabilities, multi-file changes, anything with requirements. Full six-phase ceremony with checkpoints.

```text
trw_session_start → trw_init → trw_checkpoint (×N)
       ↓                ↓              ↓
  Load learnings   Create run    Save progress
                                       ↓
              trw_build_check → trw_review → trw_deliver
                    ↓               ↓            ↓
              Verify tests    Score quality   Persist all
```

### Resume interrupted work

Context compacted or session crashed. The agent recovers from the last checkpoint and continues where it left off.

```text
trw_session_start → trw_status → (continue from checkpoint)
       ↓                 ↓
  Recover run     Show phase & progress

"Resuming sprint-42 from checkpoint: auth middleware done"
```

## MCP resources

In addition to tools, TRW exposes six read-only MCP resources. Your AI reads these for configuration, state, and templates without making a tool call.

| Resource | What it provides |
| --- | --- |
| `trw://config` | Current TRW configuration — ceremony mode, trust level, target platforms |
| `trw://version` | Framework and package versions for compatibility checks |
| `trw://run-state` | Active run phase, checkpoints, and progress metrics |
| `trw://learnings` | Recent high-impact learnings for the current project |
| `trw://templates` | PRD templates, commit formats, and skill scaffolds |
| `trw://analytics` | Historical build pass rates, ceremony compliance, and learning yield |

## Next steps