Lifecycle Phases
TRW organizes every task into six phases: research, plan, implement, validate, review, deliver. Each phase has an exit gate. Your AI advances only when the gate is satisfied. This prevents the most common failure mode in AI-assisted development — rushing to code before understanding the problem, then reworking when tests fail or reviews surface design issues.
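Conceptually, the gated progression can be sketched as a small state machine. The phase names below come from TRW, but the `PhaseGate` class is a hypothetical illustration of the gating rule, not TRW's actual implementation.

```python
# Hypothetical sketch of gated phase advancement (not TRW's real code).
PHASES = ["research", "plan", "implement", "validate", "review", "deliver"]

class PhaseGate:
    def __init__(self):
        self.index = 0  # every run starts in research

    @property
    def phase(self) -> str:
        return PHASES[self.index]

    def advance(self, exit_criteria_met: bool) -> str:
        # Advance only when the current phase's exit gate is satisfied.
        if not exit_criteria_met:
            raise RuntimeError(f"exit gate for {self.phase!r} not satisfied")
        if self.index < len(PHASES) - 1:
            self.index += 1
        return self.phase

gate = PhaseGate()
gate.advance(True)   # research -> plan
print(gate.phase)    # plan
```

The point of the sketch is the `raise`: there is no path forward that bypasses the gate, which is what forces rework to happen before code ships.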
Phase Overview
| Phase | Purpose | Key tools | Exit criteria |
|---|---|---|---|
| Research | Load prior learnings, audit codebase, gather evidence | trw_session_start, trw_recall | Findings registered, relevant learnings loaded |
| Plan | Design approach, identify dependencies, create execution plan | trw_init, trw_prd_create | Plan approved (or auto-approved for simple tasks) |
| Implement | Execute the plan with periodic checkpoints | trw_checkpoint | Code written, checkpoints saved |
| Validate | Run tests and type-check, verify coverage meets thresholds | trw_build_check | Build passes, coverage met |
| Review | Independent quality audit (DRY/KISS/SOLID), fix gaps, record discoveries | trw_review, trw_learn | Review passes or issues resolved |
| Deliver | Sync artifacts, promote high-impact learnings, close run | trw_deliver, trw_claude_md_sync | Learnings persisted, run closed |
Research
Every session starts here. trw_session_start loads learnings from prior sessions and checks for interrupted runs. If your AI has worked on this codebase before, it already knows the patterns, gotchas, and architecture decisions it discovered last time.
For unfamiliar territory, trw_recall searches past learnings by keyword. The AI explores relevant code, reads documentation, and builds context before proposing a plan.
trw_session_start()
# → Loaded 47 learnings (12 high-impact)
# → Recovered active run: sprint-52-auth-refactor
trw_recall("rate limiting middleware")
# → 3 relevant learnings found
Exit criteria: Findings registered, relevant learnings loaded.
Plan
The AI designs its approach before writing code. For larger tasks, it calls trw_init to create a tracked run with a named directory for checkpoints. For features that need requirements, trw_prd_create generates a structured PRD.
Simple tasks get auto-approved. Complex tasks produce an explicit plan that identifies files to change, dependencies between changes, and the order of operations.
trw_init("auth-middleware-refactor")
# → Run created: .trw/runs/auth-middleware-refactor/
# → Phase: PLAN
Exit criteria: Plan approved (or auto-approved for simple tasks).
Implement
The AI writes code and saves periodic checkpoints via trw_checkpoint. Checkpoints are atomic snapshots of progress. If the context window compacts mid-session, the AI resumes from the last checkpoint instead of starting over.
# ... write code ...
trw_checkpoint("rate limiter middleware complete, tests next")
# → Checkpoint saved: 3/5 tasks done
# ... write more code ...
trw_checkpoint("all middleware tests passing")
# → Checkpoint saved: 5/5 tasks done
Tip
Call trw_checkpoint after each milestone. Long sessions without checkpoints risk losing progress to context compaction. A good rule: checkpoint whenever you would commit.
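The "atomic snapshot" property can be sketched as a write-then-rename pattern. TRW's on-disk checkpoint format is not documented here; the JSON layout and file names below are assumptions for illustration only.

```python
import json
import os
import tempfile

def save_checkpoint_atomic(run_dir: str, note: str, done: int, total: int) -> str:
    # Write to a temp file first, then rename into place: a reader (or a
    # resumed session) never observes a half-written snapshot.
    os.makedirs(run_dir, exist_ok=True)
    record = {"note": note, "tasks_done": done, "tasks_total": total}
    fd, tmp_path = tempfile.mkstemp(dir=run_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(record, f)
    final_path = os.path.join(run_dir, "checkpoint.json")
    os.replace(tmp_path, final_path)  # atomic rename on POSIX and Windows
    return final_path
```

Because the rename is atomic, a session interrupted mid-write resumes from the previous complete checkpoint rather than a corrupt one.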
Exit criteria: Code written, checkpoints saved.
Validate
trw_build_check runs the full verification suite: pytest, mypy, and coverage analysis. It reports pass/fail status and coverage percentages. The AI cannot advance to review until the build passes and coverage meets the configured threshold.
trw_build_check()
# → pytest: 247 passed, 0 failed
# → mypy: clean (0 errors)
# → coverage: 94% (threshold: 90%)
# → Phase: VALIDATE → REVIEW
Exit criteria: Build passes, coverage met.
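As a rough sketch, the gate's pass/fail decision combines the three checks. The result fields below are assumptions for illustration, not trw_build_check's real internals.

```python
# Illustrative build-gate decision (assumed field names, not TRW's internals).
def build_gate_passes(results: dict, coverage_threshold: float = 90.0) -> bool:
    # All three checks must pass before the run may enter the review phase.
    return (
        results["pytest_failed"] == 0
        and results["mypy_errors"] == 0
        and results["coverage_percent"] >= coverage_threshold
    )

# Mirrors the sample run: 0 failures, mypy clean, 94% coverage vs a 90% threshold.
print(build_gate_passes(
    {"pytest_failed": 0, "mypy_errors": 0, "coverage_percent": 94.0}
))  # True
```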
Review
The review phase is an independent quality audit of the changes. The AI checks for DRY violations, unnecessary complexity, and adherence to SOLID principles. It records any discoveries via trw_learn so future sessions benefit from what this session found.
trw_review()
# → Reviewed 4 files, 2 findings
# → Finding: duplicated validation logic in auth.py and api_keys.py
trw_learn(
"auth validation helpers should be shared",
"Both auth.py and api_keys.py independently validate API key format..."
)
# → Learning recorded (impact: 0.7)
Exit criteria: Review passes or issues resolved.
Deliver
trw_deliver closes the run in one call. It syncs artifacts, promotes high-impact learnings into CLAUDE.md (so the next session loads them automatically), and persists session analytics.
trw_deliver()
# → Build gate: PASS (247 tests, mypy clean)
# → Learnings: 3 persisted, 1 promoted to CLAUDE.md
# → Run closed: auth-middleware-refactor
Exit criteria: Learnings persisted, run closed.
Adaptive Ceremony
Not every task needs all six phases. TRW scores task complexity and assigns a ceremony tier. Quick fixes skip the phases that would slow them down. Full features use every gate. The system scales process to match the risk of the change.
| Tier | When | Phases used | Phases skipped |
|---|---|---|---|
| MINIMAL | Quick fixes, typos, config changes | Implement, Deliver | Research, Plan, Validate, Review |
| STANDARD | Bug fixes, small features, refactors | Research, Implement, Validate, Deliver | Plan, Review |
| FULL | New features, multi-file changes, architecture | All 6 phases | None |
TRW determines the tier automatically based on the number of files changed, whether the task has a PRD, and whether it touches existing tests.
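TRW's exact scoring is not spelled out here, but the decision can be sketched from the three signals the text names. The thresholds below are hypothetical; only the tier names and the three inputs come from the document.

```python
# Hypothetical tier selection from the three signals named above.
# The cutoffs (2 files, etc.) are illustrative assumptions, not TRW's rules.
def ceremony_tier(files_changed: int, has_prd: bool, touches_tests: bool) -> str:
    if has_prd or files_changed > 2:
        return "FULL"      # new features / multi-file changes use every gate
    if touches_tests or files_changed == 2:
        return "STANDARD"  # bug fixes, small features, refactors
    return "MINIMAL"       # quick fixes skip straight to implementation

print(ceremony_tier(1, False, False))  # MINIMAL
print(ceremony_tier(2, False, True))   # STANDARD
print(ceremony_tier(5, True, True))    # FULL
```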
Quick Fix vs Full Feature
These are the two most common workflows. A quick fix touches one or two files and skips straight to implementation. A full feature uses every phase.
# Quick fix (MINIMAL tier)
trw_session_start()
# skip Research, Plan, Validate, Review
# → fix the bug
trw_deliver()

# Full feature (FULL tier)
trw_session_start() # Research
trw_recall("auth")
trw_init("feature") # Plan
trw_checkpoint(...) # Implement
trw_build_check() # Validate
trw_review() # Review
trw_learn(...)
trw_deliver() # Deliver
Phase Reversion
Phases are not strictly linear. When a later phase reveals a problem, the AI reverts to an earlier phase rather than pushing through.
- Validate fails — revert to Implement. Fix the code, then re-validate.
- Review finds design issues — revert to Plan. Rethink the approach, then re-implement.
- Implementation hits unknowns — revert to Research. Gather more context, then re-plan.
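The reversion rules above amount to a small lookup from failure to target phase. The failure labels below are paraphrases of the bullet list; the mapping is a sketch of the idea, not TRW's dispatch code.

```python
# Where to revert when a later phase surfaces a problem (illustrative sketch).
REVERT_TO = {
    "validate_failed": "implement",            # fix the code, then re-validate
    "review_found_design_issue": "plan",       # rethink the approach, re-implement
    "implementation_hit_unknowns": "research", # gather more context, re-plan
}

def revert_target(failure: str) -> str:
    return REVERT_TO[failure]

print(revert_target("validate_failed"))  # implement
```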
Why this matters
Catching design issues during review costs a few minutes of rework. Shipping them costs hours of debugging in production. Phase reversion is the mechanism that enforces this tradeoff — the AI goes back instead of forward when quality gates fail.
Hooks
TRW includes 14 lifecycle hooks that fire at phase boundaries and session events. Hooks enforce quality automatically — you do not need to configure them.
- Session start — loads learnings and checks for interrupted runs
- Pre-compaction — automatically saves a checkpoint before context window compacts
- Phase gates — block advancement when exit criteria are not met
- Session end — warns if delivery was skipped, preventing knowledge loss
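A minimal mental model of lifecycle hooks is a callback registry keyed by event. TRW's 14 hooks are built in and need no configuration; the registry below only illustrates the firing pattern, and the event and function names are hypothetical.

```python
from collections import defaultdict

# Minimal hook registry illustrating the firing pattern (not TRW's implementation).
hooks = defaultdict(list)

def on(event: str):
    # Decorator: register a callback for a lifecycle event.
    def register(fn):
        hooks[event].append(fn)
        return fn
    return register

def fire(event: str, **ctx):
    # Invoke every callback registered for this event, in order.
    for fn in hooks[event]:
        fn(**ctx)

@on("pre_compaction")
def save_checkpoint(**ctx):
    # In TRW, this event persists a checkpoint before the context window compacts.
    print(f"checkpoint saved: {ctx.get('note', '')}")

fire("pre_compaction", note="auto-save")  # checkpoint saved: auto-save
```

The same pattern covers the other events: session start loads learnings, phase gates veto advancement, and session end warns if delivery was skipped.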