
Lifecycle Phases

TRW organizes every task into six phases: research, plan, implement, validate, review, deliver. Each phase has an exit gate. Your AI advances only when the gate is satisfied. This prevents the most common failure mode in AI-assisted development — rushing to code before understanding the problem, then reworking when tests fail or reviews surface design issues.
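The gate mechanic can be sketched as a loop that advances through the phases only while each exit predicate holds (a minimal sketch; the phase names come from this page, while `run_lifecycle` and the gate predicates are hypothetical stand-ins):

```python
# Minimal sketch of gated phase advancement. The six phase names come
# from this page; the gate predicates are hypothetical stand-ins.
PHASES = ["research", "plan", "implement", "validate", "review", "deliver"]

def run_lifecycle(gates):
    """Advance through PHASES, but only past phases whose gate passes.

    `gates` maps phase name -> zero-argument predicate returning True
    when that phase's exit criteria are satisfied.
    """
    completed = []
    for phase in PHASES:
        if not gates[phase]():
            # Gate failed: stop here instead of rushing ahead.
            return completed, phase
        completed.append(phase)
    return completed, None

# Example: everything passes except validate (the build failed).
gates = {p: (lambda: True) for p in PHASES}
gates["validate"] = lambda: False
done, blocked = run_lifecycle(gates)
# done == ["research", "plan", "implement"], blocked == "validate"
```

The point of the structure is that a failed gate halts forward progress; the reversion rules later on this page decide where to go instead.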

Research → Plan → Implement → Validate → Review → Deliver

Phase Overview

| Phase | Purpose | Key tools | Exit criteria |
| --- | --- | --- | --- |
| Research | Load prior learnings, audit codebase, gather evidence | trw_session_start, trw_recall | Findings registered, relevant learnings loaded |
| Plan | Design approach, identify dependencies, create execution plan | trw_init, trw_prd_create | Plan approved (or auto-approved for simple tasks) |
| Implement | Execute the plan with periodic checkpoints | trw_checkpoint | Code written, checkpoints saved |
| Validate | Run tests and type-check, verify coverage meets thresholds | trw_build_check | Build passes, coverage met |
| Review | Independent quality audit (DRY/KISS/SOLID), fix gaps, record discoveries | trw_review, trw_learn | Review passes or issues resolved |
| Deliver | Sync artifacts, promote high-impact learnings, close run | trw_deliver, trw_claude_md_sync | Learnings persisted, run closed |

Research

Every session starts here. trw_session_start loads learnings from prior sessions and checks for interrupted runs. If your AI has worked on this codebase before, it already knows the patterns, gotchas, and architecture decisions it discovered last time.

For unfamiliar territory, trw_recall searches past learnings by keyword. The AI explores relevant code, reads documentation, and builds context before proposing a plan.

typical research phase
```python
trw_session_start()
# → Loaded 47 learnings (12 high-impact)
# → Recovered active run: sprint-52-auth-refactor

trw_recall("rate limiting middleware")
# → 3 relevant learnings found
```

Exit criteria: Findings registered, relevant learnings loaded.

Plan

The AI designs its approach before writing code. For larger tasks, it calls trw_init to create a tracked run with a named directory for checkpoints. For features that need requirements, trw_prd_create generates a structured PRD.

Simple tasks get auto-approved. Complex tasks produce an explicit plan that identifies files to change, dependencies between changes, and the order of operations.
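The "order of operations" part of a plan is essentially a topological sort of the dependency graph between changes. A minimal sketch using Python's standard library (the file names and dependency edges are invented for illustration; a real plan would come from trw_init):

```python
from graphlib import TopologicalSorter

# Sketch: derive an order of operations from inter-file dependencies.
# Each key maps a file to the set of files it depends on.
deps = {
    "middleware/rate_limit.py": {"auth/validators.py"},   # uses shared validators
    "tests/test_rate_limit.py": {"middleware/rate_limit.py"},
    "auth/validators.py": set(),
}
order = list(TopologicalSorter(deps).static_order())
# validators first, then the middleware that imports them, then its tests
```

Changing files in dependency order means each step builds against code that already exists, which is exactly what the explicit plan encodes.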

typical plan phase
```python
trw_init("auth-middleware-refactor")
# → Run created: .trw/runs/auth-middleware-refactor/
# → Phase: PLAN
```

Exit criteria: Plan approved (or auto-approved for simple tasks).

Implement

The AI writes code and saves periodic checkpoints via trw_checkpoint. Checkpoints are atomic snapshots of progress. If the context window compacts mid-session, the AI resumes from the last checkpoint instead of starting over.
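The atomicity described above can be sketched with the standard write-to-temp-then-rename pattern (a minimal sketch; the checkpoint.json layout is an assumption, not TRW's actual format):

```python
import json, os, tempfile

def save_checkpoint(path, note, tasks_done, tasks_total):
    """Write a checkpoint atomically: temp file first, then rename.

    os.replace is atomic on POSIX and Windows, so a crash mid-write
    never leaves a half-written checkpoint behind.
    """
    data = {"note": note, "done": tasks_done, "total": tasks_total}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
    os.replace(tmp, path)

def resume(path):
    """Load the last checkpoint, or start fresh if none exists."""
    if not os.path.exists(path):
        return {"note": "start", "done": 0, "total": 0}
    with open(path) as f:
        return json.load(f)

save_checkpoint("checkpoint.json", "rate limiter complete", 3, 5)
state = resume("checkpoint.json")
# state["done"] == 3
```

Because the rename is atomic, a resume always sees either the previous checkpoint or the new one, never a partial file.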

typical implement phase
```python
# ... write code ...

trw_checkpoint("rate limiter middleware complete, tests next")
# → Checkpoint saved: 3/5 tasks done

# ... write more code ...

trw_checkpoint("all middleware tests passing")
# → Checkpoint saved: 5/5 tasks done
```

Tip

Call trw_checkpoint after each milestone. Long sessions without checkpoints risk losing progress to context compaction. A good rule: checkpoint whenever you would commit.

Exit criteria: Code written, checkpoints saved.

Validate

trw_build_check runs the full verification suite: pytest, mypy, and coverage analysis. It reports pass/fail status and coverage percentages. The AI cannot advance to review until the build passes and coverage meets the configured threshold.
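The gate logic reduces to three checks that must all pass. A minimal sketch (the `BuildResult` shape and the 90% default are assumptions based on the example on this page, not TRW's actual internals):

```python
from dataclasses import dataclass

@dataclass
class BuildResult:
    tests_failed: int
    type_errors: int
    coverage: float  # percent

def gate_passes(result: BuildResult, coverage_threshold: float = 90.0) -> bool:
    """All three checks must pass for the validate gate to open."""
    return (result.tests_failed == 0
            and result.type_errors == 0
            and result.coverage >= coverage_threshold)

ok = gate_passes(BuildResult(tests_failed=0, type_errors=0, coverage=94.0))
stuck = gate_passes(BuildResult(tests_failed=0, type_errors=0, coverage=82.0))
# ok is True; stuck is False (coverage below threshold)
```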

typical validate phase
```python
trw_build_check()
# → pytest: 247 passed, 0 failed
# → mypy: clean (0 errors)
# → coverage: 94% (threshold: 90%)
# → Phase: VALIDATE → REVIEW
```

Exit criteria: Build passes, coverage met.

Review

An independent quality audit of the changes. The AI checks for DRY violations, unnecessary complexity, and SOLID principle adherence. It records any discoveries via trw_learn so future sessions benefit from what this session found.
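A DRY pass like the one described can be approximated by comparing short runs of normalized lines across files (a crude sketch, not TRW's actual analysis; the file contents are invented to mirror the duplicated-validation example below):

```python
from collections import defaultdict

def duplicate_blocks(files, window=3):
    """Flag identical runs of `window` normalized lines across files.

    A crude stand-in for the DRY pass a real review performs.
    """
    seen = defaultdict(set)
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            seen[tuple(lines[i:i + window])].add(name)
    return [(block, names) for block, names in seen.items() if len(names) > 1]

files = {
    "auth.py": "def check(key):\n    if len(key) != 32:\n        raise ValueError\n    return key\n",
    "api_keys.py": "def validate(key):\n    if len(key) != 32:\n        raise ValueError\n    return key\n",
}
dupes = duplicate_blocks(files)
# the shared 3-line validation body is reported in both files
```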

typical review phase
```python
trw_review()
# → Reviewed 4 files, 2 findings
# → Finding: duplicated validation logic in auth.py and api_keys.py

trw_learn(
  "auth validation helpers should be shared",
  "Both auth.py and api_keys.py independently validate API key format..."
)
# → Learning recorded (impact: 0.7)
```

Exit criteria: Review passes or issues resolved.

Deliver

trw_deliver closes the run in one call. It syncs artifacts, promotes high-impact learnings into CLAUDE.md (so the next session loads them automatically), and persists session analytics.
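Promotion can be sketched as an impact filter over recorded learnings (the 0.7 threshold and the Markdown line format are assumptions, not TRW's actual behavior):

```python
# Sketch of learning promotion: keep every learning, but copy only
# high-impact entries into the always-loaded context file.
def promote(learnings, threshold=0.7):
    promoted = [l for l in learnings if l["impact"] >= threshold]
    return "\n".join(f"- {l['title']} (impact {l['impact']})" for l in promoted)

learnings = [
    {"title": "auth validation helpers should be shared", "impact": 0.7},
    {"title": "prefer shared fixtures in test helpers", "impact": 0.4},
]
block = promote(learnings)
# only the 0.7-impact learning makes it into the promoted block
```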

typical deliver phase
```python
trw_deliver()
# → Build gate: PASS (247 tests, mypy clean)
# → Learnings: 3 persisted, 1 promoted to CLAUDE.md
# → Run closed: auth-middleware-refactor
```

Exit criteria: Learnings persisted, run closed.

Adaptive Ceremony

Not every task needs all six phases. TRW scores task complexity and assigns a ceremony tier. Quick fixes skip the phases that would slow them down. Full features use every gate. The system scales process to match the risk of the change.

MINIMAL: Implement → Deliver (2 phases)
STANDARD: Research → Implement → Validate → Deliver (4 phases)
FULL: Research → Plan → Implement → Validate → Review → Deliver (6 phases)
| Tier | When | Phases used | Phases skipped |
| --- | --- | --- | --- |
| MINIMAL | Quick fixes, typos, config changes | Implement, Deliver | Research, Plan, Validate, Review |
| STANDARD | Bug fixes, small features, refactors | Research, Implement, Validate, Deliver | Plan, Review |
| FULL | New features, multi-file changes, architecture | All 6 phases | None |

TRW determines the tier automatically based on the number of files changed, whether the task has a PRD, and whether it touches existing tests.
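Those signals can be combined into a simple scoring function. A minimal sketch (the cutoffs are invented; TRW's real scoring may weigh the signals differently):

```python
def ceremony_tier(files_changed: int, has_prd: bool, touches_tests: bool) -> str:
    """Pick a tier from the signals this page lists.

    The exact cutoffs here are hypothetical, not TRW's real rules.
    """
    if has_prd or files_changed > 5:
        return "FULL"
    if touches_tests or files_changed > 2:
        return "STANDARD"
    return "MINIMAL"

tier1 = ceremony_tier(files_changed=1, has_prd=False, touches_tests=False)  # "MINIMAL"
tier2 = ceremony_tier(files_changed=3, has_prd=False, touches_tests=True)   # "STANDARD"
tier3 = ceremony_tier(files_changed=8, has_prd=True, touches_tests=True)    # "FULL"
```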

Quick Fix vs Full Feature

These are the two most common workflows. A quick fix touches one or two files and skips straight to implementation. A full feature uses every phase.

quick fix
```python
trw_session_start()
# skip Research, Plan, Validate, Review

# → fix the bug
trw_deliver()
```

full feature
```python
trw_session_start()    # Research
trw_recall("auth")
trw_init("feature")    # Plan
trw_checkpoint(...)    # Implement
trw_build_check()      # Validate
trw_review()           # Review
trw_learn(...)
trw_deliver()          # Deliver
```

Phase Reversion

Phases are not strictly linear. When a later phase reveals a problem, the AI reverts to an earlier phase rather than pushing through.

  • Validate fails — revert to Implement. Fix the code, then re-validate.
  • Review finds design issues — revert to Plan. Rethink the approach, then re-implement.
  • Implementation hits unknowns — revert to Research. Gather more context, then re-plan.
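The three rules above can be encoded as a transition table from (failing phase, failure kind) to the phase to revert to (a minimal sketch; the failure-kind labels are invented):

```python
# The three reversion rules above, as a lookup from (failing phase,
# failure kind) to the phase the AI falls back to.
REVERT = {
    ("validate", "build_failure"): "implement",
    ("review", "design_issue"): "plan",
    ("implement", "unknowns"): "research",
}

def next_phase(phase, problem=None):
    """Revert when a known problem occurs; otherwise hold position."""
    return REVERT.get((phase, problem), phase)

# next_phase("review", "design_issue") == "plan"
```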

Why this matters

Catching design issues during review costs a few minutes of rework. Shipping them costs hours of debugging in production. Phase reversion is the mechanism that enforces this tradeoff — the AI goes back instead of forward when quality gates fail.

Hooks

TRW includes 14 lifecycle hooks that fire at phase boundaries and session events. Hooks enforce quality automatically — you do not need to configure them.

  • Session start — loads learnings and checks for interrupted runs
  • Pre-compaction — automatically saves a checkpoint before context window compacts
  • Phase gates — block advancement when exit criteria are not met
  • Session end — warns if delivery was skipped, preventing knowledge loss
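A hook system like this can be sketched as a registry of callbacks keyed by event name (the `on`/`fire` API is hypothetical, not TRW's; the event name mirrors the pre-compaction hook above):

```python
from collections import defaultdict

# Minimal hook registry sketch: handlers are registered per event and
# fired when the lifecycle reaches that boundary.
hooks = defaultdict(list)

def on(event):
    def register(fn):
        hooks[event].append(fn)
        return fn
    return register

def fire(event, **ctx):
    return [fn(**ctx) for fn in hooks[event]]

@on("pre_compaction")
def save_checkpoint(**ctx):
    return f"checkpoint saved before compaction ({ctx['reason']})"

results = fire("pre_compaction", reason="context window full")
# one handler ran, returning its message
```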

Next Steps