
Requirements engineering

AI agents write code fast. Without structured requirements, they write the wrong code fast. TRW uses AARE-F to turn vague feature requests into machine-verifiable specifications that agents can implement, trace, and validate. The result is a workflow where product intent, code changes, and tests stay connected instead of drifting apart.

263+ PRDs · 64+ sprints · 6 lifecycle stages · 12 PRD sections

AARE-F framework

AI-Augmented Requirements Engineering Framework, v2.0 — synthesized from 26 waves of systematic research. Ten components across four layers, grounded in five principles. The core insight: AI augments human judgment. It does not replace it, and the framework only works when it stays tied to the actual codebase and delivery flow.

#   Principle                What it means
P1  Traceability first       Every artifact traces to sources and downstream impacts
P2  Human-in-the-loop        AI accelerates but humans decide; oversight is mandatory
P3  Risk-based rigor         Effort scales with consequence; not all requirements need equal treatment
P4  Semantic understanding   Embeddings replace keywords as the computational substrate
P5  Continuous verification  Compliance is engineered in, not audited after

Four-layer architecture

Each layer builds on the one below. Foundation provides the data substrate. Governance controls AI decision-making. Execution coordinates agents. Operations integrates with DevOps.

Foundation   C1 Traceability, C4 Semantic
Governance   C2 LLM Gov, C3 Risk, C8 Guards
Execution    C5 Agents, C6 Uncertainty, C10 Conflicts
Operations   C7 Req-as-Code, C9 Observability

PRD system

Every feature starts as a PRD. Each has 12 mandatory sections, EARS-compliant requirements with confidence scores, and Given/When/Then acceptance criteria. Format: PRD-CORE-086, PRD-QUAL-016, PRD-FIX-035.
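As a hedged sketch of what EARS compliance checking can look like (the real trw_prd_validate internals are not shown on this page), the event-driven EARS shape "When <trigger>, the <system> shall <response>" reduces to a regex:

```python
import re

# Event-driven EARS pattern: "When <trigger>, the <system> shall <response>".
# This regex is an illustrative assumption, not TRW's actual checker.
EARS_EVENT = re.compile(r"^When .+, the .+ shall .+", re.IGNORECASE)

def is_ears_compliant(requirement: str) -> bool:
    """Return True if the requirement matches the event-driven EARS shape."""
    return bool(EARS_EVENT.match(requirement.strip()))

print(is_ears_compliant(
    "When a request exceeds the rate limit, the API shall return HTTP 429"))  # True
print(is_ears_compliant("The API should probably handle bursts"))             # False
```

A full checker would cover the other EARS templates (ubiquitous, state-driven, unwanted-behavior, optional), but the shape of the test is the same.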

Draft → Groomed → Reviewed → Sprint-ready → In-progress → Done

Stage by stage

Draft (trw_prd_create): Created from a feature description with 12 mandatory sections

Groomed (/trw-prd-groom): Iterated to 85%+ quality with a traceability matrix and EARS requirements

Reviewed (/trw-prd-review): Independent quality review with a READY / NEEDS WORK verdict

Sprint-ready (/trw-prd-ready): Create or refine the PRD, review it, then generate an execution plan

In-progress (/trw-sprint-init): Assigned to a sprint with agents implementing against each FR

Done (trw_deliver): All FRs verified, build passes, delivery ceremony complete
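The six stages form a small state machine. As an illustrative sketch (the stage names come from this page; the enforcement code and the review-rejection edge are assumptions), the legal transitions can be written as a table:

```python
# Hypothetical transition table for the PRD lifecycle. A REVIEWED PRD that
# gets a NEEDS WORK verdict is assumed to return to GROOMED for another pass.
TRANSITIONS = {
    "draft": {"groomed"},
    "groomed": {"reviewed"},
    "reviewed": {"sprint-ready", "groomed"},  # NEEDS WORK sends it back
    "sprint-ready": {"in-progress"},
    "in-progress": {"done"},
    "done": set(),
}

def advance(current: str, target: str) -> str:
    """Move a PRD to the next stage, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

stage = "draft"
for nxt in ("groomed", "reviewed", "sprint-ready", "in-progress", "done"):
    stage = advance(stage, nxt)
print(stage)  # done
```

The point of the table is that a PRD cannot skip stages: there is no path from draft to sprint-ready that avoids grooming and review.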

Quality gates

PRDs pass automated validation before entering a sprint. Four dimensions are scored. Fall below any threshold and the PRD is blocked until fixed.

Dimension        Threshold  How it's measured
Ambiguity        < 5%       Vague terms detected: "TBD", "maybe", "could", "should consider"
Completeness     > 85%      All 12 mandatory sections populated with substantive content
Traceability     > 90%      Each FR linked to source files and test files via backtick references
Content density  > 0.25     Ratio of substantive lines to total lines; no filler, no boilerplate
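Two of these gates are simple enough to sketch in a few lines. The term list and thresholds come from the table above; the scoring code itself is an assumption, not the actual trw_prd_validate implementation:

```python
# Vague-term list taken from the ambiguity gate description above.
VAGUE_TERMS = ("tbd", "maybe", "could", "should consider")

def ambiguity_ratio(lines: list[str]) -> float:
    """Fraction of lines containing at least one vague term."""
    flagged = sum(1 for ln in lines if any(t in ln.lower() for t in VAGUE_TERMS))
    return flagged / max(len(lines), 1)

def content_density(lines: list[str]) -> float:
    """Fraction of lines with substantive content (here: more than 3 chars)."""
    substantive = [ln for ln in lines if len(ln.strip()) > 3]
    return len(substantive) / max(len(lines), 1)

prd = ["FR01: When a request exceeds the limit, the API shall return 429",
       "Rate limit config is TBD",
       ""]
print(ambiguity_ratio(prd) < 0.05)   # False: 1 of 3 lines is vague
print(content_density(prd) > 0.25)   # True: 2 of 3 lines are substantive
```

A PRD failing either check would be blocked from entering a sprint until the flagged lines are rewritten.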
grooming workflow
# One command creates, grooms, reviews, and plans
/trw-prd-ready "Add rate limiting to the API"
# → PRD-CORE-088 created (score: 62/100)
# → Groom pass 1: 62 → 78 (filled sections, added EARS patterns)
# → Groom pass 2: 78 → 86 (traceability matrix, density)
# → Review: READY (7 P2 suggestions, 0 blockers)
# → Execution plan: 3 waves, 24 tasks, file ownership assigned

Traceability

Every requirement links forward to code and backward to rationale. The traceability checker agent verifies these links at VALIDATE and DELIVER — unlinked FRs block delivery.

traceability chain
PRD-CORE-086                    (requirement)
  └── FR01: Assertion model
       ├── trw-memory/models/memory.py:45   (source)
       ├── trw-memory/lifecycle/verify.py   (source)
       └── tests/test_assertions.py:12      (test)

Target: >= 90% of FRs linked to both source and tests
Impact analysis: < 5 seconds per change

How it works

FRs reference source files with backtick-wrapped paths: `src/auth.py:42`. The traceability checker parses these, verifies the files exist, and scores coverage. Missing links block delivery.

Sprint execution

Sprints decompose PRDs into waves — groups of tasks with explicit dependency ordering. Each wave gets file ownership to prevent merge conflicts when agents work in parallel.

Step           What happens                                              Tool
1  Initialize  Select PRDs, generate wave plan, assign file ownership    /trw-sprint-init
2  Plan        Decompose FRs into micro-tasks with dependency graphs     /trw-exec-plan
3  Implement   Agents work waves sequentially, checkpoint after each     trw_checkpoint
4  Validate    Build gate: tests pass, type-check clean, coverage met    trw_build_check
5  Review      Adversarial spec-vs-code audit by independent agent       /trw-audit
6  Deliver     Persist learnings, close run, sync startup instructions   trw_deliver

In practice

A real sprint usually starts with one or more approved PRDs, turns them into wave-based tasks, and keeps ownership explicit so multiple agents can work without stomping on the same files. The exact wave count changes by scope; the dependency model is what matters.
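Under stated assumptions (the task names and file sets below are hypothetical, and /trw-exec-plan's real output format may differ), the dependency model can be sketched with the standard library's topological sorter:

```python
from graphlib import TopologicalSorter

# Hypothetical micro-tasks: each maps to the set of tasks it depends on.
deps = {
    "FR02-middleware": {"FR01-model"},
    "FR03-tests": {"FR01-model", "FR02-middleware"},
    "FR01-model": set(),
}
# Hypothetical file ownership per task.
owners = {
    "FR01-model": {"models/limit.py"},
    "FR02-middleware": {"middleware/rate.py"},
    "FR03-tests": {"tests/test_rate.py"},
}

# Dependencies determine wave order: predecessors always run first.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['FR01-model', 'FR02-middleware', 'FR03-tests']

# Two tasks may share a wave only if their file sets are disjoint,
# which is what prevents merge conflicts between parallel agents.
assert owners["FR01-model"].isdisjoint(owners["FR02-middleware"])
```

The disjoint-ownership check is the key design choice: parallelism is granted per wave only where no two agents can touch the same file.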

Executable assertions

Learnings and PRD FRs carry grep/glob assertions verified against the codebase automatically. If the code changes and an assertion fails, the learning is flagged as stale. Knowledge stays honest as the codebase evolves.

assertion example
trw_learn(
  summary="SQLite WAL mode required for concurrent reads",
  detail="Without WAL, concurrent read queries block on writes...",
  assertions=[{
    "type": "grep",
    "pattern": "journal_mode.*wal",
    "glob": "**/*.py",
    "must_match": True
  }]
)
# → Learning recorded with 1 assertion
# → Assertion verified: PASS (matched in storage/sqlite.py:34)

Why this matters

Traditional learnings are passive text — they claim things but never prove them. Executable assertions close the loop. Every claim is verified on every recall.
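A grep assertion of the shape shown above can be verified with a short scan. The assertion dict mirrors the trw_learn example; the checking code itself is an assumption, not TRW's implementation:

```python
import re
from pathlib import Path

def verify_assertion(assertion: dict, repo_root: str = ".") -> bool:
    """Check one grep assertion: does any line in the globbed files match?"""
    pattern = re.compile(assertion["pattern"])
    matched = any(
        pattern.search(line)
        for path in Path(repo_root).glob(assertion["glob"])
        if path.is_file()
        for line in path.read_text(errors="ignore").splitlines()
    )
    # The assertion passes when reality agrees with the expectation.
    return matched == assertion["must_match"]
```

On recall, a learning whose assertion returns False would be flagged stale rather than served as if it were still true.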

Tools and skills

trw_prd_create (MCP tool): Generate an AARE-F-compliant PRD from a feature description

trw_prd_validate (MCP tool): Score a PRD across 4 quality dimensions with a pass/fail gate

/trw-prd-new (skill): Full lifecycle in one command: create, groom, review, execution plan

/trw-prd-groom (skill): Internal grooming stage that raises a draft PRD to sprint-ready quality

/trw-prd-review (skill): Internal review stage that returns READY or NEEDS WORK with per-dimension scores

/trw-audit (skill): Adversarial spec-vs-code verification that finds gaps the implementation missed

Next steps

Once the spec is clear, move into the execution layer: the tools that manipulate PRDs, the skills that package that workflow, and the agents that carry it out. Requirements define the target; tools, skills, and agents show how TRW turns those specs into delegated implementation, review, and audit steps.