Requirements engineering
AI agents write code fast. Without structured requirements, they write the wrong code fast. TRW uses AARE-F to turn vague feature requests into machine-verifiable specifications that agents can implement, trace, and validate. The result is a workflow where product intent, code changes, and tests stay connected instead of drifting apart.
AARE-F framework
AI-Augmented Requirements Engineering Framework, v2.0 — synthesized from 26 waves of systematic research. Ten components across four layers, grounded in five principles. The core insight: AI augments human judgment. It does not replace it, and the framework only works when it stays tied to the actual codebase and delivery flow.
| # | Principle | What it means |
|---|---|---|
| P1 | Traceability first | Every artifact traces to sources and downstream impacts |
| P2 | Human-in-the-loop | AI accelerates but humans decide — oversight is mandatory |
| P3 | Risk-based rigor | Effort scales with consequence; not all requirements need equal treatment |
| P4 | Semantic understanding | Embeddings replace keywords as the computational substrate |
| P5 | Continuous verification | Compliance is engineered in, not audited after |
Four-layer architecture
Each layer builds on the one below. Foundation provides the data substrate. Governance controls AI decision-making. Execution coordinates agents. Operations integrates with DevOps.
PRD system
Every feature starts as a PRD. Each has 12 mandatory sections, EARS-compliant requirements with confidence scores, and Given/When/Then acceptance criteria. IDs follow the format PRD-CORE-086, PRD-QUAL-016, PRD-FIX-035.
Stage by stage
| Stage | What happens | Tool |
|---|---|---|
| Draft | Created from a feature description with 12 mandatory sections | trw_prd_create |
| Groomed | Iterated to 85%+ quality with traceability matrix and EARS requirements | /trw-prd-groom |
| Reviewed | Independent quality review with READY / NEEDS WORK verdict | /trw-prd-review |
| Sprint-ready | Create or refine the PRD, review it, then generate an execution plan | /trw-prd-ready |
| In-progress | Assigned to a sprint with agents implementing against each FR | /trw-sprint-init |
| Done | All FRs verified, build passes, delivery ceremony complete | trw_deliver |
Quality gates
PRDs pass automated validation before entering a sprint. Four dimensions are scored. Fall below any threshold and the PRD is blocked until fixed.
| Dimension | Threshold | How it's measured |
|---|---|---|
| Ambiguity | < 5% | Vague terms detected — "TBD", "maybe", "could", "should consider" |
| Completeness | > 85% | All 12 mandatory sections populated with substantive content |
| Traceability | > 90% | Each FR linked to source files and test files via backtick references |
| Content density | > 0.25 | Ratio of substantive lines to total lines — no filler, no boilerplate |
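As a rough illustration, the ambiguity and content-density gates can be computed as line-level ratios. This is a sketch, not TRW's actual scorer: the vague-term list, the line-counting rules, and the heading heuristic are all assumptions for demonstration.

```python
import re

# Vague terms from the Ambiguity row above; the list TRW actually
# scans for may be longer -- this subset is illustrative.
VAGUE_TERMS = [r"\bTBD\b", r"\bmaybe\b", r"\bcould\b", r"\bshould consider\b"]

def ambiguity_score(text: str) -> float:
    """Fraction of non-blank lines containing at least one vague term."""
    lines = [l for l in text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    flagged = sum(
        1 for l in lines
        if any(re.search(t, l, re.IGNORECASE) for t in VAGUE_TERMS)
    )
    return flagged / len(lines)

def content_density(text: str) -> float:
    """Ratio of substantive lines to total lines. Here 'substantive'
    means non-blank and not a heading -- a stand-in heuristic."""
    lines = text.splitlines()
    if not lines:
        return 0.0
    substantive = [l for l in lines if l.strip() and not l.lstrip().startswith("#")]
    return len(substantive) / len(lines)

prd = "## Goals\nRate limiting should consider burst traffic.\nLimit is 100 req/min.\n"
print(ambiguity_score(prd) < 0.05)          # False: 1 of 3 lines is vague
print(round(content_density(prd), 2))       # 0.67
```

A real scorer would weight sections and ignore code blocks, but the shape is the same: flag, count, compare against the threshold.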
```
# One command creates, grooms, reviews, and plans
/trw-prd-ready "Add rate limiting to the API"
# → PRD-CORE-088 created (score: 62/100)
# → Groom pass 1: 62 → 78 (filled sections, added EARS patterns)
# → Groom pass 2: 78 → 86 (traceability matrix, density)
# → Review: READY (7 P2 suggestions, 0 blockers)
# → Execution plan: 3 waves, 24 tasks, file ownership assigned
```
Traceability
Every requirement links forward to code and backward to rationale. The traceability checker agent verifies these links at VALIDATE and DELIVER — unlinked FRs block delivery.
```
PRD-CORE-086 (requirement)
└── FR01: Assertion model
    ├── trw-memory/models/memory.py:45 (source)
    ├── trw-memory/lifecycle/verify.py (source)
    └── tests/test_assertions.py:12 (test)
```
Target: >= 90% of FRs linked to both source and tests
Impact analysis: < 5 seconds per change
How it works
FRs reference source files with backtick-wrapped paths: `src/auth.py:42`. The traceability checker parses these, verifies the files exist, and scores coverage. Missing links block delivery.
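A minimal sketch of that parsing step, assuming the backtick format shown above. The regex and the function name are illustrative, not TRW's real API:

```python
import re
from pathlib import Path

# Matches backtick-wrapped references like `src/auth.py:42` or
# `src/auth.py`; the optional :NN line suffix is ignored for the
# existence check.
REF_RE = re.compile(r"`([\w./-]+\.\w+)(?::(\d+))?`")

def check_fr_links(fr_text: str, repo_root: str = ".") -> dict:
    """Parse file references from an FR and report whether each
    referenced file exists under repo_root."""
    results = {}
    for path, _line in REF_RE.findall(fr_text):
        results[path] = (Path(repo_root) / path).is_file()
    return results

fr = ("FR01: assertions live in `trw-memory/models/memory.py:45` "
      "and are covered by `tests/test_assertions.py`")
print(check_fr_links(fr))
```

Coverage scoring then reduces to counting FRs whose links all resolve, divided by total FRs, and comparing against the 90% threshold.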
Sprint execution
Sprints decompose PRDs into waves — groups of tasks with explicit dependency ordering. Each wave gets file ownership to prevent merge conflicts when agents work in parallel.
| Step | What happens | Tool |
|---|---|---|
| 1. Initialize | Select PRDs, generate wave plan, assign file ownership | /trw-sprint-init |
| 2. Plan | Decompose FRs into micro-tasks with dependency graphs | /trw-exec-plan |
| 3. Implement | Agents work waves sequentially, checkpoint after each | trw_checkpoint |
| 4. Validate | Build gate — tests pass, type-check clean, coverage met | trw_build_check |
| 5. Review | Adversarial spec-vs-code audit by independent agent | /trw-audit |
| 6. Deliver | Persist learnings, close run, sync startup instructions | trw_deliver |
In practice
A real sprint usually starts with one or more approved PRDs, turns them into wave-based tasks, and keeps ownership explicit so multiple agents can work without stomping on the same files. The exact wave count changes by scope; the dependency model is what matters.
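The dependency model can be sketched with a topological sort: tasks whose prerequisites are all satisfied by earlier waves land in the same wave, and file ownership is checked for conflicts within each wave. The task names and the dict schema here are hypothetical, not TRW's real wave-plan format:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical task graph: task -> set of prerequisite tasks.
deps = {
    "FR01-model": set(),
    "FR01-tests": {"FR01-model"},
    "FR02-api":   {"FR01-model"},
    "FR02-tests": {"FR02-api"},
}
# Hypothetical file ownership per task.
ownership = {
    "FR01-model": {"models/memory.py"},
    "FR01-tests": {"tests/test_memory.py"},
    "FR02-api":   {"api/routes.py"},
    "FR02-tests": {"tests/test_routes.py"},
}

# Group tasks into waves: each wave holds every task whose
# prerequisites were completed in earlier waves.
ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())
    waves.append(ready)
    ts.done(*ready)

# Within a wave, no two tasks may own the same file, so parallel
# agents never touch the same paths.
for wave in waves:
    files = [f for task in wave for f in ownership[task]]
    assert len(files) == len(set(files)), "file ownership conflict"

print(waves)
```

Here FR01-tests and FR02-api share a wave because both depend only on FR01-model, and their ownership sets are disjoint, so two agents can run them in parallel.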
Executable assertions
Learnings and PRD FRs carry grep/glob assertions that are automatically verified against the codebase. If the code changes and an assertion fails, the learning is flagged as stale, so knowledge stays honest as the codebase evolves.
```python
trw_learn(
    summary="SQLite WAL mode required for concurrent reads",
    detail="Without WAL, concurrent read queries block on writes...",
    assertions=[{
        "type": "grep",
        "pattern": "journal_mode.*wal",
        "glob": "**/*.py",
        "must_match": True
    }]
)
# → Learning recorded with 1 assertion
# → Assertion verified: PASS (matched in storage/sqlite.py:34)
```
Why this matters
Traditional learnings are passive text — they claim things but never prove them. Executable assertions close the loop. Every claim is verified on every recall.
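To make the loop concrete, here is a simplified assertion checker in the spirit of the grep assertion above. The function name and the throwaway-directory demo are illustrative assumptions; TRW's real checker may differ:

```python
import re
import tempfile
from pathlib import Path

def verify_grep_assertion(assertion: dict, root: str) -> bool:
    """Return True when the assertion holds: the pattern's presence
    anywhere in the globbed files matches the must_match flag."""
    pattern = re.compile(assertion["pattern"], re.IGNORECASE)
    matched = any(
        pattern.search(path.read_text(errors="ignore"))
        for path in Path(root).glob(assertion["glob"])
        if path.is_file()
    )
    return matched == assertion["must_match"]

# Demo against a throwaway directory standing in for the repo root.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "sqlite.py").write_text(
        'conn.execute("PRAGMA journal_mode=WAL")'
    )
    ok = verify_grep_assertion(
        {"type": "grep", "pattern": "journal_mode.*wal",
         "glob": "**/*.py", "must_match": True},
        root,
    )
print(ok)  # True: the pattern matched, as the assertion requires
```

Running the same check after the WAL pragma is deleted would return False, which is exactly the signal used to flag a learning as stale.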
Tools and skills
| Name | Type | Description |
|---|---|---|
| trw_prd_create | MCP tool | Generate an AARE-F-compliant PRD from a feature description |
| trw_prd_validate | MCP tool | Score a PRD across 4 quality dimensions with pass/fail gate |
| /trw-prd-new | Skill | Full lifecycle in one command: create, groom, review, execution plan |
| /trw-prd-groom | Skill | Internal grooming stage that raises a draft PRD to sprint-ready quality |
| /trw-prd-review | Skill | Internal review stage that returns READY or NEEDS WORK with per-dimension scores |
| /trw-audit | Skill | Adversarial spec-vs-code verification that finds gaps the implementation missed |
Next steps
Once the spec is clear, move into the execution layer: the tools that manipulate PRDs, the skills that package that workflow, and the agents that carry it out.