Multiple agents, one coherent output
Role-specialized agents with enforced file ownership, four coordination formations, and structured handoffs — so parallel work doesn't become parallel chaos.
Sequential stages with clean handoffs — each agent completes before the next begins.
lit node = active · dashed boundary = file ownership scope · packet = handoff
One agent trying to do everything produces worse output
Split attention across research, implementation, and review degrades all three. A focused specialist with narrow context outperforms a generalist with wide context. The pattern is familiar from human teams: engineers working in tight file scopes produce fewer defects than engineers who context-switch constantly. The same holds for LLMs: a researcher reading five files finds more relevant signal than an implementer reading five hundred files while also writing code. Specialization is the lever.
Looking for why teams choose TRW? See /for/teams — this page covers the mechanics.
File ownership, enforced
Each spawned agent declares its file scope at startup. Writes outside that scope are rejected at the hook layer before the filesystem is touched. Silent overlap between parallel agents is structurally impossible — not a convention, not a guideline, not something you have to remember. The trw-implementer owns src/module.*; the trw-tester owns tests/test_module.*. Both run in the same wave. Neither can write to the other's scope. The same pattern applies whether your repo is Python, TypeScript, Go, or anything else — scopes are glob patterns, not language rules.
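The check is simple enough to sketch. Here is a minimal illustration of hook-layer scope enforcement using glob matching — the function name, dict shape, and agent-to-scope mapping are illustrative assumptions, not TRW's actual API:

```python
from fnmatch import fnmatch

# Hypothetical scope table: agent name -> declared glob patterns.
# (Illustrative only; TRW's real declaration mechanism may differ.)
AGENT_SCOPES = {
    "trw-implementer": ["src/module.*"],
    "trw-tester": ["tests/test_module.*"],
}

def allow_write(agent: str, path: str) -> bool:
    """Reject any write that falls outside the agent's declared glob scope."""
    return any(fnmatch(path, pattern) for pattern in AGENT_SCOPES.get(agent, []))

assert allow_write("trw-implementer", "src/module.py")
assert not allow_write("trw-implementer", "tests/test_module.py")  # rejected at the hook layer
```

Because the check runs before the filesystem is touched, an out-of-scope write fails loudly instead of silently clobbering a sibling agent's files.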
Four coordination formations
TRW lets you choose the topology that matches the work — not just whether to use multiple agents, but how they relate to each other. Toggle between formations in the graph above; explanations below.
Sequential stages with clean handoffs.
Use when each stage depends on the previous output and cannot start until the predecessor is complete. Default for most feature implementations.
Parallelizable research → single merge.
Use when the same task can be split into independent axes (e.g., read three separate modules) and the outputs merge into one implementer. Reduces wall-clock time for large codebases.
Iterative refinement via execution feedback.
Use when a plan needs execution feedback to improve. The reflector reads the executor's diff and feeds back to the planner before the next wave. Good for iterative design.
Quality-critical decisions via adversarial evaluation.
Use when correctness matters more than speed. Two implementers propose competing approaches; the adversarial auditor probes both; the reviewer delivers a verdict. High cost, high confidence.
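The four topologies can be pictured as directed agent-to-agent handoff edges. A minimal sketch — the agent names, formation keys, and dict shape are assumptions for illustration, not TRW's schema:

```python
# Illustrative only: each formation as a list of (from_agent, to_agent) handoffs.
FORMATIONS = {
    "pipeline": [("researcher", "implementer"), ("implementer", "reviewer")],
    "fan-in": [("researcher-a", "implementer"),
               ("researcher-b", "implementer"),
               ("researcher-c", "implementer")],
    "feedback": [("planner", "executor"), ("executor", "reflector"),
                 ("reflector", "planner")],
    "adversarial": [("implementer-a", "auditor"), ("implementer-b", "auditor"),
                    ("auditor", "reviewer")],
}

def entry_agents(edges):
    """Agents with no incoming handoff: they can start immediately."""
    sources = {a for a, _ in edges}
    targets = {b for _, b in edges}
    return sorted(sources - targets)

assert entry_agents(FORMATIONS["pipeline"]) == ["researcher"]
assert entry_agents(FORMATIONS["feedback"]) == []  # a cycle: the loop is seeded externally
```

Note the feedback formation is the only cyclic one, which is why it needs an external seed (the initial plan) rather than a natural entry agent.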
Wave/shard execution
Work decomposes into parallel SHARDS within sequential WAVES. Wave 1 (foundation) must complete before Wave 2 (integration) begins; shards within a wave run concurrently. The model is defined declaratively in FRAMEWORK.md rather than implemented as MCP tools — which reduces context per agent by 77% compared to the v0.2.0 tool-based approach. The orchestrator writes shards/wave_manifest.yaml before spawning agents; each shard writes its findings to scratch/shard-{id}/findings.yaml before returning.
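A rough sketch of what a wave manifest might contain and how it determines execution order — the field names and shard IDs here are illustrative guesses, not TRW's documented manifest schema:

```python
# Hypothetical in-memory shape of shards/wave_manifest.yaml.
# (Field names are assumptions; TRW's actual schema may differ.)
manifest = {
    "waves": [
        {"name": "foundation", "shards": [
            {"id": "1a", "agent": "trw-implementer", "scope": "src/module.*"},
            {"id": "1b", "agent": "trw-tester", "scope": "tests/test_module.*"},
        ]},
        {"name": "integration", "shards": [
            {"id": "2a", "agent": "trw-implementer", "scope": "src/module.*"},
        ]},
    ]
}

def execution_order(manifest):
    """Waves run sequentially; shards inside each wave run concurrently."""
    return [[shard["id"] for shard in wave["shards"]] for wave in manifest["waves"]]

# Shards 1a and 1b run in parallel; 2a waits for the whole foundation wave.
assert execution_order(manifest) == [["1a", "1b"], ["2a"]]
```

Each inner list is one wave's concurrent batch; the orchestrator only advances to the next list once every shard in the current one has written its findings and returned.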
Subagent hooks
subagent-start.sh injects phase-specific guidance and the ceremony protocol into every spawned agent at startup. No agent operates outside the ceremony envelope, regardless of which formation it runs in. The hook fires before the agent receives its task, so every agent starts with consistent context: current phase, active run ID, and required tool lifecycle.
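Conceptually, the hook assembles a small context envelope before the agent sees its task. A Python sketch of what that payload might look like — the key names and ceremony steps are assumptions for illustration, not the hook's actual output:

```python
import json

def startup_envelope(phase: str, run_id: str) -> str:
    """Illustrative context payload a startup hook might inject into a
    spawned agent. Keys are hypothetical, not subagent-start.sh's schema."""
    return json.dumps({
        "phase": phase,        # current lifecycle phase
        "run_id": run_id,      # active run identifier
        "ceremony": "required tool lifecycle protocol",
    })

envelope = json.loads(startup_envelope("implement", "run-042"))
assert envelope["phase"] == "implement"
```

The point of injecting this before the task is delivered is consistency: every agent in every formation starts from the same envelope, so no agent can drift outside the ceremony protocol.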
Pairs with
Workflows
Formations execute the 6-phase lifecycle in parallel. Wave structure maps directly to phase boundaries.
Memory
Team-wide memory means any agent's discovery surfaces for every other agent's next task.
Requirements
File ownership is declared in the PRD's traceability matrix. The PRD is the team's contract.
Common questions
Do I need to run agents in parallel to use TRW?
How does file ownership work if I need the same file edited twice?
Is agent spawning experimental?