Memory & Knowledge
TRW gives your AI agents a memory that compounds. Every discovery persists, surfaces when relevant, and improves over time. Session 50 is fundamentally better than session 1 — not because you remember more, but because the framework does.
The Knowledge Flywheel
TRW's architecture is built around a single reinforcing loop. Each turn makes the next turn more valuable. Early sessions produce broad learnings. Later sessions produce refined, high-impact patterns that surface automatically.
The Compounding Effect
This flywheel is TRW's core differentiator. Across 80+ surveyed competitors, no project combines learning with scoring, decay curves, tier promotion, consolidation, and delivery ceremony into a single compounding system. The integrated lifecycle is unique — and because the flywheel compounds, every session widens the gap.
How It Works
A learning flows through six stages from initial discovery to permanent project context.
1. Learn: Your AI discovers a gotcha, pattern, or architecture decision during work.
2. Persist: trw_learn() saves the discovery as a structured YAML entry with tags and metadata.
3. Recall: Future sessions search past learnings via trw_recall(), a hybrid keyword + semantic match.
4. Apply: The AI uses recalled knowledge to avoid known mistakes and follow proven patterns.
5. Improve: Each recall boosts the learning's impact score. Unused learnings decay naturally.
6. Promote: High-impact learnings get promoted into CLAUDE.md — permanent context for every session.
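The loop above can be sketched as a minimal in-memory store. This is an illustrative toy, not TRW's actual implementation: the class names, the +0.15 recall boost, the 0.45 starting score, and the keyword-overlap recall are all assumptions chosen to make the learn → recall → boost → promote cycle concrete.

```python
from dataclasses import dataclass

@dataclass
class Learning:
    summary: str
    tags: list[str]
    score: float = 0.45  # illustrative starting score for a new, untested learning

class LearningStore:
    def __init__(self):
        self.entries: list[Learning] = []
        self.promoted: list[Learning] = []  # stand-in for CLAUDE.md

    def learn(self, summary: str, tags: list[str]) -> Learning:
        entry = Learning(summary, tags)
        self.entries.append(entry)
        return entry

    def recall(self, query: str) -> list[Learning]:
        # Toy keyword match; TRW describes a hybrid keyword + semantic search.
        words = set(query.lower().split())
        hits = [e for e in self.entries
                if words & set(e.tags) or words & set(e.summary.lower().split())]
        for e in hits:  # each recall boosts the learning's impact
            e.score = min(1.0, e.score + 0.15)
        return sorted(hits, key=lambda e: e.score, reverse=True)

    def promote(self, threshold: float = 0.7):
        for e in self.entries:
            if e.score >= threshold and e not in self.promoted:
                self.promoted.append(e)

store = LearningStore()
store.learn("WAL mode required for concurrent reads", ["sqlite", "testing"])
store.recall("sqlite deadlock")   # boosts 0.45 -> 0.60
store.recall("sqlite deadlock")   # boosts 0.60 -> 0.75, clearing the bar
store.promote()
print(len(store.promoted))  # → 1
```

Two successful recalls push the entry past the promotion threshold, which mirrors the progression shown in the CLAUDE.md Promotion section below.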
Learning Lifecycle
Every learning goes through a managed lifecycle. This is not a flat key-value store — it is a living knowledge system with scoring, decay, and promotion.
| Stage | What happens | Mechanism |
|---|---|---|
| Recording | AI calls trw_learn() with summary, detail, and tags | YAML entry created in .trw/learnings/ |
| Scoring | Initial impact score assigned based on content quality and tags | Q-learning + heuristic analysis |
| Recall | Future sessions retrieve relevant learnings via hybrid search | BM25 keyword + dense vector similarity |
| Decay | Unused learnings lose impact score over time | Ebbinghaus-inspired decay curve |
| Promotion | High-impact learnings promoted to permanent project context | Auto-sync into CLAUDE.md |
| Consolidation | Related learnings merged, stale ones pruned | Jaccard dedup + semantic clustering |
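The decay stage can be sketched as a simple forgetting curve. The exponential form and the 30-day half-life below are illustrative assumptions, not TRW's actual constants; the idea is just that impact halves at a fixed interval until a recall resets the clock.

```python
import math

def decayed_score(score: float, days_since_recall: float,
                  half_life: float = 30.0) -> float:
    """Ebbinghaus-style exponential decay: unused learnings lose impact.

    half_life is a hypothetical constant; a recall would reset
    days_since_recall to zero and restore the full score.
    """
    return score * math.exp(-math.log(2) * days_since_recall / half_life)

print(round(decayed_score(0.8, 0), 2))    # just recalled → 0.8
print(round(decayed_score(0.8, 30), 2))   # one half-life later → 0.4
print(round(decayed_score(0.8, 120), 2))  # long-unused: drifting toward pruning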
Impact Scoring
Not all learnings are equal. TRW scores each learning across four dimensions and uses the composite score to determine recall priority, decay rate, and promotion eligibility.
| Factor | Weight | Description |
|---|---|---|
| Utility | High | How often this learning is recalled and applied successfully. |
| Recency | Medium | When the learning was last accessed. Recent learnings score higher. |
| Frequency | Medium | How many sessions have used this learning. Cross-session value compounds. |
| Specificity | Low | Targeted learnings (tagged, scoped) score higher than vague ones. |
Scores range from 0.0 to 1.0. Learnings above 0.7 are considered high-impact and eligible for CLAUDE.md promotion. Learnings that decay below 0.1 are candidates for pruning during consolidation.
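A weighted sum is one plausible way to combine the four factors. The weights below are illustrative guesses that mirror the High/Medium/Medium/Low ranking in the table; only the 0.7 promotion bar and 0.1 pruning floor come from the text.

```python
def impact_score(utility: float, recency: float, frequency: float,
                 specificity: float) -> float:
    """Composite impact score; each factor is normalized to 0..1.

    The weight values are hypothetical, chosen only to reflect the
    High/Medium/Medium/Low ranking described above.
    """
    weights = {"utility": 0.4, "recency": 0.25,
               "frequency": 0.25, "specificity": 0.1}
    score = (weights["utility"] * utility
             + weights["recency"] * recency
             + weights["frequency"] * frequency
             + weights["specificity"] * specificity)
    return round(score, 2)

# A well-tagged learning recalled often and recently clears the 0.7 bar:
print(impact_score(utility=0.9, recency=0.8, frequency=0.7, specificity=1.0))
```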
CLAUDE.md Promotion
The most valuable learnings graduate from the learning store into CLAUDE.md — the file that every AI session reads on startup. This means high-impact discoveries become permanent context without manual curation.
```python
# Learning recorded in session 12:
trw_learn(
    summary="SQLite WAL mode required for concurrent reads",
    detail="Without WAL, parallel test runners deadlock on write",
    tags=["sqlite", "testing"]
)
# → Impact score: 0.45 (new, untested)

# Session 18: recalled and applied successfully
# → Impact score: 0.72 (boosted by utility + frequency)

# Session 22: trw_claude_md_sync() promotes it
# → Now in CLAUDE.md — loaded on every session start
```
Tip
You do not need to manually curate CLAUDE.md. The promotion system handles it automatically during trw_deliver(). High-impact learnings rise; low-impact ones stay in the learning store where they can still be recalled on demand.
Memory Tools
Five MCP tools manage the full knowledge lifecycle. Your AI calls them automatically at the right moments — you see them in the activity log during sessions.
| Tool | What it does | When to use |
|---|---|---|
| trw_learn | Record a discovery with summary, detail, and tags. | Errors, gotchas, patterns, architecture decisions |
| trw_recall | Search past learnings by keyword, tags, or impact tier. | Before starting unfamiliar work or revisiting a domain |
| trw_learn_update | Mark learnings as resolved, obsolete, or update their content. | When an issue is fixed or context changes |
| trw_knowledge_sync | Sync knowledge topology across projects. | After cross-project learnings accumulate |
| trw_claude_md_sync | Promote high-impact learnings into CLAUDE.md. | During delivery or after major discoveries |
See the Tools Reference for the complete list of all 24 MCP tools.
Code Examples
Here is what the memory system looks like in practice across a typical session.
trw_learn: recording a discovery

```python
# AI discovers a gotcha during implementation:
trw_learn(
    summary="FastAPI dependency overrides must be reset in teardown",
    detail="Without resetting app.dependency_overrides in test teardown, "
           "overrides leak between tests causing flaky failures.",
    tags=["fastapi", "testing", "fixtures"]
)
# → Learning recorded
# → Impact score: 0.51
# → Stored: .trw/learnings/entries/2026-03-20-fastapi-dependency-...yaml
```

trw_recall: searching past learnings

```python
# Next session: AI is about to write FastAPI tests
trw_recall("fastapi testing fixtures")
# → 3 relevant learnings found:
#
# [0.72] FastAPI dependency overrides must be reset in teardown
#        tags: fastapi, testing, fixtures
#
# [0.65] TestClient requires app factory pattern for isolation
#        tags: fastapi, testing
#
# [0.41] pytest-asyncio auto mode conflicts with sync fixtures
#        tags: pytest, async, testing
```

trw_deliver: persisting at session end

```python
# End of session: deliver persists everything
trw_deliver()
# → Build gate: PASS (312 tests, mypy clean)
# → Learnings: 4 new, 2 updated, 1 promoted to CLAUDE.md
# → CLAUDE.md synced: +1 entry (FastAPI overrides gotcha)
# → Run closed: api-tests-refactor (3h 12m)
```

Memory Routing
TRW uses trw_learn() for knowledge, not the AI tool's native auto-memory. This is a deliberate architectural decision — here is why.
| Dimension | trw_learn() | Native auto-memory |
|---|---|---|
| Search | trw_recall() — semantic + keyword hybrid | Filename scan only |
| Visibility | All agents, subagents, teammates | Primary session only |
| Lifecycle | Impact-scored, auto-promotes to CLAUDE.md | Static until manually edited |
| Scale | Hundreds of entries, auto-pruned by staleness | 200-line index cap |
| Best for | Gotchas, patterns, build tricks, architecture decisions | Commit style, communication preferences |
Tip
Rule of thumb: gotcha or error pattern → trw_learn(). User's preferred commit style → native memory. Build trick that saves time → trw_learn(). Communication preference → native memory.
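The rule of thumb above can be written down as a tiny routing function. This is a hypothetical sketch for illustration only: in practice the AI makes this call itself, and the category names below are invented, not TRW concepts.

```python
# Hypothetical routing sketch; category labels are illustrative.
PROJECT_KNOWLEDGE = {"gotcha", "error-pattern", "build-trick", "architecture"}
USER_PREFERENCE = {"commit-style", "communication", "tone"}

def route(category: str) -> str:
    """Decide where a new piece of memory should live."""
    if category in PROJECT_KNOWLEDGE:
        return "trw_learn()"
    if category in USER_PREFERENCE:
        return "native memory"
    # Default to the learning store: project knowledge compounds,
    # while preferences rarely benefit from scoring and decay.
    return "trw_learn()"

print(route("gotcha"))        # → trw_learn()
print(route("commit-style"))  # → native memory
```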
Where Learnings Live
The entire memory system is file-based and travels with your project. No external database is required, and no server is needed for base functionality.
| Path | Contents |
|---|---|
| .trw/learnings/entries/ | Individual YAML files, one per learning. Human-readable, git-trackable. |
| .trw/context/ | Analytics, ceremony state, and build status snapshots. |
| CLAUDE.md | Promoted high-impact learnings. Read by the AI on every session start. |
Learnings are plain YAML. You can read, edit, or delete them with any text editor. The format is designed for both machine processing and human review.
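For a sense of what a file in .trw/learnings/entries/ might contain, here is a hypothetical entry. The field names are assumptions inferred from the trw_learn() parameters and lifecycle described above; TRW's actual schema may differ.

```yaml
# Hypothetical entry layout; actual field names may differ.
summary: SQLite WAL mode required for concurrent reads
detail: Without WAL, parallel test runners deadlock on write
tags: [sqlite, testing]
impact: 0.72
created: 2026-03-14
last_recalled: 2026-03-20
status: active
```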