Memory & Knowledge
This is the memory engine inside TRW's engineering operating layer. Discoveries persist, resurface when relevant, and evolve over time so later sessions can resume with real project context instead of starting from fragments.
Naming note
The sync tool still uses the historical name trw_claude_md_sync, but modern installs use it to update the repo's selected client surfaces — not only CLAUDE.md.
The Knowledge Flywheel
TRW's architecture is built around a reinforcing loop: capture what mattered, rank it by usefulness, and surface it when the next task needs it. Early sessions produce raw learnings. Later sessions inherit refined, higher-signal guidance automatically.
What matters
The important idea is not a marketing superlative. It is that capture, recall, decay, consolidation, and promotion are wired together instead of being left as separate manual chores. That is what lets later sessions inherit better judgment instead of just more notes.
How It Works
A learning flows through six stages from initial discovery to permanent project context.
1. Learn: Your AI discovers a gotcha, pattern, or architecture decision during work.
2. Persist: trw_learn() stores the discovery in the project memory layer under .trw/ with tags and metadata.
3. Recall: Future sessions search past learnings via trw_recall() — hybrid keyword + semantic match.
4. Apply: The AI uses recalled knowledge to avoid known mistakes and follow proven patterns.
5. Improve: Each recall boosts the learning's impact score. Unused learnings decay naturally.
6. Promote: High-impact learnings get promoted into the repo's instruction surfaces — permanent context for future sessions.
Learning Lifecycle
Every learning goes through a managed lifecycle. This is not a flat key-value store — it is a living knowledge system with scoring, decay, and promotion.
| Stage | What happens | Mechanism |
|---|---|---|
| Recording | AI calls trw_learn() with summary, detail, and tags | Structured entry stored in the project learning store under .trw/ |
| Scoring | Initial impact score assigned based on content quality and tags | Q-learning + heuristic analysis |
| Recall | Future sessions retrieve relevant learnings via hybrid search | BM25 keyword + dense vector similarity |
| Decay | Unused learnings lose impact score over time | Ebbinghaus-inspired decay curve |
| Promotion | High-impact learnings promoted to permanent project context | Auto-sync into the repo's selected instruction surfaces |
| Consolidation | Related learnings merged, stale ones pruned | Jaccard dedup + semantic clustering |
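The mechanisms in the last column are easiest to see as code. Here is a minimal sketch of three of them; the blend weight, half-life, and all constants are illustrative assumptions, not TRW's actual values.

```python
import math

# Hybrid recall (sketch): blend normalized BM25 keyword relevance with
# dense-vector cosine similarity. The 0.5 blend weight is an assumption.
def hybrid_score(bm25_norm: float, cosine_sim: float, alpha: float = 0.5) -> float:
    return alpha * bm25_norm + (1 - alpha) * cosine_sim

# Ebbinghaus-style decay (sketch): unused learnings lose impact
# exponentially. The 30-day half-life is illustrative.
def decayed_impact(impact: float, days_since_recall: float, half_life: float = 30.0) -> float:
    return impact * math.exp(-math.log(2) * days_since_recall / half_life)

# Jaccard dedup (sketch): tag-set overlap used to flag near-duplicate
# learnings as merge candidates during consolidation.
def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    union = tags_a | tags_b
    return len(tags_a & tags_b) / len(union) if union else 0.0
```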
Impact Scoring
Not all learnings are equal. TRW scores each learning across four dimensions and uses the composite score to determine recall priority, decay rate, and promotion eligibility.
| Factor | Weight | Description |
|---|---|---|
| Utility | High | How often this learning is recalled and applied successfully. |
| Recency | Medium | When the learning was last accessed. Recent learnings score higher. |
| Frequency | Medium | How many sessions have used this learning. Cross-session value compounds. |
| Specificity | Low | Targeted learnings (tagged, scoped) score higher than vague ones. |
Scores range from 0.0 to 1.0. Learnings above 0.7 are considered high-impact and eligible for startup instruction promotion. Learnings that decay below 0.1 are candidates for pruning during consolidation.
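As a worked example, here is how a composite score could fall out of the four factors. The exact coefficients are not published; the numbers below are assumptions chosen only to mirror the high/medium/low weights in the table.

```python
# Assumed coefficients mirroring the high/medium/low weights above.
WEIGHTS = {"utility": 0.40, "recency": 0.25, "frequency": 0.25, "specificity": 0.10}

def impact_score(factors: dict[str, float]) -> float:
    """Weighted composite of per-factor signals, each in [0.0, 1.0]."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

score = impact_score({"utility": 0.9, "recency": 0.8, "frequency": 0.7, "specificity": 0.5})
# 0.36 + 0.20 + 0.175 + 0.05 = 0.785 → above 0.7, eligible for promotion
```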
Instruction Surface Promotion
The most valuable learnings graduate from the learning store into CLAUDE.md, .cursor/rules/, .codex/INSTRUCTIONS.md, GEMINI.md, or the other startup surfaces your repo targets. This means high-impact discoveries become permanent context without manual curation.
```python
# Learning recorded in session 12:
trw_learn(
    summary="SQLite WAL mode required for concurrent reads",
    detail="Without WAL, parallel test runners deadlock on write",
    tags=["sqlite", "testing"]
)
# → Impact score: 0.45 (new, untested)

# Session 18: recalled and applied successfully
# → Impact score: 0.72 (boosted by utility + frequency)

# Session 22: trw_claude_md_sync() promotes it
# → Now in the repo's startup instructions — loaded on later sessions
```

Tip
You do not need to manually curate every instruction file. The promotion system handles it automatically during trw_deliver(). High-impact learnings rise; low-impact ones stay in the learning store where they can still be recalled on demand.
Memory Tools
Five MCP tools manage the full knowledge lifecycle. Your AI calls them automatically at the right moments — you see them in the activity log during sessions.
| Tool | What it does | When to use |
|---|---|---|
| trw_learn | Record a discovery with summary, detail, and tags. | Errors, gotchas, patterns, architecture decisions |
| trw_recall | Search past learnings by keyword, tags, or impact tier. | Before starting unfamiliar work or revisiting a domain |
| trw_learn_update | Mark learnings as resolved, obsolete, or update their content. | When an issue is fixed or context changes |
| trw_claude_md_sync | Promote high-impact learnings into the repo's client-facing instruction surfaces. | During delivery or after major discoveries |
| trw_deliver | Persist session learnings, run the promotion sync, and close the run. | At the end of every session |
See the Tools Reference for the complete list of all 24 MCP tools.
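trw_learn_update is the one tool above without a full example later on this page, so here is a hedged sketch. The parameter names are assumptions inferred from the table row, not a documented signature.

```python
# Hypothetical call shape — parameter names are assumptions.
trw_learn_update(
    learning_id="L-42",      # hypothetical id of an earlier learning
    status="obsolete",       # e.g. resolved / obsolete
    note="Superseded: test harness now resets overrides automatically"
)
```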
Code Examples
Here is what the memory system looks like in practice across a typical session.
trw_learn: recording a discovery

```python
# AI discovers a gotcha during implementation:
trw_learn(
    summary="FastAPI dependency overrides must be reset in teardown",
    detail="Without resetting app.dependency_overrides in test teardown, "
           "overrides leak between tests causing flaky failures.",
    tags=["fastapi", "testing", "fixtures"]
)
# → Learning recorded
# → Impact score: 0.51
# → Stored in the project learning store under .trw/
```

trw_recall: searching past learnings

```python
# Next session: AI is about to write FastAPI tests
trw_recall("fastapi testing fixtures")
# → 3 relevant learnings found:
#
# [0.72] FastAPI dependency overrides must be reset in teardown
#        tags: fastapi, testing, fixtures
#
# [0.65] TestClient requires app factory pattern for isolation
#        tags: fastapi, testing
#
# [0.41] pytest-asyncio auto mode conflicts with sync fixtures
#        tags: pytest, async, testing
```

trw_deliver: persisting at session end

```python
# End of session: deliver persists everything
trw_deliver()
# → Build gate: PASS (312 tests, mypy clean)
# → Learnings: 4 new, 2 updated, 1 promoted to startup instructions
# → Instruction sync: +1 entry (FastAPI overrides gotcha)
# → Run closed: api-tests-refactor (3h 12m)
```

Developer Experience
As of v0.6.2, MemoryConfig and MemoryEntry both implement __repr__ for quick inspection during debugging. Print any object in a REPL or log output and get a readable one-liner showing the most useful fields at a glance.
```python
>>> print(config)
MemoryConfig(backend=sqlite, path=/home/user/.trw/memory, encryption=off, rbac=off)
>>> print(entry)
MemoryEntry(id=M-a1b2c3d4e5f6, content="Agent Teams sprint: st...", tags=[sprint, integration], importance=0.8)
```

Memory Routing
TRW uses trw_learn() for knowledge, not the AI tool's native auto-memory. This is a deliberate architectural decision — here is why.
| Dimension | trw_learn() | Native auto-memory |
|---|---|---|
| Search | trw_recall() — semantic + keyword hybrid | Filename scan only |
| Visibility | All agents, subagents, teammates | Primary session only |
| Lifecycle | Impact-scored, auto-promotes into instruction surfaces | Static until manually edited |
| Scale | Hundreds of entries, auto-pruned by staleness | 200-line index cap |
| Best for | Gotchas, patterns, build tricks, architecture decisions | Commit style, communication preferences |
Tip
Rule of thumb: gotcha or error pattern → trw_learn(). User's preferred commit style → native memory. Build trick that saves time → trw_learn(). Communication preference → native memory.
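If it helps, that rule of thumb reduces to a tiny decision function. This is purely illustrative; TRW does not expose a router like this.

```python
# Illustrative only — not a real TRW API.
PROJECT_KNOWLEDGE = {"gotcha", "error-pattern", "build-trick", "architecture-decision"}

def route(kind: str) -> str:
    """Project knowledge goes to trw_learn(); personal style stays native."""
    return "trw_learn" if kind in PROJECT_KNOWLEDGE else "native-memory"

route("gotcha")        # → "trw_learn"
route("commit-style")  # → "native-memory"
```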
Where Learnings Live
TRW memory is project-local and travels with your repo. The runtime uses a local storage layer under .trw/, so the base workflow does not depend on a hosted service.
| Path | Contents |
|---|---|
| .trw/ | Project-local learning store, run state, and supporting memory artifacts managed by TRW. |
| .trw/config.yaml | Project-level settings that shape recall thresholds, sync behavior, and related memory defaults. |
| instruction surfaces | Promoted high-impact learnings. Read by the AI on every session start. |
Some memory artifacts are human-readable, while the retrieval layer is optimized for local search performance rather than hand-editing every internal file. Treat .trw/ as project state managed by TRW, and use the memory tools for normal day-to-day updates.
Audit log durability: fsync_on_append
MemoryConfig accepts a fsync_on_append boolean (default false). When enabled, each audit log write is flushed to disk with fsync before returning — preventing log loss on unexpected process exit. Enable this in environments where audit durability is required (e.g. compliance workloads). It trades a small latency increase for hard durability guarantees.
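A minimal sketch of enabling it, assuming the constructor keywords shown in the __repr__ example above:

```python
# Durability over latency: fsync each audit-log append before returning.
# The backend/path keywords are assumptions based on the __repr__ output above.
config = MemoryConfig(
    backend="sqlite",
    path="/home/user/.trw/memory",
    fsync_on_append=True,   # default: False
)
```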
SQLite corruption auto-recovery (v0.6.1+)
If trw-memory detects a corrupt SQLite database on open, it recovers automatically without user intervention:
- Renames the corrupt file to <original>.corrupt.bak
- Salvages any recoverable rows into a fresh database
- Cleans up stale -wal and -shm sidecar files
- Retries the original operation
The recovery is transparent — your session continues without interruption. A warning is logged so you can inspect the .corrupt.bak file if needed.
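For intuition, the sequence could look roughly like this. It is a sketch of the four steps above, not trw-memory's actual implementation; real row salvage is more involved than a plain backup copy.

```python
import sqlite3
from pathlib import Path

def recover_corrupt_db(db_path: Path) -> sqlite3.Connection:
    # 1. Set the corrupt file aside as <original>.corrupt.bak
    backup = Path(str(db_path) + ".corrupt.bak")
    db_path.rename(backup)

    # 2. Clean up stale -wal / -shm sidecars left behind by the old file
    for suffix in ("-wal", "-shm"):
        Path(str(db_path) + suffix).unlink(missing_ok=True)

    # 3. Salvage whatever is readable into a fresh database
    fresh = sqlite3.connect(db_path)
    old = sqlite3.connect(backup)
    try:
        old.backup(fresh)
    except sqlite3.Error:
        pass  # nothing recoverable; continue with an empty database
    finally:
        old.close()

    # 4. Hand the fresh connection back so the caller retries the operation
    return fresh
```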