
Memory & Knowledge

TRW gives your AI agents a memory that compounds. Every discovery persists, surfaces when relevant, and improves over time. Session 50 is fundamentally better than session 1 — not because you remember more, but because the framework does.

The Knowledge Flywheel

TRW's architecture is built around a single reinforcing loop. Each turn makes the next turn more valuable. Early sessions produce broad learnings. Later sessions produce refined, high-impact patterns that surface automatically.

[Flywheel diagram: Learn → Persist → Recall → Apply → Improve → knowledge compounds]

The Compounding Effect

This flywheel is TRW's core differentiator. Across 80+ surveyed competitors, no project combines learning with scoring, decay curves, tier promotion, consolidation, and delivery ceremony into a single compounding system. The integrated lifecycle is unique — and because the flywheel compounds, every session widens the gap.

How It Works

A learning flows through six stages from initial discovery to permanent project context.

1. Learn: Your AI discovers a gotcha, pattern, or architecture decision during work.

2. Persist: trw_learn() saves the discovery as a structured YAML entry with tags and metadata.

3. Recall: Future sessions search past learnings via trw_recall() — hybrid keyword + semantic match.

4. Apply: The AI uses recalled knowledge to avoid known mistakes and follow proven patterns.

5. Improve: Each recall boosts the learning's impact score. Unused learnings decay naturally.

6. Promote: High-impact learnings get promoted into CLAUDE.md — permanent context for every session.
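The six stages above can be sketched as a tiny in-memory loop. This is a toy model, not the real implementation: the names `Learning`, `recall`, and the +0.1 boost per recall are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Learning:
    summary: str
    tags: list[str]
    impact: float = 0.5  # assumed initial score for a new, untested learning

def learn(store: list[Learning], summary: str, tags: list[str]) -> None:
    """Learn + persist: append a new entry to the store."""
    store.append(Learning(summary, tags))

def recall(store: list[Learning], query_tags: set[str]) -> list[Learning]:
    """Recall + improve: surface tag matches and boost each one's impact."""
    hits = [l for l in store if query_tags & set(l.tags)]
    for l in hits:
        l.impact = min(1.0, l.impact + 0.1)  # each recall boosts the score
    return sorted(hits, key=lambda l: l.impact, reverse=True)

store: list[Learning] = []
learn(store, "SQLite WAL mode required for concurrent reads", ["sqlite", "testing"])
hits = recall(store, {"sqlite"})
print(hits[0].impact)  # → 0.6 after one recall boost
```

Apply and promote would sit on top of this: the AI acts on the sorted hits, and entries whose impact crosses a threshold graduate into permanent context.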

Learning Lifecycle

Every learning goes through a managed lifecycle. This is not a flat key-value store — it is a living knowledge system with scoring, decay, and promotion.

| Stage | What happens | Mechanism |
| --- | --- | --- |
| Recording | AI calls trw_learn() with summary, detail, and tags | YAML entry created in .trw/learnings/ |
| Scoring | Initial impact score assigned based on content quality and tags | Q-learning + heuristic analysis |
| Recall | Future sessions retrieve relevant learnings via hybrid search | BM25 keyword + dense vector similarity |
| Decay | Unused learnings lose impact score over time | Ebbinghaus-inspired decay curve |
| Promotion | High-impact learnings promoted to permanent project context | Auto-sync into CLAUDE.md |
| Consolidation | Related learnings merged, stale ones pruned | Jaccard dedup + semantic clustering |
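The decay stage can be pictured as an exponential forgetting curve. A minimal sketch, assuming a 30-day half-life (the actual decay constant is not documented here):

```python
import math

def decayed_impact(impact: float, days_since_recall: float,
                   half_life_days: float = 30.0) -> float:
    """Ebbinghaus-style decay: impact halves every half-life without a recall."""
    return impact * math.exp(-math.log(2) * days_since_recall / half_life_days)

print(round(decayed_impact(0.8, 0), 2))   # → 0.8  just recalled, no decay yet
print(round(decayed_impact(0.8, 30), 2))  # → 0.4  one half-life later
print(round(decayed_impact(0.8, 90), 2))  # → 0.1  three half-lives: pruning candidate
```

A recall resets the clock, so frequently used learnings hold their score while neglected ones drift toward the pruning threshold.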

Impact Scoring

Not all learnings are equal. TRW scores each learning across four dimensions and uses the composite score to determine recall priority, decay rate, and promotion eligibility.

| Factor | Weight | Description |
| --- | --- | --- |
| Utility | High | How often this learning is recalled and applied successfully. |
| Recency | Medium | When the learning was last accessed. Recent learnings score higher. |
| Frequency | Medium | How many sessions have used this learning. Cross-session value compounds. |
| Specificity | Low | Targeted learnings (tagged, scoped) score higher than vague ones. |

Scores range from 0.0 to 1.0. Learnings above 0.7 are considered high-impact and eligible for CLAUDE.md promotion. Learnings that decay below 0.1 are candidates for pruning during consolidation.
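A composite score like this is often a plain weighted sum. The weights below are illustrative guesses that mirror the High/Medium/Low column above, not TRW's actual coefficients:

```python
# Hypothetical weights mirroring the factor table (must sum to 1.0).
WEIGHTS = {"utility": 0.4, "recency": 0.25, "frequency": 0.25, "specificity": 0.1}

def composite_score(factors: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each normalized to 0.0–1.0."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

score = composite_score(
    {"utility": 0.9, "recency": 0.8, "frequency": 0.6, "specificity": 1.0}
)
print(round(score, 2))  # → 0.81
print(score >= 0.7)     # → True: eligible for CLAUDE.md promotion
print(score <= 0.1)     # → False: not a pruning candidate
```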

CLAUDE.md Promotion

The most valuable learnings graduate from the learning store into CLAUDE.md — the file that every AI session reads on startup. This means high-impact discoveries become permanent context without manual curation.

promotion flow:

```python
# Learning recorded in session 12:
trw_learn(
  summary="SQLite WAL mode required for concurrent reads",
  detail="Without WAL, parallel test runners deadlock on write",
  tags=["sqlite", "testing"]
)
# → Impact score: 0.45 (new, untested)

# Session 18: recalled and applied successfully
# → Impact score: 0.72 (boosted by utility + frequency)

# Session 22: trw_claude_md_sync() promotes it
# → Now in CLAUDE.md — loaded on every session start
```

Tip

You do not need to manually curate CLAUDE.md. The promotion system handles it automatically during trw_deliver(). High-impact learnings rise; low-impact ones stay in the learning store where they can still be recalled on demand.
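Mechanically, promotion amounts to appending qualifying entries to the context file. A sketch of that step, assuming a 0.7 cutoff and a hypothetical "## Learnings" section heading (neither is confirmed by the source):

```python
PROMOTION_THRESHOLD = 0.7  # assumed cutoff from the scoring rules above

def promote(claude_md: str, summary: str, impact: float) -> str:
    """Append a high-impact learning under a hypothetical '## Learnings' section."""
    if impact < PROMOTION_THRESHOLD:
        return claude_md  # stays in the learning store, still recallable
    if "## Learnings" not in claude_md:
        claude_md += "\n## Learnings\n"
    return claude_md + f"- {summary}\n"

doc = "# Project\n"
doc = promote(doc, "SQLite WAL mode required for concurrent reads", 0.72)
doc = promote(doc, "Minor logging tweak", 0.45)  # below threshold: not promoted
print(doc)
```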

Memory Tools

Five MCP tools manage the full knowledge lifecycle. Your AI calls them automatically at the right moments — you see them in the activity log during sessions.

| Tool | What it does | When to use |
| --- | --- | --- |
| trw_learn | Record a discovery with summary, detail, and tags. | Errors, gotchas, patterns, architecture decisions |
| trw_recall | Search past learnings by keyword, tags, or impact tier. | Before starting unfamiliar work or revisiting a domain |
| trw_learn_update | Mark learnings as resolved, obsolete, or update their content. | When an issue is fixed or context changes |
| trw_knowledge_sync | Sync knowledge topology across projects. | After cross-project learnings accumulate |
| trw_claude_md_sync | Promote high-impact learnings into CLAUDE.md. | During delivery or after major discoveries |

See the Tools Reference for the complete list of all 24 MCP tools.

Code Examples

Here is what the memory system looks like in practice across a typical session.

trw_learn: recording a discovery

```python
# AI discovers a gotcha during implementation:
trw_learn(
  summary="FastAPI dependency overrides must be reset in teardown",
  detail="Without resetting app.dependency_overrides in test teardown, "
         "overrides leak between tests causing flaky failures.",
  tags=["fastapi", "testing", "fixtures"]
)
# → Learning recorded
# → Impact score: 0.51
# → Stored: .trw/learnings/entries/2026-03-20-fastapi-dependency-...yaml
```

trw_recall: searching past learnings

```python
# Next session: AI is about to write FastAPI tests
trw_recall("fastapi testing fixtures")
# → 3 relevant learnings found:
#
#   [0.72] FastAPI dependency overrides must be reset in teardown
#          tags: fastapi, testing, fixtures
#
#   [0.65] TestClient requires app factory pattern for isolation
#          tags: fastapi, testing
#
#   [0.41] pytest-asyncio auto mode conflicts with sync fixtures
#          tags: pytest, async, testing
```

trw_deliver: persisting at session end

```python
# End of session: deliver persists everything
trw_deliver()
# → Build gate: PASS (312 tests, mypy clean)
# → Learnings: 4 new, 2 updated, 1 promoted to CLAUDE.md
# → CLAUDE.md synced: +1 entry (FastAPI overrides gotcha)
# → Run closed: api-tests-refactor (3h 12m)
```

Memory Routing

TRW uses trw_learn() for knowledge, not the AI tool's native auto-memory. This is a deliberate architectural decision — here is why.

| Dimension | trw_learn() | Native auto-memory |
| --- | --- | --- |
| Search | trw_recall() — semantic + keyword hybrid | Filename scan only |
| Visibility | All agents, subagents, teammates | Primary session only |
| Lifecycle | Impact-scored, auto-promotes to CLAUDE.md | Static until manually edited |
| Scale | Hundreds of entries, auto-pruned by staleness | 200-line index cap |
| Best for | Gotchas, patterns, build tricks, architecture decisions | Commit style, communication preferences |

Tip

Rule of thumb: gotcha or error pattern → trw_learn(). User's preferred commit style → native memory. Build trick that saves time → trw_learn(). Communication preference → native memory.
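The rule of thumb reduces to a small routing function. The category names here are invented labels for illustration, not TRW identifiers:

```python
# Hypothetical category labels for the rule of thumb above.
PROJECT_KNOWLEDGE = {"gotcha", "error-pattern", "build-trick", "architecture-decision"}
USER_PREFERENCE = {"commit-style", "communication-preference"}

def route(kind: str) -> str:
    """Route a memory to trw_learn() or the AI tool's native auto-memory."""
    if kind in USER_PREFERENCE:
        return "native-memory"
    # Default to trw_learn(): project knowledge benefits from scoring and recall.
    return "trw_learn"

print(route("gotcha"))        # → trw_learn
print(route("build-trick"))   # → trw_learn
print(route("commit-style"))  # → native-memory
```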

Where Learnings Live

The entire memory system is file-based and travels with your project. No external database required. No server for base functionality.

| Path | Contents |
| --- | --- |
| .trw/learnings/entries/ | Individual YAML files, one per learning. Human-readable, git-trackable. |
| .trw/context/ | Analytics, ceremony state, and build status snapshots. |
| CLAUDE.md | Promoted high-impact learnings. Read by the AI on every session start. |

Learnings are plain YAML. You can read, edit, or delete them with any text editor. The format is designed for both machine processing and human review.
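Because the store is just files, any script or text tool can review it. A sketch that builds a throwaway entries directory and scans it; the entry fields shown are an assumed minimal shape, and real TRW entries may carry more metadata:

```python
import tempfile
from pathlib import Path

# Hypothetical entry contents illustrating the assumed minimal shape.
sample = (
    "summary: FastAPI dependency overrides must be reset in teardown\n"
    "tags: [fastapi, testing, fixtures]\n"
    "impact: 0.72\n"
)

found = []
with tempfile.TemporaryDirectory() as tmp:
    entries = Path(tmp) / ".trw" / "learnings" / "entries"
    entries.mkdir(parents=True)
    (entries / "2026-03-20-fastapi-overrides.yaml").write_text(sample)

    # Entries are plain YAML, so a simple scan surfaces each summary line:
    for path in sorted(entries.glob("*.yaml")):
        first_line = path.read_text().splitlines()[0]
        found.append(f"{path.name}: {first_line}")

print("\n".join(found))
```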

Next Steps