Memory & Knowledge

This is the memory engine inside TRW's engineering operating layer. Discoveries persist, resurface when relevant, and evolve over time so later sessions can resume with real project context instead of starting from fragments.

Naming note

The sync tool still uses the historical name trw_claude_md_sync, but modern installs use it to update the repo's selected client surfaces — not only CLAUDE.md.

The Knowledge Flywheel

TRW's architecture is built around a reinforcing loop: capture what mattered, rank it by usefulness, and surface it when the next task needs it. Early sessions produce raw learnings. Later sessions inherit refined, higher-signal guidance automatically.

[Flywheel diagram] Learn → Persist → Recall → Apply → Improve: knowledge compounds with each cycle.

What matters

The important idea is not a marketing superlative. It is that capture, recall, decay, consolidation, and promotion are wired together instead of being left as separate manual chores. That is what lets later sessions inherit better judgment instead of just more notes.

How It Works

A learning flows through six stages from initial discovery to permanent project context.

  1. Learn

     Your AI discovers a gotcha, pattern, or architecture decision during work.

  2. Persist

     trw_learn() stores the discovery in the project memory layer under .trw/ with tags and metadata.

  3. Recall

     Future sessions search past learnings via trw_recall() — hybrid keyword + semantic match.

  4. Apply

     The AI uses recalled knowledge to avoid known mistakes and follow proven patterns.

  5. Improve

     Each recall boosts the learning's impact score. Unused learnings decay naturally.

  6. Promote

     High-impact learnings get promoted into the repo's instruction surfaces — permanent context for future sessions.
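The boost-and-decay dynamic in steps 5 and 6 can be sketched as follows. This is an illustrative model only: the half-life, the boost constant, and the function names are assumptions, not TRW's actual internals.

```python
import math

# Illustrative model of the recall-boost / decay dynamic. The constants
# below are assumptions, not TRW internals.
HALF_LIFE_DAYS = 30.0   # assumed: an unused learning loses half its score per month
RECALL_BOOST = 0.15     # assumed: additive bump for a successful recall

def decayed_score(score: float, days_since_access: float) -> float:
    """Ebbinghaus-style exponential decay of an unused learning's score."""
    return score * math.exp(-math.log(2) * days_since_access / HALF_LIFE_DAYS)

def boosted_score(score: float) -> float:
    """A successful recall nudges the score upward, capped at 1.0."""
    return min(1.0, score + RECALL_BOOST)

# A 0.72 learning untouched for 60 days (two half-lives) falls to 0.18;
# the same learning recalled once instead climbs to 0.87.
print(round(decayed_score(0.72, 60), 2))  # → 0.18
print(round(boosted_score(0.72), 2))      # → 0.87
```

Under this model a learning that keeps getting recalled stays above the promotion threshold, while one that is never touched drifts toward the pruning floor.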

Learning Lifecycle

Every learning goes through a managed lifecycle. This is not a flat key-value store — it is a living knowledge system with scoring, decay, and promotion.

| Stage | What happens | Mechanism |
| --- | --- | --- |
| Recording | AI calls trw_learn() with summary, detail, and tags | Structured entry stored in the project learning store under .trw/ |
| Scoring | Initial impact score assigned based on content quality and tags | Q-learning + heuristic analysis |
| Recall | Future sessions retrieve relevant learnings via hybrid search | BM25 keyword + dense vector similarity |
| Decay | Unused learnings lose impact score over time | Ebbinghaus-inspired decay curve |
| Promotion | High-impact learnings promoted to permanent project context | Auto-sync into the repo's selected instruction surfaces |
| Consolidation | Related learnings merged, stale ones pruned | Jaccard dedup + semantic clustering |
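The Jaccard half of the consolidation mechanism can be sketched like this. The word-level tokenizer and the merge threshold are assumptions; the real pass also applies semantic clustering, which is not shown.

```python
# Sketch of Jaccard dedup for consolidation. Tokenizer and threshold are
# assumptions; TRW's real pass also uses semantic clustering.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two summaries' lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

DUP_THRESHOLD = 0.7  # assumed: above this, two learnings are merge candidates

a = "SQLite WAL mode required for concurrent reads"
b = "SQLite WAL mode required for concurrent readers"
print(jaccard(a, b))                   # → 0.75
print(jaccard(a, b) >= DUP_THRESHOLD)  # → True
```

Near-duplicate summaries like the pair above would be merged into a single entry, keeping the learning store compact.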

Impact Scoring

Not all learnings are equal. TRW scores each learning across four dimensions and uses the composite score to determine recall priority, decay rate, and promotion eligibility.

| Factor | Weight | Description |
| --- | --- | --- |
| Utility | High | How often this learning is recalled and applied successfully. |
| Recency | Medium | When the learning was last accessed. Recent learnings score higher. |
| Frequency | Medium | How many sessions have used this learning. Cross-session value compounds. |
| Specificity | Low | Targeted learnings (tagged, scoped) score higher than vague ones. |

Scores range from 0.0 to 1.0. Learnings above 0.7 are considered high-impact and eligible for startup instruction promotion. Learnings that decay below 0.1 are candidates for pruning during consolidation.
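A hedged sketch of how such a composite score might be computed against the documented thresholds. TRW does not publish its weights; mapping High → 0.4, Medium → 0.25, and Low → 0.1 is an assumption for illustration, as are the helper names.

```python
# Illustrative composite score matching the factor table above.
# The numeric weights are assumptions, not published values.
WEIGHTS = {"utility": 0.4, "recency": 0.25, "frequency": 0.25, "specificity": 0.1}

def impact_score(factors: dict) -> float:
    """Weighted sum of per-factor scores, each in [0.0, 1.0]."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def tier(score: float) -> str:
    """Classify a score against the documented 0.7 / 0.1 thresholds."""
    if score > 0.7:
        return "high-impact (promotion-eligible)"
    if score < 0.1:
        return "pruning candidate"
    return "active"

score = impact_score({"utility": 0.9, "recency": 0.8,
                      "frequency": 0.8, "specificity": 0.6})
print(round(score, 2), tier(score))  # → 0.82 high-impact (promotion-eligible)
```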

Instruction Surface Promotion

The most valuable learnings graduate from the learning store into CLAUDE.md, .cursor/rules/, .codex/INSTRUCTIONS.md, GEMINI.md, or the other startup surfaces your repo targets. This means high-impact discoveries become permanent context without manual curation.

promotion flow
# Learning recorded in session 12:
trw_learn(
  summary="SQLite WAL mode required for concurrent reads",
  detail="Without WAL, parallel test runners deadlock on write",
  tags=["sqlite", "testing"]
)
# → Impact score: 0.45 (new, untested)

# Session 18: recalled and applied successfully
# → Impact score: 0.72 (boosted by utility + frequency)

# Session 22: trw_claude_md_sync() promotes it
# → Now in the repo's startup instructions — loaded on later sessions

Tip

You do not need to manually curate every instruction file. The promotion system handles it automatically during trw_deliver(). High-impact learnings rise; low-impact ones stay in the learning store where they can still be recalled on demand.

Memory Tools

Dedicated MCP tools manage the full knowledge lifecycle. Your AI calls them automatically at the right moments — you see them in the activity log during sessions.

| Tool | What it does | When to use |
| --- | --- | --- |
| trw_learn | Record a discovery with summary, detail, and tags. | Errors, gotchas, patterns, architecture decisions |
| trw_recall | Search past learnings by keyword, tags, or impact tier. | Before starting unfamiliar work or revisiting a domain |
| trw_learn_update | Mark learnings as resolved, obsolete, or update their content. | When an issue is fixed or context changes |
| trw_claude_md_sync | Promote high-impact learnings into the repo's client-facing instruction surfaces. | During delivery or after major discoveries |

See the Tools Reference for the complete list of all 24 MCP tools.

Code Examples

Here is what the memory system looks like in practice across a typical session.

trw_learn: recording a discovery
# AI discovers a gotcha during implementation:
trw_learn(
  summary="FastAPI dependency overrides must be reset in teardown",
  detail="Without resetting app.dependency_overrides in test teardown, "
         "overrides leak between tests causing flaky failures.",
  tags=["fastapi", "testing", "fixtures"]
)
# → Learning recorded
# → Impact score: 0.51
# → Stored in the project learning store under .trw/
trw_recall: searching past learnings
# Next session: AI is about to write FastAPI tests
trw_recall("fastapi testing fixtures")
# → 3 relevant learnings found:
#
#   [0.72] FastAPI dependency overrides must be reset in teardown
#          tags: fastapi, testing, fixtures
#
#   [0.65] TestClient requires app factory pattern for isolation
#          tags: fastapi, testing
#
#   [0.41] pytest-asyncio auto mode conflicts with sync fixtures
#          tags: pytest, async, testing
trw_deliver: persisting at session end
# End of session: deliver persists everything
trw_deliver()
# → Build gate: PASS (312 tests, mypy clean)
# → Learnings: 4 new, 2 updated, 1 promoted to startup instructions
# → Instruction sync: +1 entry (FastAPI overrides gotcha)
# → Run closed: api-tests-refactor (3h 12m)
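The hybrid ranking behind trw_recall can be sketched in miniature. Real BM25 scoring and dense embeddings are replaced here with toy stand-ins (term overlap and bag-of-words cosine), and the 50/50 blend weight is an assumption; this only illustrates why a hybrid beats either signal alone.

```python
import math
from collections import Counter

# Toy stand-ins for trw_recall's hybrid search: term overlap instead of
# BM25, bag-of-words cosine instead of dense embeddings. Blend weight is
# an assumption.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (BM25 stand-in)."""
    terms = set(query.lower().split())
    words = set(doc.lower().split())
    return len(terms & words) / len(terms) if terms else 0.0

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity over bag-of-words vectors (embedding stand-in)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query: str, docs: list) -> list:
    """Blend keyword and vector scores, best match first."""
    qv = Counter(query.lower().split())
    scored = [(0.5 * keyword_score(query, d)
               + 0.5 * cosine(qv, Counter(d.lower().split())), d) for d in docs]
    return sorted(scored, reverse=True)

learnings = [
    "fastapi dependency overrides leak between testing fixtures",
    "pytest async mode conflicts with sync fixtures",
]
for score, text in hybrid_rank("fastapi testing fixtures", learnings):
    print(f"[{score:.2f}] {text}")
```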

Developer Experience

As of v0.6.2, MemoryConfig and MemoryEntry both implement __repr__ for quick inspection during debugging. Print any object in a REPL or log output and get a readable one-liner showing the most useful fields at a glance.

repr examples
>>> print(config)
MemoryConfig(backend=sqlite, path=/home/user/.trw/memory, encryption=off, rbac=off)

>>> print(entry)
MemoryEntry(id=M-a1b2c3d4e5f6, content="Agent Teams sprint: st...", tags=[sprint, integration], importance=0.8)
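A minimal reconstruction of a `__repr__` in this style. The field set is reduced and the full content string is invented (the doc only shows its truncated preview); the real MemoryEntry carries more state.

```python
from dataclasses import dataclass, field

# Hypothetical reconstruction of the repr style shown above. The field set
# and the content string are invented; real MemoryEntry has more fields.

@dataclass
class MemoryEntry:
    id: str
    content: str
    tags: list = field(default_factory=list)
    importance: float = 0.5

    def __repr__(self) -> str:  # user-defined repr suppresses the dataclass one
        preview = self.content if len(self.content) <= 22 else self.content[:22] + "..."
        tags = ", ".join(self.tags)
        return (f'MemoryEntry(id={self.id}, content="{preview}", '
                f'tags=[{tags}], importance={self.importance})')

entry = MemoryEntry("M-a1b2c3d4e5f6",
                    "Agent Teams sprint: status and integration notes",  # invented
                    ["sprint", "integration"], 0.8)
print(entry)
```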

Memory Routing

TRW uses trw_learn() for knowledge, not the AI tool's native auto-memory. This is a deliberate architectural decision — here is why.

| Dimension | trw_learn() | Native auto-memory |
| --- | --- | --- |
| Search | trw_recall() — semantic + keyword hybrid | Filename scan only |
| Visibility | All agents, subagents, teammates | Primary session only |
| Lifecycle | Impact-scored, auto-promotes into instruction surfaces | Static until manually edited |
| Scale | Hundreds of entries, auto-pruned by staleness | 200-line index cap |
| Best for | Gotchas, patterns, build tricks, architecture decisions | Commit style, communication preferences |

Tip

Rule of thumb: gotcha or error pattern → trw_learn(). User's preferred commit style → native memory. Build trick that saves time → trw_learn(). Communication preference → native memory.
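The rule of thumb above, written out as a tiny router. The category names and the function itself are hypothetical illustrations, not part of TRW's API.

```python
# Illustrative router for the rule of thumb above. Categories and function
# are hypothetical, not part of TRW's API.
PERSONAL_PREFERENCE = {"commit-style", "communication"}

def route(category: str) -> str:
    """Pick a memory destination for a new piece of information."""
    if category in PERSONAL_PREFERENCE:
        return "native memory"
    # Default to the project store: it is searchable and shared across agents.
    return "trw_learn"

print(route("gotcha"))        # → trw_learn
print(route("commit-style"))  # → native memory
```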

Where Learnings Live

TRW memory is project-local and travels with your repo. The runtime uses a local storage layer under .trw/, so the base workflow does not depend on a hosted service.

| Path | Contents |
| --- | --- |
| .trw/ | Project-local learning store, run state, and supporting memory artifacts managed by TRW. |
| .trw/config.yaml | Project-level settings that shape recall thresholds, sync behavior, and related memory defaults. |
| instruction surfaces | Promoted high-impact learnings. Read by the AI on every session start. |

Some memory artifacts are human-readable, while the retrieval layer is optimized for local search performance rather than hand-editing every internal file. Treat .trw/ as project state managed by TRW, and use the memory tools for normal day-to-day updates.

Audit log durability: fsync_on_append

MemoryConfig accepts an fsync_on_append boolean (default false). When enabled, each audit log write is flushed to disk with fsync before returning — preventing log loss on unexpected process exit. Enable this in environments where audit durability is required (e.g. compliance workloads). It trades a small latency increase for hard durability guarantees.
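The mechanism behind that trade-off can be sketched with a plain append helper. Only the flag name comes from the docs; this function and its behavior are an illustration of fsync-on-append in general, not TRW's implementation.

```python
import os
import tempfile

# Illustrative append helper for the fsync_on_append trade-off. Only the
# flag name comes from the docs; the function itself is a sketch.

def append_audit(path: str, line: str, fsync_on_append: bool = False) -> None:
    """Append one audit line, optionally forcing it to disk before returning."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
        if fsync_on_append:
            f.flush()             # push Python's userspace buffer to the OS
            os.fsync(f.fileno())  # force the OS page cache onto stable storage

log = os.path.join(tempfile.mkdtemp(), "audit.log")
append_audit(log, "promoted M-a1b2c3 to instructions", fsync_on_append=True)
```

Without the fsync, a crash between the write and the OS's periodic flush can silently drop the last entries; with it, the call only returns once the line is on disk.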

SQLite corruption auto-recovery (v0.6.1+)

If trw-memory detects a corrupt SQLite database on open, it recovers automatically without user intervention:

  1. Renames the corrupt file to <original>.corrupt.bak
  2. Salvages any recoverable rows into a fresh database
  3. Cleans up stale -wal and -shm sidecar files
  4. Retries the original operation

The recovery is transparent — your session continues without interruption. A warning is logged so you can inspect the .corrupt.bak file if needed.
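The shape of that recovery path can be sketched with stdlib sqlite3. This covers steps 1, 3, and 4 above; real row salvage (step 2) is more involved and is omitted here, and the function name is illustrative.

```python
import os
import sqlite3

# Sketch of steps 1, 3, and 4 of the recovery flow. Row salvage (step 2)
# is omitted; the function name is illustrative.

def open_with_recovery(path: str) -> sqlite3.Connection:
    """Open a SQLite database, recovering to a fresh file if it is corrupt."""
    conn = sqlite3.connect(path)
    try:
        conn.execute("PRAGMA integrity_check").fetchone()
        return conn
    except sqlite3.DatabaseError:
        conn.close()
        os.rename(path, path + ".corrupt.bak")  # 1. keep the corrupt file for inspection
        for suffix in ("-wal", "-shm"):         # 3. remove stale sidecar files
            try:
                os.remove(path + suffix)
            except FileNotFoundError:
                pass
        return sqlite3.connect(path)            # 4. retry against a fresh database
```

The caller never sees the corruption: it gets back a working connection either way, and the `.corrupt.bak` file remains on disk for inspection.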

Next Steps

Memory matters when recall changes planning, review, and delivery. Core concepts and tools explain where that feedback loop shows up.