Cross-project sharing

A technical reference for how a validated learning in one repo becomes part of the starting context in every other repo in the same workspace — the mechanism, the validation rule, the privacy model, and the honest limits.

You will learn

  • What the trw-memory package gives you as a standalone install.
  • How cross-project sharing turns single-project memory into a network.
  • When a learning is validated for propagation versus kept local.
  • The offline-first and privacy guarantees the framework provides.
  • The honest limits — what cross-project sharing does not attempt to do.

What cross-project sharing is

One discovery, every project smarter. If a gotcha is solved once in repo A, repo B should not have to rediscover it on its next session start. Cross-project sharing is the layer that makes that possible — learnings that earn validation in one project get surfaced to every sibling project in the same workspace.

The mental model is simple. Memory is depth: one project gets smarter over time. Sharing is breadth: that depth spreads across every repo in the workspace. Both live on the same storage substrate; sharing is the policy that decides what crosses project boundaries.

How propagation works

Not every learning propagates. Most stay local — they are specific to one codebase, one fixture, one deployment. A learning earns cross-project status when independent evidence shows up in another project: another repo records a near-duplicate entry, and the two are matched by semantic similarity above a configurable threshold. That match is what marks both entries as cross-validated and lifts their recall priority.
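The cross-validation rule above can be sketched in a few lines. This is a minimal illustration, not the framework's implementation: the entry fields, the default threshold of 0.85, and the +1 priority bump are all assumptions for the sketch — the real threshold is configurable and undocumented here.

```python
from math import sqrt

SIMILARITY_THRESHOLD = 0.85  # assumption: the real value is configurable

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cross_validate(entry_a, entry_b, threshold=SIMILARITY_THRESHOLD):
    """Mark both entries cross-validated when their embeddings are
    near-duplicates, and lift their recall priority."""
    if cosine(entry_a["embedding"], entry_b["embedding"]) >= threshold:
        for entry in (entry_a, entry_b):
            entry["cross_validated"] = True
            entry["recall_priority"] = entry.get("recall_priority", 0) + 1
        return True
    return False
```

The point the sketch makes is that validation is symmetric: the match marks *both* entries, in both projects, not just the newer one.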

Manual propagation goes through the trw_knowledge_sync tool. A lead agent runs it during the deliver phase; the tool clusters learnings by tag co-occurrence, deduplicates near-duplicates across namespaces, and writes a canonical entry back to each project's memory store with evidence links preserved. For the exact call signature, see the tools reference.
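The cluster-then-canonicalize flow of trw_knowledge_sync can be approximated as follows. This is a hedged sketch of the described behavior, not the tool's code: the entry shape, the union-find clustering over shared tags, and the longest-text canonical pick are illustrative choices.

```python
from collections import defaultdict

def cluster_by_tag_cooccurrence(learnings):
    """Group learnings that share at least one tag, via union-find."""
    parent = list(range(len(learnings)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    tag_owner = {}
    for i, entry in enumerate(learnings):
        for tag in entry["tags"]:
            if tag in tag_owner:
                parent[find(i)] = find(tag_owner[tag])  # merge clusters
            else:
                tag_owner[tag] = i

    groups = defaultdict(list)
    for i, entry in enumerate(learnings):
        groups[find(i)].append(entry)
    return list(groups.values())

def canonical_entry(cluster):
    """Pick one entry per cluster, preserving every evidence link."""
    canon = dict(max(cluster, key=lambda e: len(e["text"])))
    canon["evidence"] = sorted({ev for e in cluster for ev in e.get("evidence", [])})
    return canon
```

Writing the canonical entry back to each project's store (with evidence links intact) is the step the sketch omits; the tools reference covers the real call signature.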

Offline-first and privacy

Everything runs locally. Each project keeps its learnings and run state under its own .trw/ directory. Cross-project sharing within a workspace is filesystem-level — no network, no cloud dependency, no third-party service in the path. If you never opt in to a hosted sync service, your discoveries never leave your machine.

When you do opt in to the TRW platform for team-wide sync, the hosted layer is additive: the local store remains the source of truth, and the platform synchronizes validated entries between teammates. The default posture is off.
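"Filesystem-level" is meant literally: discovering sibling stores in a workspace is a directory walk, with no network call anywhere in the path. A minimal sketch, assuming the documented one-`.trw/`-per-project layout (the helper name is hypothetical):

```python
from pathlib import Path

def workspace_memory_stores(workspace_root):
    """List every sibling project's local .trw/ store.
    Pure filesystem traversal -- no network, no cloud dependency."""
    root = Path(workspace_root)
    return sorted(p for p in root.glob("*/.trw") if p.is_dir())
```

A project directory without a `.trw/` simply does not participate; opting out of sharing is as simple as keeping a repo outside the workspace root.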

trw-memory as a package

The sharing layer is powered by trw-memory, a standalone Python package built on SQLite and sqlite-vec. It handles hybrid retrieval (keyword plus dense vectors), knowledge-graph traversal, semantic deduplication, and tiered-storage lifecycle. You can install and use it independently of the rest of TRW:

Standalone install:

    pip install trw-memory

This installs only the memory engine — useful if you want the retrieval layer inside a different agent framework, or if you want to experiment with trw-memory against an existing pipeline. To get the full TRW framework (ceremony, phases, verification, the MCP server), use the installer documented in the quickstart instead.

The package exposes the same store, recall, and sync primitives that the MCP tools wrap. If you embed the library directly, you can write to the same .trw/ database that a TRW session would — which means a standalone service can feed cross-project learnings into the same workspace that your interactive sessions operate on. Nothing about the storage format changes when you switch paths.
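Because the store is a plain SQLite file under `.trw/`, a standalone service and an interactive session really can share one database. The sketch below illustrates only that point — the table schema, column names, and helper functions are invented for the example; the actual trw-memory schema and API are not documented here.

```python
import sqlite3

# Hypothetical schema -- illustrative only, not trw-memory's real layout.
def open_store(db_path):
    """Open (or create) a learnings store at the given SQLite path."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS learnings (
               id INTEGER PRIMARY KEY,
               project TEXT NOT NULL,
               text TEXT NOT NULL,
               cross_validated INTEGER DEFAULT 0
           )"""
    )
    return conn

def store(conn, project, text):
    """Record one learning for a project."""
    conn.execute("INSERT INTO learnings (project, text) VALUES (?, ?)",
                 (project, text))
    conn.commit()

def recall(conn, keyword):
    """Keyword-only recall; the real engine adds dense-vector retrieval."""
    rows = conn.execute(
        "SELECT project, text FROM learnings WHERE text LIKE ?",
        (f"%{keyword}%",))
    return rows.fetchall()
```

Point either path — library embedding or MCP session — at the same file and both read the same rows; that is the whole "nothing about the storage format changes" guarantee.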

What cross-project does not do

An honest list of the limits, so you can plan around them instead of discovering them on your third repo.

  • No real-time sync. Propagation happens at sync boundaries — during trw_knowledge_sync runs and at session start. Two agents working concurrently in different repos will not see each other's new learnings until the next sync.
  • No automatic propagation of every learning. Only learnings that pass the cross-validation threshold are surfaced in sibling projects. Single-project insights stay scoped to where they were recorded.
  • No conflict resolution for contradictions. If two projects record directly contradictory learnings, both are kept and both surface at recall. TRW does not arbitrate which one is right — that stays a human call.
  • No sync across user account boundaries by default. Local workspace sharing is filesystem-level. Cross-teammate sync requires the hosted platform and is opt-in.
  • No mixing of embedding models. A workspace with different models across projects will see reduced deduplication quality. Keep the embedding model consistent per workspace.
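The last limit is easy to guard against mechanically. A hedged sketch of a preflight check, assuming each project exposes its embedding-model name in some config (the field name here is hypothetical):

```python
def check_embedding_models(project_configs):
    """Return (ok, models): ok is True when every project in the
    workspace uses the same embedding model, since mixed models
    degrade deduplication quality."""
    models = {cfg.get("embedding_model") for cfg in project_configs}
    return len(models) <= 1, models
```

Running a check like this before a knowledge sync turns a silent quality regression into an explicit warning.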

Next steps

Cross-project sharing is memory scaled across repos. The memory page covers the scoring, decay, and retrieval model underneath; tools cover the sync API.