PROMPT SPACE
$10.00 · Marketing · Universal

synthesizing-institutional-knowledge

Builds the organizational memory schema your AI agent needs to answer why — capturing decision provenance, causal chains, and event context that embedding-based retrieval permanently discards.

skill install https://www.promptspace.in/skills/synthesizing-institutional-knowledge

What This Skill Does

When you embed a document, you preserve what it says. You lose who decided it, why, what it replaced, and what it caused. This skill teaches you to capture that missing provenance as structured institutional memory — so your agent can answer questions that no RAG system can touch.

Problems It Solves

  • Provenance blindness — "Why are we doing it this way?" is unanswerable from a vector store because the reasoning was never indexed, only the output document.

  • Type 3 knowledge gap — most organizations capture facts (Type 1) and some events (Type 2), but almost never capture causal reasoning (Type 3) at the time decisions are made. This skill closes that gap before it compounds.

  • Retroactive ingestion failure — teams trying to rebuild institutional history from old docs discover the causal edges were never written down. This skill provides a model-assisted extraction workflow with human review for causal edge validation.

  • "Why do we use X?" queries — technology, policy, and architectural choices require graph traversal over decision chains, not semantic similarity.
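The retroactive-extraction workflow above can be sketched as a simple triage loop: a model proposes candidate causal edges from legacy documents, and anything below a confidence threshold is routed to a human reviewer rather than written to the graph. This is an illustrative sketch only; `extract_candidate_edges` stands in for a real model call, and the field names are assumptions, not the skill's actual interface.

```python
# Sketch of a retroactive-extraction loop with human review for causal
# edges. extract_candidate_edges() is a stand-in for an LLM call.
def extract_candidate_edges(doc_text):
    """Propose (cause, effect) pairs from a legacy document.

    In practice this would prompt a model over the document text;
    here we return one hard-coded proposal for illustration.
    """
    return [{"cause": "incident-2023-07",
             "effect": "policy-90d-retention",
             "confidence": 0.62}]

def triage(edges, auto_accept_at=0.9):
    """Split proposed edges into auto-accepted vs. needs-human-review."""
    accepted, review_queue = [], []
    for edge in edges:
        if edge["confidence"] >= auto_accept_at:
            accepted.append(edge)
        else:
            review_queue.append(edge)
    return accepted, review_queue

accepted, review = triage(extract_candidate_edges("…legacy ADR text…"))
print(f"{len(accepted)} auto-accepted, {len(review)} queued for review")
```

The key design point is that causal edges are never committed on model output alone: low-confidence proposals land in a review queue, which is what the human-validation step in the workflow refers to.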

What You Get

The skill defines three knowledge types with distinct storage targets:

  • Declarative (Type 1): Facts and current-state policies → Vector RAG. The only category where embeddings are structurally sufficient.

  • Episodic (Type 2): Events, incidents, decisions with timestamps → Temporal store with full event schema.

  • Causal (Type 3): Decision rationale, constraint chains, alternatives considered → Knowledge graph with explicit causal predecessor/successor edges.

You also get a complete institutional event schema — a JSON structure capturing actors, affected entities, rationale, alternatives considered, constraints, outcome, and causal links — plus an ingestion workflow for both live capture and retroactive extraction from legacy documents like ADRs, post-mortems, and meeting notes.
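To make the event schema concrete, here is a minimal sketch of one institutional event record. The field names and values are illustrative assumptions, not the skill's exact schema; they mirror the categories listed above (actors, affected entities, rationale, alternatives, constraints, outcome, causal links).

```python
import json

# Hypothetical institutional event record (Type 2 + Type 3 fields).
# Field names are illustrative, not the skill's published schema.
event = {
    "event_id": "evt-2024-0042",
    "timestamp": "2024-03-18T14:00:00Z",
    "event_type": "architecture_decision",
    "actors": ["platform-team"],
    "affected_entities": ["billing-service"],
    "decision": "Split billing out of the monolith",
    "rationale": "Billing deploys were blocked by the monolith release train",
    "alternatives_considered": [
        {"option": "Modularize in place",
         "rejected_because": "Shared database coupling"},
    ],
    "constraints": ["PCI scope must stay isolated"],
    "outcome": "Billing deploys independently as of Q3",
    "causal_links": {
        "predecessors": ["evt-2024-0017"],  # the incident that motivated it
        "successors": [],
    },
}

print(json.dumps(event, indent=2))
```

Records like this ingest cleanly into both targets: the timestamped fields feed the temporal store, while `causal_links` becomes explicit edges in the knowledge graph.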

Who Should Use This

  • Teams building AI agents that must answer questions about organizational reasoning — why decisions were made, how the current architecture evolved, what historical constraints drive current policy — across engineering, compliance, strategy, or any domain where institutional memory compounds over time.

Use cases

  • Engineering knowledge base: An agent over ADRs, design docs, and incident reports can answer "Why did we migrate off the monolith?" by traversing causal predecessor chains — not just finding the migration doc.
  • Compliance and audit agent: "What constraints drove the current data retention policy?" requires causal context from the regulatory event that preceded it, not just the policy text.
  • Onboarding acceleration: New engineers ask "Why is this system built this way?" The institutional event graph answers with the full decision chain — alternatives considered, constraints, and outcome — rather than returning the design doc with no context.
  • Post-mortem reconstruction: "What sequence of decisions led to this incident?" is an episodic + causal query over timestamped events with explicit predecessor links.
  • Strategic context for AI advisors: Agents assisting leadership on current strategy need to know what was decided before, why it was decided, and what it caused — not just what current policy says.
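The traversal these use cases rely on can be sketched in a few lines: answering "why?" means walking predecessor edges backward from an event, not ranking documents by similarity. The event IDs, summaries, and graph shape below are invented for illustration.

```python
# Minimal sketch: answering "why?" by walking causal predecessor edges.
# Event IDs and summaries are hypothetical.
events = {
    "policy-90d-retention": {
        "summary": "Retention shortened to 90 days",
        "predecessors": ["audit-finding-2023"],
    },
    "audit-finding-2023": {
        "summary": "Regulator flagged indefinite log retention",
        "predecessors": [],
    },
}

def why(event_id, graph):
    """Return the causal chain behind an event, most recent first."""
    chain, frontier, seen = [], [event_id], set()
    while frontier:
        eid = frontier.pop()
        if eid in seen:
            continue  # guard against cycles in the causal graph
        seen.add(eid)
        chain.append(graph[eid]["summary"])
        frontier.extend(graph[eid]["predecessors"])
    return chain

print(" <- ".join(why("policy-90d-retention", events)))
```

A vector store could retrieve the retention policy's text, but only the explicit `predecessors` edge connects it back to the audit finding that caused it.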

Example

Prompt

"Extract the causal reasoning and decision provenance from these meeting notes on API changes."

Sample output preview is available after purchase.

Known limitations

  • Requires human validation for complex causal links.
  • Accuracy depends on the quality of source input (ADRs, notes).
  • Not a replacement for real-time state monitoring.

Frequently asked questions

How is this different from a standard RAG system?

Most RAG systems perform only semantic search over current documents (declarative knowledge). This skill adds episodic and causal layers, letting agents traverse the "why" and "when" by mapping decision provenance and temporal events that embeddings alone cannot represent.