We've been thinking about artifacts as things — documents, decisions, specs, slides. But what if the interesting part isn't the artifacts themselves, but the space between them? A single strategy doc sitting alone is just text. A strategy doc that implements a goal, draws_from three meeting transcripts, supersedes a prior strategy, and has two specs that implement it — that's a position in a field of meaning. The artifact gets its significance from its relationships, not its content.
This is the graph theorist's heresy: the edges are more important than the nodes.
Folders are a lie we inherited from physical filing cabinets. They enforce a single taxonomy ("this doc goes in Marketing OR Engineering, pick one") and make cross-cutting concerns invisible. When you have 50 artifacts, folders work. When you have 5,000 artifacts created by dozens of agents, folders become a graveyard — things go in and never come out because nobody remembers which drawer they're in.
The graph replaces the folder tree with something fundamentally more powerful: every artifact is reachable from every other artifact through the relationship web. You don't navigate DOWN a hierarchy; you navigate ACROSS a network.
Instead of ls /strategy/2026/Q1/, an agent navigates by relationship:
// "What's the current strategy?"
query: type=document, tag=strategy, NOT status=superseded
-> returns the living strategy docs (not the dead ones)
// "What informed this strategy?"
follow: strategy_doc -> outgoing(draws_from)
-> returns meetings, research notes, customer feedback
// "What depends on this strategy?"
follow: strategy_doc -> incoming(implements)
-> returns specs, projects, tasks that implement it
// "What contradicts this?"
follow: strategy_doc -> any(conflicts_with)
-> returns constraints, other strategies, unresolved tensions
This is more expressive than browsing. An agent asked to "write a product update" doesn't need to know WHERE things are filed. It needs to know:
- What decisions were made recently? (query: type=decision, created_at > 2 weeks ago)
- What do those decisions implement? (follow: decisions -> implements)
- What's the status of those implementations? (follow: implementations -> status)

The query pattern is: anchor on something known, then walk edges.
| Old Pattern | Graph Equivalent |
|---|---|
| "Open the strategy folder" | query: type=document, tag=strategy, NOT superseded |
| "What's in this project?" | follow: project -> children + implements + depends_on |
| "Find all related docs" | follow: artifact -> any(2 hops) -> filter(type=document) |
| "What changed recently?" | query: last_activity > 1 week, sort by activity |
| "Where should I put this?" | find: artifacts with similar tags/links, suggest neighborhood |
| "What am I missing?" | find: orphans with no incoming edges and age > 3 days |
The last one is crucial. In a folder system, an unfiled document is invisible. In a graph, an orphan — a node with no edges — is a visible anomaly. The graph makes gaps visible.
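The orphan query in that last row can be sketched concretely. This is a minimal sketch, assuming a hypothetical store where each node carries a created_at timestamp and edges are (source, relationship, target) triples — the names below are invented for illustration, not the system's actual API:

```python
from datetime import datetime, timedelta

# Hypothetical store: node id -> created_at, edges as (src, rel, dst) triples.
nodes = {
    "strategy-q1": datetime(2026, 1, 5),
    "meeting-jan15": datetime(2026, 1, 15),
    "note-pricing": datetime(2026, 1, 20),   # created, never linked to anything
}
edges = [("strategy-q1", "draws_from", "meeting-jan15")]

def find_orphans(nodes, edges, now, min_age=timedelta(days=3)):
    """Orphans: nodes with no edges in either direction, older than min_age."""
    connected = {src for src, _, _ in edges} | {dst for _, _, dst in edges}
    return [n for n, created in nodes.items()
            if n not in connected and now - created > min_age]

print(find_orphans(nodes, edges, now=datetime(2026, 2, 1)))
# ['note-pricing'] — the only unconnected node old enough to count
```

The age cutoff matters: a node created five minutes ago with no edges is work in progress, not an anomaly.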
When you open an artifact, you don't see it in a folder. You see it in its neighborhood: the 1-hop and 2-hop graph around it. This is like Google Maps — you see the place, but also the streets, the nearby landmarks, the routes to other places.
A decision record's neighborhood might look like:
[Goal: Increase Retention]
|
serves
|
[Meeting: Jan 15] --draws_from--> [DECISION: Use event-driven arch]
|
implements
/ \
[Spec: Event Bus Design] [Spec: Migration Plan]
| |
depends_on conflicts_with
| |
[Constraint: <10ms latency] [Decision: Monolith-first]
This neighborhood IS the context. An agent seeing this graph knows: the decision serves retention, was informed by a January meeting, has two implementing specs, one of which conflicts with a prior decision. That's a richer briefing than any folder path.
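Extracting such a neighborhood is a short breadth-first walk. A sketch with invented IDs, mirroring the neighborhood above (edge direction follows the semantics: the decision draws_from the meeting, the specs implement the decision):

```python
from collections import deque

edges = [
    ("decision-arch", "serves", "goal-retention"),
    ("decision-arch", "draws_from", "meeting-jan15"),
    ("spec-event-bus", "implements", "decision-arch"),
    ("spec-migration", "implements", "decision-arch"),
    ("spec-event-bus", "depends_on", "constraint-latency"),
    ("spec-migration", "conflicts_with", "decision-monolith"),
]

def neighborhood(edges, start, hops=2):
    """All nodes within `hops` of start, treating edges as undirected."""
    adj = {}
    for src, _, dst in edges:
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == hops:
            continue            # don't expand past the hop budget
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen - {start}

print(sorted(neighborhood(edges, "decision-arch", hops=1)))
# ['goal-retention', 'meeting-jan15', 'spec-event-bus', 'spec-migration']
```

The 2-hop call additionally pulls in the constraint and the conflicting prior decision — exactly the briefing described above.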
The existing relationships are: draws_from, supersedes, implements, depends_on, blocks, serves, conflicts_with, relates_to, evaluated_by, gated_by, same_as, participates_in.
This is a solid foundation, but it's missing several relationships that become critical when the graph is the primary navigation surface.
Provenance relationships (where did this come from?):
- summarizes — This artifact is a condensed version of that one. A meeting summary summarizes a meeting transcript. An executive brief summarizes a strategy doc. This is different from draws_from because it implies a specific structural relationship: the summary should update when the source changes.
- is_canonical_version_of (or fold into supersedes with a subtype) — Among multiple versions/drafts, this one is the current truth. Critical when agents produce multiple iterations.
- refines — A more specific version of something. A detailed spec refines a high-level spec. Different from supersedes because the original remains valid at its level of abstraction.

Epistemic relationships (what's the knowledge status?):
- contradicts — Stronger than conflicts_with. This artifact makes a factual claim that is incompatible with that artifact's claim. The system should surface these actively.
- validates — This artifact provides evidence supporting that artifact's claims. Test results validate a spec. User research validates a hypothesis.
- questions — This artifact raises unresolved questions about that artifact. A review questions a proposal. This is a "soft block" — not a dependency, but an epistemic gap.

Compositional relationships (how do artifacts combine?):
- includes / is_part_of — A slide deck includes individual slides (which are themselves artifacts with provenance). A report includes sections that can be independently versioned. Different from parent-child hierarchy because it's about content composition, not organizational grouping.
- is_variant_of — Two artifacts that share structure but differ in content. A pricing page for Enterprise vs. SMB. A strategy adapted for different markets. Variants should propagate structural changes but preserve content differences.

If you had to pick the smallest set of relationships that generates the richest graph, I'd argue for seven axioms:
- draws_from — provenance (where did the content come from?)
- supersedes — temporal succession (what replaces what?)
- implements — abstraction descent (what makes this concrete?)
- depends_on — prerequisite ordering (what must exist first?)
- conflicts_with — tension (what can't coexist?)
- serves — strategic alignment (what does this contribute to?)
- summarizes — compression (what is this a digest of?)

Every other relationship is either a special case of these (e.g., blocks is inverse depends_on with urgency; validates is draws_from with an epistemic qualifier) or a convenience alias (e.g., relates_to is the "I know there's a connection but I can't name it" escape hatch).
In practice, you want the full current set PLUS:
- summarizes (compression with update obligation)
- refines (abstraction descent without replacement)
- validates / questions (epistemic edges)
- is_variant_of (structural siblings)

And critically: every edge should carry metadata:
- created_by — which agent/human created this link?
- created_at — when?
- confidence — how certain is this relationship? (An agent's best guess vs. a human's explicit declaration)
- note — why does this relationship exist? (Already supported, but should be culturally required)

When you have 1,000+ artifacts with rich relationships, clusters emerge naturally. A cluster is a group of artifacts that are densely connected to each other and sparsely connected to the rest of the graph. These clusters correspond to themes, projects, domains — the organizational units that folders try to impose artificially.
The difference: clusters are discovered, not declared. You don't create a "Marketing" folder; you notice that 47 artifacts about messaging, brand, positioning, and campaigns are densely interlinked, and the system says "this looks like a cluster — want to name it?"
This is the graph equivalent of desire paths. Structure follows use, not the other way around.
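The discovery step can start crudely. As a first approximation — connected components via union-find, rather than true density-based community detection — a sketch over a hypothetical edge list:

```python
def components(edges):
    """Coarsest form of cluster discovery: connected components.
    Real clustering would use community detection, but components
    already surface islands and giant blobs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for src, _, dst in edges:
        union(src, dst)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

edges = [
    ("brand-doc", "relates_to", "campaign-plan"),
    ("campaign-plan", "draws_from", "positioning-note"),
    ("api-spec", "implements", "backend-decision"),   # a separate island
]
print(components(edges))
# two groups: the marketing trio and the engineering pair
```

A component of 47 densely interlinked marketing artifacts is exactly the "want to name it?" candidate described above.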
Long chains of depends_on and implements relationships reveal critical paths. If Artifact A depends on B depends on C depends on D, and D is blocked, the graph can propagate that signal: "A is transitively blocked, 3 hops from the blockage."
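Propagating that signal is a reverse walk over depends_on edges. A sketch with hypothetical artifact IDs:

```python
from collections import deque

# depends_on edges: (A, B) means "A depends on B".
deps = [("A", "B"), ("B", "C"), ("C", "D")]
blocked = {"D"}

def transitively_blocked(deps, blocked):
    """Walk depends_on edges backwards from each blocked node,
    recording how many hops each affected artifact is from the blockage."""
    rev = {}                      # reverse adjacency: who depends on me?
    for a, b in deps:
        rev.setdefault(b, []).append(a)
    hops = {}
    frontier = deque((b, 0) for b in blocked)
    while frontier:
        node, d = frontier.popleft()
        for dependent in rev.get(node, ()):
            if dependent not in hops:
                hops[dependent] = d + 1
                frontier.append((dependent, d + 1))
    return hops

print(transitively_blocked(deps, blocked))
# {'C': 1, 'B': 2, 'A': 3}
```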
At scale, you can compute:
An orphan is an artifact with zero or very few edges. In a folder system, orphans are invisible — they sit in some folder, forgotten. In a graph, orphans are structurally visible anomalies.
Orphans come in flavors:
A bridge is an artifact that connects two otherwise-disconnected clusters. These are strategically important — they're the artifacts that create coherence across domains.
Example: A "Product Principles" document that is served_by engineering specs AND marketing messaging AND customer success playbooks. Remove it and three clusters drift apart. These bridge artifacts deserve special attention and maintenance.
| Metric | Healthy | Unhealthy |
|---|---|---|
| Orphan rate | < 5% of artifacts are orphans | > 20% orphans — things are created but not connected |
| Average degree | 3-7 edges per artifact | < 2 (disconnected) or > 15 (over-linked, noise) |
| Cluster count | Matches number of active workstreams | 1 giant blob (no structure) or 50 tiny islands (fragmentation) |
| Bridge count | Multiple bridges between major clusters | 0 bridges = silos; single bridge = fragility |
| Supersession depth | Most chains are 1-3 deep | Chains > 5 deep = too much churn, not enough stability |
| Conflict density | Some conflicts_with edges (tension is healthy) | Zero conflicts (false consensus) or > 10% conflict edges (chaos) |
| Orphan age | Orphans get connected within 48 hours | Orphans aging > 1 week = integration failure |
A "graph health dashboard" would show these metrics as a radar chart: balanced = healthy, lopsided = attention needed.
The full graph at 1,000 nodes is overwhelming as a force-directed layout. Better approaches:
What if artifacts weren't passive documents that happen to have links? What if they understood their position in the graph and behaved differently based on it?
A strategy doc that knows what implements it:
- It follows incoming(implements) and shows: 3 specs (2 complete, 1 in progress), 12 tasks (8 done, 2 blocked, 2 pending).
- If one implementing spec conflicts_with another implementing spec, the strategy doc flags: "Internal contradiction detected between Spec A and Spec B."

A decision that knows what it superseded:
- It shows its lineage: this decision supersedes Decision X, which superseded Decision W. You can see the full evolution of thinking.
- If any of its draws_from sources have been superseded, it flags: "This decision may be based on outdated information."

A spec that knows it has contradictions:
- If Spec A has conflicts_with edges to Spec B, both specs display a banner: "Unresolved tension with [other spec]. See [link to contradiction]."
- It shows which of its claims are validated (by tests, by research, by approval) and which are unvalidated assertions. A spec with 80% validated claims is more trustworthy than one with 20%.
- If a constraint it depends_on is updated, the spec flags itself for review: "Upstream constraint changed — this spec may need revision."

A meeting transcript that knows what it produced:
- It surfaces incoming(draws_from) — i.e., all the decisions, tasks, and notes that drew from this meeting. "This meeting produced: 2 decisions, 5 tasks, 1 strategy revision."

A task that knows its full context:
- It can walk the chain: Task implements -> Spec -> implements -> Decision -> serves -> Goal. The agent working on the task can see why it exists all the way up to the strategic level.
- If anything in that chain is superseded, the task flags: "The strategic rationale for this task may have changed."

This is the feature that makes graph-awareness transformative. When a source artifact changes, every artifact that draws_from or depends_on it should know.
Concretely:
[Customer Research Report] is updated on Feb 10
|
draws_from (incoming)
|
[Product Strategy v3] -- now flagged: "Source updated since last revision"
|
implements (incoming)
|
[Spec: Onboarding Redesign] -- now flagged: "Upstream strategy may have changed"
|
implements (incoming)
|
[Task: Build new welcome flow] -- now flagged: "Verify spec is still current"
The propagation attenuates with distance — 1-hop gets a hard flag, 2-hop gets a soft flag, 3+ hops get a note in the briefing. This is the mycorrhizal network from the biology analogy in the prior brainstorm: warnings travel through the root system.
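The attenuation rule can be sketched as a breadth-first walk downstream from the changed source, with severity decided by hop count (artifact IDs and severity labels here are illustrative):

```python
from collections import deque

# Edges point from the deriving artifact to its source:
# strategy draws_from research, spec implements strategy, task implements spec.
edges = [
    ("strategy-v3", "draws_from", "research-report"),
    ("spec-onboarding", "implements", "strategy-v3"),
    ("task-welcome-flow", "implements", "spec-onboarding"),
]
SEVERITY = {1: "hard flag", 2: "soft flag"}   # 3+ hops: briefing note

def propagate_change(edges, changed):
    """BFS downstream from a changed source, attenuating with distance."""
    rev = {}
    for src, rel, dst in edges:
        if rel in ("draws_from", "depends_on", "implements"):
            rev.setdefault(dst, []).append(src)
    flags, frontier = {}, deque([(changed, 0)])
    while frontier:
        node, d = frontier.popleft()
        for downstream in rev.get(node, ()):
            if downstream not in flags:
                flags[downstream] = SEVERITY.get(d + 1, "briefing note")
                frontier.append((downstream, d + 1))
    return flags

print(propagate_change(edges, "research-report"))
# {'strategy-v3': 'hard flag', 'spec-onboarding': 'soft flag',
#  'task-welcome-flow': 'briefing note'}
```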
Every edge and node has a timestamp. This means the graph at any point in time is recoverable. You can "rewind" to last Tuesday and see: what existed, what was connected to what, what was the current canonical version of each artifact.
The system already supports get_record_at for individual records. The graph extension is get_graph_at: reconstruct the full neighborhood (or full graph) at a point in time.
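A sketch of that extension, assuming edges keep a created_at and a removed_at timestamp instead of being deleted (these field names are hypothetical, not the system's actual schema):

```python
from datetime import datetime

# Append-only log: archival or supersession sets removed_at, never deletes.
edges = [
    {"src": "spec-v1", "rel": "implements", "dst": "decision-paypal",
     "created_at": datetime(2026, 1, 10), "removed_at": datetime(2026, 2, 10)},
    {"src": "spec-v2", "rel": "implements", "dst": "decision-stripe",
     "created_at": datetime(2026, 2, 12), "removed_at": None},
]

def get_graph_at(edges, t):
    """Edges alive at time t: created on or before t, not yet removed."""
    return [e for e in edges
            if e["created_at"] <= t
            and (e["removed_at"] is None or e["removed_at"] > t)]

print([e["src"] for e in get_graph_at(edges, datetime(2026, 2, 1))])   # ['spec-v1']
print([e["src"] for e in get_graph_at(edges, datetime(2026, 2, 15))])  # ['spec-v2']
```

Because nothing is ever deleted, "rewind to last Tuesday" is just a filter, not a restore.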
"What changed in the graph between last Monday and today?" This is a diff of two temporal snapshots:
Graph Diff: Feb 8 -> Feb 15
NEW NODES (12):
+ [Spec: Payment Processing v2] (created Feb 10)
+ [Decision: Switch to Stripe] (created Feb 11)
+ [Task: Implement Stripe SDK] (created Feb 12)
... (9 more)
REMOVED NODES (3):
- [Task: Fix PayPal integration] (archived Feb 11)
- [Note: PayPal pricing research] (archived Feb 11)
- [Draft: Payment comparison] (superseded Feb 10)
NEW EDGES (18):
+ [Decision: Switch to Stripe] --supersedes--> [Decision: Use PayPal]
+ [Spec: Payment Processing v2] --draws_from--> [Customer complaints collection]
+ [Spec: Payment Processing v2] --implements--> [Decision: Switch to Stripe]
... (15 more)
BROKEN EDGES (2):
- [Task: Fix PayPal integration] --implements--> [Spec: Payment Processing v1]
(task archived, spec superseded)
- [Constraint: PayPal SLA requirement] --conflicts_with--> [Decision: Switch to Stripe]
(unresolved!)
STRUCTURAL CHANGES:
* New cluster emerged: "Stripe Migration" (7 nodes, 11 edges)
* Cluster "PayPal Integration" is dissolving (3 of 5 nodes archived)
* Bridge gap: "Stripe Migration" cluster has no link to "Customer Success" cluster
(PayPal cluster had 2 bridges — migration may be missing customer-facing plan)
This diff is incredibly useful for standup briefings. An agent starting a session can see not just "what tasks are assigned to me" but "how has the landscape of work shifted since I last engaged?"
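The core of such a diff is a set difference over two edge snapshots. A minimal sketch, with IDs invented to echo the example above:

```python
def graph_diff(old_edges, new_edges):
    """Diff two temporal snapshots, with edges as (src, rel, dst) triples."""
    old, new = set(old_edges), set(new_edges)
    return sorted(new - old), sorted(old - new)   # (added, removed)

feb8 = [("task-paypal-fix", "implements", "spec-payments-v1")]
feb15 = [
    ("decision-stripe", "supersedes", "decision-paypal"),
    ("spec-payments-v2", "implements", "decision-stripe"),
]
added, removed = graph_diff(feb8, feb15)
print(added)    # the two new Stripe edges
print(removed)  # the broken PayPal edge
```

The structural changes section (new clusters, dissolving clusters, bridge gaps) would layer cluster analysis on top of this raw edge diff.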
Drift is when the graph's structure no longer matches the stated intent. Examples:
- A goal has incoming implements edges, but 7 of them are to tasks/specs that were created before the goal was last updated. The implementations may not reflect the current strategy.
- A spec implements a decision that was superseded. The spec is implementing a dead decision.
- Decision A supersedes B, and then later Decision C supersedes A but restores B's content. The graph shows this oscillation.
- A constraint with binding: hard has 5 tasks that depends_on it, but 3 of those tasks have attestations with status: overridden. The constraint is nominally hard but practically ignored.

Drift detection is a background process that runs graph analytics and produces alerts:
Drift Report — Feb 15, 2026
WARNING: 3 specs implement superseded decisions
[Spec: Auth Flow] implements [Decision: Use OAuth] (superseded by [Decision: Use Passkeys])
...
WARNING: Goal "Increase Retention" has 0 new implementations in 3 weeks
Last implementation activity: Jan 24
INFO: Constraint "All APIs must use auth" overridden 3/5 times
Consider: is this still a hard constraint or should it be downgraded?
INFO: Cluster "Mobile App" has grown 40% in 2 weeks but has no bridge to "Backend API" cluster
Risk: work may be proceeding without backend coordination
Over time, the graph accumulates layers — like geological strata. You can analyze these layers:
A human asks: "Write a product strategy for Q2 2026."
The agent doesn't start writing. It starts walking the graph.
Step 1: Find the current strategy
query: type=document, tag=strategy, NOT superseded
-> Found: [Product Strategy Q1 2026] (id: strat-q1)
Step 2: What does it serve?
follow: strat-q1 -> serves
-> [Goal: 10K MAU by end of 2026]
-> [Goal: Series A readiness]
Step 3: What informed it?
follow: strat-q1 -> draws_from
-> [Meeting: Board Review Jan 5]
-> [Customer Research: Jan cohort analysis]
-> [Decision: Focus on SMB segment]
Step 4: What implemented it?
follow: strat-q1 -> incoming(implements)
-> [Spec: Onboarding v2] (completed)
-> [Spec: Analytics Dashboard] (in progress)
-> [Spec: API for Partners] (not started)
-> [Project: Mobile App] (in progress)
Step 5: What changed since Q1 strategy was written?
graph_diff: strat-q1.created_at -> now
-> 3 new decisions that aren't reflected in strategy
-> 1 constraint added (SOC2 compliance by June)
-> Customer research updated with Feb data
-> Goal "Series A readiness" has new objectives
Step 6: What's in tension?
follow: strat-q1 -> any(conflicts_with, 2 hops)
-> [Constraint: SOC2 by June] conflicts_with [Spec: API for Partners]
(API spec doesn't account for SOC2 requirements)
-> [Decision: Focus on SMB] questions [Meeting note: Enterprise interest from Acme]
Now the agent writes the Q2 strategy, informed by the graph context. But it doesn't just write a document — it wires the document into the graph as it creates it.
create_record:
type: document
title: "Product Strategy Q2 2026"
tags: [strategy, q2-2026]
body: [the strategy content, which references graph context throughout]
links:
- supersedes: strat-q1
- serves: [Goal: 10K MAU]
- serves: [Goal: Series A readiness]
- draws_from: [Customer Research: Feb update]
- draws_from: [Meeting: Board Review Jan 5]
- draws_from: [Decision: Focus on SMB]
- draws_from: [Constraint: SOC2 by June]
- conflicts_with: [Decision: Focus on SMB]
note: "Strategy proposes enterprise pilot alongside SMB focus;
this is in tension with pure SMB decision"
The moment the strategy is created, the graph ripples:
- The SOC2 constraint stops being an orphan: it is now drawn_from by the strategy, making it more central to the graph.
- The SMB-focus decision gains a conflicts_with edge from the new strategy. This surfaces a tension that needs resolution.

Before (simplified):
[Goal: 10K MAU] <--serves-- [Strategy Q1] --draws_from--> [Research: Jan]
|
implements
/ | \
[Spec: Onb] [Spec: Analytics] [Spec: API]
[Constraint: SOC2] (orphan — just created, not yet connected)
[Decision: Focus SMB] (connected to Q1 strategy)
After:
[Goal: 10K MAU] <--serves-- [Strategy Q2] --draws_from--> [Research: Feb]
| | | | |
serves | conflicts_with updates
| | | |
[Goal: Series A] <--serves----+ v [Research: Jan]
| [Decision: Focus SMB] |
supersedes draws_from
| |
[Strategy Q1] ----draws_from----------+
/ | \
[Spec: Onb] [Spec: Analytics] [Spec: API]
(flagged: check alignment with Q2)
[Constraint: SOC2] --drawn_from_by--> [Strategy Q2]
--conflicts_with--> [Spec: API]
(tension surfaced!)
The graph is denser, more connected, and more truthful. The tensions are visible. The provenance is clear. The supersession is recorded. An agent arriving tomorrow can look at this graph and understand not just what the strategy says, but why it says it, what changed, and what's unresolved.
Graph topology exposes patterns that are invisible in folder-based systems:
- Long supersedes chains with short lifespans: Thrashing. Decisions are being made and reversed rapidly. The graph shows the oscillation pattern.
- Cycles of conflicts_with: Three or more artifacts in a conflict cycle. This means there's a foundational disagreement that hasn't been resolved — it's being papered over by local decisions that contradict each other.

Agents don't need to talk to each other if the graph is rich enough. Agent A writes a spec. Agent B, tasked with implementation, doesn't need to message Agent A. It walks the graph from the spec: what does it implement? What does it draw_from? What conflicts_with it? What constraints apply?
This is the prior brainstorm's principle made concrete: the environment IS the coordination. Agents don't coordinate by exchanging messages; they coordinate by reading and writing to the shared graph. The edges are the messages.
The prior brainstorm identified decay as a feature. The graph makes decay principled rather than arbitrary. Instead of "archive everything older than 90 days," you can say:
- Archive artifacts whose serves chain has been completed (the goal was achieved, the strategy was superseded, the specs were implemented).
- Keep artifacts that are drawn_from by living artifacts, regardless of age (they're still load-bearing).
- Keep artifacts with unresolved conflicts_with edges until the tension is explicitly resolved (don't forget disagreements).

The graph topology tells you what's safe to forget.
The system can use graph structure to suggest edges. When an agent creates a new artifact:
- "This artifact shares tags with a recent decision — should it draw_from that decision?"
- "This artifact has no serves edge to any goal. Does it serve a goal, or is it speculative?"
- "This artifact looks similar to an existing one — are they related_to, or does one refine the other?"

These suggestions are cheap (semantic similarity + graph proximity) and high-value (they maintain graph quality without requiring agents to have perfect graph awareness).
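A sketch of two such suggestion rules, assuming artifacts expose tags and link types — all names below are invented for illustration:

```python
def suggest_links(new_artifact, existing, min_overlap=2):
    """Cheap suggestion rules: propose draws_from when tag overlap is high,
    and flag the absence of a serves edge."""
    suggestions = []
    for other in existing:
        overlap = new_artifact["tags"] & other["tags"]
        if len(overlap) >= min_overlap:
            suggestions.append(
                f"shares tags {sorted(overlap)} with {other['id']} — draws_from it?")
    if "serves" not in new_artifact["links"]:
        suggestions.append("no serves edge — does this serve a goal?")
    return suggestions

new = {"id": "note-pricing", "tags": {"pricing", "smb", "strategy"}, "links": set()}
existing = [{"id": "decision-smb-focus", "tags": {"smb", "strategy"},
             "links": {"serves"}}]
for s in suggest_links(new, existing):
    print(s)
```

A real implementation would add embedding similarity and graph proximity; the point is that the rules stay cheap enough to run on every artifact creation.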
Git tracks versions of files. The artifact graph tracks versions of ideas. A supersedes chain is a richer version history than a git log because it captures why something changed, not just that it changed.
Moreover, branching in git is about parallel development of the same artifact. is_variant_of in the graph is about intentional divergence — two artifacts that share lineage but serve different purposes. Git merges assume convergence; the graph allows permanent divergence.
Not all edges are equal. A draws_from edge where the source was read carefully and synthesized deeply is different from a draws_from edge where the source was skimmed. Edge weights — explicit (set by creator) or implicit (derived from how much of the source was referenced) — create an attention topology.
Heavily weighted paths through the graph are the "main storylines" of the workspace. Lightly weighted paths are footnotes. An agent navigating the graph should follow heavy edges first, light edges only when doing deep research.
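"Follow heavy edges first" can be made precise as a best-first traversal that ranks nodes by the strongest path from the anchor — here scored as the max-product of edge weights, with IDs and weights invented for the sketch:

```python
import heapq

# Weighted edges: (src, dst, weight); higher weight = deeper synthesis.
edges = [
    ("strategy", "research-deep", 0.9),
    ("strategy", "note-skimmed", 0.2),
    ("research-deep", "interviews", 0.8),
]

def heavy_first(edges, start):
    """Visit nodes in order of strongest path weight from `start`
    (best-first search on the product of edge weights)."""
    adj = {}
    for src, dst, w in edges:
        adj.setdefault(src, []).append((dst, w))
    best = {start: 1.0}
    heap = [(-1.0, start)]          # max-heap via negated weights
    order = []
    while heap:
        negw, node = heapq.heappop(heap)
        if -negw < best.get(node, 0):
            continue                # stale heap entry
        order.append(node)
        for nxt, w in adj.get(node, ()):
            cand = -negw * w
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt))
    return order

print(heavy_first(edges, "strategy"))
# ['strategy', 'research-deep', 'interviews', 'note-skimmed']
```

Note that a 2-hop heavy path (0.9 × 0.8 = 0.72) outranks a 1-hop light edge (0.2): the main storyline beats the footnote even when it's farther away.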
If I were building this as a product:
Phase 1 — Graph Foundation:
Phase 2 — Graph Navigation:
Phase 3 — Graph Intelligence:
Phase 4 — Graph as Primary Navigation:
The key insight: you don't need to build all of this to start getting value. Just making edges visible and orphans conspicuous changes agent behavior. Agents start linking things because they can see when things aren't linked. The graph grows organically, and the emergent structure becomes navigable.
The folder tree is a crutch for a world where relationships are invisible. Make relationships visible, and the tree becomes unnecessary.
We've been thinking about artifacts as things — documents, decisions, specs, slides. But what if the interesting part isn't the artifacts themselves, but the space between them? A single strategy doc sitting alone is just text. A strategy doc that implements a goal, draws_from three meeting transcripts, supersedes a prior strategy, and has two specs that implement it — that's a position in a field of meaning. The artifact gets its significance from its relationships, not its content.
This is the graph theorist's heresy: the edges are more important than the nodes.
Folders are a lie we inherited from physical filing cabinets. They enforce a single taxonomy ("this doc goes in Marketing OR Engineering, pick one") and make cross-cutting concerns invisible. When you have 50 artifacts, folders work. When you have 5,000 artifacts created by dozens of agents, folders become a graveyard — things go in and never come out because nobody remembers which drawer they're in.
The graph replaces the folder tree with something fundamentally more powerful: every artifact is reachable from every other artifact through the relationship web. You don't navigate DOWN a hierarchy; you navigate ACROSS a network.
Instead of ls /strategy/2026/Q1/, an agent navigates by relationship:
// "What's the current strategy?"
query: type=document, tag=strategy, NOT status=superseded
-> returns the living strategy docs (not the dead ones)
// "What informed this strategy?"
follow: strategy_doc -> incoming(draws_from)
-> returns meetings, research notes, customer feedback
// "What depends on this strategy?"
follow: strategy_doc -> outgoing(implements)
-> returns specs, projects, tasks that implement it
// "What contradicts this?"
follow: strategy_doc -> any(conflicts_with)
-> returns constraints, other strategies, unresolved tensions
This is more expressive than browsing. An agent asked to "write a product update" doesn't need to know WHERE things are filed. It needs to know:
type=decision, created_at > 2 weeks ago)follow: decisions -> implements)follow: implementations -> status)The query pattern is: anchor on something known, then walk edges.
| Old Pattern | Graph Equivalent |
|---|---|
| "Open the strategy folder" | query: type=document, tag=strategy, NOT superseded |
| "What's in this project?" | follow: project -> children + implements + depends_on |
| "Find all related docs" | follow: artifact -> any(2 hops) -> filter(type=document) |
| "What changed recently?" | query: last_activity > 1 week, sort by activity |
| "Where should I put this?" | find: artifacts with similar tags/links, suggest neighborhood |
| "What am I missing?" | find: orphans with no incoming edges and age > 3 days |
The last one is crucial. In a folder system, an unfiled document is invisible. In a graph, an orphan — a node with no edges — is a visible anomaly. The graph makes gaps visible.
When you open an artifact, you don't see it in a folder. You see it in its neighborhood: the 1-hop and 2-hop graph around it. This is like Google Maps — you see the place, but also the streets, the nearby landmarks, the routes to other places.
A decision record's neighborhood might look like:
[Goal: Increase Retention]
|
serves
|
[Meeting: Jan 15] --draws_from--> [DECISION: Use event-driven arch]
|
implements
/ \
[Spec: Event Bus Design] [Spec: Migration Plan]
| |
depends_on conflicts_with
| |
[Constraint: <10ms latency] [Decision: Monolith-first]
This neighborhood IS the context. An agent seeing this graph knows: the decision serves retention, was informed by a January meeting, has two implementing specs, one of which conflicts with a prior decision. That's a richer briefing than any folder path.
The existing relationships are: draws_from, supersedes, implements, depends_on, blocks, serves, conflicts_with, relates_to, evaluated_by, gated_by, same_as, participates_in.
This is a solid foundation, but it's missing several relationships that become critical when the graph is the primary navigation surface.
Provenance relationships (where did this come from?):
summarizes — This artifact is a condensed version of that one. A meeting summary summarizes a meeting transcript. An executive brief summarizes a strategy doc. This is different from draws_from because it implies a specific structural relationship: the summary should update when the source changes.is_canonical_version_of (or fold into supersedes with a subtype) — Among multiple versions/drafts, this one is the current truth. Critical when agents produce multiple iterations.refines — A more specific version of something. A detailed spec refines a high-level spec. Different from supersedes because the original remains valid at its level of abstraction.Epistemic relationships (what's the knowledge status?):
contradicts — Stronger than conflicts_with. This artifact makes a factual claim that is incompatible with that artifact's claim. The system should surface these actively.validates — This artifact provides evidence supporting that artifact's claims. Test results validate a spec. User research validates a hypothesis.questions — This artifact raises unresolved questions about that artifact. A review questions a proposal. This is a "soft block" — not a dependency, but an epistemic gap.Compositional relationships (how do artifacts combine?):
includes / is_part_of — A slide deck includes individual slides (which are themselves artifacts with provenance). A report includes sections that can be independently versioned. Different from parent-child hierarchy because it's about content composition, not organizational grouping.is_variant_of — Two artifacts that share structure but differ in content. A pricing page for Enterprise vs. SMB. A strategy adapted for different markets. Variants should propagate structural changes but preserve content differences.If you had to pick the smallest set of relationships that generates the richest graph, I'd argue for seven axioms:
draws_from — provenance (where did the content come from?)supersedes — temporal succession (what replaces what?)implements — abstraction descent (what makes this concrete?)depends_on — prerequisite ordering (what must exist first?)conflicts_with — tension (what can't coexist?)serves — strategic alignment (what does this contribute to?)summarizes — compression (what is this a digest of?)Every other relationship is either a special case of these (e.g., blocks is inverse depends_on with urgency; validates is draws_from with an epistemic qualifier) or a convenience alias (e.g., relates_to is the "I know there's a connection but I can't name it" escape hatch).
In practice, you want the full current set PLUS:
summarizes (compression with update obligation)refines (abstraction descent without replacement)validates / questions (epistemic edges)is_variant_of (structural siblings)And critically: every edge should carry metadata:
created_by — which agent/human created this link?created_at — when?confidence — how certain is this relationship? (An agent's best guess vs. a human's explicit declaration)note — why does this relationship exist? (Already supported, but should be culturally required)When you have 1,000+ artifacts with rich relationships, clusters emerge naturally. A cluster is a group of artifacts that are densely connected to each other and sparsely connected to the rest of the graph. These clusters correspond to themes, projects, domains — the organizational units that folders try to impose artificially.
The difference: clusters are discovered, not declared. You don't create a "Marketing" folder; you notice that 47 artifacts about messaging, brand, positioning, and campaigns are densely interlinked, and the system says "this looks like a cluster — want to name it?"
This is the graph equivalent of desire paths. Structure follows use, not the other way around.
Long chains of depends_on and implements relationships reveal critical paths. If Artifact A depends on B depends on C depends on D, and D is blocked, the graph can propagate that signal: "A is transitively blocked, 4 hops from the blockage."
At scale, you can compute:
An orphan is an artifact with zero or very few edges. In a folder system, orphans are invisible — they sit in some folder, forgotten. In a graph, orphans are structurally visible anomalies.
Orphans come in flavors:
A bridge is an artifact that connects two otherwise-disconnected clusters. These are strategically important — they're the artifacts that create coherence across domains.
Example: A "Product Principles" document that is served_by engineering specs AND marketing messaging AND customer success playbooks. Remove it and three clusters drift apart. These bridge artifacts deserve special attention and maintenance.
| Metric | Healthy | Unhealthy |
|---|---|---|
| Orphan rate | < 5% of artifacts are orphans | > 20% orphans — things are created but not connected |
| Average degree | 3-7 edges per artifact | < 2 (disconnected) or > 15 (over-linked, noise) |
| Cluster count | Matches number of active workstreams | 1 giant blob (no structure) or 50 tiny islands (fragmentation) |
| Bridge count | Multiple bridges between major clusters | 0 bridges = silos; single bridge = fragility |
| Supersession depth | Most chains are 1-3 deep | Chains > 5 deep = too much churn, not enough stability |
| Conflict density | Some conflicts_with edges (tension is healthy) | Zero conflicts (false consensus) or > 10% conflict edges (chaos) |
| Orphan age | Orphans get connected within 48 hours | Orphans aging > 1 week = integration failure |
A "graph health dashboard" would show these metrics as a radar chart: balanced = healthy, lopsided = attention needed.
The full graph at 1,000 nodes is overwhelming as a force-directed layout. Better approaches:
What if artifacts weren't passive documents that happen to have links? What if they understood their position in the graph and behaved differently based on it?
A strategy doc that knows what implements it:
- It follows outgoing(implements) and shows a live rollup: 3 specs (2 complete, 1 in progress), 12 tasks (8 done, 2 blocked, 2 pending).
- If one implementing spec conflicts_with another implementing spec, the strategy doc flags: "Internal contradiction detected between Spec A and Spec B."

A decision that knows what it superseded:
- It shows its lineage: it supersedes Decision X, which superseded Decision W. You can see the full evolution of thinking.
- If any of its draws_from sources have been superseded, it flags: "This decision may be based on outdated information."

A spec that knows it has contradictions:
- If it has conflicts_with edges to Spec B, both specs display a banner: "Unresolved tension with [other spec]. See [link to contradiction]."
- It tracks which of its claims are validated (by tests, by research, by approval) and which are unvalidated assertions. A spec with 80% validated claims is more trustworthy than one with 20%.
- When a constraint it depends_on is updated, the spec flags itself for review: "Upstream constraint changed — this spec may need revision."

A meeting transcript that knows what it produced:
- It follows incoming(draws_from) — i.e., all the decisions, tasks, and notes that drew from this meeting. "This meeting produced: 2 decisions, 5 tasks, 1 strategy revision."

A task that knows its full context:
- It walks the chain upward: implements -> Spec -> implements -> Decision -> serves -> Goal. The agent working on the task can see why it exists all the way up to the strategic level.
- If anything in that chain is superseded, the task flags: "The strategic rationale for this task may have changed."

This is the feature that makes graph-awareness transformative. When a source artifact changes, every artifact that draws_from or depends_on it should know.
Concretely:
[Customer Research Report] is updated on Feb 10
|
draws_from (incoming)
|
[Product Strategy v3] -- now flagged: "Source updated since last revision"
|
implements (incoming)
|
[Spec: Onboarding Redesign] -- now flagged: "Upstream strategy may have changed"
|
implements (incoming)
|
[Task: Build new welcome flow] -- now flagged: "Verify spec is still current"
The propagation attenuates with distance — 1-hop gets a hard flag, 2-hop gets a soft flag, 3+ hops get a note in the briefing. This is the mycorrhizal network from the biology analogy in the prior brainstorm: warnings travel through the root system.
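The attenuation scheme above can be sketched as a downstream BFS where severity is a function of hop distance. Edges here point consumer -> source (e.g., the strategy draws_from the research); names and data shapes are illustrative:

```python
from collections import deque

# Edges point consumer -> source: "strategy-v3 draws_from research-report".
edges = [
    ("strategy-v3", "draws_from", "research-report"),
    ("spec-onboarding", "implements", "strategy-v3"),
    ("task-welcome-flow", "implements", "spec-onboarding"),
]

SEVERITY = {1: "hard_flag", 2: "soft_flag"}  # 3+ hops -> briefing note

def propagate_update(edges, changed):
    """Flag every consumer downstream of a changed source artifact,
    attenuating the flag's severity with hop distance."""
    consumers = {}
    for src, rel, dst in edges:
        consumers.setdefault(dst, []).append(src)
    flags, queue, seen = {}, deque([(changed, 0)]), {changed}
    while queue:
        node, hops = queue.popleft()
        for c in consumers.get(node, []):
            if c not in seen:
                seen.add(c)
                flags[c] = SEVERITY.get(hops + 1, "briefing_note")
                queue.append((c, hops + 1))
    return flags

print(propagate_update(edges, "research-report"))
# {'strategy-v3': 'hard_flag', 'spec-onboarding': 'soft_flag',
#  'task-welcome-flow': 'briefing_note'}
```

The severity table is a policy choice, not a fixed rule; the point is that distance from the change is cheap to compute and meaningful to consumers.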
Every edge and node has a timestamp. This means the graph at any point in time is recoverable. You can "rewind" to last Tuesday and see: what existed, what was connected to what, what was the current canonical version of each artifact.
The system already supports get_record_at for individual records. The graph extension is get_graph_at: reconstruct the full neighborhood (or full graph) at a point in time.
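A sketch of what a hypothetical get_graph_at could look like, assuming every node and edge carries created_at and archived_at fields (integer timestamps and field names here are illustrative, not the system's actual schema):

```python
def live_at(record, t):
    """A record exists at time t if it was created by then
    and not yet archived."""
    return record["created_at"] <= t and (
        record.get("archived_at") is None or record["archived_at"] > t)

def get_graph_at(nodes, edges, t):
    """Reconstruct the graph as it existed at time t. An edge survives
    only if the edge itself and both endpoints were live."""
    live_nodes = {n["id"] for n in nodes if live_at(n, t)}
    live_edges = [e for e in edges
                  if live_at(e, t)
                  and e["src"] in live_nodes and e["dst"] in live_nodes]
    return live_nodes, live_edges

nodes = [
    {"id": "strategy-q1", "created_at": 5, "archived_at": None},
    {"id": "strategy-q2", "created_at": 20, "archived_at": None},
]
edges = [
    {"src": "strategy-q2", "dst": "strategy-q1", "rel": "supersedes",
     "created_at": 20, "archived_at": None},
]
ns, es = get_graph_at(nodes, edges, 10)  # "rewind" to t=10
print(sorted(ns), len(es))  # ['strategy-q1'] 0
```

At t=10, the Q2 strategy and its supersedes edge don't exist yet, so the rewound graph shows Q1 as the current canonical strategy.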
"What changed in the graph between last Monday and today?" This is a diff of two temporal snapshots:
Graph Diff: Feb 8 -> Feb 15
NEW NODES (12):
+ [Spec: Payment Processing v2] (created Feb 10)
+ [Decision: Switch to Stripe] (created Feb 11)
+ [Task: Implement Stripe SDK] (created Feb 12)
... (9 more)
REMOVED NODES (3):
- [Task: Fix PayPal integration] (archived Feb 11)
- [Note: PayPal pricing research] (archived Feb 11)
- [Draft: Payment comparison] (superseded Feb 10)
NEW EDGES (18):
+ [Decision: Switch to Stripe] --supersedes--> [Decision: Use PayPal]
+ [Spec: Payment Processing v2] --draws_from--> [Customer complaints collection]
+ [Spec: Payment Processing v2] --implements--> [Decision: Switch to Stripe]
... (15 more)
BROKEN EDGES (2):
- [Task: Fix PayPal integration] --implements--> [Spec: Payment Processing v1]
(task archived, spec superseded)
- [Constraint: PayPal SLA requirement] --conflicts_with--> [Decision: Switch to Stripe]
(unresolved!)
STRUCTURAL CHANGES:
* New cluster emerged: "Stripe Migration" (7 nodes, 11 edges)
* Cluster "PayPal Integration" is dissolving (3 of 5 nodes archived)
* Bridge gap: "Stripe Migration" cluster has no link to "Customer Success" cluster
(PayPal cluster had 2 bridges — migration may be missing customer-facing plan)
This diff is incredibly useful for standup briefings. An agent starting a session can see not just "what tasks are assigned to me" but "how has the landscape of work shifted since I last engaged?"
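The node/edge portion of that diff reduces to set differences between two snapshots. A sketch, with illustrative identifiers:

```python
# A snapshot is (node_ids, edge_triples); the diff is four set differences.
def graph_diff(before, after):
    nodes_b, edges_b = before
    nodes_a, edges_a = after
    return {
        "new_nodes": sorted(nodes_a - nodes_b),
        "removed_nodes": sorted(nodes_b - nodes_a),
        "new_edges": sorted(edges_a - edges_b),
        "broken_edges": sorted(edges_b - edges_a),
    }

feb8 = ({"decision-paypal", "task-fix-paypal"},
        {("task-fix-paypal", "implements", "decision-paypal")})
feb15 = ({"decision-paypal", "decision-stripe"},
         {("decision-stripe", "supersedes", "decision-paypal")})
d = graph_diff(feb8, feb15)
print(d["new_nodes"], d["removed_nodes"])
# ['decision-stripe'] ['task-fix-paypal']
```

The structural changes in the report (new clusters, dissolving clusters, bridge gaps) would sit on top of this, computed by running the cluster and bridge analytics on each snapshot and diffing the results.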
Drift is when the graph's structure no longer matches the stated intent. Examples:
- A goal accumulates implements edges, but 7 of them point to tasks/specs that were created before the goal was last updated. The implementations may not reflect the current strategy.
- A spec implements a decision that was superseded. The spec is implementing a dead decision.
- Decision A supersedes B, and then later Decision C supersedes A but restores B's content. The graph shows this oscillation.
- A constraint marked binding: hard has 5 tasks that depends_on it, but 3 of those tasks have attestations with status: overridden. The constraint is nominally hard but practically ignored.

Drift detection is a background process that runs graph analytics and produces alerts:
Drift Report — Feb 15, 2026
WARNING: 3 specs implement superseded decisions
[Spec: Auth Flow] implements [Decision: Use OAuth] (superseded by [Decision: Use Passkeys])
...
WARNING: Goal "Increase Retention" has 0 new implementations in 3 weeks
Last implementation activity: Jan 24
INFO: Constraint "All APIs must use auth" overridden 3/5 times
Consider: is this still a hard constraint or should it be downgraded?
INFO: Cluster "Mobile App" has grown 40% in 2 weeks but has no bridge to "Backend API" cluster
Risk: work may be proceeding without backend coordination
Over time, the graph accumulates layers, like geological strata, and those layers can be analyzed in their own right.
A human asks: "Write a product strategy for Q2 2026."
The agent doesn't start writing. It starts walking the graph.
Step 1: Find the current strategy
query: type=document, tag=strategy, NOT superseded
-> Found: [Product Strategy Q1 2026] (id: strat-q1)
Step 2: What does it serve?
follow: strat-q1 -> serves
-> [Goal: 10K MAU by end of 2026]
-> [Goal: Series A readiness]
Step 3: What informed it?
follow: strat-q1 -> draws_from
-> [Meeting: Board Review Jan 5]
-> [Customer Research: Jan cohort analysis]
-> [Decision: Focus on SMB segment]
Step 4: What implemented it?
follow: strat-q1 -> incoming(implements)
-> [Spec: Onboarding v2] (completed)
-> [Spec: Analytics Dashboard] (in progress)
-> [Spec: API for Partners] (not started)
-> [Project: Mobile App] (in progress)
Step 5: What changed since Q1 strategy was written?
graph_diff: strat-q1.created_at -> now
-> 3 new decisions that aren't reflected in strategy
-> 1 constraint added (SOC2 compliance by June)
-> Customer research updated with Feb data
-> Goal "Series A readiness" has new objectives
Step 6: What's in tension?
follow: strat-q1 -> any(conflicts_with, 2 hops)
-> [Constraint: SOC2 by June] conflicts_with [Spec: API for Partners]
(API spec doesn't account for SOC2 requirements)
-> [Decision: Focus on SMB] questions [Meeting note: Enterprise interest from Acme]
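Steps 2 through 4 of the walk above reduce to one generic traversal primitive. A sketch, assuming the in-memory triple store and artifact ids used here are illustrative:

```python
# A generic follow(node, relation, direction) mirroring the pseudocode.
edges = [
    ("strat-q1", "serves", "goal-10k-mau"),
    ("strat-q1", "draws_from", "meeting-board-jan5"),
    ("spec-onboarding-v2", "implements", "strat-q1"),
    ("spec-analytics", "implements", "strat-q1"),
]

def follow(edges, node, rel, direction="outgoing"):
    """Walk one hop: outgoing follows edges from the node,
    incoming finds edges pointing at it."""
    if direction == "outgoing":
        return [dst for src, r, dst in edges if src == node and r == rel]
    return [src for src, r, dst in edges if dst == node and r == rel]

print(follow(edges, "strat-q1", "serves"))                  # step 2
print(follow(edges, "strat-q1", "implements", "incoming"))  # step 4
# ['goal-10k-mau']
# ['spec-onboarding-v2', 'spec-analytics']
```

Everything else in the walk (multi-hop conflicts_with, the temporal diff) composes out of this primitive plus the snapshot machinery sketched earlier.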
Now the agent writes the Q2 strategy, informed by the graph context. But it doesn't just write a document — it wires the document into the graph as it creates it.
create_record:
type: document
title: "Product Strategy Q2 2026"
tags: [strategy, q2-2026]
body: [the strategy content, which references graph context throughout]
links:
- supersedes: strat-q1
- serves: [Goal: 10K MAU]
- serves: [Goal: Series A readiness]
- draws_from: [Customer Research: Feb update]
- draws_from: [Meeting: Board Review Jan 5]
- draws_from: [Decision: Focus on SMB]
- draws_from: [Constraint: SOC2 by June]
- conflicts_with: [Decision: Focus on SMB]
note: "Strategy proposes enterprise pilot alongside SMB focus;
this is in tension with pure SMB decision"
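A minimal sketch of what that create_record call might look like as an API: the node and all of its edges are written in one operation, so the new document is never an orphan, even for a moment. The store shape, field names, and ids are illustrative:

```python
# Create an artifact and wire its edges atomically (illustrative store).
def create_record(store, record_type, title, tags, body, links):
    record_id = f"rec-{len(store['nodes']) + 1}"
    store["nodes"][record_id] = {
        "type": record_type, "title": title, "tags": tags, "body": body}
    for rel, target, *note in links:
        store["edges"].append(
            {"src": record_id, "rel": rel, "dst": target,
             "note": note[0] if note else None})
    return record_id

store = {"nodes": {}, "edges": []}
rid = create_record(
    store, "document", "Product Strategy Q2 2026", ["strategy", "q2-2026"],
    "...",
    [("supersedes", "strat-q1"),
     ("serves", "goal-10k-mau"),
     ("conflicts_with", "decision-focus-smb",
      "proposes enterprise pilot alongside SMB focus")])
print(len(store["edges"]))  # 3
```

The design choice worth noting is that links are part of creation, not an afterthought: an artifact and its position in the graph are born together.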
The moment the strategy is created, the graph ripples:
- The SOC2 constraint, an orphan moments before, is now drawn_from by the strategy, making it more central to the graph.
- The Focus-on-SMB decision gains a conflicts_with edge from the new strategy. This surfaces a tension that needs resolution.

Before (simplified):
[Goal: 10K MAU] <--serves-- [Strategy Q1] --draws_from--> [Research: Jan]
|
implements
/ | \
[Spec: Onb] [Spec: Analytics] [Spec: API]
[Constraint: SOC2] (orphan — just created, not yet connected)
[Decision: Focus SMB] (connected to Q1 strategy)
After:
[Goal: 10K MAU] <--serves-- [Strategy Q2] --draws_from--> [Research: Feb]
| | | | |
serves | conflicts_with updates
| | | |
[Goal: Series A] <--serves----+ v [Research: Jan]
| [Decision: Focus SMB] |
supersedes draws_from
| |
[Strategy Q1] ----draws_from----------+
/ | \
[Spec: Onb] [Spec: Analytics] [Spec: API]
(flagged: check alignment with Q2)
[Constraint: SOC2] --drawn_from_by--> [Strategy Q2]
--conflicts_with--> [Spec: API]
(tension surfaced!)
The graph is denser, more connected, and more truthful. The tensions are visible. The provenance is clear. The supersession is recorded. An agent arriving tomorrow can look at this graph and understand not just what the strategy says, but why it says it, what changed, and what's unresolved.
Graph topology exposes patterns that are invisible in folder-based systems:
- supersedes chains with short lifespans: Thrashing. Decisions are being made and reversed rapidly. The graph shows the oscillation pattern.
- Cycles of conflicts_with: Three or more artifacts in a conflict cycle. This means there's a foundational disagreement that hasn't been resolved — it's being papered over by local decisions that contradict each other.

Agents don't need to talk to each other if the graph is rich enough. Agent A writes a spec. Agent B, tasked with implementation, doesn't need to message Agent A. It walks the graph from the spec: what does it implement? What does it draw_from? What conflicts_with it? What constraints apply?
This is the prior brainstorm's principle made concrete: the environment IS the coordination. Agents don't coordinate by exchanging messages; they coordinate by reading and writing to the shared graph. The edges are the messages.
The prior brainstorm identified decay as a feature. The graph makes decay principled rather than arbitrary. Instead of "archive everything older than 90 days," you can say:
- Archive artifacts whose serves chain has been completed (the goal was achieved, the strategy was superseded, the specs were implemented).
- Keep artifacts that are drawn_from by living artifacts, regardless of age (they're still load-bearing).
- Keep artifacts with unresolved conflicts_with edges until the tension is explicitly resolved (don't forget disagreements).

The graph topology tells you what's safe to forget.
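A sketch of those rules as a predicate, assuming the triple-store representation used in the earlier sketches (the names and the set of "living" artifacts are illustrative):

```python
# Graph-aware decay: archive only what is structurally safe to forget.
def safe_to_archive(node, edges, living):
    incoming = [(src, rel) for src, rel, dst in edges if dst == node]
    # Load-bearing: still drawn_from by a living artifact -> keep.
    if any(rel == "draws_from" and src in living for src, rel in incoming):
        return False
    # Unresolved tension: any conflicts_with edge -> keep.
    if any(rel == "conflicts_with"
           for src, rel, dst in edges if node in (src, dst)):
        return False
    return True

edges = [
    ("strategy-q2", "draws_from", "research-jan"),
    ("old-draft", "conflicts_with", "spec-api"),
]
living = {"strategy-q2"}
print(safe_to_archive("research-jan", edges, living))  # False
print(safe_to_archive("stale-note", edges, living))    # True
```

A real implementation would also check the completed-serves-chain rule, which needs status fields on goals and strategies rather than edges alone.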
The system can use graph structure to suggest edges. When an agent creates a new artifact:
- "This artifact looks related to a recent decision. Should it draw_from that decision?"
- "This artifact has no serves edge to any goal. Does it serve a goal, or is it speculative?"
- "This artifact is semantically close to an existing one. Are they related_to, or does one refine the other?"

These suggestions are cheap (semantic similarity + graph proximity) and high-value (they maintain graph quality without requiring agents to have perfect graph awareness).
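A toy sketch of the scoring, using tag overlap as a cheap stand-in for semantic similarity (the scoring formula, threshold, and names are all illustrative assumptions):

```python
# Suggest candidate edges for a new artifact by combining tag overlap
# (a stand-in for semantic similarity) with graph proximity.
def suggest_edges(new_artifact, artifacts, neighbors_of_recent):
    suggestions = []
    for other in artifacts:
        overlap = len(set(new_artifact["tags"]) & set(other["tags"]))
        proximity = 1 if other["id"] in neighbors_of_recent else 0
        score = overlap + proximity
        if score >= 2:  # illustrative threshold
            suggestions.append((other["id"], score))
    return sorted(suggestions, key=lambda s: -s[1])

new = {"id": "spec-x", "tags": ["payments", "q2"]}
others = [{"id": "decision-stripe", "tags": ["payments", "q2"]},
          {"id": "note-random", "tags": ["misc"]}]
print(suggest_edges(new, others, {"decision-stripe"}))
# [('decision-stripe', 3)]
```

In a real system the similarity term would come from embeddings and the proximity term from actual graph distance, but the shape of the heuristic is the same.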
Git tracks versions of files. The artifact graph tracks versions of ideas. A supersedes chain is a richer version history than a git log because it captures why something changed, not just that it changed.
Moreover, branching in git is about parallel development of the same artifact. is_variant_of in the graph is about intentional divergence — two artifacts that share lineage but serve different purposes. Git merges assume convergence; the graph allows permanent divergence.
Not all edges are equal. A draws_from edge where the source was read carefully and synthesized deeply is different from a draws_from edge where the source was skimmed. Edge weights — explicit (set by creator) or implicit (derived from how much of the source was referenced) — create an attention topology.
Heavily weighted paths through the graph are the "main storylines" of the workspace. Lightly weighted paths are footnotes. An agent navigating the graph should follow heavy edges first, light edges only when doing deep research.
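"Follow heavy edges first" is a best-first traversal ordered by edge weight. A sketch, assuming a weighted adjacency map with illustrative names and weights:

```python
import heapq

# Best-first walk over weighted edges: visit the strongest "storylines"
# before the footnotes, optionally pruning edges below a threshold.
edges = {
    "strategy": [("research-deep", 0.9), ("blog-skim", 0.1),
                 ("board-meeting", 0.7)],
    "research-deep": [("interviews", 0.8)],
}

def heavy_first_walk(edges, start, min_weight=0.0):
    # Max-heap on weight (heapq is a min-heap, so weights are negated).
    heap, seen, order = [(-1.0, start)], {start}, []
    while heap:
        neg_w, node = heapq.heappop(heap)
        order.append(node)
        for nbr, w in edges.get(node, []):
            if nbr not in seen and w >= min_weight:
                seen.add(nbr)
                heapq.heappush(heap, (-w, nbr))
    return order

print(heavy_first_walk(edges, "strategy", min_weight=0.5))
# ['strategy', 'research-deep', 'interviews', 'board-meeting']
```

Raising min_weight is the "skim" mode; dropping it to zero is the "deep research" mode that also follows the footnote edges.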
If I were building this as a product, I'd build it in phases:
Phase 1 — Graph Foundation
Phase 2 — Graph Navigation
Phase 3 — Graph Intelligence
Phase 4 — Graph as Primary Navigation
The key insight: you don't need to build all of this to start getting value. Just making edges visible and orphans conspicuous changes agent behavior. Agents start linking things because they can see when things aren't linked. The graph grows organically, and the emergent structure becomes navigable.
The folder tree is a crutch for a world where relationships are invisible. Make relationships visible, and the tree becomes unnecessary.