Spreadsheets are the most successful end-user programming environment ever created. VisiCalc didn't win because it was a calculator — it won because it made data tangible and manipulable. You could see your data, poke it, change one number and watch fifty others ripple. That sensation — that the information is right there, alive, responsive — is what every productivity tool since has tried to reproduce.
Airtable and Notion databases got partway there: they added relational structure, views, and formulas. But they're still designed for humans clicking buttons. The question is: what happens when you take the spreadsheet paradigm — structured data, computed values, reactive propagation, flexible views — and rebuild it for a world where agents are the primary operators?
The answer is not "a spreadsheet with an API." It's something more fundamental: every artifact in the workspace becomes a queryable, computable object with structured internals, and the workspace itself becomes a programmable data surface.
A Google Doc is an opaque blob. You can search for words in it, but you can't ask "show me all strategy docs where the recommended approach involves partnerships" without building a separate search index, training an NLP model, or having a human tag things. The content is locked inside a format designed for rendering, not reasoning.
What if every artifact had a structured data layer alongside its human-readable surface? Not metadata about the doc — structure within the doc.
Consider a decision record. Today it's markdown with headers. In the spreadsheet paradigm, it's a record with typed, queryable fields:
Record: "Adopt event-driven architecture for notifications"
Type: decision
Data:
context: "Growing notification volume causing synchronous bottlenecks..."
options_considered:
- name: "Event-driven (SNS/SQS)"
pros: ["Decoupled", "Scalable"]
cons: ["Complexity", "Eventual consistency"]
estimated_effort: "3 sprints"
- name: "Polling with batch processing"
pros: ["Simple"]
cons: ["Latency", "Resource waste"]
estimated_effort: "1 sprint"
chosen_option: "Event-driven (SNS/SQS)"
rationale: "Notification volume projected to 10x in 6 months..."
risk_factors: ["team-unfamiliarity", "debugging-complexity"]
reversibility: "medium"
stakeholders: ["eng-lead", "platform-team"]
review_date: "2026-06-01"
The body field can still hold a beautifully written narrative. But the data field makes the artifact's meaning addressable. An agent (or a human, through a view) can now:
query_record(type: 'decision', where: { data.risk_factors: { hasAny: ['team-unfamiliarity'] } }) — "Show me all decisions where team unfamiliarity is a risk factor."

The workspace already has data as a JSONB field on every record, and _schema for validation. The leap is to treat this not as optional metadata but as the primary queryable surface of the artifact. The body remains the human-readable rendering; the data is the machine-readable truth.
The key design choice: data fields are first-class, not afterthoughts. When an agent creates a decision record, it populates both the narrative body AND the structured data. When a human reads it, they see the narrative. When an agent queries it, it reads the data. Same artifact, two access patterns.
The workspace query DSL already supports filtering on status, tags, type, and dates. Extending it to support data.* field queries (with JSON path expressions) would unlock the full spreadsheet-as-database vision. Every artifact becomes a row in a workspace-wide queryable table.
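One way to picture the extension: translate a data.* filter clause into a Postgres JSONB predicate. This is a sketch under assumptions — the DSL operators (hasAny, eq) and the records.data column are illustrative, not an existing API.

```python
# Sketch: translating a hypothetical data.* filter into a Postgres JSONB
# predicate. The operators ("hasAny", "eq") and the `data` column name
# are assumptions for illustration.

def jsonb_predicate(field: str, op: str, value) -> tuple[str, list]:
    """Return (SQL fragment, params) for one data.* filter clause."""
    path = field.removeprefix("data.")  # e.g. "risk_factors"
    if op == "eq":
        # scalar comparison: data->>'reversibility' = 'medium'
        return (f"data->>'{path}' = %s", [value])
    if op == "hasAny":
        # JSONB array overlap: data->'risk_factors' ?| array[...]
        return (f"data->'{path}' ?| %s", [list(value)])
    raise ValueError(f"unsupported operator: {op}")

sql, params = jsonb_predicate("data.risk_factors", "hasAny", ["team-unfamiliarity"])
print(sql)  # data->'risk_factors' ?| %s
```

Each supported operator becomes one small translation rule, which is what makes the DSL extension incremental rather than a rewrite.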
In a spreadsheet, cell B1 might contain =SUM(A1:A10). Its value is not stored — it's derived. Change any input cell and B1 updates. This is the most powerful idea in the spreadsheet: some values exist only as computations over other values.
What's the artifact equivalent? An artifact whose content is defined not by what someone wrote, but by a query over other artifacts.
Team Health Dashboard (computed artifact):
Record: "Engineering Team Health — February 2026"
Type: note
Computation:
source_query:
type: ["task", "goal"]
ancestor_id: "eng-workstream-id"
status: { notIn: ["closed"] }
aggregations:
- name: "tasks_completed_this_week"
filter: { status: "completed", completed_at: { gte: "2026-02-09" } }
operation: "count"
- name: "tasks_blocked"
filter: { status: "blocked" }
operation: "count"
- name: "overdue_items"
filter: { due_at: { lt: "NOW()" }, status: { notIn: ["completed", "closed"] } }
operation: "count"
- name: "avg_cycle_time_days"
filter: { status: "completed", completed_at: { gte: "2026-02-01" } }
operation: "avg"
field: "duration_days"
template: |
## Engineering Health — Week of {{week_start}}
- **Completed**: {{tasks_completed_this_week}} tasks
- **Blocked**: {{tasks_blocked}} ({{blocked_details}})
- **Overdue**: {{overdue_items}}
- **Avg cycle time**: {{avg_cycle_time_days}} days
### Trends
{{trend_chart}}
This artifact doesn't have an author in the traditional sense. It has a definition. Every time it's accessed, it re-evaluates. Or it's recomputed on a schedule. Or it recomputes when any source artifact changes (reactive).
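The evaluation loop behind a definition like the one above is small. A minimal sketch, assuming records are plain dicts and supporting only the count and avg operations from the example (the fixture records are made up):

```python
# Sketch: evaluating a computed artifact's aggregations over record dicts.
# Only exact-equality filters and count/avg are implemented; gte/lt and
# templating would layer on the same shape.
from statistics import mean

def matches(record: dict, flt: dict) -> bool:
    """Tiny filter matcher: exact equality only."""
    return all(record.get(k) == v for k, v in flt.items())

def evaluate(records: list[dict], aggregations: list[dict]) -> dict:
    out = {}
    for agg in aggregations:
        hits = [r for r in records if matches(r, agg["filter"])]
        if agg["operation"] == "count":
            out[agg["name"]] = len(hits)
        elif agg["operation"] == "avg":
            vals = [r[agg["field"]] for r in hits]
            out[agg["name"]] = mean(vals) if vals else None
    return out

records = [
    {"status": "completed", "duration_days": 3},
    {"status": "completed", "duration_days": 5},
    {"status": "blocked"},
]
aggs = [
    {"name": "tasks_blocked", "filter": {"status": "blocked"}, "operation": "count"},
    {"name": "avg_cycle_time_days", "filter": {"status": "completed"},
     "operation": "avg", "field": "duration_days"},
]
print(evaluate(records, aggs))
```

The resulting dict is exactly what the template interpolates: {{tasks_blocked}} and {{avg_cycle_time_days}} are just keys in this output.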
Risk Register (computed artifact):
Computation:
source_query:
type: "decision"
where: { data.risk_factors: { isEmpty: false } }
group_by: "data.risk_factors"
output: |
For each risk factor, list:
- All decisions that cite it
- Whether mitigations were documented
- Current status of related tasks
- Overall risk trend (increasing/stable/decreasing)
Stale Decisions Report (computed artifact):
Computation:
source_query:
type: "decision"
where: { data.review_date: { lt: "NOW()" } }
output: |
Decisions past their review date, sorted by staleness.
For each: original context, what's changed since, recommended action.
A computed artifact has three parts: a source query (which artifacts feed it), aggregations (what is computed over them), and a template (how the result renders).
This is precisely a database view. The difference is that the "view" can render as prose, charts, tables, or any artifact format. A computed artifact that renders as a slide deck is a presentation that updates itself. A computed artifact that renders as a checklist is a living action tracker.
The workspace already has the query infrastructure. The missing piece is a computation field on records that defines "this artifact's content is derived from a query, not directly authored." The simplest version: a draws_from link to a saved query, plus a template. The full version: a reactive computation graph.
In Airtable, the same table can be viewed as a grid, a kanban board, a calendar, a gallery, or a form. The underlying data doesn't change — only the lens. This is the insight: creating a new view should never require duplicating data.
The workspace already embodies this somewhat — records have a canonical existence, and different queries surface them differently. But the view concept can go much further.
1. The Dependency Graph View
Agents create artifacts with rich relationship links: depends_on, blocks, implements, draws_from. The graph is the natural view.
[Market Research] --draws_from--> [Competitive Analysis]
| |
implements draws_from
| |
v v
[Product Strategy] --implements--> [Q2 Roadmap]
| |
serves depends_on
| |
v v
[Company Objective: Market Leadership] [API Redesign Spec]
This is not a feature to build separately — it's an automatic view derivable from the existing link structure. Every record is a node; every link is an edge. The view renders the subgraph reachable from any starting point.
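Deriving that view really is just reachability over the link table. A sketch, with made-up record IDs and an in-memory list standing in for the workspace's link storage:

```python
# Sketch: the dependency-graph view as reachability over (from, type, to)
# link tuples. The record IDs and links are illustrative fixtures.
from collections import deque

links = [  # (from, link_type, to)
    ("market-research", "draws_from", "competitive-analysis"),
    ("market-research", "implements", "product-strategy"),
    ("product-strategy", "implements", "q2-roadmap"),
    ("q2-roadmap", "depends_on", "api-redesign-spec"),
]

def subgraph(start: str) -> list[tuple[str, str, str]]:
    """All edges reachable from `start`, following links outward (BFS)."""
    out, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        for src, kind, dst in links:
            if src == node:
                out.append((src, kind, dst))
                if dst not in seen:
                    seen.add(dst)
                    queue.append(dst)
    return out

for src, kind, dst in subgraph("market-research"):
    print(f"[{src}] --{kind}--> [{dst}]")
```

Rendering is a separate concern: the same edge list can be drawn as ASCII, a graph visualization, or an indented outline.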
2. The Timeline/Gantt View
Records with due_at and depends_on links form a natural project timeline. The view computes critical path automatically. When an agent updates a task's due date, the view shows downstream impact.
3. The Evidence Trail View
For any claim in any artifact, show the chain: "This market size figure in the strategy deck → cited from competitive analysis → sourced from market research artifact → which draws_from original data." It's the draws_from chain rendered as a provenance tree.
4. The Pivot Table View
This is the "pivot table for knowledge work." Take a flat set of records and pivot on any data field:
Pivot: All decision records
Rows: data.risk_factors (expanded)
Columns: data.reversibility
Values: COUNT
| Risk Factor        | High Rev. | Medium Rev. | Low Rev. |
|--------------------|-----------|-------------|----------|
| team-unfamiliarity | 2         | 3           | 1        |
| scale-risk         | 1         | 1           | 4        |
| vendor-dependency  | 0         | 2           | 2        |
An agent can generate this view on demand: "Pivot all our decisions by risk factor and reversibility." A human can save it as a named view. The insight is that structured data fields on artifacts make pivoting possible — you can't pivot on prose.
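The pivot itself is a plain nested count over data fields. A sketch with fixture decision records — note how a record with two risk factors lands in two row buckets, which is what "Rows: data.risk_factors (expanded)" means:

```python
# Sketch: pivoting decision records on data fields. Records are fixtures;
# the pivot is COUNT per (risk_factor, reversibility) cell.
from collections import Counter

decisions = [
    {"risk_factors": ["team-unfamiliarity"], "reversibility": "medium"},
    {"risk_factors": ["team-unfamiliarity", "scale-risk"], "reversibility": "low"},
    {"risk_factors": ["vendor-dependency"], "reversibility": "medium"},
]

def pivot(records: list[dict]) -> Counter:
    """Expand the array-valued row field: each risk factor counts once
    per record that cites it."""
    cells = Counter()
    for r in records:
        for factor in r["risk_factors"]:
            cells[(factor, r["reversibility"])] += 1
    return cells

table = pivot(decisions)
print(table[("team-unfamiliarity", "medium")])  # 1
```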
5. The Diff/Change View
Not a snapshot, but a view of how things changed. "What changed in the competitive landscape since last month?" This is a temporal diff across a set of artifacts — possible because the workspace has event history on every record.
Views are not features — they're natural consequences of structured data + relationships. If every artifact has typed data fields and typed links to other artifacts, then every useful view is just a query + renderer. The workspace doesn't need to ship 12 view types. It needs to ship a query engine and a rendering layer, and let agents (or users) compose views on demand.
Spreadsheets have cell types: number, date, currency, percentage. These prevent a class of errors (you can't accidentally sum text). What's the equivalent for agent-created artifacts?
The workspace already has _schema on container records — a JSON Schema that validates child data. This is the foundation. The question is how far to push it.
Level 1: Advisory Schema ("you should have these fields")
{
"enforcement": "advisory",
"schema": {
"properties": {
"context": { "type": "string", "description": "Background that led to this decision" },
"options_considered": { "type": "array", "minItems": 2 },
"chosen_option": { "type": "string" },
"rationale": { "type": "string" }
}
}
}
Agent creates a decision without listing options? Warning in the event log, but creation succeeds. This is the spreadsheet equivalent of conditional formatting — yellow highlight on suspicious values.
Level 2: Soft Schema ("you must have these fields, but types are flexible")
{
"enforcement": "soft",
"schema": {
"required": ["context", "options_considered", "chosen_option", "rationale"],
"properties": {
"options_considered": { "type": "array", "minItems": 2 },
"chosen_option": { "type": "string" }
}
}
}
Creation fails if required fields are missing. But the agent can put anything in context — a string, an object, whatever. This is the spreadsheet equivalent of required cells.
Level 3: Hard Schema ("strict types and structure")
{
"enforcement": "hard",
"schema": {
"required": ["context", "options_considered", "chosen_option", "rationale", "risk_factors", "reversibility"],
"properties": {
"reversibility": { "type": "string", "enum": ["high", "medium", "low"] },
"risk_factors": { "type": "array", "items": { "type": "string" } },
"review_date": { "type": "string", "format": "date" },
"options_considered": {
"type": "array",
"minItems": 2,
"items": {
"type": "object",
"required": ["name", "pros", "cons"],
"properties": {
"name": { "type": "string" },
"pros": { "type": "array", "items": { "type": "string" } },
"cons": { "type": "array", "items": { "type": "string" } }
}
}
}
}
}
}
This is a proper database table definition. Every decision record in this collection must have exactly this shape. An agent that tries to create a sloppy decision gets rejected.
Humans resist schema. They find rigid forms annoying. But agents thrive on schema. A schema tells the agent exactly what's expected. It's not a constraint — it's a contract. The agent knows that if it produces data matching the schema, the artifact will be accepted, queryable, and interoperable with views and computations that depend on that shape.
Schema also prevents the "chaos at a higher frame rate" problem. Without schema, an agent producing 100 artifacts per hour produces 100 uniquely-shaped blobs that no query can reliably process. With schema, those 100 artifacts are 100 well-formed rows in a queryable table.
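The three enforcement levels can be sketched as one validator with a switch on enforcement. This is a minimal sketch, not the workspace's actual validation code: only required and enum checks are implemented, and richer checks (types, minItems, nested items) would layer on the same shape.

```python
# Sketch: a minimal validator for the three enforcement levels.
# advisory -> problems become warnings, creation succeeds;
# soft/hard -> problems reject the record.

def validate(data: dict, schema: dict, enforcement: str) -> tuple[bool, list[str]]:
    problems = []
    for field in schema.get("required", []):
        if field not in data:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in data and "enum" in spec and data[field] not in spec["enum"]:
            problems.append(f"{field}: {data[field]!r} not in {spec['enum']}")
    accepted = not problems or enforcement == "advisory"
    return accepted, problems

schema = {
    "required": ["context", "chosen_option"],
    "properties": {"reversibility": {"enum": ["high", "medium", "low"]}},
}
ok, warnings = validate({"context": "..."}, schema, "advisory")
print(ok, warnings)   # accepted, with a warning about chosen_option
ok, errors = validate({"context": "..."}, schema, "hard")
print(ok, errors)     # rejected — creation would fail
```

In practice a full JSON Schema validator would replace the hand-rolled checks; the enforcement switch is the part that's workspace-specific.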
Spreadsheets handle schema evolution poorly — add a column and old formulas break. The workspace can do better:
Records carry a _schema_version, and queries can filter by version, so old records stay valid under their original schema while new records adopt the updated shape.

This is where the agent-as-power-user shines: "Migrate all competitive analysis records from schema v1 to v2, inferring the new market_position field from the existing description." A human would groan at updating 50 records. An agent does it in seconds.
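A migration pass of that kind can be sketched as a pure function over records. The record shape, the market_position field, and the inference heuristic are all hypothetical — an agent would infer the field from the prose far better than this string check:

```python
# Sketch: a v1 -> v2 migration pass of the kind an agent would run.
# Record shape and the inference heuristic are stand-ins.

def migrate_v1_to_v2(record: dict) -> dict:
    data = dict(record["data"])  # copy; leave the original record untouched
    if "market_position" not in data:
        # naive inference from existing prose; a real agent would do better
        desc = data.get("description", "").lower()
        data["market_position"] = "leader" if "leading" in desc else "challenger"
    return {**record, "data": data, "_schema_version": 2}

old = {"data": {"description": "A leading vendor in EU logistics"}, "_schema_version": 1}
new = migrate_v1_to_v2(old)
print(new["_schema_version"], new["data"]["market_position"])  # 2 leader
```

Because the function is pure, the agent can dry-run it across all 50 records, show a diff for review, and only then write the migrated versions back.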
The most powerful spreadsheet feature: =A1 + B1. Change A1, and the cell updates. Now scale this to artifacts.
The workspace already has [[shortId]] mention syntax for linking records. The leap is from static links to live references — content that re-evaluates when the source changes.
Static reference (what exists today):
Our market size estimate is $4.2B (see @abc123).
The link exists, but if the market research artifact updates its estimate to $5.1B, this text is stale and nobody knows.
Live reference (the spreadsheet paradigm):
Our market size estimate is {{abc123.data.market_size_estimate}} (see @abc123).
When the market research artifact updates, this value updates. Or more practically: the system flags that the referencing artifact contains a stale reference and surfaces it for review.
Full live references (auto-updating prose) are probably wrong for knowledge work — you don't want a strategy doc changing under someone while they read it. But staleness propagation is exactly right.
The model:
1. An artifact references another (via a draws_from link or [[shortId]] mention).
2. When the source changes, the system attaches a stale_reference marker to the referencing artifact, pointing to the changed source.
3. The stale reference is surfaced for review rather than silently re-evaluated.

This is the INDIRECT() function of knowledge work: not automatic propagation, but automatic detection of broken references.
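The detection step is a reverse-index lookup. A minimal sketch, assuming an in-memory map from each referencing record to its sources (record IDs are made up; the stale set stands in for a stale_reference marker on the record):

```python
# Sketch: staleness detection on source change. `links` maps a referencing
# record to the records it draws_from or mentions; detection is flat, not
# transitive — matching "detection, not propagation" in the model above.
links = {
    "strategy-doc": ["market-research"],
    "q2-roadmap": ["strategy-doc"],
}
stale: dict[str, set[str]] = {}

def on_source_changed(source_id: str) -> None:
    """Mark every record that references `source_id` as stale."""
    for record_id, sources in links.items():
        if source_id in sources:
            stale.setdefault(record_id, set()).add(source_id)

on_source_changed("market-research")
print(stale)  # {'strategy-doc': {'market-research'}}
```

Note that q2-roadmap is not flagged until strategy-doc itself changes: a reviewer clearing the stale marker on strategy-doc is what decides whether the change ripples further.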
Aggregation formula: A project status artifact whose "percent complete" field is computed as COUNT(children WHERE status = 'completed') / COUNT(children).
Conditional formula: A risk assessment whose severity level is computed from the data fields of linked decision records: IF(any linked decision has reversibility = 'low' AND risk_factors includes 'vendor-dependency', THEN severity = 'high').
Temporal formula: A "velocity" field on a project that computes COUNT(descendants WHERE type = 'task' AND completed_at > NOW() - 7d) — tasks completed in the last week.
Cross-artifact validation: A constraint record that checks whether all tasks implementing a goal actually have test plans. The constraint's "satisfied" field is computed: EVERY(tasks WHERE implements goal_X, HAS data.test_plan IS NOT NULL).
These aren't sci-fi features. They're compositions of existing workspace capabilities: queries, data fields, relationships, and status tracking. The infrastructure exists. The leap is treating certain data fields as computed rather than stored.
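Two of the formulas above, written out as plain functions over record dicts — a sketch with hypothetical field names mirroring the examples, not a workspace API:

```python
# Sketch: the aggregation and conditional formulas as functions over
# record dicts. Field names mirror the examples above; records are fixtures.

def percent_complete(children: list[dict]) -> float:
    """COUNT(children WHERE status = 'completed') / COUNT(children)."""
    done = sum(1 for c in children if c["status"] == "completed")
    return done / len(children) if children else 0.0

def severity(linked_decisions: list[dict]) -> str:
    """IF any linked decision has reversibility = 'low' AND cites
    'vendor-dependency' THEN 'high' ELSE 'normal'."""
    for d in linked_decisions:
        if (d["reversibility"] == "low"
                and "vendor-dependency" in d["risk_factors"]):
            return "high"
    return "normal"

tasks = [{"status": "completed"}, {"status": "completed"},
         {"status": "blocked"}, {"status": "open"}]
print(percent_complete(tasks))  # 0.5
```

Marking such a field as computed means the workspace re-runs the function on read (or on source change) instead of trusting a stored value.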
Power users don't just enter data. They run batch operations, write queries that surface insights, and automate maintenance.
Batch Operations Across Hundreds of Artifacts
A human can update one record at a time. An agent can apply the same change across hundreds of records in a single pass.
This is the macro equivalent: operations that a human could do one at a time but would take days.
Complex Queries That Surface Insights
Such queries are the pivot tables of knowledge work: not just viewing data differently, but revealing structure that was invisible.
Automated Maintenance Routines
The spreadsheet equivalent of a scheduled macro:
- Sweep for records with transience: 'short' that haven't been updated in 14 days. Flag or archive.
- Validate records against their collection's _schema. Report which records fail validation (possible after a schema evolution).
- Find [[shortId]] mentions that point to archived or deleted records. Surface broken references.

The Meta-Operation: Agents Writing Queries for Other Agents
The most powerful spreadsheet users write formulas that generate other formulas. The agent equivalent: an orchestrating agent that defines computed artifacts, schemas, and maintenance routines — building the workspace's infrastructure, not just its content.
"Set up a competitive intelligence collection with: a schema requiring competitor name, market segment, last-updated date, and key metrics. A computed artifact that shows which competitors haven't been updated in 30 days. A weekly maintenance task that checks for public announcements from each tracked competitor."
The agent isn't creating content — it's creating the system that creates and maintains content.
Collection: Competitive Landscape
Record: "Competitive Landscape"
Type: collection
Schema (on children):
enforcement: "soft"
schema:
required: ["company_name", "market_segment", "key_products", "assessment"]
properties:
company_name: { type: "string" }
market_segment: { type: "string", enum: ["direct", "adjacent", "potential"] }
key_products:
type: array
items:
type: object
properties:
name: { type: string }
overlap_area: { type: string }
threat_level: { type: string, enum: ["high", "medium", "low"] }
founding_year: { type: number }
funding_stage: { type: string }
estimated_arr: { type: string }
key_differentiators: { type: array, items: { type: string } }
weaknesses: { type: array, items: { type: string } }
recent_moves:
type: array
items:
type: object
properties:
date: { type: string, format: date }
description: { type: string }
significance: { type: string, enum: ["high", "medium", "low"] }
assessment:
type: object
properties:
overall_threat: { type: string, enum: ["critical", "high", "medium", "low"] }
trajectory: { type: string, enum: ["growing", "stable", "declining"] }
our_advantage: { type: string }
our_vulnerability: { type: string }
last_deep_review: { type: string, format: date }
Individual Competitor Record:
Record: "Acme Corp"
Type: note
Parent: Competitive Landscape collection
Data:
company_name: "Acme Corp"
market_segment: "direct"
key_products:
- name: "Acme Workspace"
overlap_area: "Team collaboration"
threat_level: "high"
- name: "Acme AI Assistant"
overlap_area: "Agent-based workflows"
threat_level: "medium"
founding_year: 2019
funding_stage: "Series C"
estimated_arr: "$45M"
key_differentiators:
- "Strong enterprise sales motion"
- "Deep Slack integration"
- "SOC2 Type II certified"
weaknesses:
- "Agent features feel bolted-on, not native"
- "No structured data/schema support"
- "Slow to ship — quarterly release cycle"
recent_moves:
- date: "2026-01-15"
description: "Launched 'AI Copilot' feature for workspace search"
significance: "medium"
- date: "2026-02-01"
description: "Announced partnership with Anthropic for embedded Claude"
significance: "high"
assessment:
overall_threat: "high"
trajectory: "growing"
our_advantage: "Agent-native architecture vs. bolted-on AI"
our_vulnerability: "Their enterprise GTM and existing customer base"
last_deep_review: "2026-02-10"
Body: |
## Acme Corp — Competitive Profile
Acme is our most direct competitor in the team workspace space. Founded in 2019,
they've built a strong enterprise presence with ~$45M ARR and a Series C behind them.
### What They Do Well
Their Slack integration is best-in-class, and their enterprise sales motion is
mature. SOC2 certification gives them credibility in regulated industries.
### Where They're Vulnerable
Their AI features feel bolted on. "Acme AI Copilot" is essentially a search
wrapper — no structured data understanding, no agent-native workflows, no
schema validation. They can't do what we can do with computed artifacts.
### Recent Activity
The Anthropic partnership (Feb 2026) is significant — it signals they're serious
about AI, but embedding a general-purpose LLM into a workspace designed for
humans doesn't make it agent-native.
### Assessment
High threat, growing trajectory. Our structural advantage is real but time-boxed:
if they rebuild for agents, the enterprise GTM advantage could be decisive.
Comparison Table View (query: all children of Competitive Landscape, render as table):
| Competitor | Segment | Threat | Trajectory | Our Advantage | Last Reviewed |
|---|---|---|---|---|---|
| Acme Corp | Direct | High | Growing | Agent-native arch | Feb 10 |
| Beta Inc | Direct | Medium | Stable | Schema/validation | Jan 28 |
| Gamma AI | Adjacent | Medium | Growing | Artifact lifecycle | Feb 05 |
| Delta Tools | Potential | Low | Declining | — | Dec 15 |
This is just query_record(parent_id: competitive_landscape_id) with the data fields projected as columns. Any agent can generate it; any human can read it.
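Projecting data fields into that table is a few lines. A sketch with fixture records and illustrative column choices (the query itself is assumed to have already returned the children):

```python
# Sketch: rendering queried records as a markdown table by projecting
# chosen data fields into columns. Records and columns are illustrative.

def render_table(records: list[dict], columns: list[str]) -> str:
    header = "| " + " | ".join(columns) + " |"
    sep = "|" + "---|" * len(columns)
    rows = ["| " + " | ".join(str(r.get(c, "—")) for c in columns) + " |"
            for r in records]
    return "\n".join([header, sep, *rows])

competitors = [
    {"name": "Acme Corp", "segment": "direct", "threat": "high"},
    {"name": "Beta Inc", "segment": "direct", "threat": "medium"},
]
print(render_table(competitors, ["name", "segment", "threat"]))
```

The renderer is deliberately dumb: all the intelligence lives in the query and the structured data, which is the point of views-as-queries.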
Threat Matrix View (pivot on market_segment x overall_threat):
| Segment   | Critical | High | Medium | Low |
|-----------|----------|------|--------|-----|
| Direct    | 0        | 1    | 1      | 0   |
| Adjacent  | 0        | 0    | 1      | 0   |
| Potential | 0        | 0    | 0      | 1   |
Narrative Summary View (computed artifact, re-generated on demand):
"As of February 2026, we track 4 competitors across 3 segments. Our most significant threat is Acme Corp (direct, high threat, growing), whose recent Anthropic partnership signals increased AI investment. Our structural advantage — agent-native architecture — remains strong against all tracked competitors, none of whom have schema validation, computed artifacts, or structured data fields. Key vulnerability: Acme's enterprise GTM could outpace our product advantage. One competitor (Delta Tools) has not been reviewed in 60+ days and may warrant archival or re-assessment."
This narrative is not hand-written. It's generated from the structured data of all competitor records. Update any competitor's data, and the summary can be regenerated to reflect the change.
Staleness Alert View (computed, auto-flagging):
Query: children of Competitive Landscape WHERE
data.last_deep_review < NOW() - 30d
OR data.recent_moves is empty
OR (data.assessment.trajectory = 'growing' AND last_activity_at < NOW() - 14d)
Result:
⚠ Delta Tools — last reviewed Dec 15, 62 days ago
⚠ Gamma AI — trajectory 'growing' but no update in 16 days
This is the conditional formatting of knowledge work: rules that surface artifacts needing attention, applied automatically across the collection.
Initial Creation (agent action):
- Research each competitor, populate the structured data fields and the narrative body, record initial recent_moves, re-assess trajectory.

Ongoing Maintenance (agent action):
- Weekly scan for public announcements; update the recent_moves field on records where new activity is found.
- When a recent_move with significance: 'high' is added, agent re-evaluates assessment.overall_threat and assessment.trajectory.
- Periodic trend review: runs get_record_at for each competitor, computes diffs, generates a trend analysis.

The Key Difference from Airtable: In Airtable, a human fills in the table. The structure helps, but the human does all the work. In the agent-native workspace, the agent populates the structured data, generates the narrative, maintains freshness, and surfaces insights. The human reviews, directs, and decides. The structured data isn't extra work — it's what makes automation possible.
The spreadsheet succeeded because it made three things simultaneously true: the data was visible, the computation was live, and the feedback was immediate.

The agent-native workspace, seen through this lens, needs to make three parallel things true:

- Visible, structured data: the data JSONB field on every record is the foundation. Schema enforcement ensures consistency. Queries across artifacts surface patterns invisible to any single document.
- Live computation: computed artifacts derive their content from queries over other artifacts instead of being directly authored.
- Immediate feedback: the draws_from link type plus staleness propagation gives you the essential feedback loop.

The most provocative implication: the workspace IS the database. Not "the workspace has a database behind it" — the workspace, from the agent's perspective, is a structured, queryable, programmable data surface where every artifact is a record, every relationship is a foreign key, and every view is a query. The human-readable renderings (narratives, decks, dashboards) are views — not the underlying reality.
Spreadsheets didn't win by being better calculators. They won by giving everyone a programmable data surface. The agent-native workspace wins the same way: not by being a better document editor, but by giving agents (and the humans who direct them) a programmable artifact surface where knowledge work is structured, queryable, and alive.
Spreadsheets are the most successful end-user programming environment ever created. VisiCalc didn't win because it was a calculator — it won because it made data tangible and manipulable. You could see your data, poke it, change one number and watch fifty others ripple. That sensation — that the information is right there, alive, responsive — is what every productivity tool since has tried to reproduce.
Airtable and Notion databases got partway there: they added relational structure, views, and formulas. But they're still designed for humans clicking buttons. The question is: what happens when you take the spreadsheet paradigm — structured data, computed values, reactive propagation, flexible views — and rebuild it for a world where agents are the primary operators?
The answer is not "a spreadsheet with an API." It's something more fundamental: every artifact in the workspace becomes a queryable, computable object with structured internals, and the workspace itself becomes a programmable data surface.
A Google Doc is an opaque blob. You can search for words in it, but you can't ask "show me all strategy docs where the recommended approach involves partnerships" without building a separate search index, training an NLP model, or having a human tag things. The content is locked inside a format designed for rendering, not reasoning.
What if every artifact had a structured data layer alongside its human-readable surface? Not metadata about the doc — structure within the doc.
Consider a decision record. Today it's markdown with headers. In the spreadsheet paradigm, it's a record with typed, queryable fields:
Record: "Adopt event-driven architecture for notifications"
Type: decision
Data:
context: "Growing notification volume causing synchronous bottlenecks..."
options_considered:
- name: "Event-driven (SNS/SQS)"
pros: ["Decoupled", "Scalable"]
cons: ["Complexity", "Eventual consistency"]
estimated_effort: "3 sprints"
- name: "Polling with batch processing"
pros: ["Simple"]
cons: ["Latency", "Resource waste"]
estimated_effort: "1 sprint"
chosen_option: "Event-driven (SNS/SQS)"
rationale: "Notification volume projected to 10x in 6 months..."
risk_factors: ["team-unfamiliarity", "debugging-complexity"]
reversibility: "medium"
stakeholders: ["eng-lead", "platform-team"]
review_date: "2026-06-01"
The body field can still hold a beautifully written narrative. But the data field makes the artifact's meaning addressable. An agent (or a human, through a view) can now:
query_record(type: 'decision', where: { data.risk_factors: { hasAny: ['team-unfamiliarity'] } }) — "Show me all decisions where team unfamiliarity is a risk factor."The workspace already has data as a JSONB field on every record, and _schema for validation. The leap is to treat this not as optional metadata but as the primary queryable surface of the artifact. The body remains the human-readable rendering; the data is the machine-readable truth.
The key design choice: data fields are first-class, not afterthoughts. When an agent creates a decision record, it populates both the narrative body AND the structured data. When a human reads it, they see the narrative. When an agent queries it, it reads the data. Same artifact, two access patterns.
The workspace query DSL already supports filtering on status, tags, type, and dates. Extending it to support data.* field queries (with JSON path expressions) would unlock the full spreadsheet-as-database vision. Every artifact becomes a row in a workspace-wide queryable table.
In a spreadsheet, cell B1 might contain =SUM(A1:A10). Its value is not stored — it's derived. Change any input cell and B1 updates. This is the most powerful idea in the spreadsheet: some values exist only as computations over other values.
What's the artifact equivalent? An artifact whose content is defined not by what someone wrote, but by a query over other artifacts.
Team Health Dashboard (computed artifact):
Record: "Engineering Team Health — February 2026"
Type: note
Computation:
source_query:
type: ["task", "goal"]
ancestor_id: "eng-workstream-id"
status: { notIn: ["closed"] }
aggregations:
- name: "tasks_completed_this_week"
filter: { status: "completed", completed_at: { gte: "2026-02-09" } }
operation: "count"
- name: "tasks_blocked"
filter: { status: "blocked" }
operation: "count"
- name: "overdue_items"
filter: { due_at: { lt: "NOW()" }, status: { notIn: ["completed", "closed"] } }
operation: "count"
- name: "avg_cycle_time_days"
filter: { status: "completed", completed_at: { gte: "2026-02-01" } }
operation: "avg"
field: "duration_days"
template: |
## Engineering Health — Week of {{week_start}}
- **Completed**: {{tasks_completed_this_week}} tasks
- **Blocked**: {{tasks_blocked}} ({{blocked_details}})
- **Overdue**: {{overdue_items}}
- **Avg cycle time**: {{avg_cycle_time_days}} days
### Trends
{{trend_chart}}
This artifact doesn't have an author in the traditional sense. It has a definition. Every time it's accessed, it re-evaluates. Or it's recomputed on a schedule. Or it recomputes when any source artifact changes (reactive).
Risk Register (computed artifact):
Computation:
source_query:
type: "decision"
where: { data.risk_factors: { isEmpty: false } }
group_by: "data.risk_factors"
output: |
For each risk factor, list:
- All decisions that cite it
- Whether mitigations were documented
- Current status of related tasks
- Overall risk trend (increasing/stable/decreasing)
Stale Decisions Report (computed artifact):
Computation:
source_query:
type: "decision"
where: { data.review_date: { lt: "NOW()" } }
output: |
Decisions past their review date, sorted by staleness.
For each: original context, what's changed since, recommended action.
A computed artifact has three parts:
This is precisely a database view. The difference is that the "view" can render as prose, charts, tables, or any artifact format. A computed artifact that renders as a slide deck is a presentation that updates itself. A computed artifact that renders as a checklist is a living action tracker.
The workspace already has the query infrastructure. The missing piece is a computation field on records that defines "this artifact's content is derived from a query, not directly authored." The simplest version: a draws_from link to a saved query, plus a template. The full version: a reactive computation graph.
In Airtable, the same table can be viewed as a grid, a kanban board, a calendar, a gallery, or a form. The underlying data doesn't change — only the lens. This is the insight: creating a new view should never require duplicating data.
The workspace already embodies this somewhat — records have a canonical existence, and different queries surface them differently. But the view concept can go much further.
1. The Dependency Graph View
Agents create artifacts with rich relationship links: depends_on, blocks, implements, draws_from. The graph is the natural view.
[Market Research] --draws_from--> [Competitive Analysis]
| |
implements draws_from
| |
v v
[Product Strategy] --implements--> [Q2 Roadmap]
| |
serves depends_on
| |
v v
[Company Objective: Market Leadership] [API Redesign Spec]
This is not a feature to build separately — it's an automatic view derivable from the existing link structure. Every record is a node; every link is an edge. The view renders the subgraph reachable from any starting point.
2. The Timeline/Gantt View
Records with due_at and depends_on links form a natural project timeline. The view computes critical path automatically. When an agent updates a task's due date, the view shows downstream impact.
3. The Evidence Trail View
For any claim in any artifact, show the chain: "This market size figure in the strategy deck → cited from competitive analysis → sourced from market research artifact → which draws_from original data." It's the draws_from chain rendered as a provenance tree.
4. The Pivot Table View This is the "pivot table for knowledge work." Take a flat set of records and pivot on any data field:
Pivot: All decision records
Rows: data.risk_factors (expanded)
Columns: data.reversibility
Values: COUNT
| High Rev. | Medium Rev. | Low Rev. |
--------------------|-----------|-------------|----------|
team-unfamiliarity | 2 | 3 | 1 |
scale-risk | 1 | 1 | 4 |
vendor-dependency | 0 | 2 | 2 |
An agent can generate this view on demand: "Pivot all our decisions by risk factor and reversibility." A human can save it as a named view. The insight is that structured data fields on artifacts make pivoting possible — you can't pivot on prose.
5. The Diff/Change View
Not a snapshot, but a view of how things changed. "What changed in the competitive landscape since last month?" This is a temporal diff across a set of artifacts — possible because the workspace has event history on every record.
Views are not features — they're natural consequences of structured data + relationships. If every artifact has typed data fields and typed links to other artifacts, then every useful view is just a query + renderer. The workspace doesn't need to ship 12 view types. It needs to ship a query engine and a rendering layer, and let agents (or users) compose views on demand.
Spreadsheets have cell types: number, date, currency, percentage. These prevent a class of errors (you can't accidentally sum text). What's the equivalent for agent-created artifacts?
The workspace already has _schema on container records — a JSON Schema that validates child data. This is the foundation. The question is how far to push it.
Level 1: Advisory Schema ("you should have these fields")
{
"enforcement": "advisory",
"schema": {
"properties": {
"context": { "type": "string", "description": "Background that led to this decision" },
"options_considered": { "type": "array", "minItems": 2 },
"chosen_option": { "type": "string" },
"rationale": { "type": "string" }
}
}
}
Agent creates a decision without listing options? Warning in the event log, but creation succeeds. This is the spreadsheet equivalent of conditional formatting — yellow highlight on suspicious values.
Level 2: Soft Schema ("you must have these fields, but types are flexible")
{
"enforcement": "soft",
"schema": {
"required": ["context", "options_considered", "chosen_option", "rationale"],
"properties": {
"options_considered": { "type": "array", "minItems": 2 },
"chosen_option": { "type": "string" }
}
}
}
Creation fails if required fields are missing. But the agent can put anything in context — a string, an object, whatever. This is the spreadsheet equivalent of required cells.
Level 3: Hard Schema ("strict types and structure")
{
"enforcement": "hard",
"schema": {
"required": ["context", "options_considered", "chosen_option", "rationale", "risk_factors", "reversibility"],
"properties": {
"reversibility": { "type": "string", "enum": ["high", "medium", "low"] },
"risk_factors": { "type": "array", "items": { "type": "string" } },
"review_date": { "type": "string", "format": "date" },
"options_considered": {
"type": "array",
"minItems": 2,
"items": {
"type": "object",
"required": ["name", "pros", "cons"],
"properties": {
"name": { "type": "string" },
"pros": { "type": "array", "items": { "type": "string" } },
"cons": { "type": "array", "items": { "type": "string" } }
}
}
}
}
}
}
This is a proper database table definition. Every decision record in this collection must have exactly this shape. An agent that tries to create a sloppy decision gets rejected.
Humans resist schema. They find rigid forms annoying. But agents thrive on schema. A schema tells the agent exactly what's expected. It's not a constraint — it's a contract. The agent knows that if it produces data matching the schema, the artifact will be accepted, queryable, and interoperable with views and computations that depend on that shape.
Schema also prevents the "chaos at a higher frame rate" problem. Without schema, an agent producing 100 artifacts per hour produces 100 uniquely-shaped blobs that no query can reliably process. With schema, those 100 artifacts are 100 well-formed rows in a queryable table.
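The three enforcement levels can be contrasted with a toy validator. This is a deliberate stand-in for real JSON Schema validation (it checks only `required` and `enum`), just to show how enforcement changes the outcome rather than the check itself.

```python
def validate(data, schema, enforcement):
    """Minimal validation sketch: same checks, different consequences
    depending on the enforcement level (advisory / soft / hard)."""
    problems = []
    for field in schema.get("required", []):
        if field not in data:
            problems.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in data and "enum" in spec and data[field] not in spec["enum"]:
            problems.append(f"{field} not in {spec['enum']}")
    if not problems:
        return "accepted", []
    if enforcement == "advisory":
        # The conditional-formatting case: warn, but creation succeeds.
        return "accepted-with-warnings", problems
    # Soft and hard enforcement both reject; they differ only in how
    # much structure the schema itself specifies.
    return "rejected", problems

schema = {"required": ["context", "chosen_option"],
          "properties": {"reversibility": {"enum": ["high", "medium", "low"]}}}
sloppy = {"context": "growing notification volume"}
```

The same sloppy record produces a warning under advisory enforcement and a rejection under soft or hard enforcement: the contract is identical, only the consequence moves.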
Spreadsheets handle schema evolution poorly — add a column and old formulas break. The workspace can do better:
Records carry a _schema_version, and queries can filter by version. This is where the agent-as-power-user shines: "Migrate all competitive analysis records from schema v1 to v2, inferring the new market_position field from the existing description." A human would groan at updating 50 records. An agent does it in seconds.
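A migration of this sort can be sketched as a pure function over records. The v1-to-v2 rule and the keyword heuristic here are hypothetical; a real agent would infer `market_position` with actual judgment rather than string matching.

```python
def migrate_v1_to_v2(record):
    """Hypothetical migration: add a market_position field inferred
    from the existing description, then bump the schema version."""
    data = dict(record["data"])  # copy; never mutate the source record
    if "market_position" not in data:
        desc = data.get("description", "").lower()
        # Toy heuristic standing in for agent inference.
        data["market_position"] = "leader" if "leading" in desc else "challenger"
    return {**record, "data": data, "_schema_version": 2}

v1 = {"_schema_version": 1,
      "data": {"description": "A leading vendor in team collaboration"}}
v2 = migrate_v1_to_v2(v1)
```

Because old records keep their version until migrated, queries can target `_schema_version: 1` to find stragglers, which is exactly what spreadsheets cannot do when a column changes meaning.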
The most powerful spreadsheet feature: =A1 + B1. Change A1, and the cell updates. Now scale this to artifacts.
The workspace already has [[shortId]] mention syntax for linking records. The leap is from static links to live references — content that re-evaluates when the source changes.
Static reference (what exists today):
Our market size estimate is $4.2B (see @abc123).
The link exists, but if the market research artifact updates its estimate to $5.1B, this text is stale and nobody knows.
Live reference (the spreadsheet paradigm):
Our market size estimate is {{abc123.data.market_size_estimate}} (see @abc123).
When the market research artifact updates, this value updates. Or more practically: the system flags that the referencing artifact contains a stale reference and surfaces it for review.
Full live references (auto-updating prose) are probably wrong for knowledge work — you don't want a strategy doc changing under someone while they read it. But staleness propagation is exactly right.
The model:
1. An artifact references another (via a draws_from link or a [[shortId]] mention).
2. When the source changes, the referencing artifact gets a stale_reference marker pointing to the changed source.
This is the INDIRECT() function of knowledge work: not automatic propagation, but automatic detection of broken references.
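The model can be sketched directly; the record shapes here (`id`, `refs`, `markers`) are assumptions for illustration, not the workspace's actual storage format.

```python
def mark_stale(records, changed_id):
    """When a source record changes, attach a stale_reference marker to
    every record that references it. Nothing auto-updates: the marker
    only surfaces the reference for review."""
    for rec in records:
        if changed_id in rec["refs"]:
            rec["markers"].append({"stale_reference": changed_id})

records = [
    {"id": "strategy", "refs": ["abc123"], "markers": []},
    {"id": "roadmap", "refs": [], "markers": []},
]
mark_stale(records, "abc123")  # market research "abc123" was just edited
```

The strategy doc gains a marker; the roadmap is untouched. Propagation stops at detection, which is the point: humans review, the system just refuses to let staleness stay invisible.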
Aggregation formula: A project status artifact whose "percent complete" field is computed as COUNT(children WHERE status = 'completed') / COUNT(children).
Conditional formula: A risk assessment whose severity level is computed from the data fields of linked decision records: IF(any linked decision has reversibility = 'low' AND risk_factors includes 'vendor-dependency', THEN severity = 'high').
Temporal formula: A "velocity" field on a project that computes COUNT(descendants WHERE type = 'task' AND completed_at > NOW() - 7d) — tasks completed in the last week.
Cross-artifact validation: A constraint record that checks whether all tasks implementing a goal actually have test plans. The constraint's "satisfied" field is computed: EVERY(tasks WHERE implements goal_X, HAS data.test_plan IS NOT NULL).
These aren't sci-fi features. They're compositions of existing workspace capabilities: queries, data fields, relationships, and status tracking. The infrastructure exists. The leap is treating certain data fields as computed rather than stored.
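The aggregation and temporal formulas above might look like this in code, with assumed task shapes (`status`, optional `completed_at`); the functions are the "computed field", evaluated on read rather than stored.

```python
from datetime import datetime, timedelta

def percent_complete(children):
    """Aggregation formula: COUNT(children completed) / COUNT(children)."""
    if not children:
        return 0.0
    done = sum(1 for c in children if c["status"] == "completed")
    return done / len(children)

def velocity(tasks, now, days=7):
    """Temporal formula: tasks completed within the trailing window."""
    cutoff = now - timedelta(days=days)
    return sum(1 for t in tasks
               if t.get("completed_at") and t["completed_at"] > cutoff)

now = datetime(2026, 2, 15)
tasks = [
    {"status": "completed", "completed_at": datetime(2026, 2, 12)},
    {"status": "completed", "completed_at": datetime(2026, 1, 20)},
    {"status": "open"},
]
```

Two of three tasks are complete, but only one falls inside the 7-day window, so the two formulas report different things from the same records, which is exactly why both are worth computing.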
Power users don't just enter data. They:
Batch Operations Across Hundreds of Artifacts
A human can update one record at a time. An agent can operate on hundreds in a single pass. This is the macro equivalent: operations that a human could do one at a time but would take days.
Complex Queries That Surface Insights
These are the pivot tables of knowledge work: not just viewing data differently, but revealing structure that was invisible.
Automated Maintenance Routines
The spreadsheet equivalent of a scheduled macro:
- Find records with transience: 'short' that haven't been updated in 14 days. Flag or archive.
- Validate records against their collection's _schema. Report which records fail validation (possible after a schema evolution).
- Find [[shortId]] mentions that point to archived or deleted records. Surface broken references.
The Meta-Operation: Agents Writing Queries for Other Agents
The most powerful spreadsheet users write formulas that generate other formulas. The agent equivalent: an orchestrating agent that defines computed artifacts, schemas, and maintenance routines — building the workspace's infrastructure, not just its content.
"Set up a competitive intelligence collection with: a schema requiring competitor name, market segment, last-updated date, and key metrics. A computed artifact that shows which competitors haven't been updated in 30 days. A weekly maintenance task that checks for public announcements from each tracked competitor."
The agent isn't creating content — it's creating the system that creates and maintains content.
Collection: Competitive Landscape
Record: "Competitive Landscape"
Type: collection
Schema (on children):
enforcement: "soft"
schema:
required: ["company_name", "market_segment", "key_products", "assessment"]
properties:
company_name: { type: "string" }
market_segment: { type: "string", enum: ["direct", "adjacent", "potential"] }
key_products:
type: array
items:
type: object
properties:
name: { type: string }
overlap_area: { type: string }
threat_level: { type: string, enum: ["high", "medium", "low"] }
founding_year: { type: number }
funding_stage: { type: string }
estimated_arr: { type: string }
key_differentiators: { type: array, items: { type: string } }
weaknesses: { type: array, items: { type: string } }
recent_moves:
type: array
items:
type: object
properties:
date: { type: string, format: date }
description: { type: string }
significance: { type: string, enum: ["high", "medium", "low"] }
assessment:
type: object
properties:
overall_threat: { type: string, enum: ["critical", "high", "medium", "low"] }
trajectory: { type: string, enum: ["growing", "stable", "declining"] }
our_advantage: { type: string }
our_vulnerability: { type: string }
last_deep_review: { type: string, format: date }
Individual Competitor Record:
Record: "Acme Corp"
Type: note
Parent: Competitive Landscape collection
Data:
company_name: "Acme Corp"
market_segment: "direct"
key_products:
- name: "Acme Workspace"
overlap_area: "Team collaboration"
threat_level: "high"
- name: "Acme AI Assistant"
overlap_area: "Agent-based workflows"
threat_level: "medium"
founding_year: 2019
funding_stage: "Series C"
estimated_arr: "$45M"
key_differentiators:
- "Strong enterprise sales motion"
- "Deep Slack integration"
- "SOC2 Type II certified"
weaknesses:
- "Agent features feel bolted-on, not native"
- "No structured data/schema support"
- "Slow to ship — quarterly release cycle"
recent_moves:
- date: "2026-01-15"
description: "Launched 'AI Copilot' feature for workspace search"
significance: "medium"
- date: "2026-02-01"
description: "Announced partnership with Anthropic for embedded Claude"
significance: "high"
assessment:
overall_threat: "high"
trajectory: "growing"
our_advantage: "Agent-native architecture vs. bolted-on AI"
our_vulnerability: "Their enterprise GTM and existing customer base"
last_deep_review: "2026-02-10"
Body: |
## Acme Corp — Competitive Profile
Acme is our most direct competitor in the team workspace space. Founded in 2019,
they've built a strong enterprise presence with ~$45M ARR and a Series C behind them.
### What They Do Well
Their Slack integration is best-in-class, and their enterprise sales motion is
mature. SOC2 certification gives them credibility in regulated industries.
### Where They're Vulnerable
Their AI features feel bolted on. "Acme AI Copilot" is essentially a search
wrapper — no structured data understanding, no agent-native workflows, no
schema validation. They can't do what we can do with computed artifacts.
### Recent Activity
The Anthropic partnership (Feb 2026) is significant — it signals they're serious
about AI, but embedding a general-purpose LLM into a workspace designed for
humans doesn't make it agent-native.
### Assessment
High threat, growing trajectory. Our structural advantage is real but time-boxed:
if they rebuild for agents, the enterprise GTM advantage could be decisive.
Comparison Table View (query: all children of Competitive Landscape, render as table):
| Competitor | Segment | Threat | Trajectory | Our Advantage | Last Reviewed |
|---|---|---|---|---|---|
| Acme Corp | Direct | High | Growing | Agent-native arch | Feb 10 |
| Beta Inc | Direct | Medium | Stable | Schema/validation | Jan 28 |
| Gamma AI | Adjacent | Medium | Growing | Artifact lifecycle | Feb 05 |
| Delta Tools | Potential | Low | Declining | — | Dec 15 |
This is just query_record(parent_id: competitive_landscape_id) with the data fields projected as columns. Any agent can generate it; any human can read it.
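The query-plus-renderer split can be sketched as below; the record shape and the column spec (title, data field) are assumptions, and the query step is elided since any filter that yields records would do.

```python
def render_table(records, columns):
    """Render query results as a markdown table. `columns` is a list of
    (title, data_field) pairs; missing fields render as an em dash,
    matching the comparison table above."""
    header = "| " + " | ".join(title for title, _ in columns) + " |"
    sep = "|" + "---|" * len(columns)
    rows = ["| " + " | ".join(str(r["data"].get(f, "—")) for _, f in columns) + " |"
            for r in records]
    return "\n".join([header, sep] + rows)

records = [{"data": {"company_name": "Acme Corp", "market_segment": "direct"}}]
table = render_table(records, [("Competitor", "company_name"),
                               ("Segment", "market_segment")])
```

Swap the column spec and the same records become a different table; swap the renderer and they become a kanban or a chart. The data never moves.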
Threat Matrix View (pivot on market_segment x overall_threat):
| Segment | Critical | High | Medium | Low |
|---|---|---|---|---|
| Direct | 0 | 1 | 1 | 0 |
| Adjacent | 0 | 0 | 1 | 0 |
| Potential | 0 | 0 | 0 | 1 |
Narrative Summary View (computed artifact, re-generated on demand):
"As of February 2026, we track 4 competitors across 3 segments. Our most significant threat is Acme Corp (direct, high threat, growing), whose recent Anthropic partnership signals increased AI investment. Our structural advantage — agent-native architecture — remains strong against all tracked competitors, none of whom have schema validation, computed artifacts, or structured data fields. Key vulnerability: Acme's enterprise GTM could outpace our product advantage. One competitor (Delta Tools) has not been reviewed in 60+ days and may warrant archival or re-assessment."
This narrative is not hand-written. It's generated from the structured data of all competitor records. Update any competitor's data, and the summary can be regenerated to reflect the change.
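A deliberately simple template version of such a generated summary, assuming the competitor record shape from the example above; a real agent would write richer prose, but the mechanism (structured data in, regenerable text out) is the same.

```python
def narrative(competitors, as_of):
    """Regenerable summary from structured data: change any record's
    data fields and this prose changes on the next render."""
    segments = {c["data"]["market_segment"] for c in competitors}
    threat_order = ["low", "medium", "high", "critical"]
    top = max(competitors,
              key=lambda c: threat_order.index(
                  c["data"]["assessment"]["overall_threat"]))
    return (f"As of {as_of}, we track {len(competitors)} competitors across "
            f"{len(segments)} segments. Our most significant threat is "
            f"{top['data']['company_name']}.")

competitors = [
    {"data": {"company_name": "Acme Corp", "market_segment": "direct",
              "assessment": {"overall_threat": "high"}}},
    {"data": {"company_name": "Delta Tools", "market_segment": "potential",
              "assessment": {"overall_threat": "low"}}},
]
text = narrative(competitors, "February 2026")
```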
Staleness Alert View (computed, auto-flagging):
Query: children of Competitive Landscape WHERE
data.last_deep_review < NOW() - 30d
OR data.recent_moves is empty
OR (data.assessment.trajectory = 'growing' AND last_activity_at < NOW() - 14d)
Result:
⚠ Delta Tools — last reviewed Dec 15, 62 days ago
⚠ Gamma AI — trajectory 'growing' but no update in 16 days
This is the conditional formatting of knowledge work: rules that surface artifacts needing attention, applied automatically across the collection.
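The same rules can be expressed as a plain filter. Record shapes here are assumptions (`last_deep_review` as an ISO date string, `last_activity_at` as a date), and the rule thresholds mirror the query above.

```python
from datetime import date, timedelta

def staleness_alerts(records, today):
    """Conditional-formatting rules for knowledge work: return the
    records that need attention, with the reason."""
    alerts = []
    for rec in records:
        reviewed = date.fromisoformat(rec["data"]["last_deep_review"])
        if today - reviewed > timedelta(days=30):
            alerts.append((rec["name"], "review overdue"))
        elif (rec["data"]["assessment"]["trajectory"] == "growing"
              and today - rec["last_activity_at"] > timedelta(days=14)):
            alerts.append((rec["name"], "growing but quiet"))
    return alerts

today = date(2026, 2, 15)
records = [
    {"name": "Delta Tools", "last_activity_at": date(2025, 12, 15),
     "data": {"last_deep_review": "2025-12-15",
              "assessment": {"trajectory": "declining"}}},
    {"name": "Acme Corp", "last_activity_at": date(2026, 2, 10),
     "data": {"last_deep_review": "2026-02-10",
              "assessment": {"trajectory": "growing"}}},
]
alerts = staleness_alerts(records, today)
```

Delta Tools trips the 30-day review rule; Acme, recently reviewed and recently active, passes clean.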
Initial Creation (agent action): the agent researches each competitor, populates the structured data fields and recent_moves, and assesses trajectory.
Ongoing Maintenance (agent action):
- Update the recent_moves field on records where new activity is found.
- When a recent_move with significance: 'high' is added, re-evaluate assessment.overall_threat and assessment.trajectory.
- Call get_record_at for each competitor, compute diffs, and generate a trend analysis.
The Key Difference from Airtable: In Airtable, a human fills in the table. The structure helps, but the human does all the work. In the agent-native workspace, the agent populates the structured data, generates the narrative, maintains freshness, and surfaces insights. The human reviews, directs, and decides. The structured data isn't extra work — it's what makes automation possible.
The spreadsheet succeeded because it made three things simultaneously true: data was visible, values were computable, and changes propagated. The agent-native workspace, seen through this lens, needs to make three parallel things true:
- Structured and queryable: the data JSONB field on every record is the foundation. Schema enforcement ensures consistency. Queries across artifacts surface patterns invisible to any single document.
- Reactive: the draws_from link type plus staleness propagation gives you the essential feedback loop.
- Visible: every useful view is just a query plus a renderer, so a new lens never requires duplicating data.
The most provocative implication: the workspace IS the database. Not "the workspace has a database behind it" — the workspace, from the agent's perspective, is a structured, queryable, programmable data surface where every artifact is a record, every relationship is a foreign key, and every view is a query. The human-readable renderings (narratives, decks, dashboards) are views — not the underlying reality.
Spreadsheets didn't win by being better calculators. They won by giving everyone a programmable data surface. The agent-native workspace wins the same way: not by being a better document editor, but by giving agents (and the humans who direct them) a programmable artifact surface where knowledge work is structured, queryable, and alive.