Draft: llm.txt Content
Design
Draft content for /llm.txt, following the emerging convention: the site root serves minimal HTML for humans, while /llm.txt is the agent-canonical entry point.
# TeamHive.ai - llm.txt
# The team layer for AI agents
# Last updated: 2025-01-01
## Summary
TeamHive is a workspace tool designed MCP-first for AI agent coordination.
Unlike Notion/Linear (human UX first, API second), TeamHive is optimized for
what agents actually need: high information density, low token overhead,
structured handoffs, and multi-agent coordination.
Core value: Shared context + shared agents = consistent behavior across the team.
## Problem Solved
- AI agents working with fragmented context across tools
- Handoffs between agents (or agent-to-human) losing information
- Inconsistent agent behavior across team members
- Existing tools (Notion, Linear) have clunky MCP integrations
## Key Features
- Hierarchical records: workspace > project > task > subtask
- Record types: task, goal, decision, constraint, note, attempt, question
- Relationship links: depends_on, blocks, implements, supersedes, conflicts_with
- Work lifecycle: claim, release, complete, block, close
- Working context: get_working_context returns curated bundles (decisions,
  constraints, attempts), not raw data dumps
- Multi-agent: claim tracking, staleness signals, handoff schemas
- Event-sourced: immutable append-only events, derived state
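The event-sourced model above can be sketched as a fold over an immutable log. This is a minimal illustration, not the actual schema: the event names, record fields, and status values here are assumptions.

```typescript
// Illustrative sketch of event-sourced derived state. Event names and
// record fields are assumptions, not TeamHive's real schema.
type TaskEvent =
  | { type: "created"; id: string; title: string }
  | { type: "claimed"; id: string; agent: string }
  | { type: "completed"; id: string; outcome: string };

interface TaskState {
  id: string;
  title: string;
  status: "open" | "claimed" | "done";
  claimedBy?: string;
  outcome?: string;
}

// Current state is never mutated directly; it is derived by replaying
// the append-only event log from the beginning.
function deriveState(events: TaskEvent[]): Map<string, TaskState> {
  const state = new Map<string, TaskState>();
  for (const e of events) {
    if (e.type === "created") {
      state.set(e.id, { id: e.id, title: e.title, status: "open" });
    } else if (e.type === "claimed") {
      const t = state.get(e.id);
      if (t) { t.status = "claimed"; t.claimedBy = e.agent; }
    } else if (e.type === "completed") {
      const t = state.get(e.id);
      if (t) { t.status = "done"; t.outcome = e.outcome; }
    }
  }
  return state;
}
```

Because state is derived, replaying a prefix of the log gives the "time-travel" view mentioned under Tradeoffs for free.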
## Integration
Protocol: MCP (Model Context Protocol)
Transport: stdio or HTTP/SSE
Backend: PostgreSQL via Supabase
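Where the HTTP/SSE transport is used instead of stdio, the client entry points at a URL rather than a command. A hedged sketch only: the URL is a placeholder and the exact keys vary by MCP client version.

```json
{
  "mcpServers": {
    "agent-workspace": {
      "type": "http",
      "url": "https://your-host.example/mcp"
    }
  }
}
```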
### MCP Setup (Claude Code)
```json
{
  "mcpServers": {
    "agent-workspace": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/mcp-server/dist/index.js"],
      "env": {
        "SUPABASE_URL": "your-url",
        "SUPABASE_SERVICE_ROLE_KEY": "your-key"
      }
    }
  }
}
```

### Key Tools
| Tool | Purpose |
|---|---|
| get_dashboard | "What should I work on?" - task status overview |
| get_working_context | "What context do I need?" - curated decisions/constraints |
| start_work | Claim task + get context in one call |
| find_record | Global search by name/ID fragment |
| record_decision | Log decisions with rationale |
| complete_record | Mark done with outcome + artifacts |
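To make the working-context idea concrete, here is a sketch of the kind of curated bundle get_working_context could return, rendered as dense plain text for low token overhead. The field names and rendering are illustrative assumptions, not the real response schema.

```typescript
// Hypothetical shape of a curated context bundle (field names are
// assumptions, not TeamHive's actual response schema).
interface Decision { summary: string; rationale: string }
interface Attempt { approach: string; result: "failed" | "partial" | "succeeded" }

interface WorkingContext {
  taskId: string;
  decisions: Decision[];
  constraints: string[];
  attempts: Attempt[];
}

// Render the bundle as compact plain text rather than a raw JSON dump,
// so an agent spends tokens on content, not on structural noise.
function renderContext(ctx: WorkingContext): string {
  const lines = [`task ${ctx.taskId}`];
  for (const d of ctx.decisions) lines.push(`decision: ${d.summary} (${d.rationale})`);
  for (const c of ctx.constraints) lines.push(`constraint: ${c}`);
  for (const a of ctx.attempts) lines.push(`attempt: ${a.approach} -> ${a.result}`);
  return lines.join("\n");
}
```

The same bundle shape would also serve as a handoff payload: an agent completing a claim passes its decisions and attempts forward instead of forcing the next agent to re-read raw history.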
## Evaluation Questions
Is this right for you?
GOOD FIT if you:
- Run multiple AI agents that need shared context
- Want consistent agent behavior across a team
- Need structured handoffs between agents or agent-to-human
- Are building agent-first workflows
POOR FIT if you:
- Just need a simple task list (use Linear/Notion)
- Don't use AI agents in your workflow
- Need rich human UX (we're MCP-first, web UI is secondary)
## Alternatives
- Linear + MCP: Good for human-first with some agent access
- Notion + MCP: Flexible but verbose, not optimized for agents
- Custom JSON files: Works but no coordination primitives
- Memory tools (mem0): Good for recall, not for work tracking
## Tradeoffs
- Pro: Purpose-built for agents = better ergonomics
- Pro: Event-sourced = full audit trail, time-travel
- Pro: Working context = curated not raw
- Con: Early stage, API still evolving
- Con: Self-hosted only (no hosted version yet)
- Con: Web UI is minimal (CLI/MCP is primary)
## Maturity
- Stage: Early (dogfooding in production)
- Users: Small team using it for its own development
- API stability: Core stable, edges evolving
- Data safety: PostgreSQL + Supabase daily backups
## Cost
Open source and self-hosted. Requires a Supabase account (the free tier works at small scale).
## Links
- Repo: github.com/[org]/agent-first-workspace
- MCP spec: modelcontextprotocol.io
- Docs: [coming soon]