Architecture Overview
4.1 Overall Architecture
┌──────────────────────────────────────────────────────────────┐
│ User │
└─────────────────────────────┬────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────┐
│ Claude Code / Cursor / Codex / OpenCode / Kilo / Kiro │
│ (AI Coding Assistant Interface) │
└─────────────────────────────┬────────────────────────────────┘
│
┌────────────┼────────────┐
│ │
▼ ▼
┌────────────────────┐ ┌───────────────────────────┐
│ Slash Commands │ │ Hook System │
│ /trellis:* │ │ (Auto-inject) │
│ │ │ [Claude Code only] │
│ User-triggered │ │ │
│ Trigger workflows │ │ session-start.py │
│ │ │ inject-context.py │
│ [All platforms] │ │ ralph-loop.py │
└─────────┬──────────┘ └─────────────┬─────────────┘
│ │
└──────────────┬───────────────┘
│
▼
┌──────────────────────────────────────────────────────┐
│ .trellis/ Directory │
│ │
│ spec/ workspace/ tasks/ scripts/ │
│ (Specs) (Records) (Tasks) (Auto) │
└─────────────────────────┬────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────┐
│ Agent System │
│ │
│ Plan → Research → Implement → Check → Debug │
│ │
│ Each Agent receives precise context via Hooks │
└─────────────────────────┬────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────┐
│ Your Project Code │
│ (AI generates/modifies code per specs) │
└──────────────────────────────────────────────────────┘
4.2 Session Startup Flow
Claude Code / OpenCode
These two platforms' Hook systems automatically trigger session-start.py at session start,
so there is no need to manually enter /start. Users can describe tasks immediately after
opening the terminal. Optionally run /start for a more detailed context report.

Cursor / Codex
Cursor's Hook support is still in development, and Codex provides base context via AGENTS.md.
Users must manually invoke /start (or $start for Codex) to get full project context. The
command internally executes the same steps as the Claude Code Hook: reading identity, workflow,
history, tasks, etc.

Kilo
Kilo auto-loads project rules via .kilocode/rules/ and AGENTS.md into every interaction.
Optionally run the /start.md workflow for full Trellis context (identity, Git status, active
tasks, etc.).

Kiro
Kiro auto-loads project context via .kiro/steering/ (product.md, tech.md, structure.md,
etc.) into every interaction. Optionally run a custom prompt for full Trellis context. Kiro also
supports auto-generating requirements, design, and task documents via Spec mode.
When you enter /start (or when the Claude Code Hook fires automatically), here’s what happens behind the scenes:
/start
│
▼
┌────────────────────────────────────────┐
│ Hook: session-start.py (auto-trigger) │
│ │
│ Read .trellis/.developer → identity│
│ Read .trellis/workflow.md → workflow│
│ Read workspace/{name}/index → history │
│ Read git log → commits │
│ Read .trellis/tasks/ → tasks │
└───────────────────┬────────────────────┘
│
▼
┌──────────────────────────┐
│ Context injected into AI│
└────────────┬─────────────┘
│
▼
┌──────────────────────────────────┐
│ start.md command runs: │
│ │
│ Step 1: Read workflow.md │
│ Step 2: Run get-context.py │
│ Step 3: Read spec indexes │
│ Step 4: Report & ask user │
└──────────────────────────────────┘
session-start.py is a SessionStart Hook that fires automatically at the start of each new session, ensuring the AI always begins with full context.
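The gathering steps above can be sketched in a few lines. This is an illustrative reconstruction, not the actual session-start.py: the file locations follow the Trellis layout described in this section, but the function name and assembly format are assumptions.

```python
import subprocess
from pathlib import Path

def build_session_context(root: Path) -> str:
    """Collect identity, workflow, history, and task info into one context blob."""
    sections = []
    for rel, label in [
        (".trellis/.developer", "Identity"),
        (".trellis/workflow.md", "Workflow"),
    ]:
        path = root / rel
        if path.exists():
            sections.append(f"=== {label} ({rel}) ===\n{path.read_text()}")

    # Recent commits via git log (skipped silently outside a git repo)
    try:
        log = subprocess.run(
            ["git", "-C", str(root), "log", "--oneline", "-5"],
            capture_output=True, text=True, check=True,
        ).stdout
        sections.append(f"=== Recent commits ===\n{log}")
    except (subprocess.CalledProcessError, FileNotFoundError):
        pass

    # Active tasks: one line per task directory under .trellis/tasks/
    tasks_dir = root / ".trellis" / "tasks"
    if tasks_dir.is_dir():
        names = sorted(p.name for p in tasks_dir.iterdir() if p.is_dir())
        sections.append("=== Tasks ===\n" + "\n".join(names))

    return "\n\n".join(sections)
```

The assembled string is what gets injected into the AI's context at session start.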
4.3 Spec Injection Mechanism
Automatic spec injection is a Claude Code exclusive feature, relying on the Hook system
(PreToolUse intercepting Task tool calls). Cursor, Codex, OpenCode, and other platforms
require manually loading specs via commands (e.g., /before-backend-dev).
inject-subagent-context.py is the core engine of Trellis. When the main Agent invokes a sub-Agent (e.g., Implement), this Hook automatically intercepts and injects context.
Workflow:
Main Agent calls: Task(subagent_type="implement", prompt="...")
│
▼
┌──────────────────────────────────────────────────────────────┐
│ Hook: inject-subagent-context.py (PreToolUse intercept) │
│ │
│ 1. Read .trellis/.current-task │
│ → Get current task directory path │
│ │
│ 2. Read {task_dir}/implement.jsonl │
│ → Get spec file list for this Agent │
│ │
│ 3. Read each file referenced in JSONL │
│ → .trellis/spec/backend/index.md │
│ → .trellis/spec/backend/database-guidelines.md │
│ → ... │
│ │
│ 4. Read prd.md (requirements) and info.md (design) │
│ │
│ 5. Assemble all into new prompt, replace original │
│ │
│ 6. Update current_phase in task.json │
└───────────────────────┬──────────────────────────────────────┘
│
▼
Actual prompt received by Implement Agent:
# Implement Agent Task
## Your Context
=== .trellis/spec/backend/index.md ===
(Full backend spec content)
=== .trellis/spec/backend/database-guidelines.md ===
(Full database spec content)
=== {task_dir}/prd.md (Requirements) ===
(Full requirements doc)
## Your Task
Implement the feature
JSONL (JSON Lines) files define which files each Agent needs to read. Each line is a JSON object:
{"file": ".trellis/workflow.md", "reason": "Project workflow and conventions"}
{"file": ".trellis/spec/backend/index.md", "reason": "Backend development guide"}
{"file": ".trellis/spec/backend/database-guidelines.md", "reason": "Database patterns"}
{"file": "src/services/", "type": "directory", "reason": "Existing service patterns"}
Field descriptions:
| Field | Required | Description |
|---|---|---|
| file | Yes | Relative path to file or directory (relative to project root) |
| reason | Yes | Why this file is needed (also used to generate completion markers) |
| type | No | Defaults to "file". Set to "directory" to read all .md files in the directory (max 20) |
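A minimal parser for these entries could look like the following sketch. It applies the defaults from the field table above; the function names are illustrative, not taken from the Trellis source.

```python
import json
from pathlib import Path

def load_spec_entries(jsonl_text: str) -> list[dict]:
    """Parse JSONL entry lines, applying the default type from the field table."""
    entries = []
    for line in jsonl_text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        obj = json.loads(line)
        obj.setdefault("type", "file")  # "type" defaults to "file"
        entries.append(obj)
    return entries

def resolve_entry(entry: dict, root: Path) -> list[Path]:
    """Map one entry to concrete paths; directories expand to at most 20 .md files."""
    target = root / entry["file"]
    if entry["type"] == "directory":
        return sorted(target.glob("*.md"))[:20]
    return [target]
```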
Three types of JSONL files:
| File | Used By | Typical Content |
|---|---|---|
| implement.jsonl | Implement Agent | workflow.md + relevant specs + code pattern examples |
| check.jsonl | Check Agent | finish-work.md + check commands + relevant specs |
| debug.jsonl | Debug Agent | relevant specs + check commands |
Practical example (implement.jsonl for a backend task):
{"file": ".trellis/workflow.md", "reason": "Project workflow and conventions"}
{"file": ".trellis/spec/shared/index.md", "reason": "Shared coding standards"}
{"file": ".trellis/spec/backend/index.md", "reason": "Backend development guide"}
{"file": ".trellis/spec/backend/api-module.md", "reason": "API module conventions"}
{"file": ".trellis/spec/backend/quality.md", "reason": "Code quality requirements"}
Injection timing and control:
- implement Agent: injects implement.jsonl + prd.md + info.md
- check Agent: injects check.jsonl + prd.md (to understand intent)
- check Agent ([finish] marker): lightweight injection of finish-work.md + prd.md
- debug Agent: injects debug.jsonl + codex-review-output.txt (if available)
- research Agent: injects project structure overview + research.jsonl (optional)
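Step 5 of the Hook's workflow, assembling the spec files and requirements into the sub-Agent's prompt, can be sketched as follows. The section format mirrors the example prompt shown above; the function signature itself is an assumption.

```python
from pathlib import Path

def assemble_prompt(spec_files: list[Path], prd: Path, task: str) -> str:
    """Concatenate spec files and the requirements doc into a sub-Agent prompt."""
    parts = ["# Implement Agent Task", "## Your Context"]
    for f in spec_files:
        # Each spec is delimited the same way as in the example prompt above
        parts.append(f"=== {f} ===\n{f.read_text()}")
    parts.append(f"=== {prd} (Requirements) ===\n{prd.read_text()}")
    parts.append("## Your Task\n" + task)
    return "\n\n".join(parts)
```

This assembled string replaces the main Agent's original prompt before the Task tool call proceeds.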
4.4 Quality Control Loop (Ralph Loop)
Ralph Loop is a Claude Code exclusive quality control mechanism, relying on the SubagentStop
Hook. Cursor and other platforms require manually running /check-backend
(/trellis-check-backend) or /check-frontend for code checks.
Ralph Loop is based on the Ralph Wiggum technique: it automatically verifies the work whenever the Check Agent believes it is finished.
Check Agent thinks it's done
│
▼
┌────────────────────────────────────────────────────────┐
│ Hook: ralph-loop.py (SubagentStop intercept) │
│ │
│ 1. Read verify config from worktree.yaml │
│ │
│ ┌── Has verify commands? ──┐ │
│ │ │ │
│ ▼ Yes ▼ No │
│ Run verify commands Check completion │
│ (pnpm lint etc.) markers │
│ │ │ │
│ ├── All pass ──────────┤── All present ──→ Allow stop │
│ │ │ │
│ └── Has failure ───────┘── Missing ──→ Block stop │
│ │ │
│ Return to Check Agent │
│ Continue fixing │
│ │
│ Safety limit: max 5 iterations │
└────────────────────────────────────────────────────────┘
Two verification modes:
Mode A — Programmatic verification (recommended, configured in worktree.yaml):
verify:
- pnpm lint
- pnpm typecheck
- pnpm test
Ralph Loop runs each command in sequence; all must return 0 to pass.
Mode B — Completion Markers (fallback when no verify config):
Markers are generated from check.jsonl reason fields. For example:
{"file": "...", "reason": "TypeCheck"}
{"file": "...", "reason": "Lint"}
Generated markers: TYPECHECK_FINISH, LINT_FINISH. The Check Agent must include all markers in its output to stop.
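Marker generation and the stop check can be sketched like this. The exact normalization rule (upper-snake-case plus a `_FINISH` suffix) is inferred from the examples above, so treat it as an assumption.

```python
import json
import re

def markers_from_jsonl(jsonl_text: str) -> set[str]:
    """Derive completion markers from the reason field of each JSONL entry."""
    marks = set()
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        reason = json.loads(line)["reason"]
        # Normalize: non-alphanumerics -> "_", uppercase, then append _FINISH
        slug = re.sub(r"[^A-Za-z0-9]+", "_", reason).strip("_").upper()
        marks.add(slug + "_FINISH")
    return marks

def may_stop(agent_output: str, required: set[str]) -> bool:
    """The Check Agent may stop only if every marker appears in its output."""
    return all(m in agent_output for m in required)
```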
State tracking:
Ralph Loop tracks iteration count via .trellis/.ralph-state.json:
{
"task": ".trellis/tasks/02-27-user-login",
"iteration": 2,
"started_at": "2026-02-27T10:30:00"
}
- Resets automatically on task switch
- 30-minute timeout auto-reset
- Force pass at 5 iterations (prevents infinite loops)
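The three reset rules above can be combined into one small state function. This sketch borrows the field names from the .ralph-state.json example but simplifies started_at to an epoch timestamp; the reset logic itself is an assumed reconstruction.

```python
import json
from pathlib import Path

MAX_ITERATIONS = 5           # force pass beyond this (prevents infinite loops)
TIMEOUT_SECONDS = 30 * 60    # 30-minute stale-state auto-reset

def next_iteration(state_file: Path, task: str, now: float) -> int:
    """Return the iteration number to run, resetting on task switch or timeout."""
    state = {"task": task, "iteration": 0, "started_at": now}
    if state_file.exists():
        prev = json.loads(state_file.read_text())
        same_task = prev.get("task") == task
        fresh = now - prev.get("started_at", 0) < TIMEOUT_SECONDS
        if same_task and fresh:
            state = prev  # continue counting the current loop
    state["iteration"] += 1
    state_file.write_text(json.dumps(state))
    return state["iteration"]
```

A caller would force-pass the Check Agent once `next_iteration(...)` exceeds `MAX_ITERATIONS`.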
4.5 Complete Agent System
Trellis includes 6 built-in Agents, each with different roles, tools, and responsibilities:
| Agent | Role | Tools | Model | Key Features |
|---|---|---|---|---|
| dispatch | Pure orchestrator | Read, Bash, Exa Search, Exa Code Context | opus | Only invokes other Agents in sequence, does not read specs |
| plan | Requirements assessment | Read, Bash, Glob, Grep, Task | opus | Can reject unclear requirements |
| implement | Code implementation | Read, Write, Edit, Bash, Glob, Grep, Exa Search, Exa Code Context | opus | Forbidden from git commit |
| check | Quality checking | Read, Write, Edit, Bash, Glob, Grep, Exa Search, Exa Code Context | opus | Must self-fix, controlled by Ralph Loop |
| debug | Bug fixing | Read, Write, Edit, Bash, Glob, Grep, Exa Search, Exa Code Context | opus | Precise fixes only, no extra refactoring |
| research | Information search | Read, Glob, Grep, Exa Search, Exa Code Context | opus | Read-only, does not modify files |
Agent Collaboration Flow (Multi-Agent Pipeline):
Plan Agent
│
│ Assess requirements → accept/reject
│ Invoke Research Agent to analyze codebase
│ Create task directory + JSONL config
│
▼
Dispatch Agent
│
├──→ Implement Agent (phase 1)
│ Hook injects implement.jsonl context
│ Implement feature → report completion
│
├──→ Check Agent (phase 2)
│ Hook injects check.jsonl context
│ Check code → auto-fix → Ralph Loop verification
│
├──→ Check Agent [finish] (phase 3)
│ Lightweight injection of finish-work.md
│ Final verification → confirm requirements met
│
└──→ create-pr.py (phase 4)
Commit code → push → create Draft PR
Dispatch Agent Timeout Configuration:
| Phase | Max Time | Poll Count |
|---|---|---|
| implement | 30 minutes | 6 polls |
| check | 15 minutes | 3 polls |
| debug | 20 minutes | 4 polls |
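One way to read the table above is as an even polling schedule: the phase budget divided across the poll count. Whether the real dispatcher spaces polls evenly is an assumption; this sketch just makes the arithmetic concrete.

```python
# Phase budgets from the timeout table: phase -> (max_minutes, poll_count).
PHASE_LIMITS = {
    "implement": (30, 6),
    "check": (15, 3),
    "debug": (20, 4),
}

def poll_interval_minutes(phase: str) -> float:
    """Minutes between polls if the budget is spread evenly (assumed spacing)."""
    max_minutes, polls = PHASE_LIMITS[phase]
    return max_minutes / polls
```

Under even spacing, every phase works out to a poll roughly every 5 minutes.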