Commands & Skills Reference
Starting with 0.5.0, Trellis is skill-first. The heavy lifting lives in auto-trigger skills; explicit slash commands are kept small and focused on session boundaries.
Surface at a Glance
| Kind | Name | Trigger | Purpose |
|---|---|---|---|
| Command | /trellis:start | Manual (or hook on hook-capable platforms) | Open a session, load context, classify work |
| Command | /trellis:finish-work | Manual, after human testing + commit | Checklist + record session in journal |
| Command | /trellis:continue | Manual | Push AI to the next workflow step; prevent step-skipping |
| Skill | trellis-brainstorm | Auto when user describes a feature / bug / ambiguous need | Turn request into task + prd.md |
| Skill | trellis-before-dev | Auto before touching code in a task | Read relevant spec before writing |
| Skill | trellis-check | Auto after implementation; also via sub-agent | Verify + self-fix loop |
| Skill | trellis-update-spec | Auto when a learning is worth capturing | Promote knowledge into .trellis/spec/ |
| Skill | trellis-break-loop | Auto after a tricky bug | Root-cause + prevention analysis |
| Sub-agent | trellis-research | Spawned by main session for investigation | Read-only codebase search |
| Sub-agent | trellis-implement | Spawned by main session for coding | Writes code, no git commit |
| Sub-agent | trellis-check | Spawned by main session for verification | Runs verify + self-fix, has its own loop |
Only three slash commands are shipped: start, finish-work, continue. Everything that used to be a command (/before-backend-dev, /check-backend, /record-session, /onboard, …) has either been folded into a skill/sub-agent or removed.
Commands
/trellis:start: Start a session
Run this at the beginning of a session if your platform does not auto-inject context. On hook-capable platforms (Claude Code, Cursor, OpenCode, Gemini, Qoder, CodeBuddy, Copilot, Droid, plus Codex with codex_hooks = true), the SessionStart hook does this automatically; you may still run /trellis:start the first time to watch the flow.
What it does:
- Read `.trellis/workflow.md` so the AI knows the workflow contract.
- Run `get_context.py` to surface developer identity, git status, and active tasks.
- Read spec indexes (per relevant package in a monorepo).
- Report context and ask what you want to work on.
Task classification the AI will apply:
| Type | Criteria | Flow |
|---|---|---|
| Q&A | Question about code / architecture | Answer directly |
| Quick fix | Typo / one-liner, under ~5 minutes | Edit directly |
| Development task | Logic changes, new features, multi-file modification | Task workflow (brainstorm skill) |
When in doubt, use the task workflow; it’s what ensures sub-agents get spec injection.
/trellis:finish-work: End a session cleanly
Prerequisite: the human has tested and committed. The AI never runs git commit.
Steps:
- Run `get_context.py --mode record` to confirm there are changes and active tasks.
- For each task whose work is actually done (code merged, acceptance criteria met), archive it with `task.py archive <name>`.
- Add a session entry to the journal with `add_session.py --title … --commit …`.
- If a significant learning came out of this session, route it to `trellis-update-spec`.
/trellis:continue: Advance the current workflow step
Reads .trellis/.current-task plus the task’s status, then consults workflow.md to decide which phase/step is current and what the next action should be (e.g. whether to run before-dev, check, or update-spec). Keeps AI on the workflow rails and prevents it from silently skipping steps like check or update-spec.
Auto-trigger Skills
Skills run without an explicit command; the platform matches on the user’s intent. You can always trigger them manually (/skill trellis-brainstorm, etc.) if the auto-match misses.
trellis-brainstorm
Turns a fuzzy user request into a concrete task:
- Proposes a task name and slug.
- Drafts `prd.md` using the assumption / requirement / acceptance template.
- Runs the `trellis-research` sub-agent in parallel when the request depends on investigation (existing code, external APIs, library docs).
- Creates the task via `task.py create`.
trellis-before-dev
Runs before coding starts on a task. Reads the spec index for the affected package(s), then the specific guideline files referenced in the pre-development checklist. Ensures the AI knows the conventions before writing code, not after.
trellis-check
Runs after implementation:
- Run `git diff --name-only HEAD` to find what changed.
- Discover which spec layers apply.
- Compare the diff against the quality checklist in each layer’s index.
- Run `pnpm lint` / `pnpm typecheck` / `pnpm test` (or equivalent) for affected packages.
- Self-fix violations in a bounded loop, then report what was fixed and what’s left.
The trellis-check sub-agent wraps the skill so the main session can delegate and stay focused. The sub-agent owns its own retry loop; there is no external Ralph Loop anymore.
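The bounded self-fix loop can be pictured as follows. This is purely illustrative: `check` and `fix` stand in for the real lint/typecheck/test runs, and the round limit is an assumed parameter, not a documented Trellis constant.

```python
from typing import Callable

def self_fix_loop(check: Callable[[], list[str]],
                  fix: Callable[[str], bool],
                  max_rounds: int = 3) -> tuple[list[str], list[str]]:
    """Run check, attempt a fix for each violation, stop after max_rounds.
    Returns (violations fixed, violations still remaining)."""
    fixed: list[str] = []
    for _ in range(max_rounds):
        violations = check()
        if not violations:
            break  # clean run: nothing left to fix
        for v in violations:
            if fix(v):
                fixed.append(v)
    # Final check determines what gets reported as "what's left"
    return fixed, check()
```

The key property is the bound: the loop always terminates and reports unfixable violations instead of retrying forever.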
trellis-update-spec
Captures a learning as an executable contract in .trellis/spec/. Used after debugging sessions, after hitting a gotcha, or after making a non-obvious design decision. Picks the right spec file, adds a focused update (decision / convention / pattern / anti-pattern / gotcha), updates the index if needed.
trellis-break-loop
Invoked after resolving a hard bug. Produces a 5-dimension analysis:
- Root-cause classification (missing spec / contract violation / change propagation / test gap / implicit assumption).
- Why earlier fix attempts failed.
- Prevention mechanisms (spec update, type constraints, lint rule, test, review checklist, doc).
- Systematic expansion: other places with the same pattern.
- Knowledge capture: route findings into `trellis-update-spec`.
The value of debugging is not fixing this bug; it’s making sure this class of bugs never happens again.
Sub-agents
Sub-agents are isolated AI sub-processes with their own prompt and, on platforms that support it, their own tool / hook wiring. They receive spec context via per-task JSONL files.
| Sub-agent | Restriction | When main session spawns it |
|---|---|---|
| trellis-research | Read-only | Codebase search / pattern discovery / doc lookup |
| trellis-implement | Writes code, no commit | Once requirements + plan exist, for the coding phase |
| trellis-check | Writes code (fixes) | Verification phase; runs self-fix loop internally |
On Claude Code, Cursor, OpenCode, CodeBuddy, and Droid, these sub-agents are hooked so the right JSONL context (implement.jsonl, check.jsonl, research.jsonl) is injected automatically before they start. On the rest, the main session reads the JSONL files itself and passes the relevant content into sub-agent prompts.
Task Management Workflow
Task Lifecycle
```text
create → init-context → add-context → start → implement/check → finish → archive
  │           │             │           │           │              │         │
  ▼           ▼             ▼           ▼           ▼              ▼         ▼
Create     Init JSONL    Add context  Set as   Development/     Clear    Archive to
directory  config files  entries      current  check cycle      current  archive/
task.json                             task ptr                  task
```
task.py Subcommands
Task Creation
```bash
# Create a task
TASK_DIR=$(./.trellis/scripts/task.py create "Add user login" \
  --slug user-login \
  --assignee alice \
  --priority P1 \
  --description "Implement JWT login")
# --slug:        directory name suffix (optional, auto-slugifies otherwise)
# --assignee:    assignee (optional)
# --priority:    P0/P1/P2/P3 (optional, default P2)
# --description: description (optional)
# Created directory: .trellis/tasks/02-27-user-login/
# Created file:      task.json
```
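The auto-slugify behavior can be approximated like this. A sketch of typical slugification, not necessarily `task.py`'s exact logic (the real code lives in `.trellis/scripts/common/task_store.py`):

```python
import re

def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics into '-', trim dashes."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The directory name is an MM-DD date prefix plus the slug,
# e.g. "02-27-" + slugify("Add user login") → "02-27-add-user-login"
```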
Context Configuration
```bash
# Initialize JSONL config (generates implement.jsonl + check.jsonl)
./.trellis/scripts/task.py init-context "$TASK_DIR" backend
# dev_type: backend | frontend | fullstack | test | docs
# Optional: --package PACKAGE (required for monorepo, picks the spec/<pkg>/ root)

# Add extra context entries
./.trellis/scripts/task.py add-context "$TASK_DIR" implement \
  "src/services/auth.ts" "Existing auth patterns"
# file arg: implement | check (shorthand, auto-appends .jsonl)
# path arg: a file OR directory path — add-context auto-detects and sets type="directory" for dirs

# Validate implement.jsonl + check.jsonl (all referenced files exist)
./.trellis/scripts/task.py validate "$TASK_DIR"

# View all JSONL entries
./.trellis/scripts/task.py list-context "$TASK_DIR"
```
research.jsonl is managed separately (read by the trellis-research sub-agent). task.py add-context only writes to implement.jsonl / check.jsonl — create / edit research.jsonl by hand if you need custom research context.
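Conceptually, `validate` just parses each JSONL line and checks that the referenced path exists. A minimal sketch of that assumed behavior (not the script's actual code):

```python
import json
from pathlib import Path

def validate_context(task_dir: str, repo_root: str = ".") -> list[str]:
    """Return a list of problems found in implement.jsonl / check.jsonl."""
    problems = []
    for name in ("implement.jsonl", "check.jsonl"):
        jsonl = Path(task_dir) / name
        if not jsonl.exists():
            problems.append(f"{name}: missing")
            continue
        for i, line in enumerate(jsonl.read_text().splitlines(), start=1):
            if not line.strip():
                continue  # skip blank lines
            entry = json.loads(line)
            target = Path(repo_root) / entry["file"]
            if not target.exists():
                problems.append(f"{name}:{i}: no such file {entry['file']}")
    return problems
```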
Task Control
```bash
# Set as current task (sub-agent hooks read .current-task for JSONL injection)
./.trellis/scripts/task.py start "$TASK_DIR"

# Clear current task (no arguments needed, auto-reads .current-task)
./.trellis/scripts/task.py finish

# Set Git branch name
./.trellis/scripts/task.py set-branch "$TASK_DIR" "feature/user-login"

# Set PR target branch
./.trellis/scripts/task.py set-base-branch "$TASK_DIR" "main"

# Set scope (used in commit messages: feat(scope): ...)
./.trellis/scripts/task.py set-scope "$TASK_DIR" "auth"
```
Parent-child (subtasks)
A task can have children. Children are independent task directories on disk — they have their own prd.md, JSONL files, and status. The parent just references them for grouping.
```bash
# Option A: create a child directly under a parent
./.trellis/scripts/task.py create "JWT middleware" \
  --slug jwt-middleware \
  --parent 02-27-user-login

# Option B: link two existing tasks (parent directory first, then child)
./.trellis/scripts/task.py add-subtask \
  02-27-user-login \
  02-28-jwt-middleware

# Unlink (does not delete either task)
./.trellis/scripts/task.py remove-subtask \
  02-27-user-login 02-28-jwt-middleware
```
Effects on `task.json`:
- Parent’s `children: [<child-dir-name>, ...]` gets the child appended.
- Child’s `parent: "<parent-dir-name>"` gets set.
task.py list renders children indented under their parent and shows [done/total done] so you can see progress at a glance.
Parent-child links use the parent and children fields. The subtasks field that also appears in task.json is unrelated — it’s a checklist of to-do items within a single task (name + status pairs), populated mainly by the bootstrap task. Don’t confuse the two.
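For illustration, a linked pair might carry the following fields. Values are hypothetical, and the exact shape of `subtasks` items (name + status pairs) is an assumption based on the description above:

```jsonc
// parent 02-27-user-login/task.json (excerpt)
{
  "children": ["02-28-jwt-middleware"],
  "parent": null,
  "subtasks": [{ "name": "draft prd", "status": "done" }]  // in-task checklist, unrelated to children
}

// child 02-28-jwt-middleware/task.json (excerpt)
{
  "children": [],
  "parent": "02-27-user-login",
  "subtasks": []
}
```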
Task Management
```bash
# List active tasks
./.trellis/scripts/task.py list
./.trellis/scripts/task.py list --mine            # Only your own
./.trellis/scripts/task.py list --status review   # Filter by status

# Archive completed tasks
./.trellis/scripts/task.py archive user-login
# Moves to archive/2026-02/

# List archived tasks
./.trellis/scripts/task.py list-archive
./.trellis/scripts/task.py list-archive 2026-02   # Filter by month
```
task.json Schema
The exact shape task.py create writes today (see .trellis/scripts/common/task_store.py):
```json
{
  "id": "02-27-user-login",
  "name": "user-login",
  "title": "Add user login",
  "description": "Implement JWT login flow",
  "status": "planning",
  "dev_type": null,
  "scope": null,
  "package": null,
  "priority": "P1",
  "creator": "alice",
  "assignee": "alice",
  "createdAt": "2026-02-27",
  "completedAt": null,
  "branch": null,
  "base_branch": "main",
  "worktree_path": null,
  "commit": null,
  "pr_url": null,
  "subtasks": [],
  "children": [],
  "parent": null,
  "relatedFiles": [],
  "notes": "",
  "meta": {}
}
```
Fields get populated over time:
- `dev_type` / `scope` / `package` → set via `task.py set-*` or `init-context`
- `branch` → set via `task.py set-branch`
- `status` → transitions `planning` → `in_progress` → `completed`
- `completedAt` → set by `task.py archive` (archive does NOT write the commit hash back)
- `parent` / `children` → set via `task.py create --parent` / `add-subtask`

`worktree_path` / `commit` / `pr_url` are schema placeholders only; no 0.5 script populates them. Store commit hashes or PR URLs under `meta: {}`, or write them back from an `after_archive` hook.
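A minimal `after_archive` hook that writes the current commit hash into `meta` could look like this. This is a sketch: the `record_commit` helper and the `meta` key name are our own choices, not part of Trellis.

```python
#!/usr/bin/env python3
"""Sketch of an after_archive hook: record the HEAD commit in task.json's meta."""
import json
import os
import subprocess
import sys

def record_commit(task_json_path: str, commit_hash: str) -> None:
    """Load task.json, stash the hash under meta, and write it back."""
    with open(task_json_path) as f:
        task = json.load(f)
    task.setdefault("meta", {})["commit"] = commit_hash  # key name is our choice
    with open(task_json_path, "w") as f:
        json.dump(task, f, indent=2)

if __name__ == "__main__" and "TASK_JSON_PATH" in os.environ:
    head = subprocess.run(["git", "rev-parse", "HEAD"],
                          capture_output=True, text=True).stdout.strip()
    record_commit(os.environ["TASK_JSON_PATH"], head)
    # Hooks should log to stderr; stdout is captured and not shown
    print("recorded commit", head, file=sys.stderr)
```

Registered under `hooks: after_archive:` in `.trellis/config.yaml`, it receives `TASK_JSON_PATH` pointing at the archived `task.json`.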
Older tasks created before a field existed may be missing some keys (e.g. tasks created pre-package support won’t have "package"); task.py treats missing fields as null, so nothing breaks.
Status transitions:
- `task.py create` → `status: "planning"`
- `task.py start` → only writes `.trellis/.current-task`; does NOT change status
- (manual / future) → `status: "in_progress"`
- `task.py archive` → `status: "completed"` + move to `archive/`
planning / in_progress / completed are the default set, aligned with the three phases in workflow.md. Note: task.py start does not flip status to in_progress — the per-turn workflow-state breadcrumb will still say planning (so the AI runs brainstorm / before-dev) until someone edits task.json.status manually (or an after_start hook does it). task.py list --status also accepts review as a filter for custom workflows that introduce it; add any additional statuses you need by writing a matching [workflow-state:<name>] block in workflow.md.
JSONL Context Configuration in Practice
Auto-Generated Default Configuration
task.py init-context generates minimal JSONL files based on dev_type. The default is deliberately thin; it points sub-agents at the workflow contract and the relevant spec index, and relies on the skill (trellis-before-dev, trellis-check) to pull in the specific guideline files referenced from those indexes.
Single-repo, dev_type=backend:
```jsonl
# implement.jsonl
{"file": ".trellis/workflow.md", "reason": "Project workflow and conventions"}
{"file": ".trellis/spec/backend/index.md", "reason": "Backend development guide"}

# check.jsonl (paths resolve to the active platform's command/skill location)
{"file": ".claude/commands/trellis/finish-work.md", "reason": "Finish work checklist"}
{"file": ".claude/commands/trellis/check.md", "reason": "Code quality check spec"}
```
Monorepo projects substitute .trellis/spec/<package>/... in the implement side. dev_type=frontend points at spec/frontend/index.md; dev_type=fullstack includes both; dev_type=test uses the backend index; dev_type=docs only includes workflow.md.
Add extra context (task-specific code patterns, cross-layer guides, etc.) via task.py add-context; the defaults stay lean on purpose.
Adding Custom Context
```bash
# Add existing code as reference (file)
./.trellis/scripts/task.py add-context "$TASK_DIR" implement \
  "src/services/user.ts" "Existing user service patterns"

# Add an entire directory (auto-reads all .md files)
./.trellis/scripts/task.py add-context "$TASK_DIR" implement \
  "src/services/" "Existing service patterns"

# Add custom check context
./.trellis/scripts/task.py add-context "$TASK_DIR" check \
  ".trellis/spec/guides/cross-layer-thinking-guide.md" "Cross-layer verification"
```
Task Lifecycle Hooks
You can configure shell commands that run automatically when task lifecycle events occur. This enables integrations like syncing tasks to Linear, posting to Slack, or triggering CI pipelines.
Configuration
Add a hooks block to .trellis/config.yaml:
```yaml
hooks:
  after_create:
    - 'python3 .trellis/scripts/hooks/linear_sync.py create'
  after_start:
    - 'python3 .trellis/scripts/hooks/linear_sync.py start'
  after_finish:
    - "echo 'Task finished'"
  after_archive:
    - 'python3 .trellis/scripts/hooks/linear_sync.py archive'
```
The default `config.yaml` ships with the hooks section commented out. Uncomment and edit to activate.
Supported Events
| Event | Fires When | Use Case |
|---|---|---|
| after_create | task.py create completes | Create linked issue in project tracker |
| after_start | task.py start sets the current task | Update issue status to “In Progress” |
| after_finish | task.py finish clears the current task | Notify team, trigger review |
| after_archive | task.py archive moves the task | Mark issue as “Done” |
Environment Variables
Each hook receives:
| Variable | Value |
|---|---|
| TASK_JSON_PATH | Absolute path to the task’s task.json |
All other environment variables from the parent process are inherited.
Execution Behavior
- **Working directory**: Repository root
- **Shell**: Commands run through the system shell (`shell=True`)
- **Failures don’t block**: A failing hook prints a `[WARN]` message to stderr but does not prevent the task operation from completing
- **Sequential**: Multiple hooks per event execute in list order; a failure in one does not skip the rest
- **stdout captured**: Hook stdout is not displayed to the user; use stderr for diagnostic output
The `after_archive` hook receives `TASK_JSON_PATH` pointing to the archived location (e.g., `.trellis/tasks/archive/2026-03/task-name/task.json`), not the original path.
Example: Linear Sync Hook
Trellis ships with an example hook at .trellis/scripts/hooks/linear_sync.py that syncs task lifecycle events to Linear.
What it does:
| Action | Trigger | Effect |
|---|---|---|
| create | after_create | Creates a Linear issue from task.json (title, priority, assignee, parent) |
| start | after_start | Updates the linked issue to “In Progress” |
| archive | after_archive | Updates the linked issue to “Done” |
| sync | Manual | Pushes prd.md content to the Linear issue description |
Prerequisites:
- Install the `linearis` CLI and set `LINEAR_API_KEY`
- Create `.trellis/hooks.local.json` (gitignored) with your team config:
```json
{
  "linear": {
    "team": "ENG",
    "project": "My Project",
    "assignees": {
      "alice": "linear-user-id-for-alice"
    }
  }
}
```
The hook writes the Linear issue identifier back to task.json under meta.linear_issue (e.g., "ENG-123"), making subsequent events idempotent.
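The idempotency mechanism is simple to sketch: skip creation when `meta.linear_issue` is already present. Illustrative only; the helper name is ours, and the real logic lives in `linear_sync.py`:

```python
def should_create_issue(task: dict) -> bool:
    """Only create a Linear issue if the task has no linked identifier yet.

    After the first after_create run writes meta.linear_issue back to
    task.json, subsequent create events become no-ops.
    """
    return not task.get("meta", {}).get("linear_issue")
```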
Writing Specs
Spec Directory Structure and Layering
Default layout from trellis init
trellis init writes a skeleton with frontend/ + backend/ + guides/, all filled with empty placeholder templates marked “(To be filled by the team)”. The templates are not ready to inject into sub-agents as-is.
```text
.trellis/spec/
├── frontend/                      # Frontend specs (placeholders)
│   ├── index.md                   # Index: lists all specs and their status
│   ├── component-guidelines.md    # Component specs
│   ├── hook-guidelines.md         # Hook specs
│   ├── state-management.md        # State management
│   ├── type-safety.md             # Type safety
│   ├── quality-guidelines.md      # Quality guidelines
│   └── directory-structure.md     # Directory structure
│
├── backend/                       # Backend specs (placeholders)
│   ├── index.md
│   ├── database-guidelines.md
│   ├── error-handling.md
│   ├── logging-guidelines.md
│   ├── quality-guidelines.md
│   └── directory-structure.md
│
└── guides/                        # Thinking guides
    ├── index.md
    ├── cross-layer-thinking-guide.md
    └── code-reuse-thinking-guide.md
```
Running `trellis init` also creates a bootstrap task (`00-bootstrap-guidelines`). On the first `/trellis:start`, the AI detects it, runs `trellis-research` to read your actual codebase, then fills the placeholders with specs grounded in the real project (tech stack, conventions, directory shape). Skip this task and you’ll be handing empty scaffolds to every sub-agent — don’t.
The layout is only a convention
frontend/ and backend/ are not special. Trellis discovers spec layers by scanning one level under .trellis/spec/ for any directory that contains an index.md. Name them after how your project actually splits — by runtime, by package, by responsibility — as long as each layer has its own index.md.
Trellis itself uses a different shape (monorepo, per-package):
```text
.trellis/spec/                     # Trellis's own spec tree
├── cli/                           # Package: CLI
│   ├── backend/
│   │   └── index.md               # ← layer registered via index.md
│   └── unit-test/
│       └── index.md               # ← another layer
│
├── docs-site/                     # Package: docs site
│   └── docs/
│       └── index.md               # ← single-layer package
│
└── guides/                        # Cross-package thinking guides
    ├── index.md
    ├── cross-layer-thinking-guide.md
    ├── cross-platform-thinking-guide.md
    └── code-reuse-thinking-guide.md
```
No frontend/ or backend/ at the top level, because the repo is structured by package. The only contract Trellis enforces is “a layer is a directory with index.md”; everything else is up to your project.
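The stated contract ("a layer is a directory with index.md") can be sketched as a walk for `index.md` files. Since the Trellis tree above nests layers two levels deep, the sketch walks recursively; this is an illustration of the rule, not Trellis's actual scanner:

```python
from pathlib import Path

def discover_layers(spec_root: str) -> list[str]:
    """Return every directory under spec_root that contains an index.md."""
    root = Path(spec_root)
    return sorted(
        str(p.parent.relative_to(root))
        for p in root.rglob("index.md")
    )
```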
From Empty Templates to Complete Specs
trellis init generates empty templates marked “(To be filled by the team)”. Here’s how to fill them:
Step 1: Extract patterns from actual code
```bash
# See how existing code is organized
ls src/components/   # Component structure
ls src/services/     # Service structure
```
Step 2: Write down your conventions
```markdown
# Component Guidelines

## File Structure
- One component per file
- Use PascalCase for filenames: `UserProfile.tsx`
- Co-locate styles: `UserProfile.module.css`
- Co-locate tests: `UserProfile.test.tsx`

## Patterns

#### Required
- Functional components + hooks (no class components)
- TypeScript with explicit Props interface
- `export default` for page components, named export for shared

#### Forbidden
- No `any` type in Props
- No inline styles (use CSS Modules)
- No direct DOM manipulation
```
Step 3: Add code examples
#### Good Example
```tsx
interface UserProfileProps {
userId: string;
onUpdate: (user: User) => void;
}
export function UserProfile({ userId, onUpdate }: UserProfileProps) {
// ...
}
```
#### Bad Example
```tsx
// Don't: no Props interface, using any
export default function UserProfile(props: any) {
// ...
}
```
Step 4: Update index.md status
| Guideline | File | Status |
|-----------|------|--------|
| Component Guidelines | component-guidelines.md | **Filled** |
| Hook Guidelines | hook-guidelines.md | To fill |
What a Spec Should Look Like
The trellis-update-spec skill treats specs as executable contracts, not principle text. Every entry that sub-agents read at trellis-implement / trellis-check time should tell AI how to implement safely — concrete signatures, contracts, cases, and tests — or, if it’s about how to think, belong in guides/ instead.
Code-Spec vs Guide
| Type | Location | Purpose | Content style |
|---|---|---|---|
| Code-Spec | <layer>/*.md (e.g., backend/, cli/backend/) | “How to implement safely” | Signatures, contracts, validation matrix, good/base/bad cases, required tests |
| Guide | guides/*.md | “What to think about before writing” | Checklists, questions, pointers into specs |
If you’re writing “don’t forget to check X”, put it in a guide. If you’re writing “X accepts {field: type, ...} and returns {...}; here are the error cases and the required tests”, put it in a code-spec.
Pick the right update shape
trellis-update-spec ships several templates; pick the one that matches what you learned:
| You learned… | Template | Key fields |
|---|---|---|
| Why we picked approach X over Y | Design Decision | Context, Options Considered, Decision, Example, Extensibility |
| The project does X this way | Convention | What, Why, Example, Related |
| A reusable solution to a recurring problem | Pattern | Problem, Solution, Example (Good + Bad), Why |
| An approach that causes trouble | Forbidden Pattern (Don't) | Problem snippet, Why it’s bad, Instead snippet |
| An easy-to-make error | Common Mistake | Symptom, Cause, Fix, Prevention |
| Non-obvious behavior | Gotcha | > Warning: blockquote with when/how |
When the change touches a command / API signature, a cross-layer request-response contract, a DB schema, or infra wiring (storage, queue, cache, secrets, env), the skill requires all seven sections:
- Scope / Trigger — why this demands code-spec depth
- Signatures — command / API / DB signature(s)
- Contracts — request fields, response fields, env keys (name, type, constraint)
- Validation & Error Matrix — `<condition>` → `<error>` table
- Good / Base / Bad Cases — example inputs with expected outcome
- Tests Required — unit / integration / e2e with assertion points
- Wrong vs Correct — at least one explicit pair
Skip any of these and the skill prompts you to fill them; that’s the “executable contract” bar.
Concrete contrast
A good Convention entry (backend/database-guidelines.md):
#### Convention: Use ORM batch methods, never loop single-row DB calls
**What**: For any collection of N rows, call the ORM's batch method (`createMany`, `updateMany`, `deleteMany`) once. Never wrap a single-row `create` / `update` / `delete` in a `for` / `Promise.all` loop.
**Why**: Each DB call is a round-trip. In production, a 200-item loop inside a request handler is how p99 latency silently grows from 50ms to 8s — we've already caught this twice in code review (PRs #312, #417). Batch methods collapse N round-trips into one statement and let the DB plan the write.
**Example**:
```ts
// ✅ Correct — one round-trip
await prisma.user.createMany({ data: users });
// ❌ Wrong — N round-trips
for (const user of users) {
await prisma.user.create({ data: user });
}
// ❌ Also wrong — still N round-trips, just parallel
await Promise.all(users.map(user => prisma.user.create({ data: user })));
```
**When batch is not available**: wrap the loop in a single transaction (`prisma.$transaction`) so it's at least one logical unit; add a comment explaining why batch wasn't usable.
**Related**: `quality-guidelines.md#performance`, `error-handling.md#transactions`.
A bad spec entry — no signature, no example, no why, no test point:
#### Database
- Use good query patterns
- Be careful with SQL
- Follow best practices
An over-specified spec — mechanical rules with no reasoning, stifles judgment:
#### Variable Naming
- All boolean variables must start with `is` or `has`
- All arrays must end with `List`
- All functions must be less than 20 lines
- All files must be less than 200 lines
The bar: specific, actionable, with a code example, with a stated why, and — for code-specs — with enough signature / contract detail that a sub-agent can act on it without asking follow-up questions.
Bootstrap Guided Initial Fill
trellis init automatically creates a bootstrap guide task (00-bootstrap-guidelines). The AI detects it during the first /trellis:start and guides you through filling in the blank spec files.
During this guided task, the AI analyzes your codebase, extracts existing patterns, and auto-fills spec templates.