mem-recall makes the AI invoke trellis mem whenever the user references past conversations: it retrieves content from the local Claude Code, Codex, and OpenCode session stores and answers with a session-id plus a verbatim quotation.
Trigger phrases include "last time", "we discussed", "what did I tell <Claude/Codex>", "find ... last week", 上次 ("last time"), 之前 ("previously"), and other references to prior dialogue.
Without the skill, the AI defaults to “I don’t have that context” or speculative answers. The skill’s frontmatter description field instructs the AI to run trellis mem in these cases, with a search → context two-step retrieval flow.
Prerequisites
| Tool | Purpose | Required |
|---|---|---|
| Trellis CLI 0.6.0-beta.0+ | Provides trellis mem | Required |
| Claude Code, Codex CLI, OpenCode | Source of past sessions | At least one |
npm install -g @mindfoldhq/trellis@beta
trellis --version # ≥ 0.6.0-beta.0
Install
npx skills add mindfold-ai/marketplace --skill mem-recall
Or install all marketplace skills:
npx skills add mindfold-ai/marketplace
| Flag | Description |
|---|---|
| -g | Install globally to ~/.claude/skills/ |
| -a claude-code | Target a specific agent |
| -y | Non-interactive mode |
Ask the AI which skills are available; mem-recall should appear in the list.
Trigger examples
No manual command needed. The following user messages trigger the skill:
- last time how did we solve the wait_agent deadlock in #240?
- which project did I discuss the plugin design in?
- find what I told Claude about memory architecture last week
- 上次我们怎么处理 #240 的来着? ("Last time, how did we handle #240 again?")
Retrieval flow
The skill instructs the AI to execute two steps.
Step 1 — Candidate search
trellis mem search "<keyword>" [--cwd <project>] [--since <date>]
Multi-token AND search across cleaned dialogue. Returns ranked sessions. Score formula: (3 × user_hits + assistant_hits) / total_turns. User-turn hits are weighted ×3 because user wording reflects topic intent more strongly than AI elaboration.
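The ranking formula above can be sketched as a small function. This is a hypothetical illustration of the documented formula, not the CLI's actual implementation:

```python
def session_score(user_hits: int, assistant_hits: int, total_turns: int) -> float:
    """Rank a session: user-turn keyword hits are weighted 3x over
    assistant-turn hits, normalized by conversation length."""
    if total_turns == 0:
        return 0.0
    return (3 * user_hits + assistant_hits) / total_turns

# A short session where the user mentioned the keyword twice outranks
# a long session with scattered assistant-side mentions.
print(session_score(user_hits=2, assistant_hits=1, total_turns=10))  # 0.7
print(session_score(user_hits=0, assistant_hits=5, total_turns=50))  # 0.1
```

The normalization by total_turns keeps long, rambling sessions from outranking short, on-topic ones.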
Step 2 — Content extraction
trellis mem context <session-id> --grep <keyword> --turns 3 --around 1
Returns the top-N hit turns plus surrounding context. Default character budget ≤6000, adjustable via --max-chars.
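The selection logic can be sketched roughly as follows. Function and parameter names are hypothetical; the CLI's real extraction may differ in detail:

```python
def extract_context(turns, keyword, top_n=3, around=1, max_chars=6000):
    """Pick up to top_n turns containing the keyword, plus `around`
    neighboring turns, then trim the result to the character budget."""
    hits = [i for i, t in enumerate(turns) if keyword.lower() in t.lower()][:top_n]
    keep = sorted({j for i in hits
                     for j in range(max(0, i - around),
                                    min(len(turns), i + around + 1))})
    selected, used = [], 0
    for i in keep:
        if used + len(turns[i]) > max_chars:
            break
        selected.append(turns[i])
        used += len(turns[i])
    return selected

turns = ["hello", "we hit a deadlock in wait_agent", "fixed by reordering locks", "bye"]
print(extract_context(turns, "deadlock"))
```

With --around 1 semantics, each hit turn carries one turn of context on either side, so the AI sees both the question and the resolution.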
Cleaning before search
trellis mem strips the following before searching, so hits reflect actual dialogue:
- prompt injections: <system-reminder>, <workflow-state>, <INSTRUCTIONS>, <environment_context>, etc.
- Codex AGENTS.md preamble (the first user message is dropped entirely)
- tool calls and tool results (only text blocks are retained)
- pre-compaction history (Claude isCompactSummary / Codex compacted events replace older turns with a summary)
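The injection-stripping step can be sketched like this. The tag list and function name are illustrative; the CLI may strip a different or larger set:

```python
import re

# Tags whose wrapped content is treated as prompt injection, not dialogue.
INJECTED_TAGS = ["system-reminder", "workflow-state", "INSTRUCTIONS", "environment_context"]

def clean_turn(text: str) -> str:
    """Drop injected <tag>...</tag> blocks before indexing a turn for search."""
    for tag in INJECTED_TAGS:
        text = re.sub(rf"<{tag}>.*?</{tag}>", "", text, flags=re.DOTALL)
    return text.strip()

print(clean_turn("<system-reminder>ignore</system-reminder>real question"))  # real question
```

Cleaning before indexing means a keyword match always points at something a human actually typed or read, not at injected scaffolding.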
Data sources
Reads local files directly. No daemon, no index, no upload.
| Platform | Storage |
|---|---|
| Claude Code | ~/.claude/projects/<sanitized-cwd>/*.jsonl |
| Codex | ~/.codex/sessions/YYYY/MM/DD/rollout-*.jsonl |
| OpenCode | ~/.local/share/opencode/storage/{session,message,part}/... |
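The table above translates into simple glob patterns over the home directory. This is a sketch for orientation only; the OpenCode pattern is abbreviated because the storage subtree has several subdirectories:

```python
from pathlib import Path

def session_globs(home: Path) -> dict:
    """Glob patterns for each platform's local session store,
    mirroring the paths in the data-sources table."""
    return {
        "claude-code": str(home / ".claude/projects/*/*.jsonl"),
        "codex": str(home / ".codex/sessions/*/*/*/rollout-*.jsonl"),
        "opencode": str(home / ".local/share/opencode/storage/session/**/*"),
    }

patterns = session_globs(Path.home())
print(patterns["codex"])
```

Because these are plain files, a search is just a scan of matching JSONL files; nothing needs to be running in the background.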
Out-of-scope use cases
| Need | Tool |
|---|---|
| Search code | Grep / Read |
| Search commit history | git log / gh |
| Search current-project files/docs | Read / Glob |
mem-recall is for AI conversation history only, not file or code search.
Performance
| Scope | Time |
|---|---|
| Project-scoped 3-week search | ~0.85s |
| Global, no time filter | ~3s |
Stateless: each invocation cold-reads from disk, and the OS page cache absorbs the IO, so warm and cold runs perform similarly.