Integrations
MCP Server
Expose Engram's tools directly inside your AI editor via the Model Context Protocol.
Overview
The Engram MCP server runs locally and exposes four tools to any MCP-compatible client:
- engram_query — retrieve facts from a project most relevant to a task description
- engram_extract — extract structured facts from a text snippet and merge them into a project
- engram_stats — return node counts and graph statistics for a project
- engram_list_projects — list all available projects
Your editor calls these tools automatically to gather context, or you can invoke them explicitly during a session. No API keys are required — the server reads your local ~/.engram/config.yaml directly.
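Under the hood, MCP clients invoke tools with JSON-RPC tools/call requests over stdio. You never write these by hand, but as a rough sketch, a client-side request for engram_query looks something like this (the task text is only an example):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "engram_query",
    "arguments": {
      "task": "resume the auth refactor; what did we decide about session storage?"
    }
  }
}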
Starting the server
Run the MCP server from your terminal:
engram mcp-serve
Or use the dedicated entry point (useful for process managers and editor launch configs):
engram-mcp
The server communicates over stdio by default, which is the mode all major editors expect. It stays running in the foreground — open a new terminal for other work, or configure your editor to launch it automatically (recommended).
Project detection: When no project is specified, Engram auto-detects the project from the current working directory passed by the editor. If you always work in one project, scope it explicitly with --project to skip detection.
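For example, to pin the server to a single project for its whole lifetime:

engram mcp-serve --project my-project

With the project pinned, auto-detection is skipped entirely for that server session.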
Claude Code
Add Engram to your Claude Code MCP config. Open or create ~/.claude/settings.json:
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"],
      "env": {}
    }
  }
}
Restart Claude Code. You'll see engram_query, engram_extract, engram_stats, and engram_list_projects listed among the available tools. Claude will call engram_query automatically at the start of sessions where memory is relevant.
Per-project config
To scope the server to a specific project, set it in the project-level .claude/settings.json:
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve", "--project", "my-project"]
    }
  }
}
Zero-friction hooks
Add hooks to ~/.claude/settings.json so Engram context is automatically injected before every prompt — no manual tool calls required.
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"]
    }
  },
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [{
          "type": "command",
          "command": "engram last-context --raw"
        }]
      }
    ],
    "Stop": [
      {
        "matcher": "",
        "hooks": [{
          "type": "command",
          "command": "engram extract --from-session --delta"
        }]
      }
    ]
  }
}
- UserPromptSubmit — injects the last queried Engram context before each prompt.
- Stop — extracts facts from the session automatically when Claude finishes responding.

Together these give you fully automatic memory with no manual intervention.
Agent instructions (CLAUDE.md)
Add to your project's CLAUDE.md or system prompt so Claude knows to actively use memory:
## Memory (Engram)
At the start of each session, call engram_query with a description of the current task.
After completing significant work or before ending a session, call engram_extract with
a summary of what was built, decided, or changed.
Do not write to MEMORY.md — use Engram tools instead.
Cursor
Open Cursor → Settings → MCP or create ~/.cursor/mcp.json:
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"]
    }
  }
}
Reload Cursor. The tools appear in the Model Context panel and are available to the Cursor AI in Composer and Chat.
Windsurf
Open Windsurf → Settings → Extensions → MCP and add:
{
  "engram": {
    "command": "engram",
    "args": ["mcp-serve"]
  }
}
Save and restart the AI engine. Engram tools will be available to Cascade.
Continue.dev
Open or create ~/.continue/config.yaml (global) or .continue/config.yaml in your project root:
mcpServers:
  - name: engram
    command: engram
    args:
      - mcp-serve
If you're using the JSON format (~/.continue/config.json), add an mcpServers array:
{
  "mcpServers": [
    {
      "name": "engram",
      "command": "engram",
      "args": ["mcp-serve"]
    }
  ]
}
Reload the Continue extension. The tools appear in the context panel and are available in Chat and Edit modes.
OpenClaw
Add to your openclaw.json configuration file:
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"],
      "env": {
        "ENGRAM_PROJECT": "my-project"
      }
    }
  }
}
OpenClaw uses a MEMORY.md convention by default. Engram replaces this with a persistent structured graph. To migrate:
- Import your existing MEMORY.md into Engram: engram extract my-project MEMORY.md (see the command sketch below)
- Update your agent instructions: replace "write to MEMORY.md" with "call engram_extract"
- Add a session-start instruction: "call engram_query with a description of the current task"
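A minimal migration sketch, assuming the target project is named my-project and MEMORY.md sits in your repo root:

engram extract my-project MEMORY.md
engram list

The first command imports the existing notes into the project graph; the second lists your projects with their node counts so you can confirm the import landed.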
See the full OpenClaw integration guide for a complete agent instructions template and MEMORY.md migration walkthrough.
Cline
Cline stores MCP config in VS Code's extension storage. Open or create:
~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json
On Windows: %APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
{
  "mcpServers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"],
      "env": {
        "ENGRAM_PROJECT": "my-project"
      }
    }
  }
}
Restart Cline. The MCP tools appear in the Cline panel. Cline will call engram_query automatically when you describe a task that references prior context.
Zed
Open your Zed settings (Zed → Settings → Open Settings, or Cmd+,) and add a context_servers block:
{
  "context_servers": {
    "engram": {
      "command": "engram",
      "args": ["mcp-serve"]
    }
  }
}
Save the settings file. Zed will start the Engram context server automatically and make its tools available in the Assistant panel.
Codex
OpenAI Codex CLI stores its MCP config at ~/.codex/config.yaml. Add an mcpServers block:
mcpServers:
  engram:
    command: engram
    args:
      - mcp-serve
    env:
      ENGRAM_PROJECT: my-project
Save the file and restart Codex. The engram_query, engram_extract, engram_stats, and engram_list_projects tools will appear in the Codex tool list.
To use the hosted API, add the optional env vars:
mcpServers:
  engram:
    command: engram
    args:
      - mcp-serve
    env:
      ENGRAM_PROJECT: my-project
      ENGRAM_API_KEY: your-api-key
      ENGRAM_REMOTE_URL: https://api.engram.unbidden.ai
Recommended agent instructions to add to your Codex system prompt or AGENTS.md:
## Memory (Engram)
At the start of each session, call engram_query with a description of the
current task to retrieve relevant context from prior sessions.
After completing significant work, call engram_extract with a summary of
what was built, decided, or changed. Engram will structure and store the
facts automatically.
Tool reference
engram_query
Retrieve facts relevant to a task description. This is the primary tool — call it at the start of a session or whenever you need to recall past decisions.
engram_query(task, project?, cwd?, hops?, top_k?)
| Param | Type | Default | Description |
|---|---|---|---|
| task | string | required | Natural-language task description or question |
| project | string | auto-detect | Project name to query. Auto-detected from cwd if omitted. |
| cwd | string | null | Working directory for project auto-detection |
| hops | number | 3 | BFS traversal depth for graph expansion |
| top_k | number | 25 | Maximum number of facts to return |
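An illustrative set of arguments (the task wording and project name are placeholders):

{
  "task": "picking up the billing work; what did we decide about invoice retries?",
  "project": "my-project",
  "hops": 2,
  "top_k": 10
}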
engram_extract
Extract structured facts from a text snippet and store them in the project graph. Use this at the end of a session to persist what was decided or learned.
engram_extract(text, project?, cwd?, source_name?, verify?)
| Param | Type | Default | Description |
|---|---|---|---|
| text | string | required | Raw text to extract facts from (transcript, notes, etc.) |
| project | string | auto-detect | Project to store facts in. Auto-detected from cwd if omitted. |
| cwd | string | null | Working directory for project auto-detection |
| source_name | string | "mcp_extract" | Label for the source (e.g. "session-2026-04-24") |
| verify | boolean | false | Run a second verification pass to catch missed facts |
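An illustrative end-of-session call (the text, project, and source name are placeholders):

{
  "text": "Moved invoice retries into the worker queue and capped backoff at 5 attempts. Decided against a separate retry service.",
  "project": "my-project",
  "source_name": "session-2026-04-24",
  "verify": true
}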
engram_stats
Return node counts and graph statistics for a project. Useful for confirming extraction succeeded or understanding what's in memory.
engram_stats(project?, cwd?)
| Param | Type | Default | Description |
|---|---|---|---|
| project | string | auto-detect | Project name. Auto-detected from cwd if omitted. |
| cwd | string | null | Working directory for project auto-detection |
engram_list_projects
List all projects in ~/.engram/projects/. Takes no parameters.
engram_list_projects()
Troubleshooting
Tools don't appear in my editor
Make sure engram is on your PATH. Run which engram in a terminal. If it's not found, reinstall with python3.13 -m pip install engram and confirm your Python bin directory is in PATH. Also make sure you are using Python 3.13 — Engram will not install correctly on 3.14+.
engram_query returns no results
The project may be empty or not yet initialized. Run engram list in a terminal to see what projects exist and how many nodes they contain. If the project is empty, run engram extract <project> <file> or engram onboard <project> to populate it.
engram_extract fails with model errors
Check that ~/.engram/config.yaml has a valid API key for your configured model. Gemini Flash requires a GEMINI_API_KEY env var or an api_key under the llm: section in config. See the configuration reference for details.
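A minimal sketch of the relevant section, assuming only that the llm block accepts an api_key (see the configuration reference for the exact key names):

llm:
  api_key: your-gemini-api-key   # or export GEMINI_API_KEY in the environment that launches the server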
Project auto-detection picks the wrong project
Auto-detection matches the current working directory to a project name by path fragment. Pass project explicitly to override it, or use engram mcp-serve --project my-project to pin a single project for the entire server session.