# Setup
Step-by-step instructions for AI agents to install Mnemexa's intelligent memory. If you're a human, you can also follow these steps manually — or just run

```shell
npx @mnemexa/mcp
```

in your terminal.
Identify your runtime, then follow the matching option below. Each path ends at Agent Protocol → Verify → Report.
Copy the block below and paste it into your agent's system prompt / rules file. The MCP installer (npx @mnemexa/mcp) does this automatically for Claude Code, Cursor, Windsurf, and OpenClaw — but for any other runtime (Claude Desktop, VS Code Copilot, internal agent frameworks, custom builds), you must paste it yourself.
## Mnemexa Memory — Mandatory Protocol
You have Mnemexa shared workspace memory via the `brain.*` tools. This is NOT
optional. Failure to use these tools wastes the user's tokens and money — they pay for
memory specifically so you don't re-derive context from scratch every conversation.
### Hard rules (non-negotiable)
1. **First action of every conversation:** call `brain.recall` with a query that
summarizes the user's current ask. Do this BEFORE responding, even for greetings.
If recall returns relevant memories, use them. If empty, proceed normally — but
you MUST have made the call.
2. **Save on signal — call `brain.remember` immediately when the user:**
- States a preference ("I prefer...", "I always...", "I never...", "use X not Y")
- Shares a fact about themselves, their team, their company, their project
- Makes a decision ("we're going with X", "the plan is Y", "we picked Z")
- Reports a deadline, constraint, incident, or stakeholder
- Corrects you ("no, actually...", "stop doing X", "that's wrong because...")
- Confirms a non-obvious approach worked ("yes, that was the right call")
3. **Before saying "I don't know" or "I don't have context":** call `brain.recall`
first. Only after recall returns empty should you ask the user.
4. **For broad / synthesis questions** ("what do we know about X?", "summarize Y",
"give me everything about Z"): use `brain.answer` instead of `brain.recall` —
it reasons across multiple memories.
5. **To verify the connection:** `brain.status` returns the workspace ID and health.
### Do NOT save
- Greetings, small talk, transient debugging output
- Code you just wrote (the diff is the record, not memory)
- Passwords, API keys, tokens, credit card numbers, secrets of any kind
- Information the user explicitly says is one-off or temporary
### Why this matters
Memory is shared across every agent on the workspace. One agent learns → every agent
knows on its very next call. Skipping these tools means every conversation starts
from zero, costs the user more tokens, and produces lower-quality answers. Use them.
Where to paste it:
- Claude Code, Cursor, Windsurf, OpenClaw: `npx @mnemexa/mcp` — nothing to do; the installer pastes it for you.
- VS Code Copilot: create `.github/copilot-instructions.md` in your project root and paste the block.
- Any other runtime: paste the block into the agent's system prompt / rules file by hand.

Verify it's working:
In one session, say: "Remember that I prefer dark mode." Start a fresh session and ask: "What do you know about my preferences?" If your agent answers correctly, memory is live. If it doesn't, the protocol isn't loaded — re-paste and restart the agent.
Run the interactive installer (`npx @mnemexa/mcp`). It auto-detects your IDE and configures everything.
For a silent/non-interactive install (useful if you are an AI agent running this), replace YOUR_API_KEY with the actual key from the user's Mnemexa dashboard.
The installer writes claude_desktop_config.json on macOS, Linux, and Windows — so the MCP server is wired up automatically. But Claude Desktop has no rules file, so the Agent Protocol still has to be pasted into Settings → Profile (or per-Project Custom Instructions) by hand. After both steps, fully quit and relaunch the app.
If the automated installer doesn't work, or you prefer manual configuration:
Ask the user for their Mnemexa API key. They can find it at app.mnemexa.com → Workspace → API Keys. If they don't have an account, direct them to sign up (free, no credit card).
Create the config directory and file for your runtime: pick the block below that matches and add it to the file shown.
Note: adding the server to `~/.claude/settings.json` or `~/.cursor/mcp.json` does not expose it to OpenClaw agents — OpenClaw reads only its own `~/.openclaw/openclaw.json`. If you're using OpenClaw, skip to the OpenClaw block below.
`claude_desktop_config.json` (path differs by OS):
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
```
After saving, fully quit Claude Desktop (Cmd-Q on macOS, right-click tray → Quit on Windows/Linux) and relaunch. Closing the window alone is not enough — the MCP host only loads on cold start. Then paste the Agent Protocol into Settings → Profile so the agent actually uses the tools.
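The macOS variant of this manual step can be sketched in the shell (use the Linux or Windows path from the list above as appropriate; note this sketch overwrites any existing config file, so merge by hand if you already have one):

```shell
# Sketch (macOS path): create the Claude Desktop config dir and write the block.
# WARNING: overwrites an existing config file; merge by hand if you have one.
CONFIG_DIR="$HOME/Library/Application Support/Claude"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/claude_desktop_config.json" <<'EOF'
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
EOF
# Sanity-check the JSON before relaunching the app
python3 -m json.tool "$CONFIG_DIR/claude_desktop_config.json" > /dev/null && echo "config OK"
```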
`~/.claude/settings.json` or project `.mcp.json`:

```json
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
```
`~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
```
`~/.codeium/windsurf/mcp_config.json`:

```json
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
```
`.vscode/mcp.json` in your project root:

```json
{
  "mcpServers": {
    "mnemexa": {
      "command": "npx",
      "args": ["-y", "@mnemexa/mcp"]
    }
  }
}
```
`~/.openclaw/openclaw.json` (different shape — note the nested `mcp.servers` key):

```json
{
  "mcp": {
    "servers": {
      "mnemexa": {
        "command": "npx",
        "args": ["-y", "@mnemexa/mcp"],
        "env": {
          "MNEMEXA_API_KEY": "YOUR_API_KEY"
        }
      }
    }
  }
}
```
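The same step as a shell sketch, with a quick check that the entry landed under the nested mcp.servers path rather than a top-level mcpServers key (again, this overwrites an existing file, so merge by hand if you already have one):

```shell
# Sketch: write ~/.openclaw/openclaw.json with the nested mcp.servers shape.
# WARNING: overwrites an existing file; merge by hand if you already have one.
mkdir -p "$HOME/.openclaw"
cat > "$HOME/.openclaw/openclaw.json" <<'EOF'
{
  "mcp": {
    "servers": {
      "mnemexa": {
        "command": "npx",
        "args": ["-y", "@mnemexa/mcp"],
        "env": { "MNEMEXA_API_KEY": "YOUR_API_KEY" }
      }
    }
  }
}
EOF
# Confirm the server sits under mcp.servers, not top-level mcpServers
python3 -c 'import json, os
cfg = json.load(open(os.path.expanduser("~/.openclaw/openclaw.json")))
assert "mnemexa" in cfg["mcp"]["servers"]
print("shape OK")'
```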
After saving, restart the OpenClaw gateway so the new server is picked up.
Multi-agent swarms: use the same
MNEMEXA_API_KEY across every agent so they share one
workspace memory pool. Agents with their own per-agent
mcpServers override must list mnemexa
explicitly — overrides do not inherit from the global block.
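As an illustration only (the top-level `agents` key and the agent name `researcher` here are hypothetical; the real per-agent override shape depends on your OpenClaw version), the point is that an override must repeat the mnemexa entry rather than rely on the global block:

```json
{
  "agents": {
    "researcher": {
      "mcpServers": {
        "mnemexa": {
          "command": "npx",
          "args": ["-y", "@mnemexa/mcp"],
          "env": { "MNEMEXA_API_KEY": "YOUR_API_KEY" }
        }
      }
    }
  }
}
```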
IDEs (Claude Code, Cursor, Windsurf, VS Code): restart the IDE, then ask your AI: "What is your Mnemexa status?"
The AI should use the brain.status tool and report a successful connection.
OpenClaw: a working machine-level MCP install (e.g. mcporter call 'mnemexa.brain.status()' succeeding) does not prove OpenClaw can see the server. Verify against OpenClaw's own registry.
Then trigger a real tool call through an OpenClaw agent (not just mcporter) — e.g. ask the agent "What is your Mnemexa status?" and confirm it invokes brain.status and returns a successful response.
For sandboxed agents, custom frameworks, or environments without MCP support, skip the installer entirely and call the Mnemexa REST API with the user's API key. The brain.* tools map 1:1 to the four endpoints below.
| Tool | Method & path | Body |
|---|---|---|
| brain.remember | POST /v1/memory/store | { "text": "...", "meta": {} } |
| brain.recall | POST /v1/memory/retrieve | { "query": "...", "top_k": 5 } |
| brain.answer | POST /v1/memory/reason | { "query": "...", "top_k": 8 } |
| brain.health | GET /v1/optimize/health | — |
Base URL: `https://api.mnemexa.com`

```shell
curl -X POST https://api.mnemexa.com/v1/memory/store \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "User prefers dark mode and Tailwind CSS."}'
```
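A matching retrieval call, mirroring the brain.recall row in the table above (YOUR_API_KEY is the same placeholder; the query text is just an example):

```shell
curl -X POST https://api.mnemexa.com/v1/memory/retrieve \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "UI preferences", "top_k": 5}'
```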
Even with direct HTTP, the protocol prompt is still required — it's what tells the agent when to call these endpoints. Copy it from the Agent Protocol section above and include it in your agent's system prompt.
- 401 — Bad / expired API key; ask the user to regenerate at app.mnemexa.com.
- 403 — Workspace suspended or not yet active; tell the user to check status.
- 429 — Rate limit / unit cap reached; respect the Retry-After header.

Once connected, the AI agent gets these MCP tools:
| Tool | What it does |
|---|---|
| brain.remember | Save important information — auto-scored, deduplicated, categorized |
| brain.recall | Semantic search over memory — returns ranked, scored results |
| brain.answer | Reason across multiple memories to synthesize a direct answer |
| brain.health | Memory quality report — health score, total count, stale signals |
| brain.status | Live connection check — confirms the workspace is reachable |
After setup, run the verification prompts above with your AI to confirm everything works.
After completing setup, report the following to the user:
- brain.status returned a successful connection

Need help? Contact us • Full API docs • npm package