Local AI memory - no cloud, no API keys. Your AI remembers across sessions, 100% locally.
Install as an OpenCode plugin:

```bash
# npm
npm install opencode-superlocalmemory

# pnpm
pnpm add opencode-superlocalmemory
```

Add to `~/.config/opencode/opencode.json`:

```json
{
  "plugin": ["opencode-superlocalmemory"]
}
```

Restart OpenCode. Done.
To use the core library directly:

```bash
# npm
npm install @superlocalmemory/core

# pnpm
pnpm add @superlocalmemory/core
```

```typescript
import { createMemoryStore } from "@superlocalmemory/core";

const store = await createMemoryStore();

// Store a memory, then search it semantically
await store.add("User prefers dark mode", "user_tag", { type: "preference" });
const results = await store.search("preferences", "user_tag");
```

To run the MCP server:

```bash
cd supermemory-local
pnpm build
node packages/mcp/dist/index.js
```

Add to Claude Desktop config:
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/supermemory-local/packages/mcp/dist/index.js"]
    }
  }
}
```

For Docker:

```bash
cd docker
docker compose up -d
# API at http://localhost:3333
```

On first message, your AI receives:
```text
[SUPERLOCALMEMORY CONTEXT]
## User Profile
- Prefers concise responses
- Expert in TypeScript

## Project Context
- Uses pnpm, not npm
- Build command: pnpm build
[/SUPERLOCALMEMORY CONTEXT]
```

This happens automatically - no prompting needed.
Say "remember", "save this", or "don't forget" and the AI auto-saves:

```text
You: "Remember that this project uses bun"
AI: [saves to project memory]
```
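A minimal sketch of this kind of trigger detection (the trigger list and function name here are illustrative assumptions, not the plugin's actual matching logic):

```typescript
// Hypothetical trigger phrases -- the plugin's real list may differ.
const SAVE_TRIGGERS: RegExp[] = [/\bremember\b/i, /\bsave this\b/i, /\bdon't forget\b/i];

// Returns true when a user message should be auto-saved as a memory.
function shouldAutoSave(message: string): boolean {
  return SAVE_TRIGGERS.some((pattern) => pattern.test(message));
}
```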
Everything stays on your machine. No API calls, no cloud storage.
Use `<private>` tags to prevent sensitive data from being stored:

```text
My API key is <private>sk-abc123</private>
```

Content in `<private>` tags is replaced with `[REDACTED]` before saving.
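The redaction step can be sketched as a single regex replacement (illustrative; the plugin's internal implementation may differ):

```typescript
// Replace every <private>...</private> span with [REDACTED] before a memory is stored.
function redactPrivate(content: string): string {
  return content.replace(/<private>[\s\S]*?<\/private>/g, "[REDACTED]");
}
```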
When context usage hits 80% of the model's limit:
- Injects project memories into compaction prompt
- Triggers OpenCode's summarization
- Saves session summary as a memory
This preserves context across long sessions.
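The trigger condition boils down to a ratio check against `compactionThreshold` (function and parameter names here are illustrative):

```typescript
// Decide whether preemptive compaction should fire, given current token usage.
// The 0.8 default mirrors the compactionThreshold config option.
function shouldCompact(usedTokens: number, modelLimit: number, threshold = 0.8): boolean {
  return usedTokens / modelLimit >= threshold;
}
```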
The memory tool is available to your AI:
| Mode | Args | Description |
|---|---|---|
| `add` | `content, type?, scope?` | Store memory |
| `search` | `query, scope?, limit?` | Semantic search |
| `list` | `scope?, limit?` | List memories |
| `delete` | `memoryId` | Remove memory |
| `profile` | - | View user facts |
| `help` | - | Show commands |
Scopes:

- `user` - Cross-project (preferences, patterns)
- `project` - Project-specific (default)

Types:

- `preference` - User preferences
- `project-config` - Project settings
- `architecture` - Design decisions
- `error-solution` - Bug fixes
- `learned-pattern` - Code patterns
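A memory record combining these scopes and types might look like this (field names are an illustrative sketch, not the library's exact on-disk schema):

```typescript
type MemoryScope = "user" | "project";
type MemoryType =
  | "preference"
  | "project-config"
  | "architecture"
  | "error-solution"
  | "learned-pattern";

// Illustrative shape of a stored memory.
interface MemoryRecord {
  id: string;
  content: string;
  scope: MemoryScope;
  type: MemoryType;
  createdAt: string; // ISO timestamp
}

const example: MemoryRecord = {
  id: "mem_123abc",
  content: "User prefers dark mode",
  scope: "user",
  type: "preference",
  createdAt: new Date().toISOString(),
};
```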
```bash
# Save a preference
memory mode:add content:"User prefers dark mode" scope:user type:preference

# Search memories
memory mode:search query:"build commands"

# List project memories
memory mode:list scope:project limit:10

# Delete a memory
memory mode:delete memoryId:mem_123abc
```
| Component | Tech | Purpose |
|---|---|---|
| Vector DB | Orama | Fast embedded search |
| Embeddings | Transformers.js | Local ML, no API |
| Storage | JSON file | ~/.superlocalmemory/memories.json |
First query downloads the embedding model (~30MB). Subsequent queries are instant.
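Semantic search compares the query's embedding against each stored memory's embedding; results below `similarityThreshold` are dropped. A cosine-similarity sketch (illustrative, not the library's internal code):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Keep only results at or above the configured threshold (default 0.6).
function filterByThreshold<T>(hits: { item: T; score: number }[], threshold = 0.6) {
  return hits.filter((h) => h.score >= threshold);
}
```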
Create `~/.config/opencode/superlocalmemory.json`:

```json
{
  "dataPath": "~/.superlocalmemory",
  "similarityThreshold": 0.6,
  "maxMemories": 5,
  "maxProjectMemories": 10,
  "compactionThreshold": 0.8,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2",
  "debug": false
}
```

| Option | Default | Description |
|---|---|---|
| `dataPath` | `~/.superlocalmemory` | Where memories are stored |
| `similarityThreshold` | `0.6` | Min similarity for search results |
| `maxMemories` | `5` | Max memories injected per request |
| `maxProjectMemories` | `10` | Max project memories listed |
| `compactionThreshold` | `0.8` | Context usage ratio that triggers compaction |
| `embeddingModel` | `Xenova/all-MiniLM-L6-v2` | Local embedding model (or `"none"`) |
| `debug` | `false` | Enable debug logging to `~/.superlocalmemory.log` |
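A sketch of how such a config might be merged over its defaults (illustrative; not the plugin's actual loader):

```typescript
interface Config {
  dataPath: string;
  similarityThreshold: number;
  maxMemories: number;
  maxProjectMemories: number;
  compactionThreshold: number;
  embeddingModel: string;
  debug: boolean;
}

// Defaults mirror the table above.
const DEFAULTS: Config = {
  dataPath: "~/.superlocalmemory",
  similarityThreshold: 0.6,
  maxMemories: 5,
  maxProjectMemories: 10,
  compactionThreshold: 0.8,
  embeddingModel: "Xenova/all-MiniLM-L6-v2",
  debug: false,
};

// Merge user overrides onto the defaults; omitted keys fall back.
function loadConfig(overrides: Partial<Config> = {}): Config {
  return { ...DEFAULTS, ...overrides };
}
```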
```bash
pnpm install
pnpm build
pnpm test
```

| Feature | superlocalmemory | supermemory |
|---|---|---|
| Privacy | 100% local | Cloud API |
| API Key | Not needed | Required |
| Cost | Free | Paid |
| Setup | Clone & run | Install + signup |
| Embeddings | Local (Transformers.js) | Cloud |
| Context injection | Yes | Yes |
| Keyword detection | Yes | Yes |
| Privacy tags | Yes | Yes |
| Preemptive compaction | Yes | Yes |
```text
packages/
  core/             # Memory engine (Orama + embeddings)
  mcp/              # MCP server (stdio + HTTP)
  opencode-plugin/  # OpenCode integration
docker/             # Docker deployment
```
MIT