superlocalmemory

Local AI memory - no cloud, no API keys. Your AI remembers across sessions, 100% locally.

Installation

OpenCode Plugin

# npm
npm install opencode-superlocalmemory

# pnpm
pnpm add opencode-superlocalmemory

Add to ~/.config/opencode/opencode.json:

{
  "plugin": ["opencode-superlocalmemory"]
}

Restart OpenCode. Done.

Core Library (for custom integrations)

# npm
npm install @superlocalmemory/core

# pnpm
pnpm add @superlocalmemory/core

import { createMemoryStore } from "@superlocalmemory/core";

const store = await createMemoryStore();
await store.add("User prefers dark mode", "user_tag", { type: "preference" });
const results = await store.search("preferences", "user_tag");
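
Each search call returns the matching memories. A minimal sketch of consuming those results, assuming each hit exposes content and score fields (the field names are assumptions, not the documented API):

for (const hit of results) {
  // `content` and `score` are illustrative field names
  console.log(hit.score, hit.content);
}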

MCP Server (Claude Desktop, Cursor)

cd supermemory-local
pnpm build
node packages/mcp/dist/index.js

Add to Claude Desktop config:

{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/supermemory-local/packages/mcp/dist/index.js"]
    }
  }
}
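
The heading mentions Cursor as well. Cursor reads MCP servers from ~/.cursor/mcp.json (the exact location can vary by version); under that assumption, the entry is the same:

{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/supermemory-local/packages/mcp/dist/index.js"]
    }
  }
}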

Docker

cd docker
docker compose up -d
# API at http://localhost:3333

Features

Context Injection

On first message, your AI receives:

[SUPERLOCALMEMORY CONTEXT]
## User Profile
- Prefers concise responses
- Expert in TypeScript

## Project Context  
- Uses pnpm, not npm
- Build command: pnpm build
[/SUPERLOCALMEMORY CONTEXT]

This happens automatically - no prompting needed.
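A minimal sketch of how a plugin hook could assemble that block from the top search hits (the helper name, scope arguments, and the content field on results are assumptions, not the plugin's actual internals):

import { createMemoryStore } from "@superlocalmemory/core";

// Hypothetical helper: build the injected context block from relevant memories.
async function buildContext(query: string): Promise<string> {
  const store = await createMemoryStore();
  const userHits = await store.search(query, "user");       // cross-project memories
  const projectHits = await store.search(query, "project"); // project memories

  const bullets = (hits: any[]) =>
    hits.map((h) => `- ${h.content ?? JSON.stringify(h)}`).join("\n");

  return [
    "[SUPERLOCALMEMORY CONTEXT]",
    "## User Profile",
    bullets(userHits),
    "",
    "## Project Context",
    bullets(projectHits),
    "[/SUPERLOCALMEMORY CONTEXT]",
  ].join("\n");
}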

Keyword Detection

Say "remember", "save this", "don't forget" and the AI auto-saves:

You: "Remember that this project uses bun"
AI: [saves to project memory]
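
A minimal sketch of what this trigger detection could look like (the keyword list and the save call are illustrative assumptions):

// Phrases that trigger an automatic save (illustrative list).
const TRIGGERS = [/\bremember\b/i, /\bsave this\b/i, /\bdon'?t forget\b/i];

function shouldAutoSave(message: string): boolean {
  return TRIGGERS.some((re) => re.test(message));
}

// if (shouldAutoSave(userMessage)) {
//   await store.add(userMessage, "project", { type: "learned-pattern" });
// }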

Privacy

Everything stays on your machine. No API calls, no cloud storage.

Use <private> tags to prevent sensitive data from being stored:

My API key is <private>sk-abc123</private>

Content in <private> tags is replaced with [REDACTED] before saving.
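
A minimal sketch of how that redaction could be implemented (the regex and function name are illustrative, not the plugin's actual code):

// Replace everything between <private> tags with [REDACTED] before saving.
function redactPrivate(text: string): string {
  return text.replace(/<private>[\s\S]*?<\/private>/gi, "[REDACTED]");
}

// redactPrivate("My API key is <private>sk-abc123</private>")
// -> "My API key is [REDACTED]"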

Preemptive Compaction

When context hits 80% of model limit:

  1. Injects project memories into compaction prompt
  2. Triggers OpenCode's summarization
  3. Saves session summary as a memory

This preserves context across long sessions.
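
A minimal sketch of the check that triggers this sequence (the token accounting and the hook into OpenCode's summarization are assumptions):

const COMPACTION_THRESHOLD = 0.8; // mirrors the compactionThreshold config option

// True once context usage crosses the configured ratio.
function shouldCompact(tokensUsed: number, modelContextLimit: number): boolean {
  return tokensUsed / modelContextLimit >= COMPACTION_THRESHOLD;
}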

Tool Usage

The memory tool is available to your AI:

Mode      Args                      Description
add       content, type?, scope?    Store memory
search    query, scope?, limit?     Semantic search
list      scope?, limit?            List memories
delete    memoryId                  Remove memory
profile   -                         View user facts
help      -                         Show commands

Scopes

  • user - Cross-project (preferences, patterns)
  • project - Project-specific (default)

Types

  • preference - User preferences
  • project-config - Project settings
  • architecture - Design decisions
  • error-solution - Bug fixes
  • learned-pattern - Code patterns

Examples

# Save a preference
memory mode:add content:"User prefers dark mode" scope:user type:preference

# Search memories
memory mode:search query:"build commands"

# List project memories
memory mode:list scope:project limit:10

# Delete a memory
memory mode:delete memoryId:mem_123abc

How It Works

Component    Tech              Purpose
Vector DB    Orama             Fast embedded search
Embeddings   Transformers.js   Local ML, no API
Storage      JSON file         ~/.superlocalmemory/memories.json

First query downloads the embedding model (~30MB). Subsequent queries are instant.
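
A minimal sketch of that embedding + vector-search pipeline using the public Transformers.js and Orama APIs (the schema and field names are assumptions about this project's internals):

import { pipeline } from "@xenova/transformers";
import { create, insert, search } from "@orama/orama";

// Local embedding model; downloaded on first use (~30MB).
const embed = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

async function embedText(text: string): Promise<number[]> {
  const output = await embed(text, { pooling: "mean", normalize: true });
  return Array.from(output.data as Float32Array);
}

// all-MiniLM-L6-v2 produces 384-dimensional sentence embeddings.
const db = await create({
  schema: { content: "string", embedding: "vector[384]" },
});

// Store a memory alongside its embedding.
await insert(db, {
  content: "User prefers dark mode",
  embedding: await embedText("User prefers dark mode"),
});

// Semantic search: compare the query embedding against stored vectors.
const hits = await search(db, {
  mode: "vector",
  vector: { value: await embedText("preferences"), property: "embedding" },
  similarity: 0.6, // mirrors the similarityThreshold config option
});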

Configuration

Create ~/.config/opencode/superlocalmemory.json:

{
  "dataPath": "~/.superlocalmemory",
  "similarityThreshold": 0.6,
  "maxMemories": 5,
  "maxProjectMemories": 10,
  "compactionThreshold": 0.8,
  "embeddingModel": "Xenova/all-MiniLM-L6-v2",
  "debug": false
}

Option                Default                   Description
dataPath              ~/.superlocalmemory       Where memories are stored
similarityThreshold   0.6                       Min similarity for search results
maxMemories           5                         Max memories injected per request
maxProjectMemories    10                        Max project memories listed
compactionThreshold   0.8                       Context usage ratio that triggers compaction
embeddingModel        Xenova/all-MiniLM-L6-v2   Local embedding model (or "none")
debug                 false                     Enable debug logging to ~/.superlocalmemory.log

Development

pnpm install
pnpm build
pnpm test

vs Supermemory

Feature                 superlocalmemory          supermemory
Privacy                 100% local                Cloud API
API Key                 Not needed                Required
Cost                    Free                      Paid
Setup                   Clone & run               Install + signup
Embeddings              Local (Transformers.js)   Cloud
Context injection       Yes                       Yes
Keyword detection       Yes                       Yes
Privacy tags            Yes                       Yes
Preemptive compaction   Yes                       Yes

Architecture

packages/
  core/             # Memory engine (Orama + embeddings)
  mcp/              # MCP server (stdio + HTTP)
  opencode-plugin/  # OpenCode integration
docker/             # Docker deployment

License

MIT
