MemoryRouter

CLI

Upload documents, code, and knowledge to your memory vault from the command line.

The MemoryRouter CLI lets you upload files, directories, and codebases to your memory vault. Give your AI context on your projects, documentation, and existing knowledge — before you even start a conversation.


Install

npm install -g memoryrouter

Authentication

memoryrouter auth <your-memory-key>

Your key is stored locally at ~/.memoryrouter/config.json. Get a key at app.memoryrouter.ai.

Verify

memoryrouter whoami

Upload

Upload files, directories, or entire codebases to your memory vault.

# Upload a directory
memoryrouter upload ./docs/

# Upload a specific file
memoryrouter upload ./README.md

# Upload with a session namespace
memoryrouter upload ./my-project/ --session my-project

# Upload to a custom endpoint
memoryrouter upload ./docs/ --endpoint https://your-instance.com

Supported File Types

The CLI automatically discovers and uploads files with these extensions:

Category        Extensions
Documentation   .md, .mdx, .txt, .rst
Code            .ts, .js, .py, .go, .rs, .swift, .java, .cpp, .rb, .sh
Data            .json, .yaml, .yml, .csv, .sql
Web             .html, .css
Transcripts     .jsonl (session transcripts)

Files outside these types are skipped. Binary files, node_modules, .git, and other common non-content directories are excluded automatically.
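The discovery filter can be sketched as a walk over the target path using the extension and directory lists documented above. This is an illustrative re-implementation, not the CLI's actual code; the function name and structure are assumptions.

```python
from pathlib import Path

# Extension and directory lists as documented above.
SUPPORTED_EXTS = {
    ".md", ".mdx", ".txt", ".rst",
    ".ts", ".js", ".py", ".go", ".rs", ".swift", ".java", ".cpp", ".rb", ".sh",
    ".json", ".yaml", ".yml", ".csv", ".sql",
    ".html", ".css", ".jsonl",
}
EXCLUDED_DIRS = {"node_modules", ".git"}

def discover(root: str) -> list[Path]:
    """Recursively collect supported files, skipping excluded directories."""
    found = []
    for path in sorted(Path(root).rglob("*")):
        # Skip anything inside node_modules, .git, etc.
        if any(part in EXCLUDED_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in SUPPORTED_EXTS:
            found.append(path)
    return found
```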

How Upload Works

  1. Discovery — Recursively finds supported files in the target path
  2. Chunking — Large files are split into memories (~8K characters, 50K max per memory) for optimal retrieval
  3. Batching — Memories are uploaded in batches (100 items or 2MB per batch)
  4. Retry — Failed uploads retry with exponential backoff (up to 5 attempts), handling rate limits (429) and server errors (5xx) gracefully
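The chunking and batching rules in steps 2 and 3 can be sketched as follows. The limits come straight from the list above; the function names and the exact split points are illustrative assumptions, not the CLI's internals.

```python
CHUNK_SIZE = 8_000          # target characters per memory (~8K)
BATCH_ITEMS = 100           # max memories per upload batch
BATCH_BYTES = 2_000_000     # max batch payload (~2MB)

def chunk(text: str) -> list[str]:
    """Split a large document into ~8K-character memories."""
    return [text[i:i + CHUNK_SIZE] for i in range(0, len(text), CHUNK_SIZE)]

def batch(memories: list[str]) -> list[list[str]]:
    """Group memories into batches of at most 100 items or ~2MB each."""
    batches, current, size = [], [], 0
    for m in memories:
        # Flush the current batch before it would exceed either limit.
        if current and (len(current) >= BATCH_ITEMS or size + len(m) > BATCH_BYTES):
            batches.append(current)
            current, size = [], 0
        current.append(m)
        size += len(m)
    if current:
        batches.append(current)
    return batches
```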

Sessions

Use --session to namespace uploads. This lets you keep different projects isolated in the same memory vault:

memoryrouter upload ./project-a/ --session project-a
memoryrouter upload ./project-b/ --session project-b

Your AI can then recall context from specific projects based on what's relevant to the conversation.


Status

Check your vault stats and connection:

memoryrouter status

JSON output:

memoryrouter status --json

Delete

Delete memories from your vault, either everything or a single session:

# With confirmation prompt
memoryrouter delete

# Skip confirmation
memoryrouter delete -y

# Delete a specific session only
memoryrouter delete --session old-project

Common Workflows

Upload your docs before using the API

# 1. Auth
memoryrouter auth mk_your_key

# 2. Upload your documentation
memoryrouter upload ./docs/

# 3. Now your AI knows your docs
curl -X POST https://api.memoryrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer mk_your_key" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-5.1", "messages": [{"role": "user", "content": "How does authentication work in our app?"}]}'
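If you prefer Python to curl, the same request can be assembled as plain data. `build_request` is a hypothetical helper that only constructs the URL, headers, and body shown in the curl call above; send the result with any HTTP client (urllib, requests, httpx).

```python
import json

API_URL = "https://api.memoryrouter.ai/v1/chat/completions"

def build_request(key: str, question: str) -> tuple[str, dict, bytes]:
    """Assemble the chat-completions request from the curl example."""
    headers = {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "openai/gpt-5.1",
        "messages": [{"role": "user", "content": question}],
    }).encode()
    return API_URL, headers, body
```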

Give your AI context on a codebase

memoryrouter upload ./src/ --session my-app
memoryrouter upload ./README.md --session my-app

Pre-load knowledge for an OpenClaw agent

# Upload workspace + session history
memoryrouter upload ~/.openclaw/workspace/
memoryrouter upload ~/.openclaw/agents/main/sessions/ --session transcripts

Configuration

Config is stored at ~/.memoryrouter/config.json:

{
  "key": "mk_xxxxxxxxxxxxxxxx",
  "endpoint": "https://api.memoryrouter.ai"
}
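A script can read this file directly. The sketch below assumes only what is shown above: the config path and its "key" and "endpoint" fields, with the public API endpoint used as a fallback when "endpoint" is absent.

```python
import json
from pathlib import Path

def load_config(path: Path = Path.home() / ".memoryrouter" / "config.json") -> dict:
    """Return the stored key and endpoint from the CLI's local config."""
    cfg = json.loads(path.read_text())
    # Assumed default: fall back to the public endpoint if none is stored.
    cfg.setdefault("endpoint", "https://api.memoryrouter.ai")
    return cfg
```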

Custom Endpoint

For self-hosted instances:

memoryrouter auth mk_your_key --endpoint https://your-instance.com
