MemoryRouter

Quick Start

Add persistent memory to any AI model in 5 minutes.

Your AI remembers everything. Change one URL, get persistent memory across all your AI conversations.

# Before: Stateless AI with amnesia
client = OpenAI(api_key="sk-xxx")

# After: AI with persistent memory
client = OpenAI(api_key="mk_xxx", base_url="https://api.memoryrouter.ai/v1")

Quick Start (5 minutes)

Step 1: Get Your Memory Key

  1. Go to app.memoryrouter.ai
  2. Sign in with Google
  3. Add your OpenAI/Anthropic API key(s) in Settings
  4. Copy your Memory Key (mk_xxxxxxxxxxxxxxxx)
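Rather than hardcoding the Memory Key in source, it is safer to load it from an environment variable. A minimal sketch, assuming an environment variable named MEMORYROUTER_API_KEY (the variable name is our convention, not something MemoryRouter mandates):

```python
import os

def load_memory_key() -> str:
    """Read the Memory Key from the environment instead of hardcoding it.

    The variable name MEMORYROUTER_API_KEY is an illustrative choice;
    any name works as long as your app and deployment agree on it.
    """
    key = os.environ.get("MEMORYROUTER_API_KEY", "")
    if not key.startswith("mk_"):
        # Memory Keys are shown in the dashboard as mk_xxxxxxxxxxxxxxxx
        raise ValueError("MEMORYROUTER_API_KEY must be a Memory Key (mk_...)")
    return key
```

Pass the returned value as `api_key` when constructing the client in Step 2.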

Step 2: Swap One Line

Python (OpenAI SDK):

from openai import OpenAI

client = OpenAI(
    api_key="mk_xxxxxxxxxxxxxxxx",  # Your Memory Key
    base_url="https://api.memoryrouter.ai/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "My name is Alice"}]
)
print(response.choices[0].message.content)

JavaScript:

import OpenAI from 'openai';

const client = new OpenAI({
    apiKey: 'mk_xxxxxxxxxxxxxxxx',
    baseURL: 'https://api.memoryrouter.ai/v1'
});

const response = await client.chat.completions.create({
    model: 'openai/gpt-5.1',
    messages: [{ role: 'user', content: 'My name is Alice' }]
});

curl:

curl -X POST https://api.memoryrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer mk_xxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.1",
    "messages": [{"role": "user", "content": "My name is Alice"}]
  }'

Step 3: Watch the Memory Work

# First request - introduce yourself
client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "My name is Alice and I love hiking"}]
)

# Later (even days later) - the AI remembers
response = client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "What are my hobbies?"}]
)
# Response: "Based on our previous conversation, you mentioned you love hiking!"

How It Works

MemoryRouter sits between your app and your AI provider:

  1. Queries your semantic-temporal memory store
  2. Injects relevant context into your prompt
  3. Forwards to your chosen AI provider
  4. Stores new memories automatically
  5. Returns the response — standard format, zero changes to your code
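The five steps above can be sketched as a toy pipeline. This is purely illustrative: MemoryRouter's real store does semantic-temporal retrieval server-side, while this sketch uses naive keyword overlap and an in-memory list just to show the recall → inject → forward → store flow:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemoryStore:
    """Illustrative stand-in for the semantic-temporal memory store."""
    memories: list[str] = field(default_factory=list)

    def recall(self, query: str) -> list[str]:
        # Real retrieval is semantic; keyword overlap keeps the sketch simple.
        words = set(query.lower().split())
        return [m for m in self.memories if words & set(m.lower().split())]

    def remember(self, text: str) -> None:
        self.memories.append(text)

def route(store: ToyMemoryStore, user_message: str) -> list[dict]:
    """Build the message list that would be forwarded to the AI provider."""
    context = store.recall(user_message)          # 1. query the memory store
    messages = []
    if context:
        messages.append({                          # 2. inject relevant context
            "role": "system",
            "content": "Known about the user: " + "; ".join(context),
        })
    messages.append(                               # 3. forward the prompt as-is
        {"role": "user", "content": user_message}
    )
    store.remember(user_message)                   # 4. store the new memory
    return messages                                # 5. (provider call omitted)
```

Because all of this happens inside the proxy, the response your app receives is in the standard chat-completions format.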

Supported Providers

Use any model from these providers — just prefix the model name:

| Provider | Example Models | Prefix |
|---|---|---|
| OpenAI | gpt-5.2, gpt-5.1, gpt-4.5, o3, o1-pro, o1 | openai/ |
| Anthropic | claude-opus-4.5, claude-sonnet-5, claude-haiku-4.5 | anthropic/ |
| Google | gemini-3-pro, gemini-3-flash, gemini-2.5-pro, gemini-2.5-flash | google/ |
| Meta | llama-4-maverick, llama-4-scout, llama-3.3-70b | meta/ |
| Mistral | mistral-large-3, ministral-3-14b, mistral-small-3.2 | mistral/ |
| xAI | grok-4, grok-4-fast, grok-3, grok-3-mini | x-ai/ |
| OpenRouter | any model on openrouter.ai (200+) | openrouter/ |
| DeepSeek | deepseek-chat, deepseek-reasoner | deepseek/ |
| Azure OpenAI | your deployed models | azure/ |
| Ollama | local models | ollama/ |

100+ models available out of the box. OpenRouter gives you access to 200+ additional models (Cohere, Perplexity, Together, etc.) through a single API key.
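Since the provider is selected by the model-name prefix, a small helper can validate model strings before sending a request. The prefixes below come from the table above; the helper itself (`split_model`) is our own illustrative code, not part of any MemoryRouter SDK:

```python
# Provider prefixes, taken from the supported-providers table.
SUPPORTED_PREFIXES = {
    "openai", "anthropic", "google", "meta", "mistral",
    "x-ai", "openrouter", "deepseek", "azure", "ollama",
}

def split_model(model: str) -> tuple[str, str]:
    """Split 'provider/model-name' and reject unknown provider prefixes."""
    provider, _, name = model.partition("/")
    if provider not in SUPPORTED_PREFIXES or not name:
        raise ValueError(
            f"expected '<provider>/<model-name>', e.g. 'openai/gpt-5.1'; got {model!r}"
        )
    return provider, name
```

With this in place, swapping providers is just a different model string passed to the same client; the Memory Key, and therefore the memory, stays the same.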



MemoryRouter — Same memory, any model. 🧠
