Quick Start
Add persistent memory to any AI model in 5 minutes.
Change one URL and your AI remembers everything: persistent memory across all your AI conversations.
```python
# Before: Stateless AI with amnesia
client = OpenAI(api_key="sk-xxx")

# After: AI with persistent memory
client = OpenAI(api_key="mk_xxx", base_url="https://api.memoryrouter.ai/v1")
```

Quick Start (5 minutes)
Step 1: Get Your Memory Key
- Go to app.memoryrouter.ai
- Sign in with Google
- Add your OpenAI/Anthropic API key(s) in Settings
- Copy your Memory Key (mk_xxxxxxxxxxxxxxxx)
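To keep the key out of source code, you can export it as an environment variable and read it in your app (the variable name `MEMORYROUTER_API_KEY` below is just an illustrative convention, not something the service requires):

```shell
# Store the Memory Key in the environment instead of hard-coding it.
# MEMORYROUTER_API_KEY is an example name, not mandated by MemoryRouter.
export MEMORYROUTER_API_KEY="mk_xxxxxxxxxxxxxxxx"
```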
Step 2: Swap One Line
Python (OpenAI SDK):

```python
from openai import OpenAI

client = OpenAI(
    api_key="mk_xxxxxxxxxxxxxxxx",  # Your Memory Key
    base_url="https://api.memoryrouter.ai/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "My name is Alice"}]
)
print(response.choices[0].message.content)
```

JavaScript:
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'mk_xxxxxxxxxxxxxxxx',
  baseURL: 'https://api.memoryrouter.ai/v1'
});

const response = await client.chat.completions.create({
  model: 'openai/gpt-5.1',
  messages: [{ role: 'user', content: 'My name is Alice' }]
});
console.log(response.choices[0].message.content);
```

curl:
```shell
curl -X POST https://api.memoryrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer mk_xxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.1",
    "messages": [{"role": "user", "content": "My name is Alice"}]
  }'
```

Step 3: Watch the Memory Work
```python
# First request - introduce yourself
client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "My name is Alice and I love hiking"}]
)

# Later (even days later) - the AI remembers
response = client.chat.completions.create(
    model="openai/gpt-5.1",
    messages=[{"role": "user", "content": "What are my hobbies?"}]
)
# Response: "Based on our previous conversation, you mentioned you love hiking!"
```

How It Works
MemoryRouter sits between your app and your AI provider:
- Queries your semantic-temporal memory store
- Injects relevant context into your prompt
- Forwards to your chosen AI provider
- Stores new memories automatically
- Returns the response — standard format, zero changes to your code
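The steps above can be sketched as a minimal in-process simulation. Everything here is a simplified stand-in, not MemoryRouter's actual implementation: the keyword overlap in `retrieve` stands in for semantic-temporal search, and `call_provider` stands in for forwarding to the real AI provider.

```python
# Sketch of the proxy flow: retrieve -> inject -> forward -> store.
memory_store = []  # remembered user statements

def retrieve(query):
    # Toy retrieval: return stored memories sharing any word with the query.
    words = set(query.lower().split())
    return [m for m in memory_store if words & set(m.lower().split())]

def call_provider(messages):
    # Stand-in for forwarding the request to the chosen AI provider.
    return {"role": "assistant", "content": f"(model saw {len(messages)} messages)"}

def chat(user_message):
    context = retrieve(user_message)
    messages = []
    if context:
        # Inject relevant memories ahead of the user's turn.
        messages.append({"role": "system",
                         "content": "Known about the user: " + "; ".join(context)})
    messages.append({"role": "user", "content": user_message})
    response = call_provider(messages)
    memory_store.append(user_message)  # store the new memory automatically
    return messages, response

first, _ = chat("My name is Alice and I love hiking")
second, _ = chat("What are my hobbies?")
```

On the first call nothing is injected (the store is empty); on the second, the earlier statement is retrieved and prepended as context, which is why the model can answer "What are my hobbies?" days later.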
Supported Providers
Use any model from these providers — just prefix the model name:
| Provider | Example Models | Prefix |
|---|---|---|
| OpenAI | gpt-5.2, gpt-5.1, gpt-4.5, o3, o1-pro, o1 | openai/ |
| Anthropic | claude-opus-4.5, claude-sonnet-4.5, claude-haiku-4.5 | anthropic/ |
| Google | gemini-3-pro, gemini-3-flash, gemini-2.5-pro, gemini-2.5-flash | google/ |
| Meta | llama-4-maverick, llama-4-scout, llama-3.3-70b | meta/ |
| Mistral | mistral-large-3, ministral-3-14b, mistral-small-3.2 | mistral/ |
| xAI | grok-4, grok-4-fast, grok-3, grok-3-mini | x-ai/ |
| OpenRouter | any model on openrouter.ai (200+) | openrouter/ |
| DeepSeek | deepseek-chat, deepseek-reasoner | deepseek/ |
| Azure OpenAI | your deployed models | azure/ |
| Ollama | local models | ollama/ |
100+ models available out of the box. OpenRouter gives you access to 200+ additional models (Cohere, Perplexity, Together, etc.) through a single API key.
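The prefix convention splits on the first `/`, so model IDs that themselves contain slashes (as OpenRouter's do) keep working. A small illustrative parser, assuming only this convention (MemoryRouter does the actual routing server-side, and `split_model` is not part of any SDK):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a prefixed model string into (provider, model_name).

    str.partition splits on the first '/', so an OpenRouter ID like
    'openrouter/meta-llama/llama-4-maverick' keeps the rest of the
    path as the model name.
    """
    provider, sep, name = model.partition("/")
    if not sep or not name:
        raise ValueError(f"expected 'provider/model', got {model!r}")
    return provider, name
```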
Links
- Dashboard: app.memoryrouter.ai
- API: https://api.memoryrouter.ai/v1
MemoryRouter — Same memory, any model. 🧠