# Setup Guide
Hardware requirements, dependencies, and the creator's reference architecture.
## 1. Prerequisites
What you need before installing Cerebro:
- **Claude Subscription (Pro or Max).** Cerebro uses your native Claude Code subscription. Zero extra API cost.
- **Python 3.10+** with pip installed. Check with `python --version`.
- **An MCP-compatible client.** Claude Code, Cursor, Windsurf, or Claude Desktop.
- **4GB+ RAM, 500MB disk space.** More RAM recommended if using local embeddings.
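The Python check above can be made strict with a one-liner, so it fails loudly instead of letting an old interpreter slip through. A minimal sketch (shown with `python3`; substitute `python` if that's your interpreter's name):

```shell
# Fail fast if Python is older than 3.10
python3 -c 'import sys; assert sys.version_info >= (3, 10), f"need 3.10+, found {sys.version}"'
# Confirm pip is available for that same interpreter
python3 -m pip --version
```

Running pip as `python3 -m pip` guarantees you're checking the pip that belongs to the interpreter you just verified, not some other one on your PATH.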
## 2. The Creator's Setup
The reference architecture powering Cerebro's development — a distributed home lab with dedicated compute, storage, and networking.
You don't need all of this. Cerebro runs great on a single laptop. This is what happens when you go all-in.
## 3. Recommended Setups
Pick the tier that matches your setup. Each includes the full workflow — what to install, how to configure, and what your day-to-day looks like.
### Minimal (Laptop only)

Everything runs on a single machine. Memory stored locally. No network, no GPU, no NAS.
#### What You Need

#### Setup Steps

**Install Cerebro.** The base install includes all 49 MCP tools with keyword search.

```shell
pip install cerebro-ai
```

**Initialize storage.** Creates `~/.cerebro` with the SQLite database, config, and FAISS index.

```shell
cerebro init
```

**Add to your MCP client.** Add this to your Claude Desktop, Claude Code, Cursor, or Windsurf config:

```json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"]
    }
  }
}
```

**Verify.**

```shell
cerebro status
```

#### Day-to-day workflow

Open your MCP client, start chatting. Cerebro automatically saves conversations, extracts facts, and builds your memory. Search with `search()`, save insights with `record_learning()`. All data lives in `~/.cerebro` on your machine.
**Best for:** Trying it out, personal projects, single-machine setups
### Enthusiast (Desktop + NAS) (Recommended)

Add a NAS for centralized, persistent memory accessible from any device on your network. Includes semantic search with embeddings.
#### What You Need

#### Setup Steps

**Install with embeddings.** This adds sentence-transformers and FAISS for semantic vector search.

```shell
pip install cerebro-ai[embeddings]
```

**Mount your NAS.** Create a shared folder on your NAS, then mount it on every machine you use:

```shell
# Linux/macOS: mount the NAS share (replace with your NAS IP and share name)
sudo mount -t nfs YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas
# Or add to /etc/fstab for auto-mount on boot:
# YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas nfs defaults 0 0
```

```shell
# Windows: map a network drive (replace with your NAS IP and share)
net use Z: \\YOUR_NAS_IP\AI_MEMORY /persistent:yes
```

**Initialize with the NAS path.**

```shell
# Point Cerebro at your NAS
CEREBRO_STORAGE_PATH=/mnt/nas/cerebro cerebro init
# Windows:
# set CEREBRO_STORAGE_PATH=Z:\cerebro && cerebro init
```

**Configure your MCP client with NAS storage.**

```json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_STORAGE_PATH": "/mnt/nas/cerebro"
      }
    }
  }
}
```

On Windows, use `"CEREBRO_STORAGE_PATH": "Z:\\cerebro"`.

**Repeat on other machines.** Install Cerebro and mount the NAS on each machine you use. They all read/write the same memory database. No sync needed — it's the same files.
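Before trusting the shared database, it's worth confirming each machine really sees the same directory. A quick sanity check (assumes the `/mnt/nas/cerebro` mount point from the examples above; adjust if yours differs):

```shell
# Write a marker file from machine A, then read it from machine B.
# If machine B prints machine A's hostname, the share is truly shared.
echo "$(hostname)" > /mnt/nas/cerebro/.write-test
cat /mnt/nas/cerebro/.write-test
rm /mnt/nas/cerebro/.write-test
```

This also catches permission problems early: if the `echo` fails, the share is mounted read-only or owned by the wrong user, and Cerebro won't be able to write either.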
#### Day-to-day workflow
Work from any machine — your laptop, desktop, or a VM. Cerebro reads and writes to the same NAS-backed memory. Semantic search finds memories by meaning, not just keywords. Switch devices mid-conversation and pick up exactly where you left off.
**Tip:** Any NAS that supports NFS or SMB works — Synology, QNAP, TrueNAS, or even a Raspberry Pi with an external drive. The key is a shared directory all your machines can access.
**Best for:** Daily use, multiple workstations, persistent memory across devices
### Power User (Full home lab) (Creator's setup)

Dedicated GPU server running Cerebro Pro, NAS for storage, optional GPU compute node for embeddings. Docker-orchestrated.
#### What You Need

#### Setup Steps
**Install with GPU support on your server.** SSH into your dedicated server and install with GPU acceleration:

```shell
pip install cerebro-ai[gpu]
# Verify GPU is detected
python -c "import torch; print(torch.cuda.is_available())"
```

**Mount NAS on the server.** Same as the Enthusiast tier — mount your NAS so the server can read/write the shared memory:

```shell
sudo mount -t nfs YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas
# Initialize Cerebro pointing to NAS
CEREBRO_STORAGE_PATH=/mnt/nas/cerebro cerebro init
```

**Deploy with Docker Compose.**
For always-on deployment with Redis caching and automatic restarts:
```yaml
services:
  cerebro:
    image: professorlow/cerebro:latest
    environment:
      - CEREBRO_LICENSE_KEY=your-license-key
      - CEREBRO_STORAGE_PATH=/data
      - CEREBRO_EMBEDDING_MODEL=all-MiniLM-L6-v2
    volumes:
      - /mnt/nas/cerebro:/data
    ports:
      - "8420:8420"
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis-data:/data
volumes:
  redis-data:
```

**Start the stack.**
```shell
docker compose up -d
# Verify it's running
docker compose ps
curl http://localhost:8420/health
```

**Point your workstations at the server.**
On each dev machine, you can either run Cerebro locally (pointed at NAS) or connect to the server's MCP endpoint:
```json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_STORAGE_PATH": "/mnt/nas/cerebro"
      }
    }
  }
}
```

**Optional: Dedicated GPU Compute Node**
If you have a second GPU machine (like a DGX Spark), you can offload embedding generation and Ollama LLM inference to it. Install `cerebro-ai[gpu]` on that node and configure it to write to the same NAS path. This distributes the compute load across your network.
#### Day-to-day workflow
Your server runs 24/7. The Cerebro desktop app (Pro) connects to it for the full experience — agents, cognitive loop, autonomous reasoning. Your dev machines run Claude Code with the MCP tools pointed at the same NAS. Everything stays in sync because it's the same storage. GPU-accelerated FAISS makes semantic search instant even with millions of memories.
**Best for:** Agents, autonomy, multi-device workflows, maximum performance
## 4. Full Dependency Breakdown
Everything Cerebro installs, broken down by install tier.
### Core Dependencies

| Package | Version | Purpose |
|---|---|---|
| mcp | >=1.25.0 | MCP protocol server |
| anyio | latest | Async I/O framework |
| numpy | latest | Numerical operations |
| pydantic | latest | Data validation |
| python-dateutil | latest | Date/time parsing |
### Embeddings (recommended)

| Package | Version | Purpose |
|---|---|---|
| sentence-transformers | >=5.0.0 | Embedding model loading |
| faiss-cpu | >=1.13.0 | Vector similarity search |
### GPU Acceleration (optional)

| Package | Version | Purpose |
|---|---|---|
| faiss-gpu | latest | GPU-accelerated FAISS |
| torch | >=2.0.0 | PyTorch for GPU compute |
### Install Commands

```shell
pip install cerebro-ai               # Minimal
pip install cerebro-ai[embeddings]   # With semantic search (recommended)
pip install cerebro-ai[gpu]          # With GPU acceleration
```

The Docker stack uses Redis 7 and a Python 3.12-slim base image. External services (Ollama, Redis, DGX) are all optional.
## 5. What Cerebro Does NOT Require
Common assumptions that are wrong — Cerebro is simpler than you think:
- **No API keys.** Uses your Claude subscription directly.
- **No cloud account.** 100% local — your data never leaves your machine.
- **No database server.** Built-in SQLite storage, zero config.
- **No Playwright or browser tools.** Memory server only — browser automation is a Cerebro Pro desktop feature.
- **No Redis.** Only needed in the Docker stack for caching.
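The "no database server" point is worth making concrete: SQLite is embedded, so the store is just a file opened in-process, with no daemon to install or run. A sketch using Python's bundled `sqlite3` module (the schema here is purely illustrative, not Cerebro's actual layout; Cerebro manages its own files under `~/.cerebro`):

```shell
# Create, write, and query an embedded SQLite database: one file, no server process.
python3 - <<'EOF'
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "memory.db")
con = sqlite3.connect(path)   # opens the file directly, in-process
con.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT)")
con.execute("INSERT INTO memories (text) VALUES (?)", ("hello",))
con.commit()
print(con.execute("SELECT text FROM memories").fetchone()[0])  # prints: hello
con.close()
EOF
```

That in-process model is also why "zero config" holds: there is no connection string, port, or user account, only a path.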