Configuration

Customize storage, embeddings, and advanced settings for your Cerebro deployment.

Environment Variables

All settings can be configured via environment variables. Set them in your shell profile, .env file, or directly in your MCP client configuration.

Variable | Default | Description
CEREBRO_STORAGE_PATH | ~/.cerebro | Root directory for all Cerebro data, including the SQLite database, FAISS index, and configuration files.
CEREBRO_EMBEDDING_MODEL | all-MiniLM-L6-v2 | The sentence-transformers model used for semantic search embeddings. Changing this requires rebuilding the FAISS index.
CEREBRO_EMBEDDING_DIMENSION | 384 | Dimension of the embedding vectors. Must match the chosen model: 384 for MiniLM, 768 for larger models.
CEREBRO_LOG_LEVEL | INFO | Logging verbosity. Options: DEBUG, INFO, WARNING, ERROR, CRITICAL.
CEREBRO_MAX_RESULTS | 20 | Default maximum number of results returned by search operations.
CEREBRO_DECAY_ENABLED | true | Enable or disable the memory decay system that gradually reduces the relevance of unused memories.
CEREBRO_LICENSE_KEY | (none) | License key for Pro/Pro+ features. Only required for paid tiers; the free tier works without it.
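
For example, to raise logging verbosity and the result cap for every session, you could export variables from your shell profile (a minimal sketch assuming bash or zsh; any of the variables above can be set this way, and the values shown are illustrative):

~/.zshrc
# Applies to any Cerebro process launched from this shell
export CEREBRO_LOG_LEVEL=DEBUG
export CEREBRO_MAX_RESULTS=50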

Storage Options

Cerebro stores all data locally. You can use the default location, a custom directory, or a network-attached storage (NAS) path.

Local Filesystem (Default)

The default storage path is ~/.cerebro. All data stays on your machine.

.env
CEREBRO_STORAGE_PATH=~/.cerebro

Custom Directory

Point to any directory with sufficient storage:

.env
CEREBRO_STORAGE_PATH=/data/cerebro
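
If the directory does not already exist, create it and make sure the user running Cerebro can write to it (a quick sketch for a Linux host):

Terminal
# Create the storage directory and hand ownership to your user
sudo mkdir -p /data/cerebro
sudo chown "$USER" /data/cerebro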

Network-Attached Storage (NAS)

Mount your NAS and point Cerebro to it for centralized storage accessible from multiple machines:

Terminal (macOS / Linux)
# Mount your NAS first
sudo mount -t nfs nas-ip:/share /mnt/nas

.env
# Point Cerebro to the NAS path
CEREBRO_STORAGE_PATH=/mnt/nas/cerebro
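
To keep the mount in place across reboots on Linux, you can add the share to /etc/fstab instead of mounting it manually (a sketch; substitute your own NAS address and share path):

/etc/fstab
# Mount the NAS share at boot so the Cerebro storage path is always available
nas-ip:/share  /mnt/nas  nfs  defaults,_netdev  0  0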

Terminal (Windows)
# Map a network drive first
net use Z: \\NAS-IP\share

.env
# Point Cerebro to the mapped drive
CEREBRO_STORAGE_PATH=Z:\cerebro

Embedding Configuration

Cerebro uses sentence-transformers for semantic search. The default model provides a good balance of speed and accuracy.

Default Model (Recommended)

.env
CEREBRO_EMBEDDING_MODEL=all-MiniLM-L6-v2
CEREBRO_EMBEDDING_DIMENSION=384

Higher Accuracy Model

Use a larger model for better semantic accuracy at the cost of more memory and slower indexing:

.env
CEREBRO_EMBEDDING_MODEL=all-mpnet-base-v2
CEREBRO_EMBEDDING_DIMENSION=768

Warning: Changing the embedding model requires rebuilding the FAISS index. Run cerebro rebuild-index after changing the model.
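
For example, after switching to all-mpnet-base-v2 in your .env:

Terminal
# Rebuild the FAISS index so it matches the new embedding model and dimension
cerebro rebuild-index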

Full Example Configuration

A complete MCP client config with all common options:

claude_desktop_config.json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_STORAGE_PATH": "/path/to/storage",
        "CEREBRO_EMBEDDING_MODEL": "all-MiniLM-L6-v2",
        "CEREBRO_EMBEDDING_DIMENSION": "384",
        "CEREBRO_LOG_LEVEL": "INFO",
        "CEREBRO_MAX_RESULTS": "20",
        "CEREBRO_DECAY_ENABLED": "true"
      }
    }
  }
}

Docker Deployment

For Pro and Pro+ users, Cerebro can run as a Docker container:

docker-compose.yml
services:
  cerebro:
    image: professorlow/cerebro:latest
    environment:
      - CEREBRO_LICENSE_KEY=your-license-key
      - CEREBRO_STORAGE_PATH=/data
    volumes:
      - cerebro-data:/data
    ports:
      - "8420:8420"
    restart: unless-stopped

volumes:
  cerebro-data:

Start with docker compose up -d. The container exposes port 8420 for the MCP server.
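
To check that the service came up and to watch its output, the usual Docker Compose commands apply:

Terminal
# Confirm the cerebro service is running
docker compose ps

# Follow the server logs (pairs well with CEREBRO_LOG_LEVEL=DEBUG)
docker compose logs -f cerebro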