Store and retrieve memories using natural language queries powered by vector embeddings.
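At its core, retrieval ranks stored memories by embedding similarity to the query. A minimal sketch of that idea, using toy hand-written vectors (in the real system the embeddings come from AWS Bedrock and live in the vector store, not in a Python list):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy in-memory store of (text, embedding) pairs -- purely illustrative.
store = [
    ("user prefers dark mode", [0.9, 0.1, 0.0]),
    ("deploys run on Fridays", [0.1, 0.8, 0.3]),
]

def search(query_embedding, top_k=1):
    # Rank stored memories by similarity to the query embedding.
    ranked = sorted(store, key=lambda m: cosine(query_embedding, m[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(search([0.85, 0.15, 0.05]))  # -> ['user prefers dark mode']
```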
🌙
AutoDream Consolidation
Every night, AutoDream does what human brains do during sleep — consolidates the day's short-term memories into long-term knowledge, and quietly discards what's no longer relevant.
🗄️
Flexible Vector Store
Supports AWS OpenSearch (default) and AWS S3 Vectors as the vector backend — switch with a single environment variable. LLM inference and embeddings run on AWS Bedrock, keeping everything within your AWS account.
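The single-variable switch can be pictured as a lookup with a default. The variable name `MEM0_VECTOR_STORE` and the value strings below are assumptions for illustration; check the project's configuration docs for the actual names:

```python
import os

# Hypothetical env var and backend names -- the real ones may differ.
BACKENDS = {"opensearch": "AWS OpenSearch", "s3-vectors": "AWS S3 Vectors"}

def select_backend():
    # OpenSearch is the default, per the feature description;
    # setting one env var flips the backend.
    name = os.environ.get("MEM0_VECTOR_STORE", "opensearch")
    if name not in BACKENDS:
        raise ValueError(f"unknown vector store: {name}")
    return BACKENDS[name]

os.environ["MEM0_VECTOR_STORE"] = "s3-vectors"
print(select_backend())  # -> AWS S3 Vectors
```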
🌊
MemoryStream
Conversations flow into mem0 continuously — snapshotted every 5 min, digested every 15 min. No context is lost between sessions.
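The two cadences interleave: every third snapshot coincides with a digest. A small sketch of which steps fire at a given minute mark (scheduling details here are illustrative, not the actual implementation):

```python
SNAPSHOT_EVERY = 5   # minutes, per the MemoryStream description
DIGEST_EVERY = 15    # minutes

def due_actions(minute):
    # Which MemoryStream steps fire at a given minute mark.
    actions = []
    if minute % SNAPSHOT_EVERY == 0:
        actions.append("snapshot")
    if minute % DIGEST_EVERY == 0:
        actions.append("digest")
    return actions

# Over an hour: snapshots at 5, 10, 15, ...; digests at 15, 30, 45, 60.
print(due_actions(15))  # -> ['snapshot', 'digest']
```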
🤖
Multi-Agent Support
Isolated memory spaces per agent, with cross-agent search capability. Memories tagged as `experience` are automatically shared across all agents — building a collective knowledge base.
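The visibility rule above can be sketched as a filter: an agent sees its own memories plus anything tagged `experience`. The `experience` tag is from the text; the record shape and agent names are illustrative assumptions:

```python
# Each memory carries an owning agent and a tag list -- illustrative data.
memories = [
    {"agent": "coder",    "tags": ["experience"], "text": "retry flaky API calls"},
    {"agent": "coder",    "tags": ["private"],    "text": "draft refactor plan"},
    {"agent": "reviewer", "tags": ["experience"], "text": "lint before review"},
]

def visible_to(agent):
    # Own memories are always visible; `experience` memories are shared
    # across all agents, forming the collective knowledge base.
    return [m["text"] for m in memories
            if m["agent"] == agent or "experience" in m["tags"]]

print(visible_to("reviewer"))  # -> ['retry flaky API calls', 'lint before review']
```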
🔌
Zero-Config Agent Onboarding
Enable the mem0-memory Skill once — every agent automatically inherits memory behavior (diary writing, MEMORY.md maintenance, retrieval). No AGENTS.md edits needed.
🛠️
Simple CLI & REST API
Easy-to-use CLI for all operations, plus a FastAPI REST server for programmatic access.
🔒
Privacy-First, Self-Hosted
Fully self-hosted on your own AWS infrastructure. No data leaves your account — telemetry is disabled by default, and all LLM calls go through AWS Bedrock with your own IAM credentials.