Getting Started
Overview
mem0 Memory Service for OpenClaw is a unified memory layer based on mem0, providing persistent semantic memory storage for AI agents.
Agents automatically store and retrieve memories through conversations; no manual file management is required.
Design Philosophy
mem0's core strength is memory extraction and deduplication: automatically extracting key facts from conversations, intelligently merging similar memories, and providing semantic retrieval. However, mem0 itself does not distinguish between "short-term events" and "long-term knowledge".
This service adds a memory lifecycle management layer on top of mem0:
- mem0 handles: Semantic extraction, intelligent deduplication, vector retrieval
- This service handles: Tiered storage, lifecycle management, activity-based archiving
Architecture
```
OpenClaw Agents (agent1, agent2, ...)
        │
        │ HTTP API (localhost:8230)
        ▼
┌───────────────────────┐
│   Memory Service      │  FastAPI + mem0
│  (Docker / systemd)   │
│                       │
│  Tiered Memory:       │  Long-term (no run_id)
│  - Long: tech         │  Short-term (run_id=date)
│    decisions,         │  Archive: activity-based
│    lessons, prefs     │  upgrade/delete
│  - Short: daily       │
│    discussions        │
└──────────┬────────────┘
           │
     ┌─────▼─────┐      ┌──────────────────┐
     │   mem0    │─────▶│  LLM (Bedrock)   │
     │           │─────▶│ Embedder (Titan) │
     └─────┬─────┘      └──────────────────┘
           ▼
 OpenSearch / S3 Vectors
```
Prerequisites
- Docker 20.10+ and docker compose (v2)
- OpenSearch cluster (2.x or 3.x, k-NN plugin required) or AWS S3 Vectors
- AWS Bedrock access (or modify `config.py` for other LLM/Embedder providers)
- AWS IAM permissions: the deployment environment needs `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` for the Embedding and LLM models
- If using S3 Vectors: `s3vectors:*` on the bucket resource
- EC2 users: attach an IAM Role to the instance, so no Access Key is needed
One-Line Deploy
Want to try it out quickly? Send this to your OpenClaw AI assistant to deploy a memory system backed by local pgvector: no cloud vector store is required, and your memory data is persisted locally.
When you're ready to scale, you can smoothly migrate to S3 Vectors or OpenSearch at any time using the built-in migration tool. Your data follows you.
```
Deploy mem0 Memory Service for me using local pgvector.
Steps:
1. git clone https://github.com/norrishuang/mem0-memory-service.git && cd mem0-memory-service
2. Run ./install.sh (it will auto-detect your AWS Region and use local pgvector, so no cloud vector store is needed)
3. Verify: curl http://localhost:8230/health
4. Enable the skill in OpenClaw Settings → Skills → mem0-memory
```
The installer handles everything: AWS Region detection, default config, Docker containers (including PostgreSQL + pgvector), and skill installation.
Prerequisites: Docker 20.10+ (auto-installed if missing on Linux/macOS), AWS Bedrock access (IAM Role on EC2 or configured credentials). No OpenSearch or S3 Vectors account needed to get started.
Want to migrate later? Use `tools/migrate_between_stores.py` to move your memories to S3 Vectors or OpenSearch without data loss. See the Migration Guide.
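The verify step above uses `curl`; the same health check can be scripted from Python, which is handy inside agents or cron jobs. A minimal sketch, assuming only the `/health` endpoint and port 8230 documented above (the response body's exact shape is not specified here, so it is returned as-is):

```python
import json
import urllib.request
import urllib.error

SERVICE_URL = "http://localhost:8230"  # default port from the deploy steps above

def check_health(base_url: str = SERVICE_URL, timeout: float = 3.0):
    """Call the /health endpoint; return the parsed JSON body, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except (urllib.error.URLError, OSError, json.JSONDecodeError):
        return None

if __name__ == "__main__":
    status = check_health()
    if status is None:
        print("memory service unreachable: is the container running?")
    else:
        print("memory service healthy:", status)
```

Returning `None` instead of raising lets callers degrade gracefully when the memory service is down.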
Deployment
Choose the deployment method that fits your setup:
Option A: Local pgvector (Quickest, No Cloud Vector Store)
Just want to try it out? Start with the built-in PostgreSQL + pgvector; no S3 Vectors or OpenSearch needed. Only AWS Bedrock (LLM + Embedding) is required.
```shell
git clone https://github.com/norrishuang/mem0-memory-service.git
cd mem0-memory-service
cp .env.example .env   # Set VECTOR_STORE=pgvector + AWS Bedrock credentials
docker compose --profile pgvector up -d
```
→ Full guide: Docker + pgvector Quickstart
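For the `cp .env.example .env` step, a hedged sketch of what a minimal pgvector `.env` might look like. Only `VECTOR_STORE=pgvector` is confirmed by this guide; the other variable names are illustrative assumptions, so check `.env.example` for the real keys:

```
# Vector store backend (confirmed by this guide)
VECTOR_STORE=pgvector

# AWS settings (names are illustrative; see .env.example)
AWS_REGION=us-east-1
# On EC2 with an IAM Role attached, static keys can usually be omitted:
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
```

Keep `.env` out of version control since it may hold credentials.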
Option B: Docker (Recommended)
Production-ready. Uses Docker Compose with your choice of vector store (S3 Vectors, OpenSearch, or pgvector). All pipelines run as cron jobs inside the container.
```shell
git clone https://github.com/norrishuang/mem0-memory-service.git
cd mem0-memory-service
cp .env.example .env   # Configure VECTOR_STORE + credentials
docker compose up -d
```
→ Full guide: Docker Deployment
Option C: systemd (Advanced)
Run directly on the host without Docker. Suitable for environments where Docker is not available or not preferred.
→ Full guide: systemd Deployment
Not sure which to pick? Start with Option A (pgvector) for a quick local trial. When ready for production, switch to Option B with S3 Vectors or OpenSearch. See When to switch backends.
Enabling the Skill for OpenClaw Agents
After installation, enable the mem0-memory skill in OpenClaw so all agents automatically get memory behavior:
```shell
# Copy the skill to the OpenClaw skills directory
mkdir -p ~/.openclaw/skills/mem0-memory
cp skill/SKILL.md ~/.openclaw/skills/mem0-memory/SKILL.md
```
Then go to OpenClaw Settings → Skills and enable mem0-memory.
That's it. Every agent (new or existing) automatically inherits the full memory behavior on their next session start:
- Proactive memory retrieval before answering
- Diary writing during conversations
- MEMORY.md maintenance during heartbeats
- Correct `--agent <id>` targeting without any AGENTS.md changes
No need to modify individual `AGENTS.md` files. The skill applies to all agents uniformly.
Want to understand why this works? See How It Works for the full explanation of the skill system, memory flow, and agent behavior rules.
Quick Usage
```shell
# Add a long-term memory
python3 cli.py add --user me --agent <your-agent-id> --text "Important lesson learned..."

# Add a short-term memory (today's date)
python3 cli.py add --user me --agent <your-agent-id> --run 2026-03-27 \
  --text "Today's discussion about refactoring"

# Semantic search
python3 cli.py search --user me --agent <your-agent-id> --query "refactoring" --top-k 5

# Combined search (long-term + recent 7 days)
python3 cli.py search --user me --agent <your-agent-id> --query "refactoring" --combined
```
Memory Tiering
| Type | run_id | Lifetime | Use Case |
|---|---|---|---|
| Long-term | None | Permanent | Tech decisions, lessons, preferences |
| Short-term | YYYY-MM-DD | 7 days → AutoDream | Daily discussions, temp decisions, task progress |
Three paths to long-term memory:
- `memory_sync.py` syncs `MEMORY.md` daily (same-day, curated knowledge)
- `pipelines/auto_dream.py` (AutoDream) promotes active short-term memories after 7 days
- Agent explicit write: call the CLI without `--run` at any time
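The AutoDream path above can be sketched as a per-memory decision: keep it while it is within the 7-day retention window, then promote it to long-term if it was active, otherwise delete it. This is a toy model under stated assumptions; the `hit_count` field and the promotion threshold are illustrative, not the actual internals of `pipelines/auto_dream.py`:

```python
from datetime import date

RETENTION_DAYS = 7
MIN_HITS_TO_PROMOTE = 2  # assumed activity threshold, not the service's real value

def auto_dream(memory: dict, today: date) -> str:
    """Decide the fate of one memory: 'keep', 'promote', or 'delete'.

    `memory` uses illustrative fields: run_id (YYYY-MM-DD or None) and
    hit_count (how often retrieval returned it).
    """
    if memory.get("run_id") is None:
        return "keep"  # already long-term
    age = (today - date.fromisoformat(memory["run_id"])).days
    if age < RETENTION_DAYS:
        return "keep"  # still within the short-term window
    # Past retention: activity-based upgrade or delete
    return "promote" if memory["hit_count"] >= MIN_HITS_TO_PROMOTE else "delete"

today = date(2026, 4, 10)
print(auto_dream({"run_id": "2026-03-27", "hit_count": 5}, today))  # promote
print(auto_dream({"run_id": "2026-03-27", "hit_count": 0}, today))  # delete
print(auto_dream({"run_id": "2026-04-09", "hit_count": 0}, today))  # keep
```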
Shared Knowledge Base
Memories with `category=experience` are automatically shared across all agents and users. When any agent adds a memory tagged as experience, it is written both to the personal memory space and to a global shared pool.
During retrieval, every search automatically includes results from the shared pool, so all agents benefit from the team's collective experience without any extra configuration.