Known Issues & Patches
mem0 has known upstream bugs that affect AWS Bedrock + OpenSearch / S3 Vectors usage. PRs have been submitted but are not yet merged. You must apply patches manually before using this service.
Summary
| Issue | PR | Affects | Status |
|---|---|---|---|
| OpenSearch 3.x nmslib engine deprecated | #4392 | OpenSearch 3.0+ | Pending merge |
| Converse API temperature + top_p conflict | #4393 | Claude Haiku 4.5 and newer models | ✅ Merged via #4469 |
| S3Vectors invalid filter format | #4554 | S3 Vectors backend | Pending merge |
| MiniMax models not recognized as valid provider | #4609 | All MiniMax models on Bedrock | ✅ Merged 2026-03-30 |
| Telemetry causes thread leak | — (config, no PR needed) | All deployments | ✅ Fix: set MEM0_TELEMETRY=false |
Configuration Issue: Telemetry Thread Leak
mem0 enables anonymous telemetry by default. Every `add` / `search` / `delete` call triggers `capture_event()` in `mem0/memory/telemetry.py`, which:

- Creates a new `AnonymousTelemetry` instance → a new `Posthog` client
- Each `Posthog` client spawns a `Consumer` background thread
- The thread is a daemon but never exits — it blocks in `queue.get()` indefinitely
- Data is uploaded to `https://us.i.posthog.com` (mem0's official PostHog), including your collection name, LLM type, and vector store type
Impact: After a long `auto_dream` run (hundreds of add/delete calls), we observed 135 zombie PostHog threads. The thread count grows by roughly one thread per request and will eventually destabilize the process.
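To confirm the leak on a running process, a quick diagnostic is to count live threads by name substring (`count_threads` is a helper written for this doc, not part of mem0; the exact names PostHog gives its `Consumer` threads vary by posthog version, so inspect `threading.enumerate()` output first):

```python
import threading

def count_threads(name_substring: str) -> int:
    """Count live threads whose name contains the given substring."""
    return sum(1 for t in threading.enumerate() if name_substring in t.name)

# Snapshot the total thread count before and after a batch of mem0
# add/search calls; a monotonically growing number indicates leaked
# telemetry consumer threads.
total_threads = len(threading.enumerate())
```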
Fix — add to `.env`:

```shell
MEM0_TELEMETRY=false
```

Then restart the service:

```shell
sudo systemctl restart mem0-memory.service
```

Thread count drops from 135 → 8 immediately after restart.
TIP
This is an official config option supported by mem0. Private/self-hosted deployments have no reason to send telemetry to mem0's servers.
PR #4392: OpenSearch 3.x nmslib Engine Deprecated
Patch steps:
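The steps below use GNU `sed`, whose `-i` flag behaves differently under BSD `sed` on macOS. As a portable alternative, the same substitution can be sketched in Python (`patch_engine` is a name invented here, not part of mem0 or this project's tooling):

```python
from pathlib import Path

def patch_engine(path: str) -> bool:
    """Replace the deprecated nmslib engine with lucene, in place.

    Returns True if the file was modified, False if the pattern was
    not found (e.g. the file is already patched).
    """
    p = Path(path)
    text = p.read_text()
    patched = text.replace('"engine": "nmslib"', '"engine": "lucene"')
    if patched == text:
        return False
    p.write_text(patched)
    return True
```

Pass it the path printed by the locate command; running it twice is safe because the second call finds nothing to replace.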
```shell
# Locate the file
python3 -c "import mem0; import os; print(os.path.join(os.path.dirname(mem0.__file__), 'vector_stores/opensearch.py'))"

# Replace nmslib → lucene
sed -i 's/"engine": "nmslib"/"engine": "lucene"/g' <path>
```

PR #4393: Converse API temperature + top_p Conflict
✅ Resolved: Fixed upstream via PR #4469 (merged 2025-03-25). Run `pip install --upgrade mem0ai` — no manual patch needed.
Claude Haiku 4.5 and newer models reject requests that include both `temperature` and `top_p` simultaneously. mem0 defaults `top_p=0.9`, causing a `ValidationException` on Bedrock Converse API calls.
Patch steps:
```shell
# Locate the file
python3 -c "import mem0; import os; print(os.path.join(os.path.dirname(mem0.__file__), 'llms/aws_bedrock.py'))"
```

Edit the file: comment out the `topP` line in the Converse API `inferenceConfig` block. Also change the `top_p` default to `None` in `mem0/configs/llms/aws_bedrock.py`.
PR #4554: S3Vectors Filter Format
`s3_vectors.py`'s `_convert_filters()` generates an incorrect filter format for the S3 Vectors `query_vectors` API. It produces `{"equals": {"key": "...", "value": {"stringValue": "..."}}}` instead of the required MongoDB-style `{"field": {"$eq": "value"}}`.
Patch steps:
```shell
# One-click patch (provided by this project)
python3 patch_s3vectors_filter.py
```

PR #4609: MiniMax Models Not Recognized on AWS Bedrock
✅ Resolved: Fixed upstream via PR #4609 (merged 2026-03-30). Run `pip install --upgrade mem0ai` — no manual patch needed.
mem0's `aws_bedrock` LLM provider had three bugs affecting MiniMax M2.x models on the Bedrock Converse API:
Bug 1 — PROVIDERS allowlist
`minimax` was missing from the list, causing `ValueError: Unknown provider in model` at startup.
Bug 2 — Reasoning model response format
MiniMax M2.5 (and M2.1) are reasoning models. Their Converse API response includes a `reasoningContent` block before the actual text block. Taking `content[0]["text"]` directly raised a `KeyError`.
Bug 3 — System messages discarded
The original code only forwarded the last user message to the Converse API, silently dropping `role=system` messages. Without the JSON instruction, MiniMax returned free-form markdown, causing `json.JSONDecodeError` in mem0's fact-extraction pipeline.
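The Bug 2 fix can be sketched as follows, assuming the Converse response's list-of-dicts content shape described above (`first_text_block` is a name invented for this doc):

```python
def first_text_block(content: list[dict]) -> str:
    """Return the first plain-text block from a Converse response,
    skipping the reasoningContent block that reasoning models such as
    MiniMax M2.5 emit before the answer."""
    for block in content:
        if "text" in block:
            return block["text"]
    raise ValueError("response contains no text block")
```

The naive `content[0]["text"]` would land on the reasoning block and raise `KeyError`; scanning for the first block that actually has a `text` key sidesteps it.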
Why MiniMax M2.5?
We benchmarked MiniMax M2.5 against Claude Haiku 4.5 and DeepSeek V3.2 for mem0's fact-extraction workload (short text in, short JSON out) on AWS Bedrock us-east-1:
| Model | Avg latency | Input price | Output price | Notes |
|---|---|---|---|---|
| Claude Haiku 4.5 | ~1.0s | $1.00 / 1M tokens | $5.00 / 1M tokens | Fastest; most expensive |
| DeepSeek V3.2 | ~2.4s | $0.62 / 1M tokens | $1.85 / 1M tokens | No clear advantage over MiniMax |
| MiniMax M2.5 ✅ | ~2.6s | $0.30 / 1M tokens | $1.20 / 1M tokens | Best cost; ~3× cheaper than Haiku |
MiniMax M2.5 is our default choice: roughly 3× cheaper than Claude Haiku 4.5, at an acceptable ~2-3 s latency for background memory-extraction tasks.
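The "~3× cheaper" figure can be checked with back-of-envelope arithmetic from the table's per-1M-token prices (the 2:1 input:output token mix below is an assumption, not a measured workload profile):

```python
def cost_usd(in_tokens: int, out_tokens: int,
             in_price: float, out_price: float) -> float:
    """Cost in USD given token counts and per-1M-token prices."""
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

# Assumed mix: 2M input tokens, 1M output tokens
haiku = cost_usd(2_000_000, 1_000_000, 1.00, 5.00)    # 2*1.00 + 1*5.00 = 7.00
minimax = cost_usd(2_000_000, 1_000_000, 0.30, 1.20)  # 2*0.30 + 1*1.20 = 1.80
ratio = haiku / minimax                               # ≈ 3.9 under this mix
```

The exact ratio depends on the input/output split; output-heavy workloads push it above 4×, input-heavy ones toward 3×.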
TIP
If raw speed matters more than cost, switch back to Claude Haiku 4.5 by setting `LLM_MODEL=us.anthropic.claude-haiku-4-5-20251001-v1:0`. No patch is needed for Haiku.
WARNING
`minimax.minimax-m2` (non-reasoning variant) does not work — its Converse API response contains only `reasoningContent` with no `text` block. Use M2.1 or M2.5 instead.
After PRs Are Merged
Once all PRs are merged upstream, simply upgrade mem0 and patches are no longer needed:
```shell
pip install --upgrade mem0ai
```

Check PR status:

```shell
gh pr view 4392 --repo mem0ai/mem0 --json state -q .state
gh pr view 4554 --repo mem0ai/mem0 --json state -q .state
# PR #4609 (MiniMax) — merged 2026-03-30, no longer needs tracking
```

WARNING
After every `pip install --upgrade mem0ai`, re-apply the patches until the corresponding PR is merged.