An AI memory OS for LLM and agent systems (moltbot, clawdbot, openclaw), enabling persistent skill memory for cross-task skill reuse and evolution.
npx skills add https://github.com/MemTensor/MemOS --skill safe-file-deletion
Install this skill via the CLI and start using the SKILL.md workflow in your workspace.
MemOS 2.0: Stardust (星尘)
🎯 +43.70% Accuracy vs. OpenAI Memory
🏆 Top-tier long-term memory + personalization
💰 Saves 35.24% memory tokens
LoCoMo 75.80 • LongMemEval +40.43% • PrefEval-10 +2568% • PersonaMem +40.75%

🦞 Your lobster now has a working memory system — choose Cloud or Local to get started.
Get your API key: MemOS Dashboard
Full tutorial → MemOS-Cloud-OpenClaw-Plugin
🌐 Homepage ·
📖 Documentation · 📦 NPM
MemOS is a Memory Operating System for LLMs and AI agents that unifies store / retrieve / manage for long-term memory, enabling context-aware and personalized interactions with KB, multi-modal, tool memory, and enterprise-grade optimizations built in.
2026-03-08 · 🦞 MemOS OpenClaw Plugin — Cloud & Local
Official OpenClaw memory plugins launched. Cloud Plugin: hosted memory service with 72% lower token usage and multi-agent memory sharing (MemOS-Cloud-OpenClaw-Plugin). Local Plugin (v1.0.0): 100% on-device memory with persistent SQLite, hybrid search (FTS5 + vector), task summarization & skill evolution, multi-agent collaboration, and a full Memory Viewer dashboard.
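The Local Plugin's hybrid search pairs keyword recall (SQLite FTS5) with vector similarity. A toy sketch of that idea, using the standard-library sqlite3 module and hand-made 2-d "embeddings" (the real plugin's schema, embedding model, and scoring are assumptions here, not MemOS internals):

```python
import sqlite3

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE mem USING fts5(text)")

# Toy memories with fake 2-d embeddings standing in for a real embedder.
memories = {
    "user likes strawberry ice cream": [0.9, 0.1],
    "user deployed the docker stack": [0.1, 0.9],
}
for text in memories:
    conn.execute("INSERT INTO mem(text) VALUES (?)", (text,))

# Stage 1: keyword recall via FTS5; stage 2: rerank by vector similarity.
query_text, query_vec = "strawberry", [0.8, 0.2]
rows = conn.execute("SELECT text FROM mem WHERE mem MATCH ?", (query_text,)).fetchall()
ranked = sorted(rows, key=lambda r: cosine(memories[r[0]], query_vec), reverse=True)
print(ranked[0][0])  # best hybrid match
```

The two-stage shape (cheap lexical recall, then a more expensive semantic rerank) is the usual reason to combine FTS5 with a vector index rather than using either alone.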
2025-12-24 · 🎉 MemOS v2.0: Stardust (星尘) Release
Comprehensive KB (doc/URL parsing + cross-project sharing), memory feedback & precise deletion, multi-modal memory (images/charts), tool memory for agent planning, Redis Streams scheduling + DB optimizations, streaming/non-streaming chat, MCP upgrade, and lightweight quick/full deployment.
Knowledge Base & Memory
Feedback & Memory Management
Conversation & Retrieval
Multimodal & Tool Memory
Data & Infrastructure
Scheduler
Deployment & Engineering
Memory Scheduling & Updates
2025-08-07 · 🎉 MemOS v1.0.0 (MemCube) Release
First MemCube release with a word-game demo, LongMemEval evaluation, BochaAISearchRetriever integration, improved search capabilities, and the official Playground launch.
Playground
MemCube Construction
Extended Evaluation Set
Plaintext Memory
KV Cache Concatenation
2025-07-07 · 🎉 MemOS v1.0: Stellar (星河) Preview Release
A SOTA Memory OS for LLMs is now open-sourced.
2025-07-04 · 🎉 MemOS Paper Release
MemOS: A Memory OS for AI System is available on arXiv.
2024-07-04 · 🎉 Memory3 Model Release at WAIC 2024
The Memory3 model, featuring a memory-layered architecture, was unveiled at the 2024 World Artificial Intelligence Conference.
```shell
git clone https://github.com/MemTensor/MemOS.git
cd MemOS
pip install -r ./docker/requirements.txt
```
Copy docker/.env.example to MemOS/.env. OPENAI_API_KEY, MOS_EMBEDDER_API_KEY, MEMRADER_API_KEY, and others can be applied for through BaiLian; set them in the MemOS/.env file. Set MOS_CHAT_MODEL_PROVIDER to select the backend (e.g., openai, qwen, deepseek, minimax).
Launch via Docker
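A minimal MemOS/.env for the configuration step above might look like this (all values are placeholders; obtain real keys through BaiLian or your provider):

```shell
# MemOS/.env — placeholder values, replace with your own keys
OPENAI_API_KEY=your-openai-key
MOS_EMBEDDER_API_KEY=your-embedder-key
MEMRADER_API_KEY=your-memreader-key
MOS_CHAT_MODEL_PROVIDER=openai   # or qwen, deepseek, minimax
```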
Enter the docker directory (cd docker) before executing the following command.
```shell
docker compose up
```
Docker Reference.
Launch via the uvicorn command line interface (CLI)
```shell
cd src
uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8001 --workers 1
```
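Once the server is running, a quick standard-library liveness check might look like the sketch below (assumption: the app exposes FastAPI's default /openapi.json route; adjust the port to match how you launched it — 8001 for the uvicorn command, or whatever Docker maps):

```python
import urllib.request

def is_up(base_url="http://localhost:8001"):
    """Return True if the MemOS API answers on its OpenAPI route."""
    try:
        with urllib.request.urlopen(f"{base_url}/openapi.json", timeout=3) as resp:
            return resp.status == 200
    except OSError:
        return False

print(is_up())
```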
CLI Reference.
Add a memory:

```python
import requests
import json

data = {
    "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
    "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca",
    "messages": [
        {
            "role": "user",
            "content": "I like strawberry"
        }
    ],
    "async_mode": "sync"
}
headers = {"Content-Type": "application/json"}
url = "http://localhost:8000/product/add"
res = requests.post(url=url, headers=headers, data=json.dumps(data))
print(f"result: {res.json()}")
```
Search memories:

```python
import requests
import json

data = {
    "query": "What do I like",
    "user_id": "8736b16e-1d20-4163-980b-a5063c3facdc",
    "mem_cube_id": "b32d0977-435d-4828-a86f-4f47f8b55bca"
}
headers = {"Content-Type": "application/json"}
url = "http://localhost:8000/product/search"
res = requests.post(url=url, headers=headers, data=json.dumps(data))
print(f"result: {res.json()}")
```
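The two calls above can be wrapped in a small reusable client. A standard-library sketch (the class name and helper method are illustrative, not part of MemOS; it mirrors the payloads and endpoints shown above):

```python
import json
import urllib.request

class MemOSClient:
    """Tiny wrapper around the /product/add and /product/search endpoints."""

    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _post(self, path, payload):
        # POST JSON and decode the JSON response.
        req = urllib.request.Request(
            f"{self.base_url}/{path}",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def add(self, user_id, mem_cube_id, content, role="user", async_mode="sync"):
        return self._post("product/add", {
            "user_id": user_id,
            "mem_cube_id": mem_cube_id,
            "messages": [{"role": role, "content": content}],
            "async_mode": async_mode,
        })

    def search(self, user_id, mem_cube_id, query):
        return self._post("product/search", {
            "query": query,
            "user_id": user_id,
            "mem_cube_id": mem_cube_id,
        })

client = MemOSClient()
# client.add("<user_id>", "<mem_cube_id>", "I like strawberry")
# client.search("<user_id>", "<mem_cube_id>", "What do I like")
```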
Join our community to ask questions, share your projects, and connect with other developers.
[!NOTE]
We publicly released the Short Version on May 28, 2025, making it the earliest work to propose the concept of a Memory Operating System for LLMs.
If you use MemOS in your research, we would appreciate citations to our papers.
@article{li2025memos_long,
title={MemOS: A Memory OS for AI System},
author={Li, Zhiyu and Song, Shichao and Xi, Chenyang and Wang, Hanyu and Tang, Chen and Niu, Simin and Chen, Ding and Yang, Jiawei and Li, Chunyu and Yu, Qingchen and Zhao, Jihao and Wang, Yezhaohui and Liu, Peng and Lin, Zehao and Wang, Pengyuan and Huo, Jiahao and Chen, Tianyi and Chen, Kai and Li, Kehang and Tao, Zhen and Ren, Junpeng and Lai, Huayi and Wu, Hao and Tang, Bo and Wang, Zhenren and Fan, Zhaoxin and Zhang, Ningyu and Zhang, Linfeng and Yan, Junchi and Yang, Mingchuan and Xu, Tong and Xu, Wei and Chen, Huajun and Wang, Haofeng and Yang, Hongkang and Zhang, Wentao and Xu, Zhi-Qin John and Chen, Siheng and Xiong, Feiyu},
journal={arXiv preprint arXiv:2507.03724},
year={2025},
url={https://arxiv.org/abs/2507.03724}
}
@article{li2025memos_short,
title={MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models},
author={Li, Zhiyu and Song, Shichao and Wang, Hanyu and Niu, Simin and Chen, Ding and Yang, Jiawei and Xi, Chenyang and Lai, Huayi and Zhao, Jihao and Wang, Yezhaohui and others},
journal={arXiv preprint arXiv:2505.22101},
year={2025},
url={https://arxiv.org/abs/2505.22101}
}
@article{yang2024memory3,
author = {Yang, Hongkang and Lin, Zehao and Wang, Wenjin and Wu, Hao and Li, Zhiyu and Tang, Bo and Wei, Wenqiang and Wang, Jinbo and Tang, Zeyun and Song, Shichao and Xi, Chenyang and Yu, Yu and Chen, Kai and Xiong, Feiyu and Tang, Linpeng and E, Weinan},
title = {Memory$^3$: Language Modeling with Explicit Memory},
journal = {Journal of Machine Learning},
year = {2024},
volume = {3},
number = {3},
pages = {300--346},
issn = {2790-2048},
doi = {10.4208/jml.240708},
url = {https://global-sci.com/article/91443/memory3-language-modeling-with-explicit-memory}
}
We welcome contributions from the community! Please read our contribution guidelines to get started.
MemOS is licensed under the Apache 2.0 License.