A modular Rust-based self-learning episodic memory system for AI agents, featuring hybrid storage with Turso (SQL) and redb (KV), async execution tracking, reward scoring, reflection, and pattern-based skill evolution. Designed for real-world applicability, maintainability, and scalable agent workflows.
```bash
npx skills add https://github.com/d-o-hub/rust-self-learning-memory --skill codebase-analyzer
```
Install this skill with the CLI and start using the SKILL.md workflow in your workspace.
A self-learning episodic memory system with semantic pattern search, embeddings, MCP server, and optional sandboxed code execution.
Overview • Features • Quick Start • Documentation • Contributing • Quality Gates • License
The Rust Self-Learning Memory System provides persistent memory across agent interactions through a comprehensive MCP (Model Context Protocol) server. It captures, stores, and learns from episodic experiences to improve future performance.
Architecture:
Tech Stack: Rust 2024 edition / Tokio + Turso/libSQL + redb cache + Wasmtime WASM + optional embeddings (OpenAI, Mistral, local)
- `search_patterns` - Semantic pattern search with configurable ranking
- `recommend_patterns` - Task-specific pattern recommendations
- `recommend_playbook` - Actionable step-by-step guidance (ADR-044)
- `checkpoint_episode` - Mid-task progress snapshots (ADR-044)

```rust
use memory_core::{ComplexityLevel, SelfLearningMemory, TaskContext, TaskType};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let memory = SelfLearningMemory::new();

    // Describe the task so results can be ranked against it
    let context = TaskContext {
        domain: "web-api".to_string(),
        language: Some("rust".to_string()),
        framework: None,
        complexity: ComplexityLevel::Moderate,
        tags: vec!["rest".to_string(), "async".to_string()],
    };

    // Search for patterns using natural language
    let results = memory.search_patterns_semantic(
        "How to handle API rate limiting with retries",
        context.clone(),
        5, // limit
    ).await?;

    for result in results {
        println!("Pattern: {:?}", result.pattern);
        println!("Relevance: {:.2}", result.relevance_score);
        println!("Success Rate: {:.1}%", result.pattern.success_rate() * 100.0);
    }

    // Get task-specific recommendations
    let recommendations = memory.recommend_patterns_for_task(
        "Build async HTTP client with connection pooling",
        context.clone(),
        3,
    ).await?;

    for rec in recommendations {
        println!("Recommended: {:?}", rec.pattern);
    }

    // NEW (ADR-044): Generate an actionable playbook
    let playbooks = memory.retrieve_playbooks(
        "Implement user authentication",
        "security",
        TaskType::CodeGeneration,
        context,
        1, // max playbooks
        5, // max steps
    ).await;

    if let Some(playbook) = playbooks.first() {
        println!("Playbook ID: {}", playbook.playbook_id);
        for step in &playbook.ordered_steps {
            println!("{}. {}", step.order, step.action);
        }
    }

    Ok(())
}
```
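As a rough mental model for the `relevance_score` printed above, a ranking can blend semantic similarity with a pattern's historical success rate. The weighting below (0.7 / 0.3) is purely illustrative and is not the crate's actual formula:

```rust
// Hypothetical relevance scoring: blend semantic similarity with the
// pattern's historical success rate. Weights are illustrative only.
fn relevance_score(semantic_similarity: f64, success_rate: f64) -> f64 {
    const SIMILARITY_WEIGHT: f64 = 0.7;
    const SUCCESS_WEIGHT: f64 = 0.3;
    // Clamp inputs so malformed data cannot push the score out of [0, 1].
    let sim = semantic_similarity.clamp(0.0, 1.0);
    let success = success_rate.clamp(0.0, 1.0);
    SIMILARITY_WEIGHT * sim + SUCCESS_WEIGHT * success
}

fn main() {
    // A close semantic match from a pattern that succeeds 90% of the time.
    let score = relevance_score(0.85, 0.9);
    println!("{:.3}", score); // 0.7*0.85 + 0.3*0.9 = 0.865
}
```

Weighting similarity above success rate keeps results on-topic while still favoring patterns that have worked before.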
Documentation: See do-memory-core/PATTERN_SEARCH_FEATURE.md for complete API reference and examples.
```bash
# Clone the repository
git clone https://github.com/d-o-hub/rust-self-learning-memory.git
cd rust-self-learning-memory

# Build the project
cargo build --release

# Run tests (nextest recommended)
cargo nextest run --all

# Doctests separately
cargo test --doc

# Run quality gates
./scripts/quality-gates.sh
```
```bash
# Quick setup with the provided script
./scripts/setup-local-db.sh

# Or manual setup
cp do-memory-cli/.env.example .env
mkdir -p ./data ./backups

# Run the interactive configuration wizard
do-memory-cli config wizard

# Follow the prompts to configure:
# - Database (local SQLite or remote Turso)
# - Storage (cache size, TTL, connection pool)
# - CLI (output format, progress bars, batch size)

# Validate configuration
do-memory-cli config validate

# Check configuration status
do-memory-cli config check
```
The configuration wizard provides interactive, step-by-step setup with sensible defaults and validation.
```bash
# Create an episode
do-memory-cli episode create --task "Implement user authentication" --context '{"language": "rust", "domain": "auth"}'

# List episodes
do-memory-cli episode list --limit 10

# Search episodes
do-memory-cli episode search "authentication" --limit 5

# Search patterns semantically
do-memory-cli pattern search --query "How to build REST API" --limit 5

# Analyze patterns
do-memory-cli pattern list --min-confidence 0.8

# Tag management
do-memory-cli tag add <episode-id> "important"
do-memory-cli tag search "important"

# Health check
do-memory-cli health check

# Playbook recommendation
do-memory-cli playbook recommend "Implement JWT auth" --domain security
```
```bash
# Start the MCP server
cargo run --bin do-memory-mcp-server

# Or run with custom config
cargo run --bin do-memory-mcp-server -- --config mcp-config-memory.json
```
```rust
use memory_core::{ComplexityLevel, SelfLearningMemory, TaskContext, TaskType};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let memory = SelfLearningMemory::new(Default::default()).await?;

    let context = TaskContext {
        domain: "web".to_string(),
        language: Some("rust".to_string()),
        framework: None,
        complexity: ComplexityLevel::Moderate,
        tags: vec!["api".to_string()],
    };

    // The returned ID is used for subsequent step logging and completion
    let episode_id = memory.start_episode(
        "Build REST API endpoint".to_string(),
        context,
        TaskType::CodeGeneration,
    ).await;

    Ok(())
}
```
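The overview mentions reward scoring for completed episodes. To illustrate the idea only, a hypothetical reward function (not the crate's actual scoring logic) might reward success and penalize retries and budget overruns:

```rust
// Hypothetical episode reward: full credit for success, minus penalties
// for retries and for time spent beyond the target budget.
// Illustrative only; memory_core's real scoring may differ entirely.
fn episode_reward(succeeded: bool, retries: u32, duration_ms: u64, target_ms: u64) -> f64 {
    let base = if succeeded { 1.0 } else { 0.0 };
    let retry_penalty = 0.1 * retries as f64;
    // Only time beyond the target counts against the score.
    let overrun = duration_ms.saturating_sub(target_ms) as f64 / target_ms as f64;
    let time_penalty = 0.2 * overrun.min(1.0);
    (base - retry_penalty - time_penalty).max(0.0)
}

fn main() {
    // Succeeded after one retry, 20% over the 500 ms budget.
    println!("{:.2}", episode_reward(true, 1, 600, 500)); // 0.86
}
```

Clamping at zero keeps rewards comparable across episodes even when a task fails badly.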
| Document | Description |
|---|---|
| Configuration Wizard | Interactive setup guide |
| API Reference | Current MCP tool contract index |
| Configuration Guide | Complete configuration options |
| Database Setup | Local database configuration |
| Quality Gates | Automated quality standards |
| YAML Validation | Configuration validation strategy |
| Testing Guide | Testing infrastructure and strategies |
| Contributing | Development workflow |
| Security | Security policies and practices |
| Deployment | Deployment strategies |
| Release Engineering | Release workflow and automation |
| Playbooks & Checkpoints | Actionable memory and handoff |
| Document | Description |
|---|---|
| Building the Project | Build commands and setup |
| Running Tests | Testing strategies and coverage |
| Code Conventions | Rust idioms and patterns |
| Service Architecture | System design and components |
| Database Schema | Data structures and relationships |
| Communication Patterns | Inter-service communication |
| Agent Docs Index | Workflow docs and high-impact reference files |
| Crate | Description |
|---|---|
| do-memory-core | Core episodic learning system |
| do-memory-mcp | MCP server with secure sandbox |
| do-memory-cli | Command-line interface |
| do-memory-storage-turso | Turso/libSQL storage backend |
| do-memory-storage-redb | redb cache backend |
The project maintains high quality standards through automated quality gates:
| Gate | Threshold | Description |
|---|---|---|
| Build | 0 errors | cargo build --all |
| Linting | 0 warnings | ./scripts/code-quality.sh clippy --workspace |
| Formatting | 100% | ./scripts/code-quality.sh fmt |
| Tests | All pass | cargo nextest run --all |
| File Size | ≤500 LOC | Production source files only |
| Security | 0 vulns | cargo audit in CI |
| Semver | No breaks | cargo semver-checks in CI |
Run quality gates locally:
```bash
./scripts/quality-gates.sh
```
For more details, see Quality Gates Documentation.
Enable optional features via Cargo:
```bash
# Basic features (default)
cargo build

# All features
cargo build --all-features

# Specific features
cargo build --features openai
cargo build --features mistral
cargo build --features local-embeddings
cargo build --features embeddings-full
```
Available Features:
- `openai`: OpenAI API embeddings support (do-memory-core)
- `mistral`: Mistral AI embeddings support (do-memory-core)
- `local-embeddings`: CPU-based local embeddings (do-memory-core)
- `embeddings-full`: All embedding providers (do-memory-core)
- `turso`: Turso cloud storage with keepalive pool (do-memory-cli)
- `redb`: redb local cache layer (do-memory-cli, default)
- `full`: All features combined (do-memory-cli)
- `wasmtime-backend`: Wasmtime WASM sandbox (do-memory-mcp, default)
- `compression`: Network compression - lz4, zstd, gzip (do-memory-storage-turso)
- `hybrid_search`: FTS5 hybrid search (do-memory-storage-turso)

```bash
# Turso Cloud (default)
TURSO_DATABASE_URL=libsql://your-db.turso.io
TURSO_AUTH_TOKEN=your-auth-token

# Local SQLite (fallback)
LOCAL_DATABASE_URL=sqlite:./data/memory.db
MEMORY_REDB_PATH=./data/memory.redb

# Cache settings
MEMORY_MAX_EPISODES_CACHE=1000
MEMORY_CACHE_TTL_SECONDS=3600

# CLI config
MEMORY_CLI_CONFIG=./do-memory-cli.toml

# Embeddings (CLI/MCP)
EMBEDDING_PROVIDER=openai|mistral|azure|local
OPENAI_API_KEY=sk-your-key
MISTRAL_API_KEY=your-mistral-key
AZURE_OPENAI_API_KEY=your-azure-key
OPENAI_API_KEY_ENV=OPENAI_API_KEY
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_SIMILARITY_THRESHOLD=0.7
EMBEDDING_BATCH_SIZE=32

# Sandbox settings
MCP_USE_WASM=true
JAVY_PLUGIN=./do-memory-mcp/javy-plugin.wasm
```
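The Turso-first, local-SQLite-fallback order described above could be resolved like this. The variable names come from the README; the resolution logic itself is an illustrative assumption, not the crate's code:

```rust
use std::env;

// Sketch of the documented fallback order: Turso cloud URL first, then
// the local SQLite URL, then a hard-coded default path. Illustrative only.
fn resolve_database_url() -> String {
    env::var("TURSO_DATABASE_URL")
        .or_else(|_| env::var("LOCAL_DATABASE_URL"))
        .unwrap_or_else(|_| "sqlite:./data/memory.db".to_string())
}

fn main() {
    println!("database url: {}", resolve_database_url());
}
```

This pattern means a deployment can switch from local to cloud storage purely through environment variables, with no config-file edits.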
```toml
[database]
turso_url = "libsql://your-db.turso.io"
turso_token = "your-auth-token"
redb_path = "memory.redb"

[storage]
max_episodes_cache = 1000
cache_ttl_seconds = 3600
pool_size = 10

[sandbox]
max_execution_time_ms = 5000
max_memory_mb = 128
max_cpu_percent = 50
allow_network = false
allow_filesystem = false

[cli]
default_format = "human"
progress_bars = true
batch_size = 100
```
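A plain-Rust mirror of the `[sandbox]` section above, with a hypothetical validation step; the real crates may parse and validate this differently (e.g. via serde):

```rust
// Hypothetical struct mirroring the [sandbox] TOML section above.
// Field names match the config keys; the validation rules are assumptions.
#[derive(Debug)]
struct SandboxConfig {
    max_execution_time_ms: u64,
    max_memory_mb: u64,
    max_cpu_percent: u8,
    allow_network: bool,
    allow_filesystem: bool,
}

impl SandboxConfig {
    fn validate(&self) -> Result<(), String> {
        if self.max_execution_time_ms == 0 {
            return Err("max_execution_time_ms must be > 0".into());
        }
        if self.max_cpu_percent == 0 || self.max_cpu_percent > 100 {
            return Err("max_cpu_percent must be in 1..=100".into());
        }
        Ok(())
    }
}

fn main() {
    // Values taken from the example config above.
    let cfg = SandboxConfig {
        max_execution_time_ms: 5000,
        max_memory_mb: 128,
        max_cpu_percent: 50,
        allow_network: false,
        allow_filesystem: false,
    };
    assert!(cfg.validate().is_ok());
    println!("sandbox config ok: {:?}", cfg);
}
```

Validating limits before handing them to the sandbox catches misconfiguration early, rather than at first code execution.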
```text
┌─────────────────────────────────────────────────────────────┐
│                       Memory CLI                            │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Episode   │  │   Pattern   │  │   Storage   │          │
│  │  Management │  │   Analysis  │  │  Operations │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                    Memory MCP Server                        │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │  MCP Tools  │  │    WASM     │  │  Advanced   │          │
│  │  Interface  │  │   Sandbox   │  │  Analysis   │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘
                              │
┌─────────────────────────────────────────────────────────────┐
│                      Memory Core                            │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐          │
│  │   Episode   │  │   Pattern   │  │   Reward    │          │
│  │  Management │  │  Extraction │  │   Scoring   │          │
│  └─────────────┘  └─────────────┘  └─────────────┘          │
└─────────────────────────────────────────────────────────────┘
                              │
          ┌───────────────────┼─────────────────────┐
          │                   │                     │
  ┌───────▼────────┐ ┌────────▼────────┐  ┌────────▼────────┐
  │  Turso Storage │ │   Redb Cache    │  │    In-Memory    │
  │                │ │                 │  │                 │
  │  libSQL/Remote │ │   Fast Access   │  │    Temporary    │
  └────────────────┘ └─────────────────┘  └─────────────────┘
```
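The storage tiers in the diagram follow a classic read-through pattern: check the fast redb-style cache first, fall back to durable storage, and backfill the cache on a miss. The trait and types below are hypothetical, not the crates' real storage API:

```rust
use std::collections::HashMap;

// Hypothetical minimal storage abstraction for illustrating the read path.
trait Store {
    fn get(&self, key: &str) -> Option<String>;
    fn put(&mut self, key: &str, value: String);
}

// In-memory stand-in for both the cache and the durable tier.
impl Store for HashMap<String, String> {
    fn get(&self, key: &str) -> Option<String> {
        HashMap::get(self, key).cloned()
    }
    fn put(&mut self, key: &str, value: String) {
        self.insert(key.to_string(), value);
    }
}

fn read_through(cache: &mut impl Store, durable: &impl Store, key: &str) -> Option<String> {
    if let Some(hit) = cache.get(key) {
        return Some(hit); // fast path: cache hit
    }
    let value = durable.get(key)?; // slow path: durable storage
    cache.put(key, value.clone()); // backfill so the next read is fast
    Some(value)
}

fn main() {
    let mut cache: HashMap<String, String> = HashMap::new();
    let mut durable: HashMap<String, String> = HashMap::new();
    Store::put(&mut durable, "episode:1", "Build REST API endpoint".to_string());

    let v = read_through(&mut cache, &durable, "episode:1");
    assert_eq!(v.as_deref(), Some("Build REST API endpoint"));
    // The second read now hits the cache without touching durable storage.
    assert!(Store::get(&cache, "episode:1").is_some());
    println!("read-through ok");
}
```

The backfill step is what produces the large gap between cold and warm-cache latencies reported in the performance table below.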
The MCP server exposes tools via lazy loading (ADR-024):
All operations meet or exceed performance targets:
| Operation | Target (P95) | Typical Performance |
|---|---|---|
| Episode Creation | < 50ms | ~2.5 µs (19,531x faster) |
| Step Logging | < 20ms | ~1.1 µs (17,699x faster) |
| Episode Completion | < 500ms | ~3.8 µs (130,890x faster) |
| Pattern Extraction | < 1000ms | ~10.4 µs (95,880x faster) |
| Memory Retrieval | < 100ms | ~721 µs (138x faster) |
| WASM Execution | < 200ms | ~50-200ms (typical) |
Typical performance numbers are from internal benchmarks on a warm cache; results vary by hardware and configuration. Run cargo bench for local measurements and see docs/QUALITY_GATES.md for the performance regression gate.
Run cargo bench for workspace benchmarks. CLI benchmarks live in do-memory-cli/benches/cli_benchmarks.rs; quality gate expectations and regression checks are documented in docs/QUALITY_GATES.md.
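For quick ad-hoc checks outside `cargo bench`, a minimal timing harness using only `std::time::Instant` can approximate per-operation cost; this is a rough sketch, not a substitute for the project's criterion-style benchmarks:

```rust
use std::time::Instant;

// Crude average-latency measurement: run the closure `iters` times and
// divide the elapsed wall-clock time. No warmup or outlier handling.
fn time_avg_nanos<F: FnMut()>(iters: u32, mut op: F) -> u128 {
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    start.elapsed().as_nanos() / iters as u128
}

fn main() {
    let mut v: Vec<u64> = Vec::new();
    // Vec::push stands in for a hot-path operation like episode creation.
    let avg = time_avg_nanos(10_000, || v.push(1));
    println!("avg per op: {} ns ({} items pushed)", avg, v.len());
}
```

Wall-clock averages like this are noisy; treat them as sanity checks and rely on `cargo bench` for regression-grade numbers.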
We welcome contributions! Please see our Contributing Guide for details.
1. Create a feature branch: `git checkout -b feature-name`
2. Run `./scripts/quality-gates.sh`
3. Run `cargo fmt` and `cargo clippy` before committing

This project is licensed under the MIT License - see the LICENSE file for details.