Install this skill with the CLI and start using the SKILL.md workflow in your workspace:

```
npx skills add https://github.com/d4kooo/openclaw-token-memory-optimizer --skill token-optimizer
```
An optimization suite for OpenClaw agents to prevent token leaks and context bloat.
Or install via the OpenClaw CLI:

```
openclaw skill install token-optimizer
```

Or clone it manually into your skills directory (note: `git clone` takes the repository URL, not a `/tree/main` browser URL):

```
cd ~/.openclaw/workspace/skills/
git clone https://github.com/D4kooo/Openclaw-Token-memory-optimizer
```
In your `openclaw.json`, set `"sessionTarget": "isolated"` for cron jobs:
```json
{
  "cron": {
    "jobs": [
      {
        "name": "My Background Task",
        "schedule": { "kind": "every", "everyMs": 1800000 },
        "sessionTarget": "isolated",
        "payload": {
          "kind": "agentTurn",
          "message": "Do the thing. Use message tool if human needs to know."
        }
      }
    ]
  }
}
```
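The `everyMs` value is an interval in milliseconds. A quick sanity check for picking that number (plain Python helper, not part of the skill itself; the `every_ms` name is hypothetical):

```python
def every_ms(minutes: int = 0, hours: int = 0) -> int:
    # Convert a human-readable interval to the millisecond value
    # expected by the "everyMs" field in openclaw.json.
    return (hours * 60 + minutes) * 60 * 1000

print(every_ms(minutes=30))  # 1800000, the value used in the job above
```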
Configure semantic search for your memory files:
```json
{
  "memorySearch": {
    "embedding": {
      "provider": "local",
      "model": "hf:second-state/All-MiniLM-L6-v2-Embedding-GGUF"
    },
    "store": "sqlite"
  }
}
```
When context exceeds 100k tokens, restart the gateway:

```
openclaw gateway restart
```

| Setting | Description | Default |
|---|---|---|
| `memorySearch.embedding.provider` | Embedding provider (`local`, `openai`) | (none) |
| `memorySearch.embedding.model` | Model for embeddings | (none) |
| `memorySearch.store` | Storage backend (`sqlite`, `memory`) | `memory` |
| `memorySearch.paths` | Paths to index | `["memory/", "MEMORY.md"]` |
| `cron.jobs[].sessionTarget` | Session type (`main`, `isolated`) | `main` |
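The 100k threshold can be approximated with the common chars-per-token heuristic. A minimal sketch (illustrative only; OpenClaw's actual tokenizer and counting may differ):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def should_restart(context: str, limit: int = 100_000) -> bool:
    # Flag the session once the estimated token count exceeds the limit.
    return estimate_tokens(context) > limit

print(should_restart("x" * 500_000))  # True: ~125k estimated tokens
```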
Long-running OpenClaw sessions steadily accumulate context tokens. Without optimization, you'll hit context limits and degraded responses. This skill teaches your agent to stay lean.
PRs welcome!
MIT. Use freely, credit appreciated.
Part of the OpenClaw ecosystem. 🦦