Install this skill with the skills CLI and start using the SKILL.md workflow in your workspace:

```bash
npx skills add https://github.com/d4kooo/openclaw-token-memory-optimizer --skill token-optimizer
```
An optimization suite for OpenClaw agents to prevent token leaks and context bloat.
Alternatively, install via the OpenClaw CLI:

```bash
openclaw skill install token-optimizer
```

Or clone it manually into your workspace (note: clone the repository URL itself, not a `/tree/main` browser link):

```bash
cd ~/.openclaw/workspace/skills/
git clone https://github.com/D4kooo/Openclaw-Token-memory-optimizer
```
In your `openclaw.json`, set `sessionTarget: "isolated"` for cron jobs so their output stays out of the main session context:

```json
{
  "cron": {
    "jobs": [
      {
        "name": "My Background Task",
        "schedule": { "kind": "every", "everyMs": 1800000 },
        "sessionTarget": "isolated",
        "payload": {
          "kind": "agentTurn",
          "message": "Do the thing. Use message tool if human needs to know."
        }
      }
    ]
  }
}
```

Here `everyMs: 1800000` (1,800,000 ms) runs the job every 30 minutes.
Configure semantic search for your memory files:
```json
{
  "memorySearch": {
    "embedding": {
      "provider": "local",
      "model": "hf:second-state/All-MiniLM-L6-v2-Embedding-GGUF"
    },
    "store": "sqlite"
  }
}
```
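The settings table in this README also documents `memorySearch.paths` (default `["memory/", "MEMORY.md"]`), which controls which files get indexed. A sketch combining it with the options above:

```json
{
  "memorySearch": {
    "embedding": {
      "provider": "local",
      "model": "hf:second-state/All-MiniLM-L6-v2-Embedding-GGUF"
    },
    "store": "sqlite",
    "paths": ["memory/", "MEMORY.md"]
  }
}
```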
When context exceeds 100k tokens, restart the gateway:

```bash
openclaw gateway restart
```

| Setting | Description | Default |
|---|---|---|
| `memorySearch.embedding.provider` | Embedding provider (`local`, `openai`) | — |
| `memorySearch.embedding.model` | Model for embeddings | — |
| `memorySearch.store` | Storage backend (`sqlite`, `memory`) | `memory` |
| `memorySearch.paths` | Paths to index | `["memory/", "MEMORY.md"]` |
| `cron.jobs[].sessionTarget` | Session type (`main`, `isolated`) | `main` |
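The documented values above can be sanity-checked before deploying a config. A minimal sketch in Python; the `check` helper and `ALLOWED` map are illustrative, not part of OpenClaw or this skill:

```python
# Hypothetical validator for the documented setting values above.
ALLOWED = {
    "memorySearch.embedding.provider": {"local", "openai"},
    "memorySearch.store": {"sqlite", "memory"},
    "cron.jobs[].sessionTarget": {"main", "isolated"},
}

def check(config: dict) -> list[str]:
    """Return the settings whose values fall outside the documented options."""
    errors = []
    search = config.get("memorySearch", {})
    provider = search.get("embedding", {}).get("provider")
    if provider is not None and provider not in ALLOWED["memorySearch.embedding.provider"]:
        errors.append(f"memorySearch.embedding.provider: {provider}")
    store = search.get("store")
    if store is not None and store not in ALLOWED["memorySearch.store"]:
        errors.append(f"memorySearch.store: {store}")
    for job in config.get("cron", {}).get("jobs", []):
        target = job.get("sessionTarget", "main")  # table default is "main"
        if target not in ALLOWED["cron.jobs[].sessionTarget"]:
            errors.append(f"cron.jobs[].sessionTarget: {target}")
    return errors
```

Running `check` on the cron example earlier in this README would return an empty list, since `"isolated"` is a documented value.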
Long-running OpenClaw sessions steadily accumulate tokens. Without optimization, you'll hit context limits and degraded performance. This skill teaches your agent to stay lean.
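To get a feel for how close a session is to the 100k-token restart threshold, a rough estimate is enough. The ~4-characters-per-token ratio below is a common heuristic for English text, and the helper names are illustrative; this is not OpenClaw's actual tokenizer:

```python
import pathlib

CHARS_PER_TOKEN = 4  # rough average for English text; an assumption, not the real tokenizer

def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN

def memory_budget(paths: list[str], limit: int = 100_000) -> tuple[int, bool]:
    """Sum estimated tokens across memory files; True means a restart is likely due."""
    total = sum(
        estimate_tokens(pathlib.Path(p).read_text(errors="ignore"))
        for p in paths
        if pathlib.Path(p).is_file()
    )
    return total, total > limit
```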
PRs welcome!
MIT — Use freely, credit appreciated.
Part of the OpenClaw ecosystem. 🦦