An improved implementation of the Ralph Wiggum technique for autonomous AI agent orchestration
npx skills add https://github.com/mikeyobrien/ralph-orchestrator --skill pr-demo
Install this skill with the CLI and start using the SKILL.md workflow in your workspace.
A hat-based orchestration framework that keeps AI agents in a loop until the task is done.
"Me fail English? That's unpossible!" - Ralph Wiggum
Documentation | Getting Started | Presets
npm install -g @ralph-orchestrator/ralph-cli
curl --proto '=https' --tlsv1.2 -LsSf \
https://github.com/mikeyobrien/ralph-orchestrator/releases/latest/download/ralph-cli-installer.sh | sh
cargo install ralph-cli
Homebrew is not currently published from this repository's automated release flow. Prefer npm, Cargo, or the GitHub Releases installer.
# 1. Initialize Ralph with your preferred backend
ralph init --backend claude
# 2. Plan your feature (interactive PDD session)
ralph plan "Add user authentication with JWT"
# Creates: specs/user-authentication/requirements.md, design.md, implementation-plan.md
# 3. Implement the feature
ralph run -p "Implement the feature in specs/user-authentication/"
Ralph iterates until it outputs LOOP_COMPLETE or hits the iteration limit.
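The loop idea can be pictured with a short shell sketch (illustrative only, not Ralph's actual implementation; the `echo` is a stand-in for one real agent invocation):

```shell
# Illustrative sketch of the Ralph Wiggum loop: re-invoke the agent until it
# emits the completion sentinel or the iteration cap is reached.
max_iters=5
i=0
status="incomplete"
while [ "$i" -lt "$max_iters" ]; do
  i=$((i + 1))
  output=$(echo "did some work... LOOP_COMPLETE")  # stand-in for an agent call
  case "$output" in
    *LOOP_COMPLETE*) status="complete"; break ;;
  esac
done
echo "$status after $i iteration(s)"
```

In the sketch the sentinel appears on the first pass, so the loop exits immediately; a real run would keep re-invoking the agent until the sentinel shows up or the cap is hit.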
For simpler tasks, skip planning and run directly:
ralph run -p "Add input validation to the /users endpoint"
Alpha: The web dashboard is under active development. Expect rough edges and breaking changes.
Ralph includes a web dashboard for monitoring and managing orchestration loops.
ralph web # starts Rust RPC API + frontend + opens browser
ralph web --no-open # skip browser auto-open
ralph web --backend-port 4000 # custom RPC API port
ralph web --frontend-port 8080 # custom frontend port
ralph web --legacy-node-api # opt into deprecated Node tRPC backend
ralph mcp serve is scoped to a single workspace root per server instance.
ralph mcp serve --workspace-root /path/to/repo
Precedence is:

1. `--workspace-root` flag
2. `RALPH_API_WORKSPACE_ROOT` environment variable

For multi-repo use, run one MCP server instance per repo/workspace. Ralph's current control-plane APIs persist config, tasks, loops, planning sessions, and collections under a single workspace root, so server-per-workspace is the deterministic model.
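For example, an MCP client configuration might register one server per repository. The exact config shape depends on your MCP client; the server names and paths below are placeholders:

```json
{
  "mcpServers": {
    "ralph-repo-a": {
      "command": "ralph",
      "args": ["mcp", "serve", "--workspace-root", "/path/to/repo-a"]
    },
    "ralph-repo-b": {
      "command": "ralph",
      "args": ["mcp", "serve", "--workspace-root", "/path/to/repo-b"]
    }
  }
}
```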
Requirements:

- Node.js (for the frontend; `nvm install` reads `.nvmrc`)
- A Rust toolchain (for the `ralph-api` component)

On first run, ralph web auto-detects missing node_modules and runs npm install.
To set up Node.js:
# Option 1: nvm (recommended)
nvm install # reads .nvmrc
# Option 2: direct install
# https://nodejs.org/
For development:
npm install # install frontend + legacy backend deps
npm run dev:api # Rust RPC API (port 3000)
npm run dev:web # frontend (port 5173)
npm run dev # frontend only (default)
npm run dev:legacy-server # deprecated Node backend (optional)
npm run test # all frontend/backend workspace tests
Ralph can run as an MCP server over stdio for MCP-compatible clients:
ralph mcp serve
Use this mode from an MCP client configuration rather than an interactive terminal workflow.
Ralph implements the Ralph Wiggum technique — autonomous task completion through continuous iteration. It supports:
- Built-in hats: code-assist, debug, research, review, and pdd-to-code-assist, with more patterns documented as examples

Ralph supports human interaction during orchestration via Telegram. Agents can ask questions and block until answered; humans can send proactive guidance at any time.
Quick onboarding (Telegram):
ralph bot onboard --telegram # guided setup (token + chat id)
ralph bot status # verify config
ralph bot test # send a test message
ralph run -c ralph.bot.yml -p "Help the human"
# ralph.yml
RObot:
  enabled: true
  telegram:
    bot_token: "your-token"  # Or RALPH_TELEGRAM_BOT_TOKEN env var
- Agents emit human.interact events; the loop blocks until a response arrives or times out
- Target a specific loop with an @loop-id prefix, or default to the primary loop
- /status, /tasks, /restart for real-time loop visibility

See the Telegram guide for setup instructions.
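A minimal `ralph.bot.yml` might look like the following sketch, based on the fields shown above. The `chat_id` key name is an assumption; `ralph bot onboard --telegram` generates the real config for you:

```yaml
# ralph.bot.yml (sketch; chat_id key name is an assumption,
# run `ralph bot onboard --telegram` to generate the real file)
RObot:
  enabled: true
  telegram:
    bot_token: "your-token"   # or RALPH_TELEGRAM_BOT_TOKEN env var
    chat_id: "123456789"      # placeholder chat id from onboarding
```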
Full documentation is available at mikeyobrien.github.io/ralph-orchestrator:
What is Ralph Orchestrator?
Ralph is a hat-based orchestration framework that implements the Ralph Wiggum technique — autonomous task completion through continuous iteration. It keeps AI agents in a loop until the task is done, supporting multiple backends like Claude Code, Gemini CLI, Codex, and more.
How is Ralph different from other AI coding tools?
Unlike single-shot AI assistants, Ralph iterates until completion using a "hat system" with specialized personas. It includes backpressure gates (tests, lint, typecheck) that reject incomplete work, plus persistent memories and tasks for continuous learning.
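The backpressure-gate idea can be sketched in shell (illustrative; the checks inside the gate are stand-ins, not Ralph's actual configuration):

```shell
# Illustrative backpressure gate: an iteration only counts as done when
# every check passes; otherwise the work is rejected and the loop continues.
run_gates() {
  # Stand-in for real checks such as: npm test && npm run lint && npm run typecheck
  true
}
if run_gates; then
  verdict="accepted"
else
  verdict="rejected"
fi
echo "iteration $verdict"
```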
What are the system requirements?
Node.js for the web dashboard frontend, plus a Rust toolchain if you build the ralph-api component from source.

Which installation method should I use?

- npm install -g @ralph-orchestrator/ralph-cli (simplest if you already use Node)
- cargo install ralph-cli (best for Rust developers)
- curl ... | sh (GitHub Releases installer script)

Is Homebrew supported?
Homebrew is not currently published from this repository's automated release flow. Prefer npm, Cargo, or the GitHub Releases installer.
How do I start a new project with Ralph?
ralph init --backend claude
ralph plan "Add user authentication with JWT"
ralph run -p "Implement the feature in specs/user-authentication/"
What backends does Ralph support?
Claude Code, Kiro, Gemini CLI, Codex, Amp, Copilot CLI, and OpenCode.
What is the "hat system"?
Ralph uses specialized personas (hats) that coordinate through events. Each hat has a specific role — code-assist, debug, research, review, and pdd-to-code-assist — enabling structured multi-step task execution.
What is RObot?
RObot enables human interaction during orchestration via Telegram. Agents can ask questions and block until answered; humans can send proactive guidance mid-loop.
How do I set up Telegram integration?
ralph bot onboard --telegram # guided setup
ralph bot status # verify config
ralph bot test # send a test message
How do I access the web dashboard?
Run ralph web to start the Rust RPC API + frontend and open your browser. The dashboard is currently in Alpha — expect rough edges and breaking changes.
Can I customize the dashboard ports?
Yes: ralph web --backend-port 4000 --frontend-port 8080
How do I run Ralph as an MCP server?
ralph mcp serve --workspace-root /path/to/repo
Each MCP server instance is scoped to a single workspace root. For multi-repo use, run one instance per workspace.
Ralph fails to start with "node_modules not found"
Run npm install in the project directory, or let ralph web auto-detect and install on first run.
How do I set up Node.js if not installed?
Use nvm (recommended): nvm install (reads .nvmrc), or install directly from https://nodejs.org/
Where can I get help?
See the documentation at mikeyobrien.github.io/ralph-orchestrator, or open an issue on the GitHub repository.
Contributions are welcome! See CONTRIBUTING.md for guidelines and CODE_OF_CONDUCT.md for community standards.
MIT License — See LICENSE for details.
Join the ralph-orchestrator community to discuss AI agent patterns, get help with your implementation, or contribute to the roadmap.
"I'm learnding!" - Ralph Wiggum