Production-ready Claude Code 2.0 setup for autonomous development
npx skills add https://github.com/akaszubski/autonomous-dev --skill scientific-validation

Install this skill with the CLI and start using the SKILL.md workflow in your workspace.
A harness that wraps Claude Code with enforcement, specialist agents, and alignment gates to deliver consistent, production-grade software engineering outcomes.
A harness is the software and structure that wraps an AI model to keep it on track — the prompts, tools, feedback loops, constraints, and validation that turn a capable but undisciplined model into a reliable system. Without a harness, the model is a wild horse with raw power but no direction. With one, that power is controlled, directed, and accountable.
The problem: Claude Code is brilliant at execution but unreliable at process. It skips tests, declares "good enough" on failing code, bypasses security reviews, and drifts from your project's intent. Not out of malice — it's trained to complete tasks, not follow engineering discipline. Prompt-level instructions ("please run tests") get ignored under context pressure.
The solution: autonomous-dev is a harness that enforces the full software development lifecycle:
/implement "#72"

In short: You define the rules once. The harness enforces them for every feature.
When you type /implement "#72", Claude coordinates specialist agents through a multi-stage pipeline:
| Step | Agent | What It Does |
|---|---|---|
| 1 | alignment check | Checks if the feature fits your PROJECT.md goals and scope. Blocks if it doesn't. |
| 2 | researcher (x2) | Searches codebase + web for patterns, best practices, existing solutions |
| 3 | planner | Designs the architecture and creates a file-by-file implementation plan |
| 3.5 | plan-critic | Adversarial review across 5 axes (assumption audit, scope creep, existing solutions, minimalism, uncertainty). Iterates planner until verdict is PROCEED. |
| 4 | acceptance tests | Writes acceptance tests that define what "done" means |
| 5 | implementer | Writes code + unit tests to make the acceptance tests pass |
| 5.5 | plan-implementation alignment gate | Compares planned files vs implemented files. Blocks on >50% divergence. |
| 6 | spec-validator | Spec-blind behavioral validation — writes tests from acceptance criteria without seeing the implementation, then validates against it |
| 7 | reviewer | Reviews code quality across 6 dimensions (correctness, tests, security, performance, maintainability, observability) |
| 8 | security-auditor | Scans for vulnerabilities (OWASP top 10, 29 security-sensitive path patterns) |
| 9 | doc-master | Updates documentation to match the new code |
After all agents complete:
One command. Full software development lifecycle. Aligned to your project.
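The step 5.5 gate above compares planned files against the files actually touched. One plausible divergence measure is a Jaccard-style set comparison — this sketch is illustrative only; the plugin's actual formula is not shown in this document:

```python
def plan_divergence(planned: set[str], implemented: set[str]) -> float:
    """Fraction of touched files NOT shared between plan and diff (illustrative)."""
    union = planned | implemented
    if not union:
        return 0.0
    return 1 - len(planned & implemented) / len(union)

planned = {"src/api/users.py", "tests/test_users.py"}
implemented = {"src/api/users.py", "src/models/user.py", "src/services/email.py"}
d = plan_divergence(planned, implemented)
print(f"divergence: {d:.0%}")  # under this measure, >50% divergence would block
```

With one shared file out of four total, divergence is 75% — the gate would block and force a plan revision or re-plan.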
Without a harness:
With autonomous-dev:
/implement "#X"

The difference: Claude stops guessing. The harness keeps it honest.
autonomous-dev enforces process through three layers, each addressing a different failure mode:
Hooks (deterministic enforcement) — Run on every tool call, commit, and prompt. Hard gates that can't be argued with: tests must pass with 0 failures before code review starts, no stubs or placeholders allowed, security scan is mandatory, documentation must stay in sync. These are the harness equivalent of guardrails — if the model tries to skip a step, it's physically blocked.
Agents (adversarial evaluation) — Each pipeline step is handled by a specialist agent with a specific job and constrained tools. Critically, the implementer never reviews its own work — a separate reviewer agent with a skeptical mandate evaluates it. This follows the generator/evaluator pattern: the tension between agents improves quality, much like the adversarial training in a GAN. No single agent can skip steps or self-approve.
Skills (progressive context injection) — Instead of stuffing the context window with every rule upfront (which causes context anxiety and drift), skills inject domain knowledge only when relevant. Testing standards load during test writing. Security patterns load during security review. This keeps Claude focused within its current step.
The result: Every feature goes through every step. Not because Claude remembers to, but because the harness won't let it skip.
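To make the "hard gate" idea concrete, here is a minimal sketch of a deterministic pre-tool hook. The function name, event shape, and the `tests_failing` flag are illustrative assumptions, not the plugin's actual code; only the `{"decision": "block"}` output convention comes from this document:

```python
def pre_tool_use(event: dict, tests_failing: bool) -> dict:
    """Illustrative hook decision for a tool-call event (not the real plugin code)."""
    command = event.get("tool_input", {}).get("command", "")
    if "git commit" in command and tests_failing:
        # A JSON "block" decision is a hard gate the model cannot argue past.
        return {"decision": "block",
                "reason": "Tests must pass (0 failures) before committing."}
    return {"decision": "approve"}

print(pre_tool_use({"tool_input": {"command": "git commit -m 'wip'"}},
                   tests_failing=True))
```

Because the decision is computed by code rather than requested in a prompt, there is no phrasing the model can use to talk its way past the gate.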
Substantial enforcement hardening shipped recently. Current focus areas:
- /plan + adversarial plan-critic review (1-5 Likert across 5 axes, composite ≥3.0 → PROCEED)
- /implement, /advise, /refactor
- reviewer → security-auditor always sequential on security-sensitive changes
- ~/.claude/archive/ with 17-column SQLite index
- agents/*.md and commands/*.md (PRISM-validated anti-patterns blocked)

See CHANGELOG.md for complete release-over-release details with issue references.
The core insight of harness engineering: reliability in multi-step AI workflows is difficult because failures compound. A 10-step process with 90% accuracy per step fails over 60% of the time. "Agent skills" that are just prompts or markdown files — where you hope the AI follows instructions — lack the dependability required for production work.
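The arithmetic is easy to check — per-step reliability compounds multiplicatively, so independent failures dominate long pipelines:

```python
# Probability that a multi-step pipeline completes with no step failing,
# assuming independent per-step success rates.
def pipeline_success(per_step: float, steps: int) -> float:
    return per_step ** steps

p = pipeline_success(0.90, 10)
print(f"10 steps at 90% each: {p:.1%} success, {1 - p:.1%} failure")
# 0.9 ** 10 ≈ 0.349, so the pipeline fails roughly 65% of the time
```

This is why the fix has to be validation at every stage rather than a better prompt: raising per-step reliability to ~99% via gating brings ten-step success above 90%.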
The solution is deterministic rails: a software layer that gates and validates every stage. autonomous-dev implements all 12 elements of this framework:
| # | Element | How autonomous-dev implements it |
|---|---|---|
| 1 | State Machine | pipeline_state.py — 13-phase state machine with Step enum, advance()/complete_step() API, JSON-persisted state |
| 2 | Validation Loops | STEP 8 HARD GATE — runs pytest, loops until 0 failures/0 errors. Anti-stubbing gate blocks raise NotImplementedError() shortcuts |
| 3 | Isolated Sub-Agents | Specialist agents, each spawned with fresh context and constrained tools. Model selection per agent (Haiku/Sonnet/Opus) |
| 4 | Virtual File System | Git worktree isolation for batch mode. checkpoint.py + artifacts.py persist outputs to .claude/artifacts/ per phase |
| 5 | Human-in-the-Loop | Plan mode (STEP 5) requires user approval before implementation. pause_controller.py for explicit gates |
| 6 | Hook Enforcement | 27 hooks with JSON {"decision": "block"} hard gates — not prompt-level nudges. Research-confirmed: nudges produce unreliable compliance. Includes plan_gate.py (blocks complex Write/Edit without validated plan) and Layer 6 prompt quality gate (blocks anti-pattern prompts to agents/*.md and commands/*.md) |
| 7 | State Persistence | CheckpointManager for resume after failure. batch_state_manager.py for multi-feature recovery. /implement --resume |
| 8 | Context Management | Progressive skill injection loads domain knowledge per-step. /clear between features. Agent isolation prevents context rot |
| 9 | Deterministic Ordering | agent_ordering_gate.py enforces pipeline sequence. "You MUST NOT run STEP 10 before STEP 8 test gate passes" |
| 10 | Output Validation | Parallel reviewer + security-auditor in STEP 10. genai_validate.py for LLM-as-judge tests. completion_verifier.py |
| 11 | Observability | session_activity_logger.py, conversation_archiver.py, pipeline_timing_analyzer.py. Structured JSONL + SQLite for long-term analytics |
| 12 | Error Recovery | failure_analyzer.py, batch_retry_manager.py, stuck_detector.py, qa_self_healer.py. Automatic retry with consent |
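As an illustration of element 1, a JSON-persisted step machine with a `Step` enum and an `advance()`/`complete_step()` API might look like the sketch below. Only those API names come from the table above; the internal logic, fields, and file layout are assumptions, and the enum is abbreviated (the real machine has 13 phases):

```python
import json
from enum import Enum
from pathlib import Path

class Step(Enum):                      # abbreviated; the real machine has 13 phases
    ALIGNMENT = 1
    RESEARCH = 2
    PLANNING = 3
    IMPLEMENTATION = 4
    REVIEW = 5

class PipelineState:
    """Minimal sketch of a JSON-persisted step machine (internals assumed)."""
    def __init__(self, path: str = ".claude/pipeline_state.json"):
        self.path = Path(path)
        self.step = Step.ALIGNMENT
        self.completed: list[str] = []

    def complete_step(self) -> None:
        self.completed.append(self.step.name)
        self._save()

    def advance(self) -> Step:
        # Refuse to advance past a step that was never marked complete.
        if self.step.name not in self.completed:
            raise RuntimeError(f"{self.step.name} not completed")
        self.step = Step(self.step.value + 1)
        self._save()
        return self.step

    def _save(self) -> None:
        # Persisting after every transition is what makes --resume possible.
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(
            {"step": self.step.name, "completed": self.completed}))
```

The design point is that phase order lives in code, not in the prompt: an agent cannot reach REVIEW without the state file recording that IMPLEMENTATION completed.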
A simple markdown file that defines your project. Example:
# PROJECT.md
## GOALS
- Build a REST API for user management
- Achieve 80% test coverage
- Ship MVP by end of Q1
## SCOPE
IN: User CRUD, authentication, password reset
OUT: Admin dashboard, analytics, billing
## CONSTRAINTS
- Python 3.11+ with FastAPI
- PostgreSQL database
- JWT authentication (no sessions)
- All endpoints must have rate limiting
## ARCHITECTURE
- src/api/ - FastAPI route handlers
- src/models/ - SQLAlchemy models
- src/services/ - Business logic
- tests/ - Pytest test suite
When you run /implement "Add user registration":
- tests/, code follows your architecture

When you run /implement "Add analytics dashboard":
This is how Claude stays aligned - it reads PROJECT.md before every feature.
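A toy sketch of the scope decision, using the PROJECT.md example above. This is purely illustrative — the real check is an agent reading PROJECT.md semantically, not keyword matching:

```python
def alignment_check(feature: str, out_of_scope: list[str]) -> str:
    """Block a feature request that matches an OUT item (illustrative only)."""
    for item in out_of_scope:
        if item.lower() in feature.lower():
            return f"BLOCKED: '{item}' is OUT of scope in PROJECT.md"
    return "ALIGNED"

out_items = ["Admin dashboard", "analytics", "billing"]  # from the OUT line above
print(alignment_check("Add analytics dashboard", out_items))
print(alignment_check("Add user registration", out_items))
```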
| Requirement | Install Command |
|---|---|
| Claude Code 2.0+ | Download |
| Python 3.9+ | python3 --version to verify |
| gh CLI (GitHub) | brew install gh && gh auth login |
bash <(curl -sSL https://raw.githubusercontent.com/akaszubski/autonomous-dev/master/install.sh)
Then fully quit Claude Code (Cmd+Q on Mac, Ctrl+Q on Windows/Linux) and reopen it.
Open your project folder in terminal, then start Claude Code:
cd /path/to/your/project
claude
Run the setup wizard:
/setup
This will walk you through creating PROJECT.md and configuring the plugin for your project.
For existing projects, use:
/align --retrofit
/sync
/sync --uninstall
/implement "#72" # From GitHub issue
/implement "Add user auth" # From description
/implement "#72" # Full pipeline (default) - acceptance-first
/implement --light "#72" # Light pipeline - docs/config/simple changes
/implement --tdd-first "#72" # TDD-first - unit tests before implementation
/implement --fix "#72" # Fix mode - root-cause analysis HARD GATE for bug fixes
Use /plan for changes touching >3 files, >100 lines, or with uncertain approach. The 7-step planning process is enforced by plan_gate.py (blocks complex Write/Edit without a validated plan in .claude/plans/).
/plan "Add OAuth login" # 7-step structured planning + iterative plan-critic
/plan --no-issues "..." # Skip auto-creation of GitHub issues
/plan-to-issues # Thorough-mode fallback for batch issue creation
The /plan flow: problem statement → existing solutions search → minimal path → adversarial critique by plan-critic (1-5 Likert score across 5 axes, iterates planner until composite ≥3.0) → on PROCEED, automatically creates GitHub issues when ≥2 independent work items exist. See docs/PLANNING-WORKFLOW.md.
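Assuming the composite is a simple mean of the five axis scores (the exact aggregation is not specified here), the verdict rule can be sketched as:

```python
from statistics import mean

AXES = ("assumption_audit", "scope_creep", "existing_solutions",
        "minimalism", "uncertainty")

def plan_verdict(scores: dict[str, int]) -> str:
    """Illustrative verdict: composite assumed to be the mean of the 5 axes."""
    assert set(scores) == set(AXES) and all(1 <= s <= 5 for s in scores.values())
    composite = mean(scores.values())
    return "PROCEED" if composite >= 3.0 else "REVISE"

# One weak axis is survivable if the others compensate; mean(4,3,3,2,4) = 3.2.
print(plan_verdict(dict(zip(AXES, [4, 3, 3, 2, 4]))))
```

On REVISE, the planner is re-invoked with the critic's findings until the composite clears the bar.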
When deployed user-globally, the plan-exit hooks (plan_mode_exit_detector.py,
unified_pre_tool.py) fire only inside autonomous-dev projects (detected via
repo_detector.is_autonomous_dev_repo). In foreign projects they silently no-op.
Three escape hatches exist (any one bypasses):

- /implement --skip-review — one-shot, plan-only. Best for ad-hoc cases.
- export AUTONOMOUS_DEV_SKIP_PLAN_REVIEW=1 — persistent across sessions and projects.
- touch .claude/SKIP_PLAN_REVIEW — local sentinel file. Note: this file lives under an ignored path (.claude/*) and is therefore not committed; it is lost on git clean -fdx or a fresh clone. Use the env var for cross-session persistence.

To re-enable enforcement in foreign projects: export AUTONOMOUS_DEV_GLOBAL_ENFORCEMENT=1.
The escape hatches above still win over global enforcement.
Universal hook bypass (Issue #969): if any hook is blocking and you cannot run
a slash command to unstick it, set AUTONOMOUS_DEV_BYPASS=1 or touch .claude/.bypass
— every hook honors this and falls through to allow with a logged audit trail.
See docs/TROUBLESHOOTING.md.
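Grounded only in the env var and sentinel file named above, a hook's bypass check might look like the following sketch (the function name and audit-logging details are assumptions):

```python
import os
from pathlib import Path

def bypass_active() -> bool:
    """Illustrative check every hook could run first (audit logging elided)."""
    return (os.environ.get("AUTONOMOUS_DEV_BYPASS") == "1"
            or Path(".claude/.bypass").exists())

# A hook honoring the bypass falls through to allow instead of blocking.
if bypass_active():
    print("bypass active -- allowing (with a logged audit trail)")
```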
/implement --issues 72 73 74 75 # Multiple GitHub issues with worktree isolation
/implement --batch backlog.txt # From a file (one feature per line)
/implement --resume <batch-id> # Resume after context reset
/plan "..." # 7-step planning workflow with adversarial plan-critic
/status # View PROJECT.md goal progress
/align # Check alignment (--project, --docs, --retrofit)
/create-issue "..." # Create GitHub issue with automated research
/plan-to-issues # Thorough-mode batch issue creation from plan output
/health-check # Verify plugin installation
/refactor # Code, docs, and test optimization (--tests, --docs, --code, --fix)
/sweep # Quick codebase hygiene (alias for /refactor --quick); --tests for AST-based test pruning analysis
/audit # Quality audit (--quick, --security, --docs, --code, --tests)
/skill-eval # Measure skill effectiveness via behavioral delta scoring (--quick, --skill, --update)
/advise # Critical thinking analysis
/improve # Automation health analysis (--auto-file to create GitHub issues)
/retrospective # Session drift detection and alignment updates
/autoresearch # Autonomous experiment loop — hypothesize, modify, benchmark, commit or revert
/sync # Update plugin (--github, --env, --marketplace, --all)
/worktree # Git worktrees (--list, --status, --merge, --discard)
/scaffold-genai-uat # Scaffold LLM-as-judge tests into any repo
/mem-search # Search claude-mem persistent memory
Inspired by the adversarial evaluation pattern (generator creates, evaluator judges), every /implement runs this pipeline:
/implement "feature"
|
PROJECT.md Alignment Check (blocks if misaligned)
|
Research Self-Critique (inline FEEDBACK pass, Self-Refine pattern)
|
+------------------+------------------+
| Research-Local | Research-Web | <- Parallel research
| (Haiku) | (Sonnet) |
+------------------+------------------+
|
Planning (Opus)
|
Plan Validation Gate (plan-critic, Opus) <- HARD GATE: adversarial review
| 5 axes, ≥3.0 composite to PROCEED
| REVISE re-invokes planner
|
Acceptance Tests (test-master) <- Acceptance-first (default)
| or TDD-first (--tdd-first)
Implementation (Opus) -> HARD GATE: 0 test failures
| -> HARD GATE: No stubs/placeholders
| -> HARD GATE: Root cause analysis (--fix)
| -> HARD GATE: Hook registration verified
|
Plan-Implementation Alignment Gate -> HARD GATE: planned vs diff files
| (blocks on >50% divergence)
|
Spec-Blind Validation (spec-validator) <- HARD GATE: behavioral tests
| from criteria only, no impl
|
+----------+------------+-----------+
| Review | Security | Docs | <- Parallel for low-risk,
| (Sonnet) | (Sonnet) | (Sonnet) | sequential for security-sensitive
+----------+------------+-----------+
|
Remediation Gate (max 2 cycles on findings)
|
Git Operations (commit, push, close issue)
|
Agent Completeness Gate (blocks commit if required agents missing)
|
Continuous Improvement (background analysis)
|
Session Archive (transcript + SQLite index)
Features validate against your PROJECT.md before work starts. If you request something OUT of scope, it's blocked - not implemented wrong.
autonomous-dev uses a diamond testing model — not the traditional TDD pyramid. The key insight: with agent-speed code generation, acceptance criteria are the cheapest form of meaningful specification. Unit tests generated by agents are useful as regression locks but poor as specifications — agents game them.
          /  Acceptance Criteria  \        Human-defined, LLM-as-judge evaluated
         / LLM-as-Judge Eval Layer \       Probabilistic, ~80-90% human agreement
        /  Integration & Contract   \      Generated from acceptance criteria
        \  Property-Based Invariants/      "Output must be valid JSON", etc.
         \ Deterministic Unit Tests/       Generated by agent, constrained by above
          \   Type System / Lints /        Hard floor, zero tolerance
What changed from TDD:
- LLM-as-judge evaluation tests (tests/genai/)
- Property-based invariant tests (tests/property/)

The pipeline default is acceptance-first (/implement "#X"). Legacy TDD-first is available via --tdd-first. See Testing Strategy for the full 6-layer model with research citations.
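The property-based band of the diamond can be as simple as a parse check. A minimal illustrative invariant — not the plugin's actual test code — for the "output must be valid JSON" example:

```python
import json

def check_json_invariant(agent_output: str) -> bool:
    """Invariant: output must parse as JSON, regardless of its content."""
    try:
        json.loads(agent_output)
        return True
    except json.JSONDecodeError:
        return False

assert check_json_invariant('{"status": "ok"}')
# Chatty preambles are a classic agent failure mode this invariant catches:
assert not check_json_invariant('Sure! Here is the JSON you asked for: {...')
```

Invariants like this are cheap to write, impossible for an agent to "game", and hold for any generated implementation.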
As the context window fills up, models exhibit context anxiety — they rush through steps, declare things done prematurely, and degrade output quality. autonomous-dev handles this with context resets between features:
- Run /clear to reset context
- Use /implement --resume <batch-id> to continue from where you left off

Batch processing handles this automatically with worktree isolation and checkpoint/resume.
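A minimal sketch of what checkpoint/resume state could look like; the file layout and schema here are assumptions, not the plugin's actual format:

```python
import json
from pathlib import Path

STATE_DIR = Path(".claude")   # assumed location for illustration

def save_checkpoint(batch_id: str, done: list[str], pending: list[str]) -> None:
    """Persist batch progress so a context reset loses no work (illustrative)."""
    STATE_DIR.mkdir(exist_ok=True)
    (STATE_DIR / f"batch-{batch_id}.json").write_text(
        json.dumps({"done": done, "pending": pending}))

def resume(batch_id: str) -> list[str]:
    """Return the features still pending after a /clear or crash."""
    state = json.loads((STATE_DIR / f"batch-{batch_id}.json").read_text())
    return state["pending"]

save_checkpoint("demo", done=["#72"], pending=["#73", "#74"])
print(resume("demo"))
```

Because progress lives on disk rather than in the context window, a fresh session can pick up exactly where the previous one stopped.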
autonomous-dev doesn't just run pipelines — it learns from them.
Long-term session analytics: Every conversation is archived to ~/.claude/archive/ with full transcripts and a 17-column SQLite index (per-session tokens, cache hit rates, tool calls, model, per-repo attribution). See docs/SESSION-ANALYTICS.md for schema + queries.
sqlite3 ~/.claude/archive/sessions.db \
"SELECT project, COUNT(*) sessions,
SUM(total_output_tokens) out_tok, SUM(cache_read_tokens) cache_read
FROM sessions GROUP BY project ORDER BY out_tok DESC;"
Closed-loop improvement: The system detects its own weaknesses and fixes them. See docs/EVALUATION.md for the full measurement surface (skill-eval, reviewer benchmark, /improve workflow, autoresearch).
pipeline runs → session logs → /improve detects drift → files GitHub issues
↓ ↓
/autoresearch picks up issues → hypothesize → modify → benchmark → deploy if better
↑ ↓
└──────────────────────────────────────────────────────────────┘
- /improve analyzes recent sessions for bypasses, test drift, and pipeline degradation — auto-files issues
- /autoresearch runs autonomous experiments: modifies agent prompts, benchmarks the change, commits if it improves quality or reverts if it doesn't
- /retrospective detects intent evolution and proposes alignment updates
- /skill-eval measures whether skill injections actually change agent behavior (behavioral delta scoring)

The result: The harness gets better every week without anyone thinking about it.
| Component | Count | Purpose |
|---|---|---|
| Commands | 23 | Slash commands for workflows |
| Agents | 16 | Specialized AI for each SDLC stage (added: plan-critic) |
| Skills | 19 | Domain knowledge (added: planning-workflow, prompt-engineering) |
| Hooks | 30 | Automatic validation and enforcement (added: plan_gate, conversation_archiver, prompt quality gate) |
| Libraries | 210 | Python utilities |
- /plan + plan-critic adversarial review
- /implement mode matrix (full / light / fix / batch) with per-mode agent sets
- [BYPASS]/[INCOMPLETE]/[ORDERING] finding tags
- .mcp.json default, token-bleed prevention, migration helper

# Install
bash <(curl -sSL https://raw.githubusercontent.com/akaszubski/autonomous-dev/master/install.sh)
# Restart Claude Code (Cmd+Q / Ctrl+Q)
# Daily workflow
/implement "#72" # Single feature
/implement --issues 1 2 3 # Multiple features
/implement --fix "#99" # Fix a failing test
/clear # Reset context between features
# Check status
/health-check # Verify installation
/status # View alignment
# Update
/sync # Get latest version
MIT License