True deliberative consensus MCP server where AI models debate and refine positions across multiple rounds.
Cloud Models Debate (Claude Sonnet, GPT-5.2 Codex, Gemini):
mcp__ai-counsel__deliberate({
question: "Should we use REST or GraphQL for our new API?",
participants: [
{cli: "claude", model: "claude-sonnet-4-5-20250929"},
{cli: "codex", model: "gpt-5.2-codex"},
{cli: "gemini", model: "gemini-2.5-pro"}
],
mode: "conference",
rounds: 3
})
Result: Converged on hybrid architecture (0.82-0.95 confidence) • View full transcript
Local Models Debate (100% private, zero API costs):
mcp__ai-counsel__deliberate({
question: "Should we prioritize code quality or delivery speed?",
participants: [
{cli: "ollama", model: "llama3.1:8b"},
{cli: "ollama", model: "mistral:7b"},
{cli: "ollama", model: "deepseek-r1:8b"}
],
mode: "conference",
rounds: 2
})
Result: 2 models switched positions after Round 1 debate • View full transcript
AI Counsel enables TRUE deliberative consensus where models see each other's responses and refine positions across multiple rounds:
- Two deliberation modes: quick (single-round) or conference (multi-round debate)
- query_decisions tool (finds contradictions, traces evolution, analyzes patterns)

Get up and running in minutes:
1. Register the server with your MCP client using the .mcp.json example in Configure in Claude Code.
2. Run python server.py and trigger the deliberate tool using the examples in Usage.

Try a Deliberation:
// Mix local + cloud models, zero API costs for local models
mcp__ai-counsel__deliberate({
question: "Should we add unit tests to new features?",
participants: [
{cli: "ollama", model: "llama2"}, // Local
{cli: "lmstudio", model: "mistral"}, // Local
{cli: "claude", model: "sonnet"} // Cloud
],
mode: "quick"
})
⚠️ Model Size Matters for Deliberations
Recommended: Use 7B-8B+ parameter models (Llama-3-8B, Mistral-7B, Qwen-2.5-7B) for reliable structured output and vote formatting.
Not Recommended: Models under 3B parameters (e.g., Llama-3.2-1B) may struggle with complex instructions and produce invalid votes.
Available Models: claude (opus 4.5, sonnet, haiku), codex (gpt-5.2-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini, gpt-5.2), droid, gemini, HTTP adapters (ollama, lmstudio, openrouter).
See CLI Model Reference for complete details.
Reasoning Effort Control
Control reasoning depth per-participant for codex and droid adapters:
participants: [
  {cli: "codex", model: "gpt-5.2-codex", reasoning_effort: "high"},   // Deep reasoning
  {cli: "droid", model: "gpt-5.1-codex-max", reasoning_effort: "low"} // Fast response
]
- Codex: none, minimal, low, medium, high, xhigh
- Droid: off, low, medium, high
- Config defaults set in config.yaml, per-participant overrides at runtime
For model choices and picker workflow, see Model Registry & Picker.
python3 --version
git clone https://github.com/blueman82/ai-counsel.git
cd ai-counsel
python3 -m venv .venv
source .venv/bin/activate # macOS/Linux; Windows: .venv\Scripts\activate
pip install -r requirements.txt
python3 -m pytest tests/unit -v # Verify installation
Ready to use! Server includes core dependencies plus optional convergence backends (scikit-learn, sentence-transformers) for best accuracy.
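If the optional convergence backends are not already installed via requirements.txt, they can be added separately (an assumption about the packaging; both are standard PyPI packages):

pip install scikit-learn sentence-transformers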
Edit config.yaml to configure adapters and settings:
adapters:
claude:
type: cli
command: "claude"
args: ["-p", "--model", "{model}", "--settings", "{\"disableAllHooks\": true}", "{prompt}"]
timeout: 300
ollama:
type: http
base_url: "http://localhost:11434"
timeout: 120
max_retries: 3
defaults:
mode: "quick"
rounds: 2
max_rounds: 5
Note: Use type: cli for CLI tools and type: http for HTTP adapters (Ollama, LM Studio, OpenRouter).
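For example, an LM Studio entry follows the same http pattern as ollama above. A minimal sketch, assuming LM Studio's default local server port and that the adapter handles any required API path itself:

adapters:
  lmstudio:
    type: http
    base_url: "http://localhost:1234"   # default LM Studio port (assumption)
    timeout: 120
    max_retries: 3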
Control which models are available for selection in the model registry. Each model can be enabled or disabled without removing its definition:
model_registry:
claude:
- id: "claude-sonnet-4-5-20250929"
label: "Claude Sonnet 4.5"
tier: "balanced"
default: true
enabled: true # Model is active and available
- id: "claude-opus-4-20250514"
label: "Claude Opus 4"
tier: "premium"
enabled: false # Temporarily disabled (cost control, testing, etc.)
Enabled Field Behavior:
- enabled: true (default) - Model appears in list_models and can be selected for deliberations
- enabled: false - Model is hidden from selection but its definition is retained for easy re-enabling
- Disabled models cannot be used in deliberate calls

Typical use cases include cost control and testing (as in the enabled: false example above).
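To check which models are currently enabled, you can call the list_models tool. A minimal sketch; the exact response shape may differ:

mcp__ai-counsel__list_models({})
// Returns the models with enabled: true, grouped by adapter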
Models automatically converge and stop deliberating when opinions stabilize, saving time and API costs. Status: Converged (≥85% similarity), Refining (40-85%), Diverging (<40%), or Impasse (stable disagreement). Voting takes precedence: when models cast votes, convergence reflects voting outcome.
→ Complete Guide - Thresholds, backends, configuration
Models cast votes with confidence levels (0.0-1.0), rationale, and continue_debate signals. Votes determine consensus: Unanimous (3-0), Majority (2-1), or Tie. Similar options are automatically merged at a 0.70+ similarity threshold.
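For illustration only, a hypothetical vote payload using the fields described above (the option field name is an assumption; see the guide for the actual structure):

{
  "option": "GraphQL",
  "confidence": 0.85,
  "rationale": "Better fit for nested data requirements",
  "continue_debate": false
}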
→ Complete Guide - Vote structure, examples, integration
Run Ollama, LM Studio, OpenRouter, or Nebius for flexible API costs and privacy options. Mix them with cloud models (Claude, GPT-4) in a single deliberation.
→ Setup Guides - Ollama, LM Studio, OpenRouter, cost analysis
Add new CLI tools or HTTP adapters to fit your infrastructure. Simple 3-5 step process with examples and testing patterns.
→ Developer Guide - Step-by-step tutorials, real-world examples
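As a rough sketch, registering a hypothetical new CLI tool in config.yaml mirrors the built-in adapter entries (the adapter name and args below are assumptions; a matching adapter class is also required per the Developer Guide):

adapters:
  my-cli-tool:
    type: cli
    command: "my-cli-tool"
    args: ["--model", "{model}", "{prompt}"]
    timeout: 300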
Ground design decisions in reality by querying actual code, files, and data:
// MCP client example (e.g., Claude Code)
mcp__ai-counsel__deliberate({
question: "Should we migrate from SQLite to PostgreSQL?",
participants: [
{cli: "claude", model: "sonnet"},
{cli: "codex", model: "gpt-4"}
],
rounds: 3,
working_directory: process.cwd() // Required - enables tools to access your files
})
During deliberation, models can:
- TOOL_REQUEST: {"name": "read_file", "arguments": {"path": "config.yaml"}}
- TOOL_REQUEST: {"name": "search_code", "arguments": {"pattern": "database.*connect"}}
- TOOL_REQUEST: {"name": "list_files", "arguments": {"pattern": "*.sql"}}
- TOOL_REQUEST: {"name": "run_command", "arguments": {"command": "git", "args": ["log", "--oneline"]}}

Example workflow:
1. Model calls read_file to check the current config
2. Sees database: sqlite, max_connections: 10
3. Calls search_code for database queries

Benefits:
Supported Tools:
- read_file - Read file contents (max 1MB)
- search_code - Search regex patterns (ripgrep or Python fallback)
- list_files - List files matching glob patterns
- run_command - Execute safe read-only commands (ls, git, grep, etc.)

Control tool behavior in config.yaml:
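A sketch of those config.yaml keys, using the option names and defaults listed below (exact nesting and the tool_timeout value are assumptions):

deliberation:
  tool_security:
    exclude_patterns: ["transcripts/", ".git/", "node_modules/"]
    max_file_size_bytes: 1048576   # 1MB limit for read_file
    command_whitelist: ["ls", "grep", "find", "cat", "head", "tail"]
    tool_timeout: 30               # seconds (illustrative value)
  file_tree:
    enabled: true
    max_depth: 3
    max_files: 100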
Working Directory (Required):
- Pass the working_directory parameter when calling the deliberate tool
- For example, working_directory: process.cwd() in JavaScript MCP clients

Tool Security (deliberation.tool_security):
- exclude_patterns: Block access to sensitive directories (default: transcripts/, .git/, node_modules/)
- max_file_size_bytes: File size limit for read_file (default: 1MB)
- command_whitelist: Safe commands for run_command (ls, grep, find, cat, head, tail)

File Tree (deliberation.file_tree):
- enabled: Inject repository structure into Round 1 prompts (default: true)
- max_depth: Directory depth limit (default: 3)
- max_files: Maximum files to include (default: 100)

Adapter-Specific Requirements:
| Adapter | Working Directory Behavior | Configuration |
|---|---|---|
| Claude | Automatic isolation via subprocess {working_directory} | No special config needed |
| Codex | No true isolation - can access any file | Security consideration: models can read outside {working_directory} |
| Droid | Automatic isolation via subprocess {working_directory} | No special config needed |
| Gemini | Enforces workspace boundaries | Required: --include-directories {working_directory} flag |
| Ollama/LMStudio | N/A - HTTP adapters | No file system access restrictions |
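For Gemini, that flag belongs in the adapter's args in config.yaml. A sketch patterned after the claude entry above; the surrounding args are assumptions:

adapters:
  gemini:
    type: cli
    command: "gemini"
    args: ["--model", "{model}", "--include-directories", "{working_directory}", "{prompt}"]
    timeout: 300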
Learn More:
"File not found" errors:
- Check that working_directory is set correctly in your MCP client call
- Use list_files → read_file to confirm the path exists

"Access denied: Path matches exclusion pattern":
- transcripts/, .git/, and node_modules/ are blocked by default
- Adjust deliberation.tool_security.exclude_patterns in config.yaml

Gemini "File path must be within workspace" errors:
- Ensure the --include-directories flag uses the {working_directory} placeholder

Tool timeout errors:
- Increase deliberation.tool_security.tool_timeout for slow operations

Learn More:
AI Counsel learns from past deliberations to accelerate future decisions. Two core capabilities:
When starting a new deliberation, the system automatically finds similar past decisions and injects the most relevant ones as context (see configuration below). Past deliberations can also be queried programmatically with the query_decisions tool (see the query examples under Usage).
Configuration (optional - defaults work out-of-box):
decision_graph:
enabled: true # Auto-injection on by default
db_path: "decision_graph.db" # Resolves to project root (works for any user/folder)
similarity_threshold: 0.6 # Adjust to control context relevance
max_context_decisions: 3 # How many past decisions to inject
Works for any user from any directory - database path is resolved relative to project root.
→ Quickstart | Configuration | Context Injection
python server.py
Option A: Project Config (Recommended) - Create .mcp.json:
{
"mcpServers": {
"ai-counsel": {
"type": "stdio",
"command": ".venv/bin/python",
"args": ["server.py"],
"env": {}
}
}
}
Option B: User Config - Add to ~/.claude.json with absolute paths.
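A sketch of the user-level entry, assuming the same shape as Option A with absolute paths substituted for your checkout location:

{
  "mcpServers": {
    "ai-counsel": {
      "type": "stdio",
      "command": "/absolute/path/to/ai-counsel/.venv/bin/python",
      "args": ["/absolute/path/to/ai-counsel/server.py"],
      "env": {}
    }
  }
}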
After configuration, restart Claude Code.
- Discover available models with list_models.
- Set session defaults with set_session_models; leave model blank in deliberate to use those defaults.

Quick Mode:
mcp__ai-counsel__deliberate({
question: "Should we migrate to TypeScript?",
participants: [{cli: "claude", model: "sonnet"}, {cli: "codex", model: "gpt-5.2-codex"}],
mode: "quick"
})
Conference Mode (multi-round):
mcp__ai-counsel__deliberate({
question: "JWT vs session-based auth?",
participants: [
{cli: "claude", model: "sonnet"},
{cli: "codex", model: "gpt-5.2-codex"}
],
rounds: 3,
mode: "conference"
})
Search Past Decisions:
mcp__ai-counsel__query_decisions({
query_text: "database choice",
threshold: 0.5, // NEW! Adjust sensitivity (0.0-1.0, default 0.6)
limit: 5
})
// Returns: Similar past deliberations with consensus and similarity scores
// NEW! Empty results include helpful diagnostics:
{
"type": "similar_decisions",
"count": 0,
"results": [],
"diagnostics": {
"total_decisions": 125,
"best_match_score": 0.45,
"near_misses": [{"question": "Database indexing...", "score": 0.45}],
"suggested_threshold": 0.45,
"message": "No results found above threshold 0.6. Best match scored 0.450. Try threshold=0.45..."
}
}
// Find contradictions
mcp__ai-counsel__query_decisions({
operation: "find_contradictions"
})
// Returns: Decisions where consensus conflicts
// Trace evolution
mcp__ai-counsel__query_decisions({
query: "microservices architecture",
operation: "trace_evolution"
})
// Returns: How opinions evolved over time on this topic
All deliberations saved to transcripts/ with AI-generated summaries and full debate history.
ai-counsel/
├── server.py              # MCP server entry point
├── config.yaml            # Configuration
├── adapters/              # CLI/HTTP adapters
│   ├── base.py            # Abstract base
│   ├── base_http.py       # HTTP base
│   └── [adapter implementations]
├── deliberation/          # Core engine
│   ├── engine.py          # Orchestration
│   ├── convergence.py     # Similarity detection
│   └── transcript.py      # Markdown generation
├── models/                # Data models (Pydantic)
├── tests/                 # Unit/integration/e2e tests
└── decision_graph/        # Optional memory system
pytest tests/unit -v # Unit tests (fast)
pytest tests/integration -v -m integration # Integration tests
pytest --cov=. --cov-report=html # Coverage report
See CLAUDE.md for development workflow and architecture notes.
To contribute, create a feature branch (git checkout -b feature/your-feature) and open a pull request.

MIT License - see LICENSE file
Built with:
Inspired by the need for true deliberative AI consensus beyond parallel opinion gathering.
Production Ready - Multi-model deliberative consensus with cross-user decision graph memory, structured voting, and adaptive early stopping for critical technical decisions!