npx skills add matteoscurati/ai-consultants
Or install this specific skill directly: npx add-skill https://github.com/matteoscurati/ai-consultants
# Description
Consult Gemini CLI, Codex CLI, Mistral Vibe, Kilo CLI, Cursor, Claude, Amp, Qwen, and Ollama as external experts for coding questions. Automatically excludes the invoking agent from the panel to avoid self-consultation. Use when you have doubts about implementations, want a second opinion, need to choose between different approaches, or when explicitly requested with phrases like "ask the consultants", "what do the other models think", "compare solutions".
# SKILL.md
---
name: ai-consultants
description: Consult Gemini CLI, Codex CLI, Mistral Vibe, Kilo CLI, Cursor, Claude, Amp, Qwen, and Ollama as external experts for coding questions. Automatically excludes the invoking agent from the panel to avoid self-consultation. Use when you have doubts about implementations, want a second opinion, need to choose between different approaches, or when explicitly requested with phrases like "ask the consultants", "what do the other models think", "compare solutions".
---
AI Consultants v2.8.1 - AI Expert Panel
Simultaneously consult multiple AIs as "consultants" for coding questions. Each consultant has a configurable persona that influences their response style.
Quick Start
/ai-consultants:config-wizard # Initial setup
/ai-consultants:consult "Your question here"
What's New in v2.8
- Amp CLI Consultant: New "The Systems Thinker" persona for system design
- Qwen CLI Support: CLI/API mode switching for Qwen3 (v2.7)
- CLI/API Mode Switching: Gemini, Codex, Claude, Mistral, Qwen3 can use CLI or API (v2.6)
- Model Quality Tiers: premium, standard, economy with `apply_model_tier()` (v2.5)
- Budget Enforcement: Configurable cost limits with `ENABLE_BUDGET_LIMIT` (v2.4)
- Premium Model Defaults: All consultants now use flagship models by default
- 13 Consultants: Gemini, Codex, Mistral, Kilo, Cursor, Aider, Amp, Claude, Qwen3, GLM, Grok, DeepSeek, Ollama
Slash Commands
Consultation Commands
| Command | Description |
|---|---|
| `/ai-consultants:consult` | Main consultation - ask AI consultants a coding question |
| `/ai-consultants:ask-experts` | Quick query alias for consult |
| `/ai-consultants:debate` | Run consultation with multi-round debate |
| `/ai-consultants:help` | Show all commands and usage |
Configuration Commands
| Command | Description |
|---|---|
| `/ai-consultants:config-wizard` | Full interactive setup (CLI detection, API keys, personas) |
| `/ai-consultants:config-check` | Verify CLI agents are installed and authenticated |
| `/ai-consultants:config-status` | View current configuration |
| `/ai-consultants:config-preset` | Set default preset (minimal, balanced, high-stakes, local) |
| `/ai-consultants:config-strategy` | Set default synthesis strategy |
| `/ai-consultants:config-features` | Toggle features (Debate, Synthesis, Peer Review, etc.) |
| `/ai-consultants:config-personas` | Change consultant personas |
| `/ai-consultants:config-api` | Configure API-based consultants (Qwen3, GLM, Grok, DeepSeek) |
Configuration Workflow
Set your preferences using slash commands:
/ai-consultants:config-preset # Choose default preset
/ai-consultants:config-strategy # Choose synthesis strategy
/ai-consultants:config-features # Enable/disable features
/ai-consultants:config-status # View current settings
Consultants and Personas
| Consultant | CLI | Persona | Focus |
|---|---|---|---|
| Google Gemini | `gemini` | The Architect | Design patterns, scalability |
| OpenAI Codex | `codex` | The Pragmatist | Simplicity, proven solutions |
| Mistral Vibe | `vibe` | The Devil's Advocate | Edge cases, vulnerabilities |
| Kilo Code | `kilocode` | The Innovator | Creativity, unconventional |
| Cursor | `agent` | The Integrator | Full-stack perspective |
| Aider | `aider` | The Pair Programmer | Collaborative coding |
| Amp | `amp` | The Systems Thinker | System design, interactions |
| Claude | `claude` | The Synthesizer | Big picture, synthesis |
| Qwen | `qwen` | The Analyst | Data-driven, metrics |
| Ollama | `ollama` | The Local Expert | Privacy-first, zero cost |
API-only consultants: GLM (The Methodologist), Grok (The Provocateur), DeepSeek (The Code Specialist)
CLI/API Mode: Gemini, Codex, Claude, Mistral, and Qwen can switch between CLI and API mode via `*_USE_API` environment variables.
Self-Exclusion: The invoking agent is automatically excluded from the panel. When invoked from Claude Code, Claude is excluded; when invoked from Codex CLI, Codex is excluded, etc.
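The exclusion step can be pictured as a simple filter over the panel list. This is an illustrative sketch only (the function name and logic here are hypothetical, not the skill's actual implementation):

```shell
# Hypothetical sketch of self-exclusion: drop the invoking agent
# from the consultant panel before fanning queries out.
filter_panel() {
  local invoking="$1"; shift
  for consultant in "$@"; do
    # Keep every consultant except the one matching the invoking agent.
    [ "$consultant" = "$invoking" ] || printf '%s\n' "$consultant"
  done
}

# Invoked from Claude Code: Claude drops out, the rest remain.
filter_panel "claude" gemini codex claude mistral
```

With `INVOKING_AGENT=codex` the same filter would drop Codex instead, which is the behavior described above.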
Requirements
- At least 2 consultant CLIs installed and authenticated
- jq for JSON processing
Quick Install
curl -fsSL https://raw.githubusercontent.com/matteoscurati/ai-consultants/main/scripts/install.sh | bash
~/.claude/skills/ai-consultants/scripts/doctor.sh --fix
CLI Installation
npm install -g @google/gemini-cli # Gemini
npm install -g @openai/codex # Codex
pip install mistral-vibe # Mistral
npm install -g @kilocode/cli # Kilo
npm install -g @qwen-code/qwen-code@latest # Qwen
curl -fsSL https://ampcode.com/install.sh | bash # Amp
brew install jq # Required
# For local inference (optional)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
Configuration Presets
| Preset | Consultants | Use Case |
|---|---|---|
| `minimal` | 2 (Gemini + Codex) | Quick questions |
| `balanced` | 4 (+Mistral +Kilo) | Standard use |
| `thorough` | 5 (+Cursor) | Comprehensive |
| `high-stakes` | All + debate | Critical decisions |
| `local` | Ollama only | Full privacy |
| `security` | Security-focused + debate | Security reviews |
| `cost-capped` | Budget-friendly | Low cost |
Synthesis Strategies
| Strategy | Description |
|---|---|
| `majority` | Most common answer wins (default) |
| `risk_averse` | Weight conservative responses |
| `security_first` | Prioritize security |
| `cost_capped` | Prefer cheaper solutions |
| `compare_only` | No recommendation |
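As an illustration of the default `majority` strategy, a minimal vote counter over single-word answers might look like this (a sketch only; the skill's real synthesis in its scripts is more involved and weights confidence):

```shell
# Illustrative majority vote: the most frequent answer wins.
# Works on single-word answers; ties resolve arbitrarily.
majority_vote() {
  printf '%s\n' "$@" |
    sort | uniq -c |     # count identical answers
    sort -rn | head -1 | # take the most frequent
    awk '{print $2}'
}

majority_vote redis redis memcached   # -> redis
```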
Usage Examples
Basic Consultation
/ai-consultants:consult "How to optimize this SQL query?"
With File Context
/ai-consultants:consult "Review this authentication flow" src/auth.ts
With Debate
/ai-consultants:debate "Microservices or monolith for our new service?"
Bash Usage
cd ~/.claude/skills/ai-consultants
# With preset
./scripts/consult_all.sh --preset balanced "Best approach for caching?"
# With strategy
./scripts/consult_all.sh --strategy risk_averse "Security question"
# With local model
./scripts/consult_all.sh --preset local "Private question"
Workflow
Query -> Classify -> Parallel Queries -> Voting -> Synthesis -> Report
| | |
Gemini (8) Consensus Recommendation
Codex (7) Analysis Comparison
Mistral (6) Risk Assessment
With debate:
Round 1 -> Cross-Critique -> Round 2 -> Final Synthesis
Usage Triggers
Automatic
- Doubts about implementation approach
- Validating complex solutions
- Exploring architectural alternatives
Explicit
- "Ask the consultants..."
- "What do the other models think?"
- "Compare solutions"
- "I want a second opinion"
Features
| Feature | Description | Toggle |
|---|---|---|
| Personas | Each consultant has a role that shapes responses | ENABLE_PERSONA |
| Synthesis | Auto-combine responses into recommendation | ENABLE_SYNTHESIS |
| Debate | Consultants critique each other's answers | ENABLE_DEBATE |
| Peer Review | Consultants anonymously rank each other | ENABLE_PEER_REVIEW |
| Smart Routing | Auto-select best consultants per question type | ENABLE_SMART_ROUTING |
| Cost Tracking | Track API usage costs | ENABLE_COST_TRACKING |
| Panic Mode | Auto-add rigor when uncertainty detected | ENABLE_PANIC_MODE |
Configuration
# Defaults (v2.8)
DEFAULT_PRESET=balanced # Preset when --preset not given
DEFAULT_STRATEGY=majority # Strategy when --strategy not given
# Core features
ENABLE_DEBATE=true # Multi-agent debate
ENABLE_SYNTHESIS=true # Automatic synthesis
ENABLE_PEER_REVIEW=false # Anonymous peer review
ENABLE_PANIC_MODE=auto # Auto-rigor for uncertainty
# CLI/API Mode Switching (v2.6+)
GEMINI_USE_API=false # Use Google AI API instead of CLI
CODEX_USE_API=false # Use OpenAI API instead of CLI
CLAUDE_USE_API=false # Use Anthropic API instead of CLI
MISTRAL_USE_API=false # Use Mistral API instead of CLI
QWEN3_USE_API=true # Use DashScope API (default) or CLI
# New consultants (v2.7-2.8)
ENABLE_AMP=false # Amp CLI - The Systems Thinker
AMP_MODEL=amp
ENABLE_QWEN3=false # Qwen CLI/API - The Analyst
QWEN3_MODEL=qwen3-max
# Ollama (local models)
ENABLE_OLLAMA=true
OLLAMA_MODEL=qwen2.5-coder:32b
# Budget management (v2.4)
ENABLE_BUDGET_LIMIT=false
MAX_SESSION_COST=1.00
BUDGET_ACTION=warn # warn or stop
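To illustrate how these budget settings could interact, here is a hypothetical sketch of a gate driven by `MAX_SESSION_COST` and `BUDGET_ACTION` (the variable names mirror the config above, but the function and logic are illustrative, not the skill's actual enforcement code):

```shell
# Hypothetical budget gate: warn or stop when session cost
# exceeds the configured limit.
check_budget() {
  local cost="$1"
  local limit="${MAX_SESSION_COST:-1.00}"
  local action="${BUDGET_ACTION:-warn}"
  # awk handles the floating-point comparison portably.
  if awk -v c="$cost" -v l="$limit" 'BEGIN { exit !(c > l) }'; then
    if [ "$action" = "stop" ]; then
      echo "budget exceeded: stopping"
      return 1
    fi
    echo "budget exceeded: warning only"
  fi
  return 0
}

MAX_SESSION_COST=1.00 BUDGET_ACTION=warn check_budget 1.50
# -> budget exceeded: warning only
```

With `BUDGET_ACTION=stop` the same overrun would abort instead of merely warning.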
Output
/tmp/ai_consultations/TIMESTAMP/
├── gemini.json # Individual responses
├── codex.json
├── voting.json # Consensus
├── synthesis.json # Recommendation
└── report.md # Human-readable
Doctor Command
Diagnose and fix issues:
./scripts/doctor.sh # Full check
./scripts/doctor.sh --fix # Auto-fix
./scripts/doctor.sh --json # JSON output
Interpreting Results
| Scenario | Recommendation |
|---|---|
| High confidence + High consensus | Proceed confidently |
| Low confidence OR Low consensus | Consider more options |
| Mistral disagrees | Investigate risks |
| Panic mode triggered | Add debate rounds |
Best Practices
Security
- Never include credentials in queries
- Use `--preset local` for sensitive code
Effective Queries
- Be specific about the question
- Include constraints (performance, etc.)
- Use debate for controversial decisions
Troubleshooting
| Issue | Solution |
|---|---|
| "Unknown skill" | Run install script or check ~/.claude/commands/ |
| "Exit code 1" | Run /ai-consultants:config-check to diagnose |
| No consultants | Run /ai-consultants:config-wizard |
| API errors | Check /ai-consultants:config-status |
| CLI not found | Run ./scripts/doctor.sh --fix |
Extended Documentation
- Setup Guide - Installation, authentication, Claude Code setup
- Cost Rates - Model pricing
- Smart Routing - Category routing
- JSON Schema - Output format
Known Limitations
- Minimum 2 consultants required
- Smart Routing off by default
- Synthesis requires Claude CLI (fallback available)
- Estimated costs (heuristic token counting)
# README.md
AI Consultants v2.8.1
Query multiple AI models simultaneously for expert opinions on coding questions. Get diverse perspectives, automatic synthesis, confidence-weighted recommendations, and multi-agent debate.
Table of Contents
- Why AI Consultants?
- Quick Start
- Prerequisites
- Supported CLI Agents
- Claude Code
- OpenAI Codex CLI
- Gemini CLI
- Cursor / Copilot / Windsurf
- Aider
- Standalone Bash
- Consultants
- Quality Tiers
- Configuration
- How It Works
- Best Practices
- Documentation
- Changelog
- License
Why AI Consultants?
Making important technical decisions? Get multiple expert perspectives instantly:
- 10+ AI consultants with unique personas (Architect, Pragmatist, Devil's Advocate, etc.)
- Automatic synthesis combines all responses into a weighted recommendation
- Confidence scoring tells you how certain each consultant is
- Multi-agent debate lets consultants critique each other
- Anonymous peer review identifies the strongest arguments without bias
- Local model support via Ollama for complete privacy
Quick Start
Get started in 30 seconds:
# Install the skill
curl -fsSL https://raw.githubusercontent.com/matteoscurati/ai-consultants/main/scripts/install.sh | bash
# Run the setup wizard (in Claude Code)
/ai-consultants:config-wizard
# Ask your first question
/ai-consultants:consult "How should I structure my authentication system?"
Update & Uninstall
# Update to latest version
~/.claude/skills/ai-consultants/scripts/install.sh --update
# Uninstall completely
~/.claude/skills/ai-consultants/scripts/install.sh --uninstall
Prerequisites
Before installing AI Consultants, ensure you have the following dependencies installed.
Required Dependencies
| Dependency | Purpose |
|---|---|
| jq | JSON processing |
| curl | HTTP requests and connectivity |
| Bash 4.0+ | Script execution (macOS ships with 3.2) |
Installation by Platform
macOS
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install required dependencies
brew install jq bash coreutils
# Verify installation
jq --version && bash --version | head -1
Note: macOS ships with Bash 3.2. The newer Homebrew version is installed to `/opt/homebrew/bin/bash` (Apple Silicon) or `/usr/local/bin/bash` (Intel).
Linux (Ubuntu/Debian)
# Install required dependencies
sudo apt-get update
sudo apt-get install -y jq curl bash
# Verify installation
jq --version && bash --version | head -1
Linux (Fedora/RHEL/CentOS)
# Install required dependencies
sudo dnf install -y jq curl bash
# Verify installation
jq --version && bash --version | head -1
Linux (Arch)
# Install required dependencies
sudo pacman -S jq curl bash
# Verify installation
jq --version && bash --version | head -1
Windows
Use WSL (Windows Subsystem for Linux):
# Install WSL (run in PowerShell as Administrator)
wsl --install
# After restart, open WSL and follow Linux instructions
sudo apt-get update
sudo apt-get install -y jq curl bash
Alternatively, use Git Bash or MSYS2 with the required packages.
Optional Dependencies
For CLI-based consultants, you'll also need:
| Dependency | Required for |
|---|---|
| Node.js 18+ | Gemini CLI, Codex CLI, Kilo CLI |
| Python 3.8+ | Mistral Vibe CLI, Aider |
# macOS
brew install node python
# Ubuntu/Debian
sudo apt-get install -y nodejs npm python3 python3-pip
# Verify
node --version && python3 --version
Verify All Prerequisites
Run the doctor command to check everything is installed:
./scripts/doctor.sh
Supported CLI Agents
AI Consultants follows the open Agent Skills standard, enabling cross-platform compatibility.
Claude Code
Status: ✅ Native support
Installation:
curl -fsSL https://raw.githubusercontent.com/matteoscurati/ai-consultants/main/scripts/install.sh | bash
Slash Commands:
| Command | Description |
|---|---|
| `/ai-consultants:consult` | Ask AI consultants a coding question |
| `/ai-consultants:ask-experts` | Quick query (alias for consult) |
| `/ai-consultants:debate` | Run consultation with multi-round debate |
| `/ai-consultants:config-wizard` | Full interactive setup |
| `/ai-consultants:config-check` | Verify CLIs are installed |
| `/ai-consultants:config-status` | View current configuration |
| `/ai-consultants:config-preset` | Set default preset (minimal, balanced, high-stakes) |
| `/ai-consultants:config-strategy` | Set default synthesis strategy |
| `/ai-consultants:config-features` | Toggle features (debate, synthesis, etc.) |
| `/ai-consultants:config-personas` | Change consultant personas |
| `/ai-consultants:config-api` | Configure API consultants (Qwen3, GLM, Grok, DeepSeek) |
| `/ai-consultants:help` | Show all commands and usage |
Self-Exclusion: Claude consultant is automatically excluded when invoked from Claude Code.
Verify:
/ai-consultants:config-check
OpenAI Codex CLI
Status: ✅ Compatible
Installation:
git clone https://github.com/matteoscurati/ai-consultants.git ~/.codex/skills/ai-consultants
~/.codex/skills/ai-consultants/scripts/doctor.sh --fix
Commands:
Use the same slash commands as Claude Code. Codex CLI loads skills from ~/.codex/skills/.
Self-Exclusion: Codex consultant is automatically excluded when invoked from Codex CLI.
Verify:
~/.codex/skills/ai-consultants/scripts/doctor.sh
Gemini CLI
Status: ✅ Compatible
Installation:
git clone https://github.com/matteoscurati/ai-consultants.git ~/.gemini/skills/ai-consultants
~/.gemini/skills/ai-consultants/scripts/doctor.sh --fix
Commands:
Use the same slash commands as Claude Code. Gemini CLI loads skills from ~/.gemini/skills/.
Self-Exclusion: Gemini consultant is automatically excluded when invoked from Gemini CLI.
Verify:
~/.gemini/skills/ai-consultants/scripts/doctor.sh
Cursor / Copilot / Windsurf (via SkillPort)
Status: ✅ Via SkillPort
Installation:
# Install SkillPort if not already installed
npm install -g skillport
# Add AI Consultants skill
skillport add github.com/matteoscurati/ai-consultants
# Load skill in your agent
skillport show ai-consultants
Or clone and use the included installer:
git clone https://github.com/matteoscurati/ai-consultants.git
cd ai-consultants
./scripts/skillport-install.sh
Commands:
SkillPort translates skill commands to the native agent format.
Self-Exclusion: Cursor consultant is automatically excluded when invoked from Cursor.
Verify:
skillport status ai-consultants
Aider
Status: ✅ Via AGENTS.md
Installation:
git clone https://github.com/matteoscurati/ai-consultants.git
cd ai-consultants
# Aider reads AGENTS.md for skill instructions
Usage:
Reference the skill in your Aider session:
/add AGENTS.md
# Then ask: "Use ai-consultants to review my code"
Self-Exclusion: When using Aider as the invoking agent, set INVOKING_AGENT=aider.
Verify:
./scripts/doctor.sh
Standalone Bash
Status: ✅ Direct execution
Installation:
git clone https://github.com/matteoscurati/ai-consultants.git
cd ai-consultants
./scripts/doctor.sh --fix
./scripts/setup_wizard.sh
Commands:
# Basic consultation
./scripts/consult_all.sh "How to optimize this function?" src/utils.py
# With preset
./scripts/consult_all.sh --preset balanced "Redis or Memcached?"
# With debate
ENABLE_DEBATE=true DEBATE_ROUNDS=2 ./scripts/consult_all.sh "Microservices vs monolith?"
# With smart routing
ENABLE_SMART_ROUTING=true ./scripts/consult_all.sh "Bug in auth code"
# Follow-up questions
./scripts/followup.sh "Can you elaborate on that point?"
./scripts/followup.sh -c Gemini "Show me code example"
Self-Exclusion: Set INVOKING_AGENT environment variable:
INVOKING_AGENT=claude ./scripts/consult_all.sh "Question" # Claude excluded
INVOKING_AGENT=codex ./scripts/consult_all.sh "Question" # Codex excluded
./scripts/consult_all.sh "Question" # No exclusion
Verify:
./scripts/doctor.sh
Consultants
CLI-Based Consultants
| Consultant | CLI | Persona | Focus |
|---|---|---|---|
| Google Gemini | `gemini` | The Architect | Design patterns, scalability, enterprise |
| OpenAI Codex | `codex` | The Pragmatist | Simplicity, quick wins, proven solutions |
| Mistral Vibe | `vibe` | The Devil's Advocate | Problems, edge cases, vulnerabilities |
| Kilo Code | `kilocode` | The Innovator | Creativity, unconventional approaches |
| Cursor | `agent` | The Integrator | Full-stack perspective |
| Aider | `aider` | The Pair Programmer | Collaborative coding |
| Amp | `amp` | The Systems Thinker | System design, interactions, emergent behavior |
| Claude | `claude` | The Synthesizer | Big picture, synthesis, connecting ideas |
API-Based Consultants
| Consultant | Default Model | Persona | Focus |
|---|---|---|---|
| Qwen3 | qwen3-max | The Analyst | Data-driven analysis |
| GLM | glm-4.7 | The Methodologist | Structured approaches |
| Grok | grok-4-1-fast-reasoning | The Provocateur | Challenge conventions |
| DeepSeek | deepseek-v3.2-speciale | The Code Specialist | Algorithms, code generation |
Local Consultants
| Consultant | Default Model | Persona | Focus |
|---|---|---|---|
| Ollama | qwen2.5-coder:32b | The Local Expert | Privacy-first, zero API cost |
Installing Consultant CLIs
At least 2 consultant CLIs are required:
npm install -g @google/gemini-cli # Gemini
npm install -g @openai/codex # Codex
pip install mistral-vibe # Mistral
npm install -g @kilocode/cli # Kilo
curl https://cursor.com/install -fsS | bash # Cursor
# Optional CLI-based consultants
curl -fsSL https://ampcode.com/install.sh | bash # Amp
npm install -g @qwen-code/qwen-code@latest # Qwen (alternative to API)
# For local inference (optional)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
Quality Tiers
Choose the right balance of quality, speed, and cost with model quality tiers.
Tier Presets
| Preset | Tier | Agents | Debate | Reflection | Use Case |
|---|---|---|---|---|---|
| `max_quality` | Premium | 7 (all) | 3 rounds | 2 cycles + peer review | Critical decisions |
| `medium` | Standard | 4 | 1 round | No | General questions |
| `fast` | Economy | 2 | No | No | Quick checks |
| `local` | Economy | 1 (Ollama) | No | No | Full privacy |
Models by Tier
| Consultant | Premium | Standard | Economy |
|---|---|---|---|
| Claude | claude-opus-4-5 | claude-sonnet-4-5 | claude-3-5-haiku |
| Gemini | gemini-3.0-pro | gemini-3.0-flash | gemini-2.0-flash-lite |
| Codex | gpt-5.2-codex | gpt-5.2 | gpt-4o-mini |
| Mistral | mistral-large-3 | mistral-medium | devstral-small-2 |
| DeepSeek | deepseek-v3.2-speciale | deepseek-v3.2 | deepseek-chat |
| GLM | glm-4.7 | glm-4.7 | glm-4-flash |
| Grok | grok-4-1-fast-reasoning | grok-3 | grok-3-mini |
| Qwen3 | qwen3-max | qwen3-235b | qwen3-32b |
| Ollama | qwen2.5-coder:32b | llama3.3 | llama3.2 |
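The table above can be read as a lookup from (consultant, tier) to model. Here is a hypothetical sketch in the spirit of the `get_model_for_tier()` helper mentioned in the v2.8.1 changelog; the model names come from the table, but the real implementation in the skill's scripts may differ:

```shell
# Illustrative tier lookup (excerpt of the mapping above; not the
# skill's actual get_model_for_tier implementation).
get_model_for_tier() {
  local consultant="$1" tier="$2"
  case "${consultant}:${tier}" in
    claude:premium)  echo "claude-opus-4-5" ;;
    claude:standard) echo "claude-sonnet-4-5" ;;
    claude:economy)  echo "claude-3-5-haiku" ;;
    gemini:premium)  echo "gemini-3.0-pro" ;;
    gemini:standard) echo "gemini-3.0-flash" ;;
    gemini:economy)  echo "gemini-2.0-flash-lite" ;;
    *)               echo "unknown"; return 1 ;;
  esac
}

get_model_for_tier claude standard   # -> claude-sonnet-4-5
```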
Usage
Claude Code:
/ai-consultants:consult --preset max_quality "critical architecture decision"
/ai-consultants:consult --preset fast "quick syntax question"
Bash:
./scripts/consult_all.sh --preset max_quality "microservices vs monolith?"
./scripts/consult_all.sh --preset fast "how to use async/await?"
# Programmatic tier selection
source scripts/config.sh
apply_model_tier "premium" # Set all to premium models
apply_model_tier "economy" # Set all to economy models
Configuration
Presets
Choose how many consultants to use:
| Preset | Consultants | Tier | Use Case |
|---|---|---|---|
| `max_quality` | 7 (all) + debate + reflection | Premium | Critical decisions |
| `medium` | 4 + light debate | Standard | General questions |
| `fast` | 2 | Economy | Quick checks |
| `minimal` | 2 (Gemini + Codex) | Default | Quick questions, low cost |
| `balanced` | 4 (+ Mistral + Kilo) | Default | Standard consultations |
| `thorough` | 5 (+ Cursor) | Default | Comprehensive analysis |
| `high-stakes` | All + debate | Default | Critical decisions |
| `local` | Ollama only | Economy | Full privacy |
| `security` | Security-focused + debate | Default | Security reviews |
| `cost-capped` | Budget-conscious | Default | Minimal API costs |
Claude Code:
/ai-consultants:config-preset
Bash:
./scripts/consult_all.sh --preset balanced "Question"
Synthesis Strategies
Control how responses are combined:
| Strategy | Description |
|---|---|
| `majority` | Most common answer wins (default) |
| `risk_averse` | Weight conservative responses higher |
| `security_first` | Prioritize security considerations |
| `cost_capped` | Prefer simpler, cheaper solutions |
| `compare_only` | No recommendation, just comparison |
Claude Code:
/ai-consultants:config-strategy
Bash:
./scripts/consult_all.sh --strategy risk_averse "Question"
Environment Variables
# Core features
ENABLE_DEBATE=true # Multi-agent debate
ENABLE_SYNTHESIS=true # Automatic synthesis
ENABLE_SMART_ROUTING=true # Intelligent consultant selection
ENABLE_PANIC_MODE=auto # Automatic rigor for uncertainty
# Defaults
DEFAULT_PRESET=balanced # Preset when --preset not given
DEFAULT_STRATEGY=majority # Strategy when --strategy not given
# Ollama (local models)
ENABLE_OLLAMA=true # Enable Ollama consultant
OLLAMA_MODEL=qwen2.5-coder:32b # Model to use (premium default)
OLLAMA_HOST=http://localhost:11434
# Cost management
MAX_SESSION_COST=1.00 # Budget limit in USD
WARN_AT_COST=0.50 # Warning threshold
# Panic mode
PANIC_CONFIDENCE_THRESHOLD=5 # Trigger threshold
PANIC_EXTRA_DEBATE_ROUNDS=1 # Additional rounds in panic mode
Doctor Command
Diagnose and fix configuration issues:
./scripts/doctor.sh # Full diagnostic
./scripts/doctor.sh --fix # Auto-fix common issues
./scripts/doctor.sh --json # JSON output for automation
How It Works
Query -> Classify -> Parallel Queries -> Voting -> Synthesis -> Report
| | |
Gemini (8) Consensus Recommendation
Codex (7) Analysis Comparison
Mistral (6) Risk Assessment
Kilo (9) Action Items
With debate enabled:
Round 1 -> Cross-Critique -> Round 2 -> Updated Positions -> Final Synthesis
With peer review:
Responses -> Anonymize -> Peer Ranking -> De-anonymize -> Peer Scores
Output
Each consultation generates:
/tmp/ai_consultations/TIMESTAMP/
├── gemini.json # Individual responses
├── codex.json # with confidence scores
├── mistral.json
├── kilo.json
├── voting.json # Consensus calculation
├── synthesis.json # Weighted recommendation
├── report.md # Human-readable report
└── round_2/ # (if debate enabled)
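These artifacts are plain JSON, so `jq` (already a required dependency) works well for inspection. The field names below are assumptions for demonstration only; see the JSON Schema document for the actual format:

```shell
# Build a sample synthesis.json to demonstrate; the field names are
# assumed, not the skill's documented schema.
mkdir -p /tmp/ai_consultations/demo
cat > /tmp/ai_consultations/demo/synthesis.json <<'EOF'
{"recommendation": "Use Redis", "confidence": 8}
EOF

# Pull the recommendation and its confidence score.
jq -r '.recommendation' /tmp/ai_consultations/demo/synthesis.json
jq '.confidence' /tmp/ai_consultations/demo/synthesis.json
```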
Best Practices
When to Use High-Stakes Mode
- Architectural decisions affecting system design
- Security-critical code changes
- Performance-critical optimizations
- Decisions that are difficult to reverse
Interpreting Results
| Scenario | Recommendation |
|---|---|
| High confidence + High consensus | Proceed with confidence |
| Low confidence OR Low consensus | Consider more options |
| Mistral (Devil's Advocate) disagrees | Investigate the risks |
| Panic mode triggered | Add more consultants or debate rounds |
Security
- Never include credentials or API keys in queries
- Use `--preset local` for sensitive code
- Files in `/tmp` are automatically cleaned up
Documentation
- Setup Guide - Installation, authentication, Claude Code setup
- Cost Rates - Model pricing and budgets
- Smart Routing - Category-based routing
- JSON Schema - Output format specification
- Contributing - How to contribute
Changelog
v2.8.1
- Bug fixes: Fixed `((count++))` abort under `set -e`, missing Amp in consultant map, hardcoded `claude` in synthesize.sh
- Security: Variable name validation before `export` in escalation and cost-aware routing
- DRY refactoring: Rewrote `query_kilo.sh` and `query_cursor.sh` using shared `process_consultant_response()`; added `get_model_for_tier()` as single source of truth
v2.8.0
- Amp CLI support: New consultant with "The Systems Thinker" persona
- 13 consultants total: Gemini, Codex, Mistral, Kilo, Cursor, Aider, Amp, Claude, Qwen3, GLM, Grok, DeepSeek, Ollama
- Installation: `curl -fsSL https://ampcode.com/install.sh | bash`
v2.7.0
- Qwen CLI support: CLI/API mode switching for Qwen3 via qwen-code
- 5 switchable agents: Gemini, Codex, Claude, Mistral, and now Qwen3 support CLI/API mode
- Backward compatible: `QWEN3_USE_API` defaults to `true` to preserve v2.6 behavior
v2.6.0
- CLI/API mode switching: Gemini, Codex, Claude, and Mistral can switch between CLI and API mode
- New environment variables: `*_USE_API` and `*_API_URL` for each switchable agent
- Unified API query module: `lib/api_query.sh` for consistent API handling
v2.5.0
- Model quality tiers: Premium, standard, and economy tiers for all consultants
- New presets: `max_quality`, `medium`, `fast` for quick tier selection
- Premium defaults: All consultants now use premium models by default (January 2026)
- `apply_model_tier()` function: Programmatically switch all models to a tier
- Updated models: claude-opus-4-5, gemini-3.0-pro, gpt-5.2-codex, mistral-large-3, etc.
v2.4.0
- Budget enforcement: Optional budget limits with configurable actions (warn/stop)
- Budget checks: 4 enforcement points (before/after consultation, debate, synthesis)
- `/ai-consultants:config-budget`: New slash command for budget configuration
v2.3.0
- Semantic caching: Cache responses to avoid redundant API calls (15-25% savings)
- Cost-aware routing: Route simple queries to cheaper models (30-50% savings)
- Fallback escalation: Auto-escalate to premium model if confidence < 7
- Debate optimization: Skip debate if all consultants agree (opt-in)
- Category exceptions: SECURITY/ARCHITECTURE always trigger debate
- Quality monitoring: `optimization_metrics.json` tracks optimization impact
- Compact reports: Shorter reports by default (summaries only)
- Response limits: Per-category token limits (opt-in)
v2.2.0
- Claude consultant: New consultant with "The Synthesizer" persona
- Self-exclusion: Invoking agent automatically excluded from panel
- Presets: Quick configuration with `--preset minimal/balanced/high-stakes/local`
- Doctor command: Diagnostic and auto-fix tool
- Synthesis strategies: `--strategy majority/risk_averse/security_first/compare_only`
- Confidence intervals: Statistical confidence ranges (e.g., "8 +/- 1.2")
- Anonymous peer review: Unbiased evaluation of responses
- Ollama support: Local model inference for privacy
- Panic mode: Automatic rigor when uncertainty detected
- One-liner install: `curl | bash` installation
v2.1.0
- New consultants: Aider, DeepSeek
- 17 configurable personas
- Token optimization with AST extraction
v2.0.0
- Persona system with 15 predefined roles
- Confidence scoring (1-10) on every response
- Auto-synthesis with weighted recommendations
- Multi-Agent Debate (MAD)
- Smart routing by question category
- Session management and cost tracking
License
MIT License - see LICENSE for details.
Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.
# Supported AI Coding Agents
This skill is compatible with the SKILL.md standard and works with all major AI coding agents.
Learn more about the SKILL.md standard and how to use these skills with your preferred AI coding agent.