npx skills add ktaletsk/multi-agent-code-review-skill
Or install the specific skill: npx add-skill https://github.com/ktaletsk/multi-agent-code-review-skill
# Description
Run parallel code reviews with multiple AI agents, then synthesize into one report. Triggers on "review code" or "multi-agent review".
# SKILL.md
name: multi-agent-code-review
description: Run parallel code reviews with multiple AI agents, then synthesize into one report. Triggers on "review code" or "multi-agent review".
## Multi-Agent Code Review Skill
This skill runs the same code review prompt against multiple AI agents in parallel using Cursor CLI, then synthesizes their findings into a single comprehensive report.
## When to Use
Activate this skill when the user asks to:
- "Review my code"
- "Run a code review"
- "Review the staged changes"
- "Do a multi-agent review"
- "Get multiple perspectives on this code"
## CRITICAL: Target Directory
You must pass the USER'S PROJECT DIRECTORY as an argument to the script.
The user's project directory is where they started their Claude Code session - NOT this skill's directory. Look for the git repository path in the conversation context (e.g., /Users/.../git/jupyter_server).
## Workflow
### Step 1: Identify the Target Repository
Determine the user's project directory from the conversation context. This is typically shown at the start of the session or can be found by checking where CLAUDE.md is located. It is NOT /Users/.../skills/multi-agent-code-review/.
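If the path is still ambiguous, one optional way to confirm it (not part of the skill itself) is to ask git for the repository root of the directory the session was started in:

```bash
# Print the root of the git repository containing the current working directory
git rev-parse --show-toplevel
```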
### Step 2: Run Parallel Reviews
Run the review script and pass the user's project directory as an argument:
~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh /path/to/users/project
For example, if the user is working in /Users/ktaletskiy/git/jupyter_server:
~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh /Users/ktaletskiy/git/jupyter_server
IMPORTANT: Always pass the full path to the user's project as the first argument.
This will:
- Run multiple agents in parallel (configurable in the script)
- Save individual JSON results to <project>/.reviews/
- Take 1-3 minutes depending on code size
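As a quick, optional sanity check before moving to Step 3, you can confirm that each agent wrote a non-empty result file. The `review_*.json` naming follows the Files section below; `/path/to/users/project` stands for the user's project directory:

```bash
# Confirm each agent produced a non-empty JSON result
for f in /path/to/users/project/.reviews/review_*.json; do
  [ -s "$f" ] && echo "OK: $f" || echo "MISSING or EMPTY: $f"
done
```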
### Step 3: Synthesize Results
After the script completes, read all JSON files from <project>/.reviews/ (in the user's project directory) and synthesize them into a combined report.
Synthesis Rules:
1. Do NOT mention which agent found which issue
2. Deduplicate similar issues (same file + same line + same problem = one entry)
3. If reviewers disagree on severity, use the higher severity
4. Preserve unique findings from each reviewer
5. Present findings as if from a single thorough review
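The synthesis itself is done by reading the JSON files directly, but rules 2 and 3 can be illustrated with a small jq sketch. The schema below (a top-level `findings` array with `file`, `line`, `title`, and `severity` fields) is an assumption for illustration; the real structure depends on what `prompts/review-prompt.md` asks the agents to emit.

```bash
# Illustration of synthesis rules 2 and 3 (deduplicate, keep the higher severity).
# ASSUMED schema: each review_*.json holds {"findings": [{"file","line","title","severity"}, ...]}
jq -s '
  [ .[].findings[]? ]
  | group_by([.file, .line, .title])
  | map(max_by(if .severity == "HIGH" then 3
               elif .severity == "MEDIUM" then 2
               else 1 end))
' .reviews/review_*.json
```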
Output Format:
Write the combined report to <project>/.reviews/COMBINED_REVIEW.md using this structure:
# Code Review Report
**Repository:** [repo name from user's directory]
**Date:** [today's date]
---
## Summary
[1-2 paragraph summary]
**Consensus:** [X of Y reviewers recommended changes / approved]
---
## Critical Issues (Require Action)
### 1. [Issue Title]
**Severity:** 🔴 HIGH
**File:** `path/to/file` (line X)
[Description]
**Recommendation:** [How to fix]
---
## Medium Issues (Should Address)
[Same format, 🟠 MEDIUM]
## Low Issues (Consider Addressing)
[Same format, 🟡 LOW]
## Suggested Improvements
[Numbered list]
---
## Verdict
**[🔴 REQUEST CHANGES / 🟢 APPROVE]**
[Priority action items table]
### Step 4: Report to User
After writing the combined report, summarize the key findings:
- Total issues found (by severity)
- Top 3 priority items to address
- Overall verdict
## Customization
The user can customize:
- **Agents/Models:** edit `~/.claude/skills/multi-agent-code-review/scripts/run-reviews.sh` (the `MODELS` array)
- **Review focus:** edit `~/.claude/skills/multi-agent-code-review/prompts/review-prompt.md`
- **Thinking depth:** add "think hard" or "ultrathink" to the prompt
## Files
~/.claude/skills/multi-agent-code-review/
├── SKILL.md              # This file
├── scripts/
│   └── run-reviews.sh    # Parallel review runner
└── prompts/
    └── review-prompt.md  # Review prompt template

# Output is saved to the user's project:
<project>/.reviews/
├── review_*.json         # Individual agent outputs
└── COMBINED_REVIEW.md    # Synthesized report
# README.md
## multi-agent-code-review
Ensemble code reviews. Run the same review prompt against multiple AI agents in parallel, then synthesize their findings into one comprehensive report, because different models catch different bugs.
## Why This Exists
No single AI model catches everything. GPT might spot a race condition that
Opus misses, while Gemini flags a performance issue neither noticed. By running
the same critical review prompt against multiple agents and combining their
findings, you get more thorough coverage than any single model provides.
| Single Model Review | multi-agent-code-review |
|---|---|
| One perspective ❌ | ✅ Multiple perspectives |
| Model-specific blind spots 😬 | ✅ Cross-validated findings |
| Fast ✅ | ❌ Parallel but slower |
| Simple ✅ | ❌ Requires Cursor CLI |
## How It Works
- **Parallel Execution**: spawns multiple `cursor-agent` processes simultaneously (sketched below)
- **Independent Reviews**: each agent reviews staged git changes in read-only mode
- **Synthesis**: Claude Code combines outputs into a single deduplicated report
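The heart of the runner is an ordinary background-job loop. The sketch below shows the general shape only, not the shipped `run-reviews.sh`; in particular, the `cursor-agent` flags (`-p`, `--model`, `--output-format json`) are assumptions that should be verified with `cursor-agent --help` for your CLI version.

```bash
#!/usr/bin/env bash
# Rough sketch of the parallel pattern (not the shipped run-reviews.sh).
# The cursor-agent flags shown here are assumptions; verify with `cursor-agent --help`.
set -euo pipefail

PROJECT_DIR="${1:?usage: run-reviews.sh /path/to/project}"
SKILL_DIR="$(cd "$(dirname "$0")/.." && pwd)"
PROMPT="$(cat "$SKILL_DIR/prompts/review-prompt.md")"
MODELS=("opus-4.5-thinking" "gpt-5.2-high" "gemini-3-pro")

mkdir -p "$PROJECT_DIR/.reviews"
cd "$PROJECT_DIR"

for model in "${MODELS[@]}"; do
  # One background process per model; each writes its own JSON result file.
  cursor-agent -p "$PROMPT" --model "$model" --output-format json \
    > ".reviews/review_${model}.json" &
done

wait  # block until every background review has finished
echo "Reviews written to $PROJECT_DIR/.reviews/"
```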
## Default Models
- `opus-4.5-thinking`: Anthropic's strongest reasoning model
- `gpt-5.2-high`: OpenAI with high thinking effort
- `gemini-3-pro`: Google's flagship model
## Installation
### Personal skill (all projects)
mkdir -p ~/.claude/skills
git clone https://github.com/ktaletsk/multi-agent-code-review ~/.claude/skills/multi-agent-code-review
### Project skill (specific project)
mkdir -p .claude/skills
git clone https://github.com/ktaletsk/multi-agent-code-review .claude/skills/multi-agent-code-review
## Requirements
- Cursor CLI (`cursor-agent`) installed and authenticated
- Active Cursor subscription
- Claude Code for synthesis
## Usage
Start a code review:
/multi-agent-code-review
Or trigger naturally:
Review my staged changes
Run a multi-agent review
## Example Session
You: /multi-agent-code-review
Claude: I'll run parallel code reviews using multiple AI agents.
Running reviews on /Users/you/project...
⏳ Starting: opus-4.5-thinking
⏳ Starting: gpt-5.2-high
⏳ Starting: gemini-3-pro
Waiting for reviews to complete (this may take 1-3 minutes)...
✅ Completed: opus-4.5-thinking
✅ Completed: gpt-5.2-high
✅ Completed: gemini-3-pro
Now synthesizing results...
# Code Review Report
## Summary
The changes introduce timestamp handling improvements with proper
fallback logic. All 3 reviewers found issues worth addressing.
## Critical Issues
None identified.
## Medium Issues
### 1. Pre-1970 timestamp edge case
**File:** `filemanager.py` (line 60)
Negative timestamps (valid for pre-1970 dates) are treated as invalid...
[continued...]
## Output
Results are saved to your project's .reviews/ directory:
<your-project>/.reviews/
├── review_opus-4.5-thinking.json
├── review_gpt-5.2-high.json
├── review_gemini-3-pro.json
└── COMBINED_REVIEW.md
## Customization
### Change Models
Edit `scripts/run-reviews.sh`:
MODELS=(
"opus-4.5-thinking"
"gpt-5.2-high"
"gemini-3-pro"
)
Run `cursor-agent --list-models` for available options.
### Change Review Focus
Edit `prompts/review-prompt.md` to adjust:
- What aspects to focus on (security, performance, etc.)
- Output format
- How critical the review should be
### Thinking Depth
Add keywords to `prompts/review-prompt.md`:
- `think`: basic reasoning
- `think hard`: more thorough
- `think harder`: very thorough
- `ultrathink`: maximum depth (slower)
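For example, appending a keyword from the shell looks like this (a trivial illustration for a personal install under `~/.claude/skills`; editing the file directly works just as well):

```bash
# Ask the review agents to reason more deeply on subsequent runs
echo "think harder" >> ~/.claude/skills/multi-agent-code-review/prompts/review-prompt.md
```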
## Files
multi-agent-code-review/
├── SKILL.md             # Skill definition for Claude Code
├── README.md            # This file
├── scripts/
│   └── run-reviews.sh   # Parallel review runner
└── prompts/
    └── review-prompt.md # Review prompt template
## Compatibility
This skill uses the open Agent Skills standard and should work with:
- Claude Code (~/.claude/skills/)
- Cursor (.cursor/skills/)
- VS Code, GitHub Copilot, and other compatible agents
## License
MIT