/code-review
Performs a focused, multi-agent code review that surfaces only critical, high-impact findings for solo developers using AI tools.
Core Philosophy
This command prioritizes needle-moving discoveries over exhaustive lists. Every finding must demonstrate significant impact on:
- System reliability & stability
- Security vulnerabilities with real exploitation risk
- Performance bottlenecks affecting user experience
- Architectural decisions blocking future scalability
- Critical technical debt threatening maintainability
🚨 Critical Findings Only
Issues that could cause production failures, security breaches, or severe user impact within 48 hours.
🔥 High-Value Improvements
Changes that unlock new capabilities, remove significant constraints, or improve metrics by >25%.
❌ Excluded from Reports
Minor style issues, micro-optimizations (<10% gain), purely theoretical best-practice concerns, and edge cases affecting <1% of users.
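To make these thresholds concrete, here is a minimal Python sketch of a findings filter; the `Finding` structure and its field names are hypothetical, while the numeric cutoffs mirror the criteria above.

```python
from dataclasses import dataclass

# Cutoffs taken from the criteria above; everything else here is illustrative.
MIN_METRIC_IMPROVEMENT = 0.25   # high-value changes must improve a metric by >25%
MIN_USER_IMPACT = 0.01          # issues affecting <1% of users are excluded

@dataclass
class Finding:
    title: str
    severity: str              # "critical" | "improvement" | "minor" (hypothetical labels)
    metric_improvement: float  # estimated relative improvement, 0.0-1.0
    users_affected: float      # fraction of users affected, 0.0-1.0

def is_reportable(f: Finding) -> bool:
    """Keep only findings that clear the critical / high-value bar."""
    if f.severity == "critical":
        return True
    if f.severity == "improvement":
        return (f.metric_improvement > MIN_METRIC_IMPROVEMENT
                and f.users_affected >= MIN_USER_IMPACT)
    return False  # style nits, micro-optimizations, theoretical concerns

findings = [
    Finding("SQL injection in login handler", "critical", 0.0, 1.0),
    Finding("Rename internal helper", "minor", 0.0, 0.0),
    Finding("Cache the hottest query path", "improvement", 0.40, 0.30),
]
report = [f for f in findings if is_reportable(f)]  # keeps the first and third
```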
Auto-Loaded Project Context:
@/CLAUDE.md
@/docs/ai-context/project-structure.md
@/docs/ai-context/docs-overview.md
Command Execution
The command follows a dynamic, multi-step process to deliver a high-impact review.
Step 1: Understand User Intent & Gather Context
The AI first parses your request to determine the scope and focus. It then reads the relevant documentation to build a mental model of risks and priorities before allocating agents.
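As a rough illustration (not the command's actual implementation), scope inference could look like the following sketch, where the keyword map and the `ReviewScope` structure are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewScope:
    paths: list[str] = field(default_factory=list)  # files or directories to review
    focus: list[str] = field(default_factory=list)  # e.g. "security", "performance"

# Hypothetical mapping from request wording to focus areas.
FOCUS_KEYWORDS = {
    "auth": "security",
    "login": "security",
    "slow": "performance",
    "latency": "performance",
    "api": "integration",
}

def parse_request(request: str, changed_files: list[str]) -> ReviewScope:
    """Derive the review scope from the user's wording plus the files they touched."""
    words = request.lower().split()
    focus = sorted({area for w in words for key, area in FOCUS_KEYWORDS.items() if key in w})
    return ReviewScope(paths=changed_files, focus=focus or ["general"])

scope = parse_request("Review the login flow, it feels slow", ["src/auth/login.py"])
# -> focus inferred as ["performance", "security"]
```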
Step 2: Define Mandatory Coverage Areas
Every review must analyze these core areas:
1. Critical Path Analysis: User-facing functionality, data integrity, error handling.
2. Security Surface: Input validation, auth flows, data exposure.
3. Performance Impact: Bottlenecks, resource consumption, scalability.
4. Integration Points: API contracts, service dependencies.
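One way to keep these areas from being dropped during agent allocation is to encode them as a checklist; the structure below is a hypothetical sketch, not the command's real configuration:

```python
# Hypothetical encoding of the mandatory coverage areas as a checklist.
MANDATORY_COVERAGE = {
    "critical_path": ["user-facing functionality", "data integrity", "error handling"],
    "security_surface": ["input validation", "auth flows", "data exposure"],
    "performance_impact": ["bottlenecks", "resource consumption", "scalability"],
    "integration_points": ["API contracts", "service dependencies"],
}

def uncovered_areas(agent_assignments: dict[str, list[str]]) -> set[str]:
    """Return mandatory areas that no generated agent has been assigned to cover."""
    covered = {area for areas in agent_assignments.values() for area in areas}
    return set(MANDATORY_COVERAGE) - covered

# Example: one agent covers two areas, leaving two unassigned.
missing = uncovered_areas({"Agent_A": ["critical_path", "security_surface"]})
# -> {"performance_impact", "integration_points"}
```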
Step 3: Dynamic Agent Generation
Based on the scope, the AI generates specialized agents. A small review might use 2-3 agents covering multiple areas, while a large review might spawn 4-6 dedicated agents (e.g., `Critical_Path_Validator`, `Security_Scanner`, `Performance_Profiler`).
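A sketch of how that scaling could work, assuming scope size is measured in changed files; the combined agent names, the `Integration_Checker` agent, and the sizing thresholds below are hypothetical, while the dedicated agent names follow the examples above:

```python
def allocate_agents(num_changed_files: int) -> list[str]:
    """Scale the number of specialized agents with the size of the review."""
    if num_changed_files <= 5:
        # Small review: a few agents, each covering several mandatory areas.
        return ["CriticalPath_Security_Agent", "Performance_Integration_Agent"]
    # Large review: one dedicated agent per domain.
    return [
        "Critical_Path_Validator",
        "Security_Scanner",
        "Performance_Profiler",
        "Integration_Checker",
    ]

agents = allocate_agents(num_changed_files=12)  # -> four dedicated agents
```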
Step 4: Execute Dynamic Multi-Agent Review
All agents are launched simultaneously to work in parallel. Each agent is given a specific mandate to focus only on high-impact findings within its domain, using MCP servers like Gemini or Context7 for deeper analysis where needed.
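The parallel fan-out could be modeled roughly as below; `run_agent` is a placeholder for however the runtime actually dispatches a sub-agent, and calls out to MCP servers are not modeled:

```python
import asyncio

async def run_agent(name: str, mandate: str) -> dict:
    """Placeholder for dispatching one specialized agent and collecting its findings."""
    # In the real command, the mandate would go to a sub-agent (possibly via an MCP server).
    await asyncio.sleep(0)  # stand-in for asynchronous analysis work
    return {"agent": name, "findings": []}

async def run_review(agents: list[str]) -> list[dict]:
    """Launch all agents simultaneously and wait for every report."""
    mandate = "Report only high-impact findings within your domain."
    return await asyncio.gather(*(run_agent(a, mandate) for a in agents))

reports = asyncio.run(run_review(["Security_Scanner", "Performance_Profiler"]))
```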
Step 5: Synthesize Findings
After the agents complete their work, a master process performs an `ultrathink` step to:
- Filter out all low-priority findings.
- Identify systemic issues and root causes from across the agent reports.
- Prioritize fixes based on a calculated ROI for a solo developer.
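A hypothetical sketch of that synthesis pass; the ROI formula (summed impact divided by estimated fix effort) and the record fields are assumptions chosen to illustrate the filtering and prioritization described above.

```python
from collections import defaultdict

def synthesize(reports: list[dict]) -> list[dict]:
    """Merge agent reports: drop low-priority items, group by root cause, rank by ROI."""
    findings = [f for r in reports for f in r["findings"] if f["priority"] != "low"]

    # Findings from different agents that share a root cause point to a systemic issue.
    by_cause: dict[str, list[dict]] = defaultdict(list)
    for f in findings:
        by_cause[f["root_cause"]].append(f)

    ranked = []
    for cause, group in by_cause.items():
        impact = sum(f["impact"] for f in group)                # hypothetical 1-10 scale
        effort = max(min(f["effort_hours"] for f in group), 1)  # avoid dividing by zero
        ranked.append({"root_cause": cause, "findings": group, "roi": impact / effort})
    return sorted(ranked, key=lambda item: item["roi"], reverse=True)
```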
Step 6: Present Comprehensive Review
The final output is a structured report with an executive summary, a prioritized action plan, and detailed findings from each specialized agent.
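The report might be carried in a structure like the following; the section names follow the description above, while the rendering itself is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class ReviewReport:
    executive_summary: str
    action_plan: list[str]          # prioritized fixes, highest ROI first
    agent_findings: dict[str, str]  # agent name -> detailed findings

def render(report: ReviewReport) -> str:
    """Render the review as a plain-text report."""
    lines = ["Executive Summary", report.executive_summary, "", "Prioritized Action Plan"]
    lines += [f"{i}. {step}" for i, step in enumerate(report.action_plan, start=1)]
    for agent, details in report.agent_findings.items():
        lines += ["", f"Findings from {agent}", details]
    return "\n".join(lines)
```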
Step 7: Interactive Follow-up
After presenting the review, the AI will offer to take action on the findings, such as fixing critical issues, creating a refactoring plan, or generating GitHub issues.
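A minimal sketch of how that offer could be structured; the option labels mirror the actions listed above, and the selection mechanism is an assumption:

```python
# Hypothetical menu of follow-up actions offered after the review is presented.
FOLLOW_UP_ACTIONS = {
    "1": "Fix the critical issues now",
    "2": "Create a step-by-step refactoring plan",
    "3": "Generate GitHub issues for the prioritized findings",
    "4": "Stop here",
}

def choose_follow_up(choice: str) -> str:
    """Return the selected follow-up action, defaulting to stopping."""
    return FOLLOW_UP_ACTIONS.get(choice, FOLLOW_UP_ACTIONS["4"])
```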