---
name: multi-reviewer-patterns
description: Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results.
version: 1.0.2
---
# Multi-Reviewer Patterns
Patterns for coordinating parallel code reviews across multiple quality dimensions, deduplicating findings, calibrating severity, and producing consolidated reports.
## When to Use This Skill

- Organizing a multi-dimensional code review
- Deciding which review dimensions to assign
- Deduplicating findings from multiple reviewers
- Calibrating severity ratings consistently
- Producing a consolidated review report
## Review Dimension Allocation

### Available Dimensions

| Dimension | Focus | When to Include |
|---|---|---|
| Security | Vulnerabilities, auth, input validation | Always for code handling user input or auth |
| Performance | Query efficiency, memory, caching | When changing data access or hot paths |
| Architecture | SOLID, coupling, patterns | For structural changes or new modules |
| Testing | Coverage, quality, edge cases | When adding new functionality |
| Accessibility | WCAG, ARIA, keyboard nav | For UI/frontend changes |
### Recommended Combinations

| Scenario | Dimensions |
|---|---|
| API endpoint changes | Security, Performance, Architecture |
| Frontend component | Architecture, Testing, Accessibility |
| Database migration | Performance, Architecture |
| Authentication changes | Security, Testing |
| Full feature review | Security, Performance, Architecture, Testing |
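The scenario-to-dimension mapping above can be sketched as a simple lookup table. This is a minimal illustration, not a fixed API; the scenario keys and the fallback pair are illustrative assumptions.

```python
# Illustrative mapping of change scenarios to recommended review
# dimensions (keys and fallback are assumptions, not a fixed schema).
RECOMMENDED_DIMENSIONS = {
    "api_endpoint": ["Security", "Performance", "Architecture"],
    "frontend_component": ["Architecture", "Testing", "Accessibility"],
    "database_migration": ["Performance", "Architecture"],
    "authentication": ["Security", "Testing"],
    "full_feature": ["Security", "Performance", "Architecture", "Testing"],
}

def dimensions_for(scenario: str) -> list[str]:
    """Return the review dimensions to assign for a given scenario.

    Unknown scenarios fall back to a general-purpose pair
    (an assumption; adjust to your team's defaults).
    """
    return RECOMMENDED_DIMENSIONS.get(scenario, ["Architecture", "Testing"])
```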
## Finding Deduplication

When multiple reviewers report issues at the same location:

### Merge Rules

- **Same file:line, same issue:** Merge into one finding, credit all reviewers
- **Same file:line, different issues:** Keep as separate findings
- **Same issue, different locations:** Keep separate but cross-reference
- **Conflicting severity:** Use the higher severity rating
- **Conflicting recommendations:** Include both with reviewer attribution
### Deduplication Process

For each finding in all reviewer reports:

1. Check whether another finding references the same file:line
2. If yes, check whether they describe the same issue
3. If the same issue: merge, keeping the more detailed description
4. If a different issue: keep both, tagged as "co-located"
5. Use the highest severity among merged findings
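The process above can be sketched in a few lines. This assumes each finding is a dict with `file`, `line`, `issue`, `severity`, `description`, and `reviewer` keys; those field names are illustrative, not a required schema.

```python
# Sketch of the deduplication pass, assuming illustrative field names.
SEVERITY_RANK = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

def deduplicate(findings):
    """Merge findings that share file:line and issue; credit all reviewers.

    Findings at the same file:line with different issues get distinct
    keys and so stay separate (the "co-located" tag is omitted here
    for brevity).
    """
    merged = {}  # (file, line, issue) -> consolidated finding
    for f in findings:
        key = (f["file"], f["line"], f["issue"])
        if key not in merged:
            merged[key] = {**f, "reviewers": [f["reviewer"]]}
            continue
        existing = merged[key]
        # Same file:line, same issue: merge and credit all reviewers.
        existing["reviewers"].append(f["reviewer"])
        # Keep the more detailed description.
        if len(f["description"]) > len(existing["description"]):
            existing["description"] = f["description"]
        # Conflicting severity: keep the higher rating.
        if SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[existing["severity"]]:
            existing["severity"] = f["severity"]
    return list(merged.values())
```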
## Severity Calibration

### Severity Criteria

| Severity | Impact | Likelihood | Examples |
|---|---|---|---|
| Critical | Data loss, security breach, complete failure | Certain or very likely | SQL injection, auth bypass, data corruption |
| High | Significant functionality impact, degradation | Likely | Memory leak, missing validation, broken flow |
| Medium | Partial impact, workaround exists | Possible | N+1 query, missing edge case, unclear error |
| Low | Minimal impact, cosmetic | Unlikely | Style issue, minor optimization, naming |
### Calibration Rules

- Security vulnerabilities exploitable by external users: always Critical or High
- Performance issues in hot paths: at least Medium
- Missing tests for critical paths: at least Medium
- Accessibility violations affecting core functionality: at least Medium
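These rules act as severity floors: a reported rating is raised to the floor when a rule applies, and never lowered. A minimal sketch, assuming illustrative flag names for the four rules:

```python
# Sketch: apply the calibration floors above to a reported severity.
# The keyword flag names are illustrative assumptions.
SEVERITY_ORDER = ["Low", "Medium", "High", "Critical"]

def calibrate(severity, *, externally_exploitable=False,
              hot_path=False, critical_path_untested=False,
              core_a11y_violation=False):
    """Raise a reported severity to the floor implied by the rules."""
    floor = "Low"
    if externally_exploitable:
        floor = "High"  # security rule: always Critical or High
    elif hot_path or critical_path_untested or core_a11y_violation:
        floor = "Medium"
    # Return whichever is higher: the reported severity or the floor.
    idx = max(SEVERITY_ORDER.index(severity), SEVERITY_ORDER.index(floor))
    return SEVERITY_ORDER[idx]
```

Note that the floor only raises: a reviewer-assigned Critical stays Critical even when only a Medium-floor rule applies.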