Applied Intelligence
Module 8: Code Review and Testing

Using Agents to Pre-Review Code

The confirmation bias problem

Page 4 ended with a warning: using the same AI that wrote the code to approve its own work creates circular validation. This is not theoretical.

GitClear's analysis of 211 million lines of code found an 8x increase in duplicated code when AI reviews its own output. Refactored code dropped by 39.9%. Critical vulnerabilities increased by 37.6% after multiple AI improvement cycles. The AI becomes anchored to its original decisions.

Agent-based review tools tackle this by using different agents or different configurations of the same model to create adversarial separation between generation and review. The review agent has no access to the generation prompt. It sees only the diff.

Claude Code's code review system

Claude Code includes a /code-review command that runs four specialized agents in parallel.

/code-review           # Review to terminal
/code-review --comment # Post review as PR comment

The architecture creates separation by design:

Agent     Model    Focus
Agent 1   Sonnet   CLAUDE.md compliance audit
Agent 2   Sonnet   CLAUDE.md compliance audit (parallel)
Agent 3   Opus     Bug detection in changed code
Agent 4   Opus     Security issues and logic errors

Two Sonnet agents check CLAUDE.md compliance independently. One Opus agent scans the diff for obvious bugs without attempting to understand broader context. A second Opus agent analyzes security implications and logic errors.

Why parallel? Different agents catch different failures. CLAUDE.md violations slip past because the generating agent did not read the file carefully. Logic errors appear when the agent optimized for functionality without considering edge cases. Security vulnerabilities emerge because the agent followed patterns from training data without understanding the threat model.

Confidence scoring

Each finding receives a confidence score from 0 to 100:

Score   Meaning
0       Almost certainly a false positive
25      Might be real, low confidence
50      Moderate confidence, likely real but minor
75      High confidence, real and important
100     Absolute certainty

The default threshold is 80. Only issues scoring 80 or higher appear in the output. Lower-confidence findings are discarded to reduce noise.

Adjust the threshold by editing commands/code-review.md in your project's .claude directory. Lower thresholds catch more issues but introduce more false positives. Higher thresholds reduce noise but may miss real problems.
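
Because the command file is a markdown prompt, the threshold is written as an instruction rather than a setting. A hypothetical excerpt from .claude/commands/code-review.md, edited to lower the cutoff to 70 (the wording in your generated file will differ):

## Reporting threshold

Only report issues with a confidence score of 70 or higher.
Discard findings below this threshold rather than listing them.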

What it flags

The review agents focus on high-signal issues:

  • Code that will fail to compile or parse (syntax errors, type errors, missing imports)
  • Code that will produce incorrect results (clear logic errors)
  • CLAUDE.md violations with exact rule citations
  • Security vulnerabilities in the changed code

The agents explicitly avoid:

  • Style or quality concerns
  • Potential issues that depend on specific inputs
  • Nitpicks that linters will catch
  • Pre-existing issues not introduced in the PR

The output format links directly to code locations:

## Code review

Found 2 issues:

1. Missing error handling for OAuth callback (CLAUDE.md says "Always handle OAuth errors")
https://github.com/owner/repo/blob/abc123/src/auth.ts#L67-L72

2. Memory leak: OAuth state not cleaned up (missing cleanup in finally block)
https://github.com/owner/repo/blob/abc123/src/auth.ts#L88-L95

Each issue includes context about why it matters. For CLAUDE.md violations, the exact rule is cited. For bugs and security issues, the failure mode is described.

Codex code review

Codex integrates directly with GitHub pull requests through comment-based invocation.

@codex review

Post this comment on any PR to request a review. Codex acknowledges the request with a 👀 reaction, then posts a standard GitHub code review.

Add focus areas to direct attention:

@codex review for security regressions
@codex review for performance implications
@codex review for API compatibility

The focus parameter narrows what the review examines. Broader reviews without a focus parameter check for general issues.

Priority-based filtering

Codex categorizes issues by priority:

  • P0: Critical issues that block the PR
  • P1: High-priority issues that should be addressed
  • P2 and below: Lower priority, not shown by default

Only P0 and P1 issues appear in GitHub reviews. This aggressive filtering reduces noise but may hide legitimate concerns that do not reach the P1 threshold.

Configure what reaches P1 through AGENTS.md:

## Review guidelines

- Treat documentation typos as P1
- Flag all authentication issues as P0
- Mark deprecated API usage as P1

Codex reads AGENTS.md files from the repository root down to the changed file's directory. The nearest AGENTS.md file takes precedence, allowing different review rules for different parts of the codebase.
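
For example, given a layout like the following (hypothetical paths), changes under services/payments/ are reviewed against the nested rules, while the rest of the repository falls back to the root file:

AGENTS.md                    # Repository-wide review guidelines
services/
  payments/
    AGENTS.md                # Stricter payment rules; takes precedence here
    charge.ts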

Output format

Codex produces standard GitHub code reviews:

  • Inline comments on specific lines
  • A review summary at the end
  • Suggested changes that can be applied with one click

The format matches human reviewer output, so teams see a consistent interface whether the reviewer was human or agent.

GitHub Copilot code review

GitHub Copilot's code review feature holds approximately 67% market share among AI code review tools. Most developers will encounter Copilot reviews even if their primary tool is something else.

Invocation methods

Platform         How to invoke
GitHub.com       Open Reviewers menu, select "Copilot"
VS Code          Click Copilot Code Review button in Source Control view
Visual Studio    Click "Review changes with Copilot" in Git Changes
JetBrains IDEs   Click "Copilot: Review Code Changes"
Xcode            Open Copilot Chat, click Code Review button
GitHub Mobile    Expand Reviews section, add Copilot as reviewer

Copilot is available wherever developers already work.

Agentic features

Copilot's October 2025 update introduced agentic capabilities:

  • Tool calling: The review agent actively gathers project context (directory structure, related files, reference implementations)
  • Static analysis integration: CodeQL, ESLint, and PMD findings are incorporated into the review
  • Agentic handoff: Mention @copilot in a PR comment to have fixes automatically applied via stacked PRs

Tool calling means Copilot reviews are not limited to the diff. The agent can examine related code, check how similar patterns are implemented elsewhere, and verify consistency with project conventions.
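
The handoff itself is an ordinary PR comment that mentions the agent and names the fix; a hypothetical example:

@copilot Fix the missing error handling in the OAuth callback flagged above and add a regression test.

Copilot then applies the change via a stacked PR that goes through normal review.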

Configuration

Create .github/copilot-instructions.md to provide repository-wide instructions:

# Copilot Review Instructions

Focus on:
- Memory safety in C++ code
- Thread safety in concurrent operations
- Error handling completeness

Skip reviews for:
- Auto-generated files in /generated
- Third-party code in /vendor

For path-specific rules, create files in .github/instructions/:

.github/instructions/api-handlers.instructions.md
.github/instructions/database-operations.instructions.md

Each instruction file applies to code matching its pattern.
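
The pattern itself is declared in the file's frontmatter as an applyTo glob (the key name per current Copilot documentation; verify against your version). A hypothetical api-handlers.instructions.md:

---
applyTo: "src/api/**/*.ts"
---

- Validate request input in every handler before use
- Return typed error responses instead of throwing raw exceptions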

Output characteristics

Copilot reviews post as "Comment" reviews; they do not approve or request changes. This distinction matters for merge rules:

  • Copilot reviews do not count toward required approvals
  • Copilot reviews do not block merging
  • Human review remains required for merge eligibility

Suggested code changes appear as committable suggestions. Clicking "Apply suggestion" commits the change directly. Suggestions of six lines or fewer can be applied with one click.
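
These committable suggestions are standard GitHub suggestion blocks inside the review comment. An illustrative example with hypothetical code:

```suggestion
if (!callbackUser) {
  throw new Error("OAuth callback returned no user");
}
```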

When AI review augments human review

AI code review works best as a first pass, not a final verdict. The Qodo 2025 State of AI Code Quality report found that 80% of pull requests with AI review enabled require no human comments. Not because AI caught everything, but because AI caught the obvious issues early.

AI handles well:

  • Syntax errors and type mismatches
  • Style inconsistencies against documented standards
  • Missing null checks and error handling
  • Test coverage gaps
  • Known vulnerability patterns (SQL injection, XSS, hardcoded secrets)
  • CLAUDE.md and AGENTS.md compliance

Humans handle well:

  • Business logic validation
  • Architecture and design decisions
  • Security review for authentication, authorization, and data handling
  • Performance implications at scale
  • Cross-system integration correctness
  • Whether the code actually solves the intended problem

AI excels at pattern matching against known rules. Humans excel at understanding intent and context.

When AI review fails

AI review introduces specific failure modes that differ from human review failures.

Confirmation bias when reviewing its own output: If the same agent that generated the code also reviews it, the review becomes a rubber stamp. The agent anchors to its original decisions and fails to recognize its own mistakes. Separate generation and review agents architecturally.

Noise from false positives: The most common complaint about AI code review is noise. Low-confidence findings flood the review with issues that are not real problems. Use confidence thresholds (Claude Code) or priority filtering (Codex) to maintain signal quality.

Missing context: AI reviewers see the diff and, at best, some surrounding code. They do not understand why the code was written, what alternatives were considered, or what constraints apply. Business logic errors that violate unstated assumptions slip through.

Circular validation loops: When developers use AI to write code, then AI to review it, then AI to fix the review comments, quality degrades with each cycle. Human involvement at the review stage breaks this loop.

Combining tools effectively

Different tools catch different issues. A layered approach:

Layer 1: Automated CI checks. Linting, formatting, and static analysis run automatically on every commit. These catch issues before human or AI review sees them. Fail the build for severe violations.
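
A minimal sketch of what this layer might run, assuming the project defines npm scripts named lint, format:check, and typecheck (hypothetical names; adjust to your toolchain):

# Hypothetical CI step; any non-zero exit fails the build
npm run lint && npm run format:check && npm run typecheck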

Layer 2: AI pre-review. Run AI code review before requesting human review. Authors address AI findings before the PR enters the review queue. This catches obvious issues early and reduces human review burden.

Layer 3: Human review. Human reviewers focus on what AI cannot verify: business logic, architecture, security implications, and integration correctness. The checklist from Page 4 guides this focused review.

Layer 4: Domain expert review (when needed). Security-sensitive changes go to security-aware reviewers. Architecture changes go to architects. Not every PR needs this layer, but Tier 1 code always does.

The point is not replacing human review with AI review. The point is allocating human attention where it provides the most value.

Setting up AI pre-review

For Claude Code:

  1. Ensure the gh CLI is installed and authenticated
  2. Run /code-review on your feature branch
  3. Address findings before requesting human review
  4. Use /code-review --comment to post findings to the PR for visibility
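
A quick sanity check for steps 1 and 2 above, assuming the gh CLI is already installed:

gh auth status     # Confirm gh is authenticated for your repository host
# Then, inside Claude Code on the feature branch:
/code-review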

For Codex:

  1. Configure Codex cloud integration with your GitHub repository
  2. Create AGENTS.md with review guidelines
  3. Comment @codex review on PRs or enable automatic reviews in settings
  4. Review and resolve findings before human review

For GitHub Copilot:

  1. Enable Copilot in your organization's GitHub settings
  2. Create .github/copilot-instructions.md with review focus areas
  3. Add Copilot as a reviewer from the Reviewers menu
  4. Apply suggestions or address comments before human review

All three tools can coexist. Some teams use Copilot for initial review (wide availability), Claude Code for deep bug analysis (higher-fidelity findings), and human review for final approval.

The efficiency calculation

The METR study found a paradox: developers believe AI makes them 20% faster but measure 19% slower. AI review exhibits a similar pattern.

AI review speeds up the feedback cycle. Findings arrive faster than waiting for human reviewers. Authors can iterate before the PR sits in a review queue.

But AI review also adds a step. If AI review catches nothing new, it was overhead. If AI review produces false positives, it wastes author time on non-issues.

The efficiency gain depends on calibration:

  • Confidence thresholds tuned to minimize false positives
  • CLAUDE.md and AGENTS.md configured to match project standards
  • Human review focused on high-value verification rather than redundant checks

When calibrated correctly, teams report that 80% of PRs need no human comments after AI review. The 20% that do need comments are the PRs where human judgment matters most.
