
What You’ll Build

Build an AI-powered code review bot that:
  • Reads code changes locally from your git repository
  • Analyzes code for security vulnerabilities, code smells, and style issues
  • Posts detailed review comments to GitHub (both general feedback and line-specific suggestions)
  • Automatically requests changes for issues or approves clean PRs
This tutorial demonstrates practical AI-driven code analysis using Metorial’s GitHub integration.
What you’ll learn:
  • Deploying the GitHub MCP server
  • Setting up OAuth for GitHub repositories
  • Creating an AI agent that reviews code
  • Posting GitHub review comments programmatically
Before you begin:
Time to complete: 10-15 minutes

Prerequisites

Before building the code review bot, ensure you have:
  1. Metorial setup:
    • Active Metorial account at app.metorial.com
    • Project created in your organization
    • Metorial API key (generate in Dashboard → Home → Connect to Metorial)
  2. GitHub repository:
    • Local clone of your GitHub repository
    • Repository with write access on GitHub
    • At least one pull request for testing
    • Admin access to authorize OAuth
  3. AI provider:
    • Anthropic API key (Claude Sonnet 4 or newer recommended for code analysis)
  4. Development environment:
    • Node.js 18+ (TypeScript) or Python 3.9+ installed
    • Basic knowledge of async/await patterns

Architecture Overview

The code review bot workflow:
  1. Input: User provides local repository path, branch names, and PR number
  2. Fetch Code Changes: Bot reads git diff locally from your repository
  3. AI Analysis: Claude analyzes the diff for:
    • Security vulnerabilities (SQL injection, XSS, exposed secrets)
    • Code smells (duplication, long functions, complexity)
    • Style violations (naming, formatting, consistency)
    • Best practices (error handling, documentation, testing)
  4. Post Review: Bot posts the review to GitHub via the GitHub MCP server:
    • Overall summary comment
    • Line-specific feedback on issues
    • Review decision (approve or request changes)
Tools used: Local git (read code changes) + AI Model (code analysis) + GitHub MCP Server (post reviews)

Step 1: Deploy GitHub MCP Server

Deploy the GitHub MCP server from Metorial’s catalog to enable your bot to interact with GitHub.

Navigate to Server Catalog

In the Metorial Dashboard, go to Servers and search for “GitHub”.

Deploy GitHub Server

Click the GitHub server, then click Deploy Server to create a server deployment. Give your deployment a descriptive name like “Code Review Bot GitHub”.

Note Your Deployment ID

After deployment, copy your Server Deployment ID from the deployment page. You’ll need this for OAuth setup and in your bot code.
Save your GitHub deployment ID—you’ll need it for OAuth setup (Step 2) and in your bot code (Step 3).

Step 2: Set Up OAuth Authentication

Your code review bot needs permission to access your GitHub repositories.

Install Dependencies

Install the Metorial SDK and Anthropic:
npm install metorial @metorial/anthropic @anthropic-ai/sdk

Create OAuth Session

Run this code to generate the GitHub OAuth URL:
import { Metorial } from 'metorial';

const metorial = new Metorial({
  apiKey: "YOUR-METORIAL-API-KEY"
});

async function setupGitHubOAuth() {
  const githubOAuth = await metorial.oauth.sessions.create({
    serverDeploymentId: 'YOUR-GITHUB-DEPLOYMENT-ID'
  });

  console.log('Authorize GitHub here:', githubOAuth.url);
  console.log('OAuth Session ID:', githubOAuth.id);

  // Wait for authorization
  await metorial.oauth.waitForCompletion([githubOAuth]);
  console.log('✓ GitHub authorized!');

  // Save githubOAuth.id for future use
  return githubOAuth.id;
}

setupGitHubOAuth();

Authorize in Browser

  1. Open the printed OAuth URL in your browser
  2. Sign in to GitHub if needed
  3. Review and approve the permissions (the bot needs repo scope to read PRs and post comments)
  4. You’ll be redirected to your callback URL (or see a confirmation page)

Store OAuth Session ID

Save the OAuth session ID securely. You’ll reuse it for all future bot operations without re-authorizing. For production apps, store OAuth session IDs in your database per user/repository.
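A minimal sketch of persisting the session ID between runs, assuming a local JSON file (the file name and shape here are illustrative; a production app would key session IDs by user or repository in a database):

```typescript
import * as fs from 'fs';

// Illustrative file name — not a Metorial convention.
const SESSION_FILE = '.metorial-oauth.json';

function saveSessionId(sessionId: string): void {
  // Persist the session ID so later runs skip the OAuth flow.
  fs.writeFileSync(SESSION_FILE, JSON.stringify({ githubOAuthSessionId: sessionId }), 'utf-8');
}

function loadSessionId(): string | undefined {
  // Returns undefined when no session has been stored yet.
  if (!fs.existsSync(SESSION_FILE)) return undefined;
  return JSON.parse(fs.readFileSync(SESSION_FILE, 'utf-8')).githubOAuthSessionId;
}

saveSessionId('oauth-session-123');
console.log(loadSessionId()); // → oauth-session-123
```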
Required OAuth Scopes: The GitHub MCP server requires the repo scope for posting reviews, which provides:
  • Write access to post review comments and line-specific feedback
  • Permission to approve PRs or request changes on your repositories
Note: If you plan to implement webhook-based automation (mentioned in Production Considerations), you may need additional scopes like admin:repo_hook. The basic bot functionality shown in this tutorial only requires repo. The required scopes are automatically requested when you authorize via the OAuth URL.

Step 3: Build the Code Review Bot

Create the main bot that analyzes pull requests and posts review comments.
import { Metorial } from 'metorial';
import { metorialAnthropic } from '@metorial/anthropic';
import Anthropic from '@anthropic-ai/sdk';
import { execSync } from 'child_process';
import * as path from 'path';
import * as fs from 'fs';

const metorial = new Metorial({
  apiKey: "YOUR-METORIAL-API-KEY"
});

const anthropic = new Anthropic({
  apiKey: "YOUR_ANTHROPIC_API_KEY"
});

// GitHub integration credentials
const GITHUB_DEPLOYMENT_ID = "GITHUB_DEPLOYMENT_ID";
const GITHUB_OAUTH_SESSION_ID = "GITHUB_OAUTH_SESSION_ID";

async function reviewPullRequest(
  repoPath: string,
  owner: string,
  repo: string,
  branchName: string,
  prNumber: number,
  baseBranch: string = 'main'
) {
  console.log(`Starting review of PR #${prNumber} (${branchName})`);

  // Validate repository path
  if (!fs.existsSync(repoPath)) {
    throw new Error(`Repository path does not exist: ${repoPath}`);
  }

  const gitDir = path.join(repoPath, '.git');
  if (!fs.existsSync(gitDir)) {
    throw new Error(`Not a git repository: ${repoPath}`);
  }

  // Get diff from local repository
  console.log(`Fetching diff: ${baseBranch}...${branchName}`);
  let diffContent: string;
  try {
    // Run git in the repo directory; raise maxBuffer for large diffs
    diffContent = execSync(`git diff ${baseBranch}...${branchName}`, {
      cwd: repoPath,
      encoding: 'utf-8',
      maxBuffer: 10 * 1024 * 1024
    });
  } catch (error) {
    throw new Error(`Git diff failed: ${error}`);
  }

  if (!diffContent.trim()) {
    console.log('No changes found in diff');
    return;
  }

  console.log(`Analyzing ${diffContent.length} characters of diff`);

  // Create Metorial session with GitHub integration
  await metorial.withProviderSession(
    metorialAnthropic,
    {
      serverDeployments: [
        {
          serverDeploymentId: GITHUB_DEPLOYMENT_ID,
          oauthSessionId: GITHUB_OAUTH_SESSION_ID
        }
      ],
      streaming: false
    },
    async ({ tools, callTools }) => {
      // Prepare review prompt with diff content
      const messages: Anthropic.MessageParam[] = [
        {
          role: 'user',
          content: `You are an expert code reviewer. You MUST post a review to GitHub after analyzing the code.

Repository: ${owner}/${repo}
Branch: ${branchName}
PR Number: #${prNumber}

DIFF CONTENT:
\`\`\`diff
${diffContent}
\`\`\`

CRITICAL: You MUST call the create_pull_request_review tool. Do NOT just provide analysis - you must POST the review to GitHub.

INSTRUCTIONS:
1. Analyze the diff for:
   - Security vulnerabilities (SQL injection, XSS, exposed secrets, unsafe operations)
   - Code quality issues (duplication, complexity, error handling)
   - Style problems (naming, formatting, consistency)
   - Best practices (documentation, testing, edge cases)

2. IMMEDIATELY call create_pull_request_review tool with:
   - owner: "${owner}"
   - repo: "${repo}"
   - pull_number: ${prNumber}
   - body: Your summary of findings
   - event: "APPROVE" (if no issues) or "REQUEST_CHANGES" (if issues found)
   - comments: Array of line-specific comments if you found issues

You MUST use the tool to post the review. START NOW.`
        }
      ];

      // Send initial request to Claude
      let response = await anthropic.messages.create({
        model: 'claude-sonnet-4-5',
        max_tokens: 8192,
        tools,
        messages
      });

      // Handle tool calls in agentic loop
      while (response.stop_reason === 'tool_use') {
        const toolUseBlocks = response.content.filter(
          (block): block is Anthropic.ToolUseBlock => block.type === 'tool_use'
        );

        console.log(`Executing ${toolUseBlocks.length} tool(s)...`);

        // Execute tools via Metorial
        const toolResults = await callTools(toolUseBlocks);

        // Add assistant response and tool results to conversation
        messages.push({ role: 'assistant', content: response.content });
        messages.push(toolResults);

        // Continue conversation
        response = await anthropic.messages.create({
          model: 'claude-sonnet-4-5',
          max_tokens: 8192,
          messages,
          tools
        });
      }

      // Get final summary
      const finalText = response.content
        .filter((block): block is Anthropic.TextBlock => block.type === 'text')
        .map(block => block.text)
        .join('\n');

      console.log(`Review completed: ${finalText}`);
    }
  );
}
What this code does:
  1. Reads git diff locally from your repository between the base and feature branches
  2. Creates a provider session with GitHub MCP server for posting reviews
  3. Sends the diff to Claude with analysis instructions
  4. AI analyzes the code and identifies issues
  5. AI posts review to GitHub using create_pull_request_review tool with findings
  6. Handles multi-step workflow through agentic loop until review is complete
  7. Handles errors gracefully: If tool calls fail, the AI receives error messages and can retry or adjust its approach
This uses Claude’s agentic capabilities—the AI decides which tools to call and when. You don’t need to write explicit logic for fetching files, analyzing code, or posting comments.

Step 4: Test with a Security Issue

Let’s test the bot with a PR containing a security vulnerability.
Scenario: Create a test PR with a SQL injection vulnerability.
Test PR content (example):
function getUserData(userId) {
  const query = `SELECT * FROM users WHERE id = ${userId}`;
  return database.query(query);
}
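For reference, the fix the bot should recommend replaces string interpolation with a bound parameter. This is a sketch: `db` is a hypothetical client stand-in, and placeholder syntax (`?` vs `$1`) varies by driver:

```typescript
// The safe version the review should suggest: user input is bound
// as a parameter, never spliced into the SQL string.
function getUserDataSafe(
  db: { query: (sql: string, params: unknown[]) => unknown },
  userId: string
) {
  return db.query('SELECT * FROM users WHERE id = ?', [userId]);
}

// Demonstration with a stub client that records what it receives:
const calls: Array<{ sql: string; params: unknown[] }> = [];
const stubDb = {
  query: (sql: string, params: unknown[]) => {
    calls.push({ sql, params });
    return [];
  }
};
getUserDataSafe(stubDb, '1; DROP TABLE users');
console.log(calls[0].sql);    // the SQL text stays constant
console.log(calls[0].params); // malicious input is just a bound value
```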
Run the bot:
reviewPullRequest(
  '/path/to/your/local/repo',
  'your-username',
  'your-repo',
  'feature-branch',
  123
);
Expected behavior:
  1. Bot reads git diff from local repository for PR #123
  2. AI detects SQL injection vulnerability in the query string
  3. Bot posts review with:
    • General comment: “Found 1 security vulnerability that needs immediate attention.”
    • Line-specific comment on the SQL query line: “🚨 SQL injection vulnerability detected. User input is directly interpolated into the query. Use parameterized queries instead: SELECT * FROM users WHERE id = ? with bound parameters.”
  4. Bot submits review with REQUEST_CHANGES status
The bot workflow:
  1. Reads git diff from your local repository
  2. Sends diff content to AI for analysis
  3. AI identifies the security issue in the code
  4. AI calls create_pull_request_review tool to post review to GitHub with REQUEST_CHANGES status and detailed comments
The AI autonomously analyzes code and posts reviews—no manual orchestration needed!

Step 5: Test with Clean Code

Test the bot with a clean PR to verify the approval workflow.
Scenario: PR with well-written code.
Test PR content (example):
/**
 * Returns true when the given string looks like a valid email address.
 */
export function isValidEmail(email: string): boolean {
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return emailRegex.test(email);
}

/**
 * Escapes HTML special characters to prevent XSS when rendering input.
 */
export function sanitizeInput(input: string): string {
  return input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}
Run the bot:
reviewPullRequest(
  '/path/to/your/local/repo',
  'your-username',
  'your-repo',
  'feature-branch',
  124
);
Expected behavior:
  1. Bot reads git diff from local repository for PR #124
  2. AI finds:
    • ✓ Proper documentation with JSDoc comments
    • ✓ Clear function names following conventions
    • ✓ Security-conscious implementation (XSS prevention)
    • ✓ No code smells or style violations
  3. Bot posts review comment: “Code looks excellent! Clean implementation with proper documentation, security considerations, and clear naming. The XSS sanitization is thorough and the email validation regex is appropriate.”
  4. Bot submits review with APPROVE status

Troubleshooting

Common issues and solutions when building your code review bot:
Issue: The bot doesn’t post a review
Possible causes:
  • OAuth session expired or invalid
  • PR doesn’t exist or has no changed files
  • AI model didn’t call the review tools
Solutions:
  1. Verify your OAuth session is active: re-run the OAuth setup if needed
  2. Check the PR exists and has commits: gh pr view <pr-number>
  3. Increase max_tokens in the AI request (try 8192 instead of 4096)
  4. Review Metorial dashboard logs to see which tools were called
  5. Ensure your prompt explicitly instructs the AI to post reviews (see code examples)
Issue: Tool calls fail
Possible causes:
  • Incorrect tool name in code
  • GitHub MCP server deployment not active
  • OAuth permissions insufficient
Solutions:
  1. Verify you’re using the correct tool name: create_pull_request_review
  2. Check your GitHub server deployment is running in Metorial dashboard
  3. Confirm OAuth session is associated with the correct deployment ID
  4. Verify the repo scope was granted during OAuth authorization
Issue: Review quality is poor
Possible causes:
  • AI prompt lacks specific guidelines
  • Code context is truncated
  • Model capabilities insufficient
Solutions:
  1. Add specific coding standards to your prompt (see “Custom Review Rules” in Advanced Customization)
  2. Increase max_tokens to allow longer analysis (8192-16384 for large PRs)
  3. Use Claude Sonnet 4 or newer for better code understanding
  4. Provide example issues in the prompt to guide the AI’s analysis style
Issue: OAuth authorization fails
Possible causes:
  • Callback URL mismatch
  • Missing repository access
  • GitHub App permissions not configured
Solutions:
  1. Verify your callback URL matches exactly (including http/https)
  2. Ensure you have admin access to the repository you’re testing with
  3. Check the OAuth authorization screen shows the repo scope
  4. Try revoking and re-authorizing the OAuth connection
  5. Confirm your Metorial account and GitHub account are properly linked
Issue: Large PRs fail or time out
Possible causes:
  • Too many changed files
  • AI context window exceeded
  • API rate limits
Solutions:
  1. Filter files by extension or directory (modify the prompt to focus on specific file types)
  2. Implement batching: review files in chunks rather than all at once
  3. Set a maximum file size limit (skip files >1000 lines)
  4. Use streaming responses to handle longer processing times
  5. For PRs with >20 files, consider the performance optimizations in Production Considerations
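Solution 1 (filtering by extension) can be implemented before the diff ever reaches the AI. A sketch, assuming a unified diff where each file’s section starts with a `diff --git` header:

```typescript
// Keep only the per-file diff chunks whose path matches one of the
// given extensions, so large PRs fit the model's context window.
function filterDiffByExtension(diff: string, extensions: string[]): string {
  // Split at each file header without consuming it (zero-width lookahead).
  const chunks = diff.split(/^(?=diff --git )/m);
  return chunks
    .filter(chunk => {
      const match = chunk.match(/^diff --git a\/(\S+)/);
      return match !== null && extensions.some(ext => match[1].endsWith(ext));
    })
    .join('');
}

const sample =
  'diff --git a/src/app.ts b/src/app.ts\n+const x = 1;\n' +
  'diff --git a/README.md b/README.md\n+docs\n';
console.log(filterDiffByExtension(sample, ['.ts'])); // only the .ts chunk survives
```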
Issue: GitHub API rate limits
GitHub API limits: 5,000 requests/hour for authenticated apps.
Solutions:
  1. Implement exponential backoff when rate limit errors occur
  2. Cache PR data when running multiple reviews on the same PR
  3. Use conditional requests with ETags to avoid fetching unchanged data
  4. For production deployment, consider GitHub Enterprise with higher limits
  5. Track your usage in the Metorial dashboard to identify bottlenecks
Prevention: Queue bot reviews rather than triggering all at once, especially during peak hours.
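A minimal exponential-backoff wrapper along the lines of solution 1 might look like this (a sketch; real code would check the error for a rate-limit status code rather than retrying every failure):

```typescript
// Retry an async call, doubling the delay after each failure.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error;
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Demonstration with a call that fails twice, then succeeds:
let attempts = 0;
const demo = withBackoff(async () => {
  attempts++;
  if (attempts < 3) throw new Error('rate limited');
  return 'ok';
}, 5, 1);
demo.then(result => console.log(result, attempts)); // → ok 3
```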
If you encounter errors not covered here, check the Metorial dashboard logs (Monitoring section) to see detailed tool execution traces and error messages. You can also inspect the actual API requests being made.

Advanced Customization

Enhance your code review bot with these customizations:

Custom Review Rules

Add company-specific coding standards to the AI prompt (e.g., “all public functions must have JSDoc comments”, “use async/await instead of promises”).

Language-Specific Analysis

Customize prompts for different languages:
  • Python: PEP 8 compliance, type hints
  • TypeScript: strict mode, interface usage
  • JavaScript: ESLint rules, modern syntax

Review Severity Levels

Categorize issues as CRITICAL, WARNING, or SUGGESTION and adjust the review status accordingly. Only block PRs for critical security issues.
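One way to sketch this policy, assuming severity labels produced by your prompt (the labels are a convention of this tutorial, not part of the GitHub API; only APPROVE, COMMENT, and REQUEST_CHANGES are real review events):

```typescript
type Severity = 'CRITICAL' | 'WARNING' | 'SUGGESTION';

// Map the severities found in a PR to a single review event:
// block only on critical issues, surface everything else as comments.
function reviewEventFor(issues: Severity[]): 'APPROVE' | 'COMMENT' | 'REQUEST_CHANGES' {
  if (issues.includes('CRITICAL')) return 'REQUEST_CHANGES';
  if (issues.length > 0) return 'COMMENT';
  return 'APPROVE';
}

console.log(reviewEventFor(['WARNING', 'CRITICAL'])); // → REQUEST_CHANGES
console.log(reviewEventFor(['SUGGESTION']));          // → COMMENT
console.log(reviewEventFor([]));                      // → APPROVE
```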

Automatic Fixes

Generate suggested code fixes for common issues. The AI can propose corrections in review comments (e.g., reformatted code, added error handling).
Example: Adding custom rules Update the AI prompt with your standards:
const customPrompt = `You are an expert code reviewer following these standards:

Company Coding Standards:
- All functions must have JSDoc comments
- Use async/await instead of .then() promises
- Maximum function length: 50 lines
- All API responses must include error handling
- Never use "any" type in TypeScript

Review pull request #${prNumber}...`;

Production Considerations

Before deploying to production:
  1. Webhook Integration: Set up GitHub webhooks to trigger reviews automatically when PRs are opened or updated. You’ll need the admin:repo_hook OAuth scope and a webhook endpoint that receives GitHub events. See GitHub’s Webhook documentation for implementation details.
  2. Rate Limiting: Implement rate limiting to avoid hitting GitHub API limits (5000 requests/hour for authenticated apps)
  3. Concurrency: Queue reviews to handle multiple PRs simultaneously without overwhelming the AI API
  4. Error Handling: Add try/catch blocks and retry logic for API failures
  5. Review History: Store review results in a database for analytics and team insights
  6. Configurable Rules: Allow teams to customize review criteria per repository via config files
  7. Cost Management: Monitor AI API usage and token costs, especially for large PRs with many files
  8. Privacy: Ensure sensitive code doesn’t get logged or sent to unauthorized services
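Item 6 might be sketched as a per-repository config file merged over defaults and rendered into the review prompt. The `.codereview.json` name and schema here are assumptions, not a Metorial convention:

```typescript
import * as fs from 'fs';

interface ReviewConfig {
  maxFunctionLength: number;
  requireJsDoc: boolean;
  blockedPatterns: string[];
}

// Load per-repo overrides; fall back to sensible defaults.
function loadReviewConfig(repoPath: string): ReviewConfig {
  const defaults: ReviewConfig = { maxFunctionLength: 50, requireJsDoc: true, blockedPatterns: [] };
  const configPath = `${repoPath}/.codereview.json`;
  if (!fs.existsSync(configPath)) return defaults;
  return { ...defaults, ...JSON.parse(fs.readFileSync(configPath, 'utf-8')) };
}

// Render the config as prompt rules for the AI reviewer.
function rulesPrompt(config: ReviewConfig): string {
  return [
    `- Maximum function length: ${config.maxFunctionLength} lines`,
    config.requireJsDoc ? '- All functions must have JSDoc comments' : '',
    ...config.blockedPatterns.map(p => `- Flag any use of: ${p}`)
  ].filter(Boolean).join('\n');
}

console.log(rulesPrompt(loadReviewConfig('/nonexistent'))); // prints the default rules
```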
Performance Tip: For large PRs (>20 files), consider:
  • Reviewing only changed lines instead of full files
  • Batching file reviews to reduce token usage
  • Implementing a maximum file size limit
  • Allowing users to request specific file reviews
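The batching idea above can be sketched as grouping per-file diff chunks under a character budget (roughly four characters per token is a common rule of thumb; the budget here is illustrative):

```typescript
// Group diff chunks into batches so each AI request stays under a
// character budget; each batch becomes one review request.
function batchChunks(chunks: string[], maxCharsPerBatch = 32000): string[][] {
  const batches: string[][] = [];
  let current: string[] = [];
  let size = 0;
  for (const chunk of chunks) {
    // Start a new batch when adding this chunk would exceed the budget.
    if (size + chunk.length > maxCharsPerBatch && current.length > 0) {
      batches.push(current);
      current = [];
      size = 0;
    }
    current.push(chunk);
    size += chunk.length;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

const batches = batchChunks(['a'.repeat(20000), 'b'.repeat(20000), 'c'.repeat(5000)]);
console.log(batches.length); // → 2
```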

What’s Next?

Congratulations! You’ve built an AI-powered code review bot that analyzes pull requests for security issues, code quality, and best practices.


Need help? Email us at support@metorial.com.