
🤖 Building an AI Code Review Agent for GitHub

The complete guide to creating a GitHub PR Review Assistant with Amazon Bedrock AgentCore

🎯 Overview & Architecture

This tutorial will guide you through building an AI Code Review Agent that automatically analyzes GitHub pull requests using Amazon Bedrock AgentCore and Claude by Anthropic.

🔍 Security Analysis

Detects hardcoded credentials, injection vulnerabilities, and authentication issues

📊 Code Quality

Assesses readability, complexity, naming conventions, and maintainability

⚡ Performance

Identifies bottlenecks, inefficiencies, and optimization opportunities

✅ Best Practices

Ensures adherence to framework conventions and industry standards

Architecture Components

  • Amazon Bedrock AgentCore Runtime: Hosts the AI agent with code execution capabilities
  • Claude 3.5 Haiku: Provides intelligent code analysis
  • Memory Integration: Maintains context across reviews
  • GitHub Webhook: Automates the PR review process

📋 Prerequisites

Required Tools & Accounts

  • AWS Account with Bedrock access and Claude model access enabled
  • Python 3.8+ installed
  • GitHub account and repository
  • AWS CLI configured
  • AgentCore CLI installed
⚠️ Important: Ensure you have access to AWS Bedrock and Claude models in your region. Some regions may require requesting access.
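Before building anything, it can save time to confirm that Claude models are actually visible to your account in the target region. A minimal sketch using boto3 (the model ID printed here should include the MODEL_ID used later in this tutorial):

# check_model_access.py -- quick sanity check that Claude models are available in your region
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

# List Anthropic models visible in this region; Claude 3.5 Haiku should appear in the output.
models = bedrock.list_foundation_models(byProvider="Anthropic")
for summary in models["modelSummaries"]:
    print(summary["modelId"])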

🛠️ Environment Setup

1

Install Dependencies

Create a new project directory and install required packages:

mkdir pr-review-agent
cd pr-review-agent
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install bedrock-agentcore strands-agents strands-agents-tools python-dotenv

2

Create Requirements File

# requirements.txt
bedrock-agentcore
strands-agents
strands-agents-tools
python-dotenv
requests

3

Environment Configuration

Create a .env file with your AWS credentials:

# .env
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
REGION_ID=us-west-2
MODEL_ID=anthropic.claude-3-5-haiku-20241022-v1:0
GITHUB_TOKEN=your_github_token
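Before moving on, it's worth confirming that python-dotenv actually picks these values up. A small check script (variable names taken from the .env file above):

# verify_env.py -- confirm the .env values load before wiring up the agent
import os
from dotenv import load_dotenv

load_dotenv()

for key in ("AWS_ACCESS_KEY_ID", "REGION_ID", "MODEL_ID", "GITHUB_TOKEN"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")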

🤖 Building the Agent

1

Core Agent Structure

Create the main agent file pr_review_agent.py:

import os
from dotenv import load_dotenv
from strands import Agent
from strands_tools.code_interpreter import AgentCoreCodeInterpreter
from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig
from bedrock_agentcore.memory.integrations.strands.session_manager import AgentCoreMemorySessionManager
from bedrock_agentcore.runtime import BedrockAgentCoreApp

load_dotenv()

# Environment variables
MEMORY_ID = os.getenv("MEMORY_ID")
MODEL_ID = os.getenv("MODEL_ID")
REGION_ID = os.getenv("REGION_ID")

app = BedrockAgentCoreApp()

2

Agent Entry Point

Define the main function that handles both PR reviews and general queries:

@app.entrypoint
def invoke(payload, context):
    session_id = getattr(context, "session_id", None)
    session_manager = None

    # Setup memory if available
    if MEMORY_ID:
        memory_config = AgentCoreMemoryConfig(
            memory_id=MEMORY_ID,
            session_id=session_id or "default",
            actor_id="pr_reviewer",
            region=REGION_ID
        )
        session_manager = AgentCoreMemorySessionManager(memory_config, REGION_ID)

    # Setup code interpreter
    code_interpreter = AgentCoreCodeInterpreter(
        region=REGION_ID,
        session_name=session_id,
        auto_create=True
    )

3

PR Review Logic

Add intelligent PR analysis with specialized prompts:

    # Check if this is a PR review request
    if payload.get("pr_data"):
        system_prompt = """You are an expert code reviewer. Analyze pull requests for:

1. **Security Issues**: Hardcoded credentials, injection vulnerabilities, auth problems
2. **Code Quality**: Readability, complexity, naming conventions
3. **Performance**: Bottlenecks, inefficiencies, optimization opportunities
4. **Best Practices**: Framework conventions, industry standards

Format your response as:
## PR Review Summary
- Overall assessment
- Key concerns
- Approval recommendation

## File-by-File Analysis
### filename.ext
- Issues found
- Specific suggestions
- Code examples (if helpful)"""

        pr_data = payload.get("pr_data", {})
        pr_context = format_pr_for_analysis(pr_data)
        prompt = f"Please review this pull request:\n\n{pr_context}"
    else:
        # General assistant mode
        system_prompt = "You are a helpful coding assistant with execution capabilities."
        prompt = payload.get("prompt", "")

4

Complete Agent Implementation

    # Create the agent
    agent = Agent(
        model=MODEL_ID,
        tools=[code_interpreter.code_interpreter],
        session_manager=session_manager,
        system_prompt=system_prompt
    )

    # Get response
    results = agent(prompt)

    # Format response based on request type
    if payload.get("pr_data"):
        return {
            "review": results.message.get('content', [{}])[0].get('text', str(results)),
            "pr_title": pr_data.get('title'),
            "files_reviewed": len(pr_data.get('files', []))
        }
    else:
        return {"response": results.message.get('content', [{}])[0].get('text', str(results))}


def format_pr_for_analysis(pr_data):
    """Format PR data for AI analysis"""
    context = f"""
PR Title: {pr_data.get('title', 'N/A')}
Author: {pr_data.get('author', 'N/A')}
Description: {pr_data.get('description', 'N/A')}

Files Changed ({len(pr_data.get('files', []))} files):
"""

    for file_data in pr_data.get('files', []):
        context += f"\n--- {file_data.get('filename', 'unknown')} ---\n"
        context += file_data.get('patch', 'No diff available') + "\n"

    return context


if __name__ == "__main__":
    app.run()
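Before deploying, you can exercise the agent locally. Running python pr_review_agent.py starts the AgentCore runtime's local HTTP server; the sketch below assumes its default behavior of serving POST /invocations on port 8080 (adjust if your version differs):

# local_invoke.py -- call the locally running agent (assumes the default port 8080 and /invocations route)
import requests

payload = {"prompt": "What does this agent do?"}
response = requests.post("http://localhost:8080/invocations", json=payload, timeout=120)
print(response.json())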

🧪 Testing & Deployment

1

Create Test Script

Build a comprehensive test to validate your agent:

# test_pr_review.py
import os
from dotenv import load_dotenv

load_dotenv()

# Set environment variables
os.environ["REGION_ID"] = "us-west-2"
os.environ["MODEL_ID"] = "anthropic.claude-3-5-haiku-20241022-v1:0"

from pr_review_agent import invoke

# Sample PR with security issues
sample_pr = {
    "pr_data": {
        "title": "Add user authentication endpoint",
        "author": "developer123",
        "description": "JWT-based authentication implementation",
        "files": [{
            "filename": "auth.py",
            "patch": """@@ -0,0 +1,15 @@
+import jwt
+
+def login(username, password):
+    # TODO: Add proper validation
+    if username == 'admin' and password == 'password123':
+        token = jwt.encode({'user_id': 1}, 'secret_key', algorithm='HS256')
+        return {'token': token}
+    return {'error': 'Invalid credentials'}"""
        }]
    }
}


class Context:
    session_id = "test-session"


result = invoke(sample_pr, Context())
print("Review Result:")
print(result['review'])

2

Deploy to AWS

Use AgentCore CLI to deploy your agent:

# Deploy with environment variables
agentcore launch --auto-update-on-conflict \
  --env AWS_ACCESS_KEY_ID=your_key \
  --env AWS_SECRET_ACCESS_KEY=your_secret \
  --env REGION_ID=us-west-2 \
  --env MODEL_ID=anthropic.claude-3-5-haiku-20241022-v1:0

3

Test Cloud Deployment

# Test via AgentCore CLI
agentcore invoke '{"pr_data": {"title": "Test PR", "files": [...]}}'
✅ Success! Your agent should now provide detailed security and code quality analysis for the sample PR.

🔗 GitHub Integration

1

Create Webhook Handler

Build an AWS Lambda function to process GitHub webhooks:

# github_webhook.py
import json
import requests
import os
import boto3


def lambda_handler(event, context):
    try:
        # Parse GitHub webhook
        body = json.loads(event.get('body', '{}'))

        # Only process PR events
        if body.get('action') not in ['opened', 'synchronize']:
            return {'statusCode': 200, 'body': 'Event ignored'}

        # Extract PR data
        pr_data = extract_pr_data(body)

        # Call AgentCore
        client = boto3.client('bedrock-agentcore', region_name='us-west-2')
        response = client.invoke_agent_runtime(
            agentRuntimeArn=os.getenv('AGENTCORE_RUNTIME_ARN'),
            payload=json.dumps({'pr_data': pr_data}),
            sessionId=f"pr-{pr_data.get('pr_number')}"
        )

        # Post review to GitHub
        review_result = json.loads(response['payload'])
        post_review_comment(pr_data, review_result.get('review'))

        return {'statusCode': 200, 'body': 'Review completed'}

    except Exception as e:
        return {'statusCode': 500, 'body': json.dumps({'error': str(e)})}
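The handler above calls two helpers that aren't shown. A minimal sketch of each, assuming GitHub's standard pull_request webhook payload and the Issues comments API for posting the review (fetching full diffs would require an extra call to the PR files endpoint):

def extract_pr_data(body):
    """Pull the fields the agent expects out of the webhook payload."""
    pr = body.get('pull_request', {})
    repo = body.get('repository', {})
    return {
        'title': pr.get('title'),
        'author': pr.get('user', {}).get('login'),
        'description': pr.get('body'),
        'pr_number': pr.get('number'),
        'repo_full_name': repo.get('full_name'),
        # For real diffs, call GET /repos/{owner}/{repo}/pulls/{number}/files
        'files': []
    }


def post_review_comment(pr_data, review_text):
    """Post the review back to the PR as an issue comment."""
    url = (f"https://api.github.com/repos/{pr_data['repo_full_name']}"
           f"/issues/{pr_data['pr_number']}/comments")
    headers = {
        'Authorization': f"Bearer {os.getenv('GITHUB_TOKEN')}",
        'Accept': 'application/vnd.github+json'
    }
    requests.post(url, headers=headers, json={'body': review_text}, timeout=30)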

2

Deploy Lambda Function

Package and deploy your webhook handler:

# Create deployment package
zip -r webhook.zip github_webhook.py

# Deploy via AWS CLI
aws lambda create-function \
  --function-name pr-review-webhook \
  --runtime python3.9 \
  --role arn:aws:iam::ACCOUNT:role/lambda-execution-role \
  --handler github_webhook.lambda_handler \
  --zip-file fileb://webhook.zip
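The handler reads AGENTCORE_RUNTIME_ARN from the function's environment (and the comment-posting helper reads GITHUB_TOKEN), which the create-function call above doesn't set. One way to add them afterwards, sketched with boto3 and placeholder values (copy the real runtime ARN from your agentcore launch output):

# set_webhook_env.py -- placeholder values; replace with your real runtime ARN and token
import boto3

lambda_client = boto3.client('lambda', region_name='us-west-2')
lambda_client.update_function_configuration(
    FunctionName='pr-review-webhook',
    Environment={
        'Variables': {
            'AGENTCORE_RUNTIME_ARN': 'your_agentcore_runtime_arn',
            'GITHUB_TOKEN': 'your_github_token'
        }
    }
)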

3

Configure GitHub Webhook

Set up GitHub to call your Lambda function:

  1. Go to your GitHub repository settings
  2. Navigate to “Webhooks” → “Add webhook”
  3. Set Payload URL to your Lambda function URL
  4. Select “Pull requests” events
  5. Set Content type to “application/json”
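Step 3 assumes your Lambda function already exposes a function URL. If it doesn't, one way to create a public one is sketched below with boto3 (auth disabled purely for demo purposes; restrict access in production, for example by validating GitHub's webhook secret in the handler):

# create_function_url.py -- sketch; uses the function name from the deploy step
import boto3

lambda_client = boto3.client('lambda', region_name='us-west-2')

# Allow unauthenticated invocation of the function URL (demo only).
lambda_client.add_permission(
    FunctionName='pr-review-webhook',
    StatementId='AllowPublicFunctionUrl',
    Action='lambda:InvokeFunctionUrl',
    Principal='*',
    FunctionUrlAuthType='NONE'
)

url_config = lambda_client.create_function_url_config(
    FunctionName='pr-review-webhook',
    AuthType='NONE'
)
print('Webhook Payload URL:', url_config['FunctionUrl'])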

🚀 Advanced Features

🧠 Memory Integration

Store review patterns and learn from past reviews to improve accuracy over time.

📊 Custom Rules

Add organization-specific coding standards and security requirements.

🔄 CI/CD Integration

Integrate with build pipelines to block merges on critical issues.

📈 Analytics

Track code quality metrics and team improvement over time.

Enhanced Security Checks

Add specialized security analysis tools:

import re


def enhanced_security_analysis(code_content):
    """Advanced security pattern detection"""
    security_patterns = {
        'hardcoded_secrets': r'(password|secret|key)\s*=\s*["\'][^"\']+["\']',
        'sql_injection': r'(SELECT|INSERT|UPDATE|DELETE).*\+.*\+',
        'xss_vulnerability': r'innerHTML\s*=.*\+',
        'weak_crypto': r'md5|sha1(?!256)|des'
    }

    findings = []
    for pattern_name, regex in security_patterns.items():
        if re.search(regex, code_content, re.IGNORECASE):
            findings.append(f"Potential {pattern_name.replace('_', ' ')} detected")

    return findings
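A quick, hypothetical illustration of how the pattern scan behaves on a problematic snippet:

snippet = """
password = 'password123'
query = "SELECT * FROM users WHERE id=" + user_id + ";"
"""
print(enhanced_security_analysis(snippet))
# ['Potential hardcoded secrets detected', 'Potential sql injection detected']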
💡 Pro Tip: Consider implementing rate limiting and cost controls to manage API usage, especially for high-volume repositories.

🎉 Conclusion

You’ve successfully built an AI Code Review Agent that intelligently reviews PRs and can:

  • Automatically analyze code for security vulnerabilities
  • Assess code quality and maintainability
  • Provide actionable improvement suggestions
  • Integrate seamlessly with GitHub workflows

🚀 Next Steps:

  • Customize review criteria for your team’s needs
  • Add support for multiple programming languages
  • Implement team-specific coding standards
  • Set up monitoring and analytics