34.4 PR descriptions and code review assistance
Automated PR Descriptions
Developers hate writing PR descriptions. AI loves it. And good PR descriptions are essential for:
- Code reviewers: Understanding what changed and why
- Future you: Understanding what you were thinking 6 months later
- Compliance: Documenting changes for audits
The input is simple: the git diff. The output is a structured description.
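Before any prompting, the diff has to fit the context window. A minimal sketch of a truncation helper (the 50,000-character budget is arbitrary) that cuts on `diff --git` file boundaries instead of mid-hunk:

```ts
// Truncate a git diff to a character budget without cutting a file's hunks in half.
// Splits on "diff --git" headers and keeps whole files until the budget runs out.
function truncateDiff(diff: string, maxChars = 50_000): string {
  if (diff.length <= maxChars) return diff;
  // Lookahead split keeps the "diff --git" header attached to each file's chunk
  const files = diff.split(/(?=^diff --git )/m);
  let out = '';
  let omitted = 0;
  for (const file of files) {
    if (out.length + file.length <= maxChars) {
      out += file;
    } else {
      omitted++;
    }
  }
  return out + `\n[... ${omitted} file(s) omitted to fit the prompt budget ...]`;
}
```

Dropping whole files (and saying so) gives the model a coherent, honest view of the change, which beats a diff that ends mid-hunk.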
The PR Template Prompt
```ts
// pr-description-prompt.ts
import { execSync } from 'child_process';

// {{GIT_DIFF}} and {{COMMITS}} are literal placeholders filled in below.
// Using `${...}` template interpolation here would fail, because those
// variables don't exist when this constant is defined.
const PR_DESCRIPTION_PROMPT = `
You are a senior engineer writing a Pull Request description.
Given the following git diff, write a clear, concise PR description.

GIT DIFF:
\`\`\`
{{GIT_DIFF}}
\`\`\`

COMMIT MESSAGES:
{{COMMITS}}

Write a PR description with these sections:

## Summary
One paragraph explaining WHAT changed and WHY.

## Changes
Bullet list of specific changes, grouped by file or feature.

## Testing
How to verify this works:
- Manual testing steps
- New/modified automated tests

## Breaking Changes
List any breaking changes to APIs, database schemas, or behavior.
If none, write "None."

## Rollback Plan
How to revert if something goes wrong.

Use Markdown formatting. Be concise but complete.
`;

async function generatePRDescription(diff: string): Promise<string> {
  // Commit messages on this branch, for extra context
  const commits = execSync('git log main...HEAD --pretty=format:"%s"', {
    encoding: 'utf8'
  }).split('\n');

  const prompt = PR_DESCRIPTION_PROMPT
    .replace('{{GIT_DIFF}}', diff.slice(0, 50000)) // Truncate huge diffs
    .replace('{{COMMITS}}', commits.join('\n'));

  return await callGemini(prompt); // your LLM client, defined elsewhere
}
```
Example Output
```md
## Summary
Adds a `createdAt` timestamp to the User model to track when accounts were
created. This supports the upcoming analytics dashboard that shows user
growth over time.

## Changes
**Database**
- Added `createdAt` column to `users` table with `DEFAULT NOW()`
- Migration: `20240115_add_user_created_at.sql`

**Backend**
- Updated `User` TypeScript interface with `createdAt: Date`
- Modified `/api/users` endpoint to include timestamp in responses
- Added `createdAt` to user serialization

**Tests**
- Added test for `createdAt` presence on new users
- Updated API response snapshots

## Testing
1. Run `npm run migrate` to apply database changes
2. Create a new user via the signup flow
3. Verify `createdAt` appears in the user profile API response
4. Run `npm test` - all tests should pass

## Breaking Changes
- API response for `/api/users/:id` now includes `createdAt` field
- Clients parsing the response strictly may need updates

## Rollback Plan
1. Revert this PR
2. Run `npm run migrate:rollback` to remove the column
3. Redeploy
```
AI Code Reviewer
Beyond writing descriptions, the AI can review code for common issues—either its own generated code or human-written code.
```ts
// code-review-prompt.ts
// {{DIFF}} is a literal placeholder, filled in with .replace() at call time.
const CODE_REVIEW_PROMPT = `
You are a senior engineer reviewing a Pull Request.

Review this diff for issues, but be high-signal. Only comment on:

1. **Security vulnerabilities** (SQL injection, XSS, auth bypass, secrets in code)
2. **Bugs** (logic errors, off-by-one, null pointer, race conditions)
3. **Performance** (N+1 queries, missing indexes, memory leaks)
4. **Breaking changes** (API contract violations, removed fields)

DO NOT comment on:
- Code style (that's what linters are for)
- "Consider using X instead of Y" (subjective preferences)
- Documentation (unless security-critical)

GIT DIFF:
\`\`\`
{{DIFF}}
\`\`\`

For each issue found, output in this format:

---
FILE: path/to/file.ts
LINE: 42
SEVERITY: critical | high | medium | low
ISSUE: Brief description
SUGGESTION: How to fix it
---

If no issues found, respond with: "No significant issues found."
`;
```
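That delimited format is deliberately easy to parse without relying on JSON mode. A sketch of a parser (the `ReviewIssue` shape is an assumption, chosen to match the fields in the prompt):

```ts
interface ReviewIssue {
  file: string;
  line: number;
  severity: 'critical' | 'high' | 'medium' | 'low';
  issue: string;
  suggestion: string;
}

// Parse the "---"-delimited blocks the review prompt asks for.
// Blocks missing FILE or SEVERITY (e.g. surrounding prose, or the
// "No significant issues found." response) are skipped.
function parseReviewOutput(text: string): ReviewIssue[] {
  const issues: ReviewIssue[] = [];
  for (const block of text.split(/^---$/m)) {
    const field = (name: string) =>
      block.match(new RegExp(`^${name}:\\s*(.+)$`, 'm'))?.[1].trim();
    const file = field('FILE');
    const severity = field('SEVERITY') as ReviewIssue['severity'] | undefined;
    if (!file || !severity) continue;
    issues.push({
      file,
      line: Number(field('LINE') ?? 0),
      severity,
      issue: field('ISSUE') ?? '',
      suggestion: field('SUGGESTION') ?? '',
    });
  }
  return issues;
}
```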
Review Categories
Different types of reviews focus on different concerns:
| Review Type | Focus | When to Use |
|---|---|---|
| Security Review | Injection, auth, secrets, CSRF | Any code touching auth, user input, or APIs |
| Performance Review | Database queries, memory, caching | Code touching DB, file I/O, or hot paths |
| API Review | Breaking changes, backwards compatibility | Any public API modifications |
| Correctness Review | Logic errors, edge cases, tests | Complex algorithms, financial code |
```ts
// Specialized review prompts
const SECURITY_REVIEW = `
Focus ONLY on security issues:
- SQL injection (string concatenation in queries)
- XSS (unescaped user input in HTML)
- Authentication bypass
- Authorization flaws (IDOR, privilege escalation)
- Secrets in code (API keys, passwords, tokens)
- CSRF vulnerabilities
- Path traversal
- Insecure deserialization
`;

const PERFORMANCE_REVIEW = `
Focus ONLY on performance issues:
- N+1 database queries
- Missing database indexes (queries on non-indexed columns)
- Unbounded loops or recursion
- Memory leaks (event listeners, closures holding references)
- Synchronous operations that should be async
- Large objects in hot paths
`;
```
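A small dispatcher can then map the `--type` flag onto these focus prompts. A sketch, assuming the base review prompt carries a literal `{{DIFF}}` placeholder (the focus texts here are abbreviated stand-ins for the full constants above):

```ts
// Map a review type to its focus prompt. "all" adds no extra focus.
const REVIEW_FOCUS: Record<string, string> = {
  security: 'Focus ONLY on security issues: injection, XSS, auth bypass, secrets, CSRF.',
  performance: 'Focus ONLY on performance issues: N+1 queries, missing indexes, leaks.',
  all: '',
};

// Build the final prompt: inject the diff, then append the focus section.
function buildReviewPrompt(basePrompt: string, type: string, diff: string): string {
  const focus = REVIEW_FOCUS[type];
  if (focus === undefined) throw new Error(`Unknown review type: ${type}`);
  return [basePrompt.replace('{{DIFF}}', diff), focus].filter(Boolean).join('\n');
}
```

Failing loudly on an unknown type is deliberate: a typo in `--type` should not silently fall back to a generic review.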
The "be high-signal" instruction matters. If the model complains that variable names are "not descriptive enough" when they are fine, tighten the prompt to suppress subjective feedback. AI reviewers tend to be overly pedantic; tune them toward actionable issues only.
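One cheap mechanical guard to add on top of the prompt instructions is a severity floor on the parsed findings. A sketch:

```ts
type Severity = 'critical' | 'high' | 'medium' | 'low';

const RANK: Record<Severity, number> = { critical: 3, high: 2, medium: 1, low: 0 };

// Drop findings below a severity threshold -- a second line of defense
// against a chatty reviewer, independent of the prompt wording.
function filterBySeverity<T extends { severity: Severity }>(
  issues: T[],
  min: Severity = 'medium'
): T[] {
  return issues.filter(i => RANK[i.severity] >= RANK[min]);
}
```

The default floor of `medium` is a judgment call; for a noisy model, raising it to `high` is often the fastest fix.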
Full Implementation
Here's a complete CLI command that generates PR descriptions and reviews:
```ts
// commands/pr.ts
import { Command } from 'commander';
import { execSync } from 'child_process';
// generatePRDescription and reviewCode are the helpers defined earlier

export function registerPRCommands(program: Command) {
  program
    .command('pr-describe')
    .description('Generate a PR description from the current branch')
    .option('--base <branch>', 'Base branch to compare against', 'main')
    .action(async (options) => {
      console.log('Analyzing changes...');

      const diff = execSync(`git diff ${options.base}...HEAD`, {
        encoding: 'utf8',
        maxBuffer: 10 * 1024 * 1024 // 10MB for large diffs
      });

      if (!diff.trim()) {
        console.log('No changes detected.');
        return;
      }

      const description = await generatePRDescription(diff);

      console.log('\n' + '='.repeat(60));
      console.log('GENERATED PR DESCRIPTION');
      console.log('='.repeat(60) + '\n');
      console.log(description);

      // Optionally copy to clipboard (macOS only)
      if (process.platform === 'darwin') {
        execSync('pbcopy', { input: description });
        console.log('\n✓ Copied to clipboard');
      }
    });

  program
    .command('pr-review')
    .description('Review the current branch for issues')
    .option('--base <branch>', 'Base branch', 'main')
    .option('--type <type>', 'Review type: security|performance|all', 'all')
    .action(async (options) => {
      console.log(`Running ${options.type} review...`);

      const diff = execSync(`git diff ${options.base}...HEAD`, {
        encoding: 'utf8'
      });

      const issues = await reviewCode(diff, options.type);

      if (issues.length === 0) {
        console.log('✓ No significant issues found');
      } else {
        console.log(`Found ${issues.length} issue(s):\n`);
        for (const issue of issues) {
          const icon = issue.severity === 'critical' ? '🔴' :
                       issue.severity === 'high' ? '🟠' :
                       issue.severity === 'medium' ? '🟡' : '🔵';
          console.log(`${icon} [${issue.severity.toUpperCase()}] ${issue.file}:${issue.line}`);
          console.log(`   ${issue.issue}`);
          console.log(`   → ${issue.suggestion}\n`);
        }
      }
    });
}
```
You can run the AI reviewer as a GitHub Action on every PR. Just call the review function on the PR diff and post comments via the GitHub API. Be careful about rate limits and API costs for high-volume repos.
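For the Action path, one pattern that sidesteps per-line comment spam (and most rate-limit trouble) is posting a single summary comment. A sketch of the formatter; the `Issue` shape mirrors the review output format, and posting the result is a standard `POST /repos/{owner}/{repo}/issues/{number}/comments` call:

```ts
interface Issue {
  file: string;
  line: number;
  severity: string;
  issue: string;
  suggestion: string;
}

// Render review findings as one Markdown comment body. One comment per
// review run keeps both notification noise and API call volume low.
function formatReviewComment(issues: Issue[]): string {
  if (issues.length === 0) return '✅ AI review: no significant issues found.';
  const lines = issues.map(i =>
    `- **[${i.severity.toUpperCase()}]** \`${i.file}:${i.line}\`: ${i.issue}\n` +
    `  - Suggested fix: ${i.suggestion}`
  );
  return `🤖 AI review found ${issues.length} issue(s):\n\n${lines.join('\n')}`;
}
```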