AI coding assistants have become integral to modern development workflows, but their default behaviors often miss the mark when it comes to your team’s specific code review standards. Rather than fighting against AI-generated code that fails pull request reviews, you can write custom instructions that guide the AI to produce code matching your team’s conventions from the start.
This guide shows you how to create effective custom instructions that enforce your code review standards, reducing iteration cycles and helping your AI pair-programmer become a truly valuable team member.
## Understanding Custom Instructions
Custom instructions are system-level prompts that shape how an AI assistant behaves across all your interactions. Most AI coding tools support some form of custom instructions—whether through Claude’s CLAUDE.md, Cursor’s .cursorrules, or GitHub Copilot’s custom instructions file.
The key insight is that these instructions work best when they are specific, enforceable, and aligned with your actual code review checklist. Generic advice like “write clean code” rarely produces the results you want. Instead, you need precise rules that the AI can follow without ambiguity.
## Structuring Your Custom Instructions
Effective custom instructions follow a structured approach. Start with your team’s code review pain points—what gets flagged most often in pull requests? Common offenders include missing error handling, inadequate test coverage, inconsistent naming, and lack of documentation.
Here’s a template for structuring custom instructions that actually work:
```markdown
# Project Code Standards

## Language and Framework Conventions
- Use TypeScript strict mode for all new TypeScript files
- Prefer functional components in React; use hooks over class components
- Follow Airbnb JavaScript Style Guide with exceptions listed below

## Code Review Requirements
- All functions over 10 lines need JSDoc comments
- Error handling required for all async operations
- Include unit tests for utility functions
- Use early returns to reduce nesting depth
```
The structure matters because it gives the AI a mental framework for generating code. When you organize instructions by category, the AI can reference the appropriate section when making different types of decisions.
## Practical Examples for Common Standards

### Enforcing Naming Conventions
If your team requires specific naming patterns, make them explicit. Instead of vague preferences, provide concrete rules:
```markdown
## Naming Conventions
- Variables and functions: camelCase
- React components: PascalCase
- Constants: UPPER_SNAKE_CASE
- File names: kebab-case
- Component files: ComponentName.tsx format
- Test files: componentName.test.ts format
```
This approach eliminates guesswork. When the AI needs to name a new component, it has clear guidance rather than choosing arbitrarily.
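As a quick illustration (the file and function names here are hypothetical), code generated under these rules would look like:

```typescript
// user-profile-utils.ts — file name in kebab-case

// Constants: UPPER_SNAKE_CASE
const MAX_USERNAME_LENGTH = 32;

// Functions: camelCase
function formatUsername(rawName: string): string {
  return rawName.trim().slice(0, MAX_USERNAME_LENGTH).toLowerCase();
}

// React components: PascalCase, in ComponentName.tsx files
// (e.g., UserProfile in UserProfile.tsx, tested in userProfile.test.ts)
```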
### Error Handling Standards
Code review often flags inconsistent error handling. Address this directly:
```markdown
## Error Handling
- Never leave console.log in production code; use a proper logger
- Always handle Promise rejections with try/catch or .catch()
- Wrap async operations in proper error boundaries in React
- Create custom error classes for domain-specific errors
- Include error context in error messages (what failed, why, what to do next)
```
With these instructions, the AI will automatically include proper error handling rather than adding it as an afterthought.
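To make this concrete, here is a sketch of what those rules produce. The error class and loader names are hypothetical, and the fetch function is injected so the example stays self-contained:

```typescript
// Custom error class for a domain-specific failure, with context in the message
class UserFetchError extends Error {
  constructor(userId: string, cause: string) {
    super(`Failed to load user ${userId}: ${cause}. Verify the ID and retry.`);
    this.name = 'UserFetchError';
  }
}

// Every async operation is wrapped; rejections never escape unhandled
async function loadUser(
  userId: string,
  fetchFn: (id: string) => Promise<string>
): Promise<string> {
  try {
    return await fetchFn(userId);
  } catch (err) {
    const cause = err instanceof Error ? err.message : String(err);
    // What failed, why, and what to do next — not just the raw error
    throw new UserFetchError(userId, cause);
  }
}
```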
### Test Coverage Requirements
If your team requires tests, specify the expectations clearly:
```markdown
## Testing Requirements
- Minimum 80% test coverage for business logic
- Test edge cases, not just happy paths
- Use describe/it structure for all test files
- Include integration tests for API endpoints
- Mock external services; use real implementations only when necessary
```
The AI will then write tests alongside code rather than treating testing as a separate step.
## Making Instructions Actionable
The difference between custom instructions that work and those that get ignored comes down to actionability. Vague instructions like “write secure code” are meaningless to an AI. Specific, actionable instructions produce consistent results.
Consider this ineffective instruction:
“Make sure to follow security best practices”
Versus this actionable version:
“Never use eval(), always sanitize user inputs, use parameterized queries for SQL, implement proper authentication checks on all API routes”
The second version gives the AI concrete behaviors to avoid or adopt.
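For illustration, here is what two of those behaviors look like in practice. The escape function is a hand-rolled sketch (a real project would use an established library), and the query object simply mirrors the text-plus-values shape that parameterized clients accept:

```typescript
// Sanitize user input before rendering it as HTML
function escapeHtml(input: string): string {
  const replacements: Record<string, string> = {
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  };
  return input.replace(/[&<>"']/g, (ch) => replacements[ch]);
}

// Parameterized query: user input travels as data, never inside the SQL string
const findUserQuery = {
  text: 'SELECT id, email FROM users WHERE email = $1',
  values: ['user@example.com'], // would come from the request, already validated
};
```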
## Iterating on Your Instructions
Custom instructions are not a one-time setup. Start with your top five code review concerns, implement instructions for those, and observe the results. Track what gets approved on first review versus what still needs fixes.
Most teams find that their instructions evolve over time. You might discover that a particular rule is too strict or not strict enough. The key is treating your custom instructions as a living document that improves through feedback from your actual code review process.
## Advanced: Context-Aware Instructions
For larger projects, consider creating instruction tiers that apply based on context. Some AI tools support conditional instructions that activate based on file type, directory, or project area:
```markdown
# Backend API Standards
[Apply to: /api/**, /services/**]
- Use RESTful URL patterns
- Return consistent JSON response structures
- Include pagination for list endpoints

# Frontend Component Standards
[Apply to: /components/**, /pages/**]
- Follow component composition patterns
- Use CSS-in-JS or CSS modules, never inline styles
- Implement proper loading and error states
```
This targeted approach keeps instructions relevant to the task at hand rather than overwhelming the AI with rules that don’t apply.
## Real-World Custom Instructions Examples
Here are complete, production-tested custom instruction sets for different teams:
### Startup SaaS Team (.cursorrules)
````markdown
# Cursor Rules: Startup SaaS Development

## Tech Stack
- React 18 with TypeScript
- Next.js 14 (App Router)
- Supabase for authentication and database
- Tailwind CSS for styling
- Vercel for deployment

## Code Standards

### React Components
- Use functional components only
- Prefer TypeScript interfaces over types for props
- All components must have TypeScript prop definitions
- Use React hooks (useState, useContext, useCallback)
- Implement proper loading and error states
- Example pattern:

```typescript
interface ButtonProps {
  onClick: () => void;
  loading?: boolean;
  variant?: 'primary' | 'secondary';
}

export function Button({ onClick, loading, variant = 'primary' }: ButtonProps) {
  return <button disabled={loading} className={`btn-${variant}`} onClick={onClick} />;
}
```

### API Routes
- Use Next.js API routes in /app/api
- Validate all request bodies with zod
- Return { success: boolean; data?: T; error?: string }
- Include proper HTTP status codes (200, 400, 401, 500)
- Never expose database errors to client

### Database
- Use Supabase client library
- All queries in @/lib/database/queries
- Always use parameterized queries
- Add row-level security policies
- Use migrations for schema changes

## Testing Requirements
- Jest for unit tests
- Playwright for E2E tests
- Minimum 80% coverage for utils
- All API routes need integration tests
- Test files: componentName.test.ts

## Common Code Review Issues
- Missing error boundaries in components
- Unhandled async errors in useEffect
- Missing null checks on optional data
- Hardcoded values instead of env variables
- No loading states on async operations

## What NOT to do
- No console.log in production code (use winston logger)
- No raw SQL queries (use ORM/parameterized)
- No mixing component logic with styling
- No default exports for components
- No modifying props directly in components

## Security Checklist
- Sanitize all user input before display
- CSRF tokens on all state-changing requests
- Rate limit API endpoints
- Validate on both client and server
- Use Content Security Policy headers
````
### Enterprise Backend Team (.cursorrules)
```markdown
# Cursor Rules: Enterprise Backend (Python/FastAPI)

## Architecture
- Python 3.11+
- FastAPI with async/await
- PostgreSQL with SQLAlchemy ORM
- Redis for caching
- OpenTelemetry for observability

## Code Style
- Black formatter (line length: 100)
- isort for imports
- mypy for type checking (strict mode)
- pylint with score threshold 8.0

## API Standards
- RESTful design with resource versioning (/v1/, /v2/)
- OpenAPI documentation via FastAPI
- Structured error responses with error codes
- Request/response logging to CloudWatch
- All endpoints require authentication

## Database Patterns
- Alembic for migrations (never manually alter schema)
- ORM entities in /models/
- Queries in repository classes
- Always use transactions for multi-step operations
- Soft deletes for customer data (never hard delete)

## Testing Requirements
- pytest with 85% code coverage minimum
- Unit tests for business logic
- Integration tests for API endpoints
- Load tests for critical paths (> 1000 req/sec)
- Fixtures for test data

## Required Code Review Checks
- No hardcoded credentials (use environment variables)
- All external API calls have timeout and retry logic
- Database queries use connection pooling
- Sensitive data logged as [REDACTED]
- Proper logging at info/warning/error levels

## Deployment
- Docker containers with minimal base images
- Kubernetes manifests in /k8s/
- Helm charts for configuration
- Blue-green deployment strategy
- Automatic rollback on failure
```
### Data Team/Jupyter Notebooks (custom instructions)
```markdown
# Custom Instructions: Data Analysis Notebooks

## Notebook Structure
- Clear markdown cells explaining each section
- Descriptive cell comments for complex analysis
- Results always include confidence intervals
- Visualizations with titles, axis labels, legends
- Summary cell at top with key findings

## Code Quality
- Follow the official documentation patterns for pandas, numpy, scikit-learn
- Always check for data quality issues first (nulls, duplicates, outliers)
- Validate assumptions before modeling
- Set seeds for reproducibility (random_state=42)
- All plots should be publication-quality (matplotlib.style.use('seaborn-v0_8-darkgrid'))

## Analysis Standards
- Sample size and statistical significance always noted
- Report exact p-values, not just p < 0.05
- Effect sizes included, not just p-values
- Explain why the chosen statistical test is appropriate
- Limitations of analysis clearly stated

## Visualization Rules
- Color-blind friendly palettes (e.g., seaborn's 'colorblind' palette)
- No pie charts (use bar charts instead)
- Proper axis labels and units
- Caption describing what to see in the plot
- Show 95% confidence intervals on estimates
```
## Testing Your Custom Instructions
Create a validation checklist to verify instructions actually work:
```markdown
## Custom Instructions Validation Checklist

### Test 1: Basic Compliance
- [ ] AI generates code following naming conventions
- [ ] AI uses specified frameworks/libraries
- [ ] Generated code matches error handling style
- [ ] Comments/docstrings follow template

### Test 2: Code Review Standards
- [ ] Generated tests meet coverage requirement
- [ ] Error handling present without asking
- [ ] Logging implemented correctly
- [ ] Security best practices included

### Test 3: Edge Cases
- [ ] AI handles constraints mentioned (e.g., no console.log)
- [ ] AI avoids anti-patterns listed
- [ ] AI includes required patterns automatically
- [ ] Multi-file changes consistent with rules

### Test 4: Full Feature Request
- [ ] Request: "Create a new API endpoint for user signup"
- [ ] Verify: route structure, validation, error response, logging, and tests all follow instructions
- [ ] If any deviation: update instructions to be clearer and more specific
```
## Measuring Instruction Effectiveness
Track the impact of your custom instructions:
```python
# Analyze code review feedback over time
pull_request_data = {
    "before_instructions": {
        "avg_review_comments": 8.2,
        "common_issues": [
            "Missing error handling (30%)",
            "No tests (25%)",
            "Wrong naming (20%)",
            "Security issues (15%)",
        ],
        "rework_iterations": 2.3,
    },
    "after_instructions": {
        "avg_review_comments": 3.1,  # 62% reduction
        "common_issues": [
            "Logic issues (40%)",
            "Performance (35%)",
            "Style edge cases (25%)",
        ],
        "rework_iterations": 1.1,  # 52% reduction
    },
}
```
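The reduction figures quoted in those comments follow directly from the before/after numbers:

```typescript
// Percent reduction between a before and after measurement
const reduction = (before: number, after: number): number =>
  Math.round((1 - after / before) * 100);

const commentReduction = reduction(8.2, 3.1); // review comments per PR
const reworkReduction = reduction(2.3, 1.1);  // rework iterations
```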
When review comments on AI-generated code drop by 60% or more, your instructions are working.
## Integrating Instructions Across Tools
Most modern AI tools support instructions, but syntax varies:
- Cursor: `.cursorrules` file in the project root
- VS Code + Copilot: `.github/copilot-instructions.md`
- Claude: `CLAUDE.md` in the project root, or via Project settings
- GitHub Copilot: settings at the repository or organization level

For consistency across tools, maintain a single source:

```bash
# sync-instructions.sh
# Copy instructions to all tools' expected locations
cp team-instructions.md .cursorrules
cp team-instructions.md .github/copilot-instructions.md
cp team-instructions.md CLAUDE.md
git add .cursorrules .github/copilot-instructions.md CLAUDE.md
git commit -m "Update custom instructions across all AI tools"
```
## Common Mistakes and How to Fix Them
**Mistake 1: Too Generic**
- ❌ “Write clean code and follow best practices”
- ✅ “Use early returns to reduce nesting. Max function length 30 lines. Avoid else blocks.”

**Mistake 2: Too Long**
- ❌ 500-line instruction document that no one reads
- ✅ One-page summary with links to detailed guidelines

**Mistake 3: Not Enforceable**
- ❌ “Be mindful of performance”
- ✅ “Use .includes() instead of .find() for existence checks. Batch database queries when selecting >10 items.”

**Mistake 4: Out of Date**
- ❌ Instructions reference old tech stack
- ✅ Review instructions quarterly as tools/standards evolve
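The early-return rule in that example takes only a few lines to demonstrate (the order-shipping check below is hypothetical):

```typescript
interface Order { items: string[]; paid: boolean; }

// Early returns keep the function flat — no else blocks, no nesting
function canShip(order: Order | null): boolean {
  if (!order) return false;
  if (!order.paid) return false;
  return order.items.length > 0;
}
```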
Built by theluckystrike — More at zovo.one