Managing AI Context Across Multiple Repositories

Manage context across multiple repos by symlinking shared packages, maintaining a unified context in your IDE settings, and being explicit about which repo each task targets. This guide shows the configuration patterns that prevent hallucinated imports and cross-repo inconsistencies.

Why Context Management Matters Across Repositories

When you work with multiple repositories that share code, libraries, or domain logic, AI assistants need to understand the relationships between them. A typical scenario involves a shared library repository used by several service repositories, or a monorepo split into separate Git repositories over time. Without proper context management, AI tools generate code that conflicts with existing patterns or duplicates functionality that already exists elsewhere.

The core issue is that each repository operates in isolation from the AI’s perspective. Your AI assistant does not automatically know about the other repositories in your ecosystem unless you explicitly provide that context. This means you need strategies to bridge that gap effectively.

Strategy 1: Centralized Context Documentation

Create a master context file that lives in your primary repository or a dedicated docs repository. This file should document the repository relationships, shared dependencies, and architectural decisions that span across all your projects.

A practical approach uses a CONTEXT.md file in each repository that references the central documentation:

# Repository Context

This service depends on:
- shared-lib: https://github.com/yourorg/shared-lib (v2.3.0)
- common-utils: https://github.com/yourorg/common-utils (v1.5.0)

Shared patterns:
- Error handling follows common-utils/ErrorHandler
- API responses use shared-lib/ApiResponse<T>
- Authentication via common-utils/auth

See central-docs: /architecture/overview for full context

Update this file whenever you add new cross-repository dependencies or change integration patterns.

Strategy 2: Git Submodules for Shared Code

If you have shared code that lives in separate repositories, Git submodules provide a clean way to keep that code accessible within each dependent repository. Your AI assistant can then reference the submodule directly.

# Add shared library as a submodule
git submodule add git@github.com:yourorg/shared-lib.git libs/shared-lib

# Initialize submodules when cloning
git clone --recurse-submodules git@github.com:yourorg/your-service.git

After adding submodules, tell your AI assistant where to find shared code:

This project uses libs/shared-lib for common data models.
Import path: from libs.shared_lib import DataModel
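Before pointing the AI at a submodule, it is worth confirming the submodule has actually been initialized on disk, since an empty submodule directory is a common source of hallucinated imports. The following is a minimal sketch, assuming the standard .gitmodules layout; parse_gitmodules and uninitialized_submodules are hypothetical helper names, not part of Git itself:

```python
import re
from pathlib import Path

# Regexes for the standard .gitmodules layout:
# [submodule "libs/shared-lib"]
#     path = libs/shared-lib
#     url = git@github.com:yourorg/shared-lib.git
SECTION_RE = re.compile(r'^\[submodule "(?P<name>[^"]+)"\]\s*$')
KEY_RE = re.compile(r'^\s*(?P<key>path|url)\s*=\s*(?P<value>\S+)\s*$')

def parse_gitmodules(text):
    """Return a list of {'name', 'path', 'url'} dicts from .gitmodules text."""
    modules, current = [], None
    for line in text.splitlines():
        section = SECTION_RE.match(line)
        if section:
            current = {"name": section.group("name")}
            modules.append(current)
            continue
        entry = KEY_RE.match(line)
        if entry and current is not None:
            current[entry.group("key")] = entry.group("value")
    return modules

def uninitialized_submodules(repo_root):
    """Paths listed in .gitmodules that have no .git entry on disk yet."""
    root = Path(repo_root)
    gitmodules = root / ".gitmodules"
    if not gitmodules.exists():
        return []
    missing = []
    for module in parse_gitmodules(gitmodules.read_text()):
        path = root / module.get("path", "")
        # An initialized submodule has a .git file (or directory) at its path.
        if not (path / ".git").exists():
            missing.append(module.get("path"))
    return missing
```

Running a check like this in CI catches the case where a teammate cloned without --recurse-submodules and the AI is working against an empty directory.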

Strategy 3: Context Snapshots for Complex Features

When working on a feature that spans multiple repositories, create a context snapshot that captures the relevant code from each repository. This is particularly useful for cross-cutting concerns like authentication, logging, or data models.

Create a temporary file in your working repository that aggregates the essential context:

# context_snapshot.py
from typing import List

# From shared-lib/src/models.py (commit: abc123)
class User:
    id: str
    email: str
    roles: List[str]

    def has_permission(self, permission: str) -> bool:
        return permission in self.roles

# From auth-service/src/auth.py (commit: def456)
async def verify_token(token: str) -> User:
    # Token verification logic
    pass

Reference this file when discussing feature implementation with your AI assistant. Update the snapshot as you make changes across repositories.

Strategy 4: Consistent Project Structure

AI assistants provide better suggestions when projects follow consistent patterns. Establish a template or scaffold that all your repositories use, then reference this standard in your context documentation.

For example, if all your services follow this structure:

src/
  models/      # Data models
  services/    # Business logic
  handlers/    # HTTP/gRPC handlers
  tests/       # Test files
config/        # Configuration files

then your AI assistant will make fewer structural mistakes when working across repositories, because a path like src/handlers/ means the same thing everywhere.
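A consistent layout is easiest to maintain when a script enforces it. The following is a minimal sketch assuming the directory template shown above; EXPECTED_DIRS and missing_directories are hypothetical names you would adapt to your own scaffold:

```python
from pathlib import Path

# Assumed standard layout from the template above; adjust to your scaffold.
EXPECTED_DIRS = [
    "src/models",
    "src/services",
    "src/handlers",
    "src/tests",
    "config",
]

def missing_directories(repo_root, expected=EXPECTED_DIRS):
    """Return the expected directories that are absent from repo_root."""
    root = Path(repo_root)
    return [d for d in expected if not (root / d).is_dir()]
```

Run it against each repository in CI and fail the build when the list is non-empty, so structural drift is caught before the AI starts imitating it.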

Strategy 5: Context Prompts in AI Commands

Most AI coding assistants support custom instructions or system prompts. Configure your editor or CLI to include persistent context about your repository ecosystem.

For Claude Code, create a CLAUDE.md file in your project root:

# Project Context

This repository is part of a microservices architecture:
- Auth service: github.com/yourorg/auth-service
- User service: github.com/yourorg/user-service
- Billing service: github.com/yourorg/billing-service

Shared library: github.com/yourorg/platform-lib

All services:
- Use platform-lib for database operations
- Follow the same error handling pattern
- Expose health checks at /health
- Use environment variables for configuration

The AI assistant reads this file automatically and applies the context to all conversations.

Strategy 6: Repository Tags and References

When discussing code with your AI assistant, be explicit about which repository you mean. Use a consistent naming convention:

- [shared-lib] for the shared library repository
- [api-gateway] for the API gateway
- [frontend-app] for the frontend application

This makes it clear which codebase you are referring to, especially when similar class names or functions exist in multiple repositories.
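The same tagging convention can be applied programmatically when you assemble prompts from code living in several repositories. This is a minimal sketch; tagged_snippet and build_prompt are hypothetical helpers, not part of any AI tool's API:

```python
def tagged_snippet(repo, path, code):
    """Prefix a code excerpt with its [repo] tag and file path so the
    assistant can tell identically named symbols apart."""
    return f"[{repo}] {path}\n{code}"

def build_prompt(task, snippets):
    """Assemble a prompt from a task description and (repo, path, code)
    triples, each labeled with its repository tag."""
    parts = [task] + [tagged_snippet(r, p, c) for r, p, c in snippets]
    return "\n\n".join(parts)
```

For example, build_prompt("Update the User model", [("shared-lib", "src/models.py", code)]) produces a prompt where every excerpt is unambiguously attributed to one repository.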

Putting It All Together

Effective context management across multiple repositories requires combining these strategies. Start by documenting your repository relationships in a central location. Use submodules for shared code that you actively develop. Create context snapshots for feature work that spans repositories. Maintain consistent project structures. Configure persistent instructions in your AI tools.

The investment in context management pays off quickly. You spend less time repeating explanations, receive more accurate suggestions, and maintain consistency across your entire codebase. Review and update your context documentation regularly, especially when adding new repositories or changing integration patterns.

Advanced Context Snapshot Patterns

For complex multi-repo features, create more detailed context snapshots:

# context_snapshot.py
# Snapshot created: 2026-03-20
# Feature: Add OAuth 2.0 authentication across all services
from typing import Dict

# From auth-service/models.py (commit: a1b2c3d)
class OAuth2Provider:
    """Base class for OAuth 2.0 provider integrations"""
    client_id: str
    client_secret: str

    def exchange_code_for_token(self, code: str) -> str:
        """Exchange authorization code for access token"""
        pass

# From shared-lib/auth.py (commit: e4f5g6h)
def validate_oauth_token(token: str, provider: str) -> Dict:
    """Validate OAuth token and return user info"""
    cache_key = f"oauth:{provider}:{token}"
    # Check cache first, then validate with provider
    pass

# From api-gateway/routes.py (commit: i7j8k9l)
@router.post("/auth/oauth/callback")
async def oauth_callback(code: str, provider: str):
    """Handle OAuth 2.0 callback from providers"""
    token = await auth_service.exchange_code_for_token(code)
    user = await shared_lib.validate_oauth_token(token, provider)
    # Create session and return to client
    pass

This snapshot shows the exact contracts and signatures across repositories, preventing incompatibilities.

Context Management for Microservices

Microservice architectures require special context handling:

# Microservices Context

## Service Communication
- All services use REST over HTTP with JSON
- Service discovery via Consul
- Timeouts: 5s for internal, 30s for external APIs
- Retry logic: exponential backoff up to 3 attempts

## Data Models
- User service owns User model
- Order service owns Order model
- Events published to message queue for cross-service updates
- No direct database access between services

## Common Patterns
- Saga pattern for distributed transactions
- Event sourcing for audit trail
- Circuit breaker for failing services
- Bulkhead isolation for resource constraints

When working across microservices, this context prevents the AI from suggesting inappropriate tight coupling.
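The retry rule stated above (exponential backoff, up to 3 attempts) is worth writing down as code so the AI has a concrete pattern to imitate rather than inventing its own. This is a minimal sketch; call_with_backoff is a hypothetical helper and the delay values are illustrative:

```python
import time

def call_with_backoff(fn, attempts=3, base_delay=0.5):
    """Call fn, retrying on failure with exponential backoff
    (0.5s, 1s, 2s by default), up to the given number of attempts."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))
```

In production you would likely narrow the caught exception type and add jitter, but even this sketch keeps generated code consistent with the documented policy.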

Version Management Across Repos

Track which versions of dependencies are used across services:

# versions.txt - centralized version registry
shared-lib: v2.3.0
  - Used by: auth-service, user-service, api-gateway
  - Breaking changes: None in 2.2->2.3 migration
  - Last checked: 2026-03-15

message-queue-client: v1.8.2
  - Used by: billing-service, notification-service
  - Known issues: Connection pooling bug in 1.8.0-1.8.1
  - Upgrade plan: move to 1.9.0 in Q2 2026

database-driver: v3.1.0
  - Used by: product-service, inventory-service
  - Compatibility matrix: Python 3.9+, PostgreSQL 12+

This prevents version-related incompatibilities when working across repos.
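To use the registry in automation (for example, a CI step that validates dependency versions), you need to parse this format. The following is a minimal sketch assuming the versions.txt layout shown above; parse_version_registry is a hypothetical helper:

```python
def parse_version_registry(text):
    """Parse the versions.txt format above into
    {package: {"version": ..., "used_by": [...]}}."""
    registry, current = {}, None
    for line in text.splitlines():
        if line and not line.startswith(" "):
            # Top-level line: "package-name: vX.Y.Z"
            name, _, version = line.partition(":")
            current = {"version": version.strip(), "used_by": []}
            registry[name.strip()] = current
        elif current is not None and "Used by:" in line:
            # Indented line: "  - Used by: service-a, service-b"
            services = line.split("Used by:", 1)[1]
            current["used_by"] = [s.strip() for s in services.split(",")]
    return registry
```

With the registry parsed, a CI check can compare each service's lockfile against the registered version and flag drift before it causes cross-repo breakage.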

Documentation Requirements for Multi-Repo Features

Create a feature documentation template for cross-repo changes:

# Feature: [Feature Name]

## Affected Repositories
- [shared-lib]: Updated models and validators
- [auth-service]: New OAuth provider support
- [api-gateway]: New authentication endpoints

## Dependencies Between Repos
1. shared-lib changes must be merged first (1 day buffer for integration)
2. auth-service can proceed once shared-lib is released
3. api-gateway depends on auth-service being deployed

## Testing Strategy
- Unit tests in each repository
- Integration tests in api-gateway (tests all services)
- E2E tests verify the entire flow
- Staging deployment verification before production

## Rollback Plan
- If api-gateway fails: revert to previous version (5 min)
- If auth-service fails: fall back to legacy auth (immediate)
- If shared-lib fails: requires full rollback of all services (15 min)

Managing Context Window Limits

With multiple repos, you can easily exceed the AI's context window. Manage this by telling the assistant explicitly what to prioritize:

# Be explicit about what matters for this task
prompt = """
We're adding a new payment provider to our system.

KEY FILES (please prioritize these):
- shared-lib/payment/models.py: PaymentProvider interface
- billing-service/providers/: Existing provider implementations
- api-gateway/routes/payment.py: API endpoints

REFERENCE ONLY (if needed):
- Database migrations
- Documentation files
- Test files

IGNORE:
- Other services (auth-service, user-service)
- Build configurations
- Unrelated dependencies
"""

This helps the AI focus on the most important context within window limits.

Automated Context Updates

Create a system to automatically update context documentation:

#!/bin/bash
# update-context.sh

# Check for broken imports across repos
for repo in auth-service user-service billing-service; do
    (
        cd "$repo" || exit 1
        # Collect tests without running them; collection errors reveal broken imports
        python -m pytest --collect-only > "/tmp/imports_$repo.txt" 2>&1
        if grep -q "ModuleNotFoundError" "/tmp/imports_$repo.txt"; then
            echo "WARNING: Broken imports in $repo"
        fi
    )
done

# Validate that all dependencies are pinned to the same versions
python validate_dependency_versions.py

# Update context snapshot with latest commits
for repo in shared-lib auth-service; do
    echo "# From $repo (commit: $(cd "$repo" && git rev-parse --short HEAD))" >> CONTEXT.md
done

Run this as a CI job to catch context issues before they cause problems.

Communication With Your Team

Document the context management approach so teammates understand it:

# Context Management for Multi-Repo Development

## When starting work on a feature:
1. Check CONTEXT.md for repository relationships
2. Review versions.txt to confirm compatible versions
3. If working across multiple repos, create a snapshot of key files
4. Specify your scope to the AI tool (focus on these repos, ignore those)

## When adding new repositories:
1. Document dependencies in CONTEXT.md
2. Add version constraints in versions.txt
3. Update CI checks for broken imports
4. Review with team before first use

## When you discover an issue:
1. Update CONTEXT.md immediately
2. Notify team in Slack
3. Add to "known issues" section
4. Plan fix in next sprint
