AI organization and setup within your code repositories
Every team I talk to is using AI coding tools. Almost none of them have thought about how those tools interact with their repository structure. They install Copilot, maybe drop a .cursorrules file in the root, and call it done. Then they wonder why the AI keeps suggesting patterns that don't match their codebase.
Your repo structure, configuration files, and conventions are the context window for AI tools. If that context is messy, the output will be too. Here's how I organize repositories to get the most out of AI-assisted development.
Instruction files: teach the AI your codebase
Most AI coding tools support some form of project-level instructions. Cursor has .cursorrules, GitHub Copilot has .github/copilot-instructions.md, Claude Code has CLAUDE.md. These files are your chance to front-load context that the AI would otherwise have to infer (badly).
What goes in these files:
- Tech stack and versions. "This project uses Node 20, TypeScript 5.4, Express, and Prisma with PostgreSQL." Simple, but it prevents the AI from suggesting Python patterns or outdated Node APIs.
- Coding conventions. "We use functional components only. Error handling follows the pattern in `src/lib/errors.ts`. All database queries go through the repository layer." The more specific, the better.
- File organization rules. "Tests live next to source files as `*.test.ts`. Migrations are in `db/migrations/` with sequential numbering. API routes follow the pattern `src/routes/{resource}/{action}.ts`."
- What to avoid. "Do not use the `any` type. Do not add `console.log` statements. Do not import from `src/legacy/` — those modules are deprecated."
An AI instruction file isn't documentation for humans. It's a contract with the tool. Write it like you're onboarding a fast but context-free junior developer.
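As a sketch, a minimal CLAUDE.md (or .cursorrules) covering those four areas might look like this — the stack, paths, and rules are illustrative examples, not prescriptions:

```markdown
# Project instructions

## Stack
- Node 20, TypeScript 5.4, Express, Prisma with PostgreSQL.

## Conventions
- Functional components only.
- Error handling follows the pattern in src/lib/errors.ts.
- All database queries go through the repository layer.

## File organization
- Tests live next to source files as *.test.ts.
- Migrations go in db/migrations/ with sequential numbering.
- API routes follow src/routes/{resource}/{action}.ts.

## Avoid
- The `any` type.
- console.log statements.
- Imports from src/legacy/ (deprecated).
```

Short, declarative bullets like these are easier for a tool to apply consistently than paragraphs of prose.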
Directory structure matters more than you think
AI tools use file paths and directory names as signals. A well-organized repo gives the AI structural context that improves suggestions significantly.
Patterns that help:
- Consistent naming. If your services are in `src/services/` and they all follow `{entity}.service.ts`, the AI will pick up that pattern and apply it to new files. If half are in `src/services/` and half are in `src/lib/` with inconsistent naming, the AI guesses — and guesses wrong.
- Colocated tests. When tests live next to the code they test, the AI can reference the implementation while generating tests. When tests are in a separate `tests/` tree with a different directory structure, the AI has to work harder to find the connection.
- Clear boundaries. Separate directories for API routes, business logic, data access, and shared utilities tell the AI where new code should go. A flat `src/` directory with 50 files gives no structural hints.
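Put together, a layout that encodes all three signals might look like the following sketch (file and directory names are illustrative):

```text
src/
  routes/
    users/
      create.ts
      list.ts
  services/
    user.service.ts
    user.service.test.ts    # test colocated with its implementation
    billing.service.ts
    billing.service.test.ts
  repositories/
    user.repository.ts      # all database access goes through this layer
  lib/
    errors.ts               # shared error-handling pattern
db/
  migrations/
    001_init.sql
    002_add_billing.sql
```

An AI tool generating a new `invoice` feature can infer from this tree alone where the service, test, repository, and route files belong.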
Context files for complex domains
For codebases with complex business logic, I create context files that live in the repository and explain domain concepts. These aren't documentation in the traditional sense — they're reference material that AI tools can pull from.
I keep these in a `docs/context/` or `.ai/` directory:
- `domain-glossary.md` — Defines business terms. What's a "settlement"? What's the difference between "authorization" and "capture"? AI tools reference this when generating code that uses domain language.
- `architecture-decisions.md` — Key decisions and why they were made. "We use event sourcing for the ledger because..." This prevents the AI from suggesting approaches you've already considered and rejected.
- `api-conventions.md` — How your APIs are structured, error formats, pagination patterns, auth schemes. Gives the AI a template for generating new endpoints.
The key is keeping these concise and current. A 50-page architecture document that hasn't been updated in a year is worse than no document. Short, accurate files that evolve with the codebase are what AI tools can actually use.
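For instance, a domain-glossary.md entry can be just a few lines; the payment terms below are hypothetical examples of the level of detail that helps:

```markdown
## Authorization vs. capture

- **Authorization**: a hold placed on the customer's funds; no money moves.
- **Capture**: the transfer of previously authorized funds; must reference an
  existing authorization and cannot exceed its amount.
- **Settlement**: the batch process that moves captured funds to our account.
```

Entries this size are cheap to keep current, which is exactly what makes them usable as AI context.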
Keep your tooling config in version control
This should be obvious, but I still see teams where AI tool configurations live on individual developer machines. Your .cursorrules, CLAUDE.md, .github/copilot-instructions.md, and any context files should be committed to the repo.
This means:
- Every developer gets the same AI behavior out of the box.
- Changes to AI configuration go through code review.
- The instructions evolve alongside the code they describe.
- New team members (and new AI tools) inherit the context immediately.
Iterate on the setup
Your AI configuration isn't a set-it-and-forget-it thing. As your codebase evolves, the instructions should too. I treat AI instruction files like any other code — when I refactor a major pattern, I update the instruction file in the same PR.
Some teams add a recurring reminder to review their AI config files quarterly. That works, but I prefer the organic approach: if the AI keeps making the same mistake, update the instruction file to address it. The config becomes a living record of "things the AI gets wrong about our codebase," and it gets better over time.
The teams getting the most value from AI tools aren't the ones with the best prompts. They're the ones with the best-organized repositories. Clean structure, clear conventions, and explicit instructions turn an AI tool from a generic code generator into something that actually understands your project. The setup cost is a few hours. The compounding return is every AI interaction after that being slightly more useful.