Written coding conventions
Documenting your team's semantic coding decisions - not just style rules - gives AI agents the judgment framework they need to suggest code that fits your architecture, not just code that compiles.
- CLAUDE.md or equivalent exists with project description, tech stack, and top conventions
- Written coding conventions document exists and is referenced from agent instruction files
- Agent instruction files are committed to the repository (not local-only)
- CLAUDE.md includes explicit prohibitions (banned libraries, anti-patterns)
- Agent instruction files are reviewed as part of the standard PR process
Evidence
- CLAUDE.md, .cursorrules, or .github/copilot-instructions.md in repository root
- Coding conventions document accessible from agent instruction files
- Commit history showing agent instruction file updates
What It Is
Most teams have two kinds of coding standards: the ones enforced by linters and formatters, and the ones that live in senior engineers' heads. ESLint catches var declarations and missing semicolons. Nothing catches "we don't put business logic in controllers" or "repositories should return domain objects, never raw database rows" - except code review, which is slow, inconsistent, and unavailable to AI agents.
Written coding conventions are the second kind: documented semantic decisions about how code should be structured, organized, and extended. They answer questions like: When should I create a new service vs. extending an existing one? Where does validation logic live? How should modules communicate with each other - direct calls, events, or a message bus? What patterns are explicitly off-limits and why?
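A convention like "repositories return domain objects, never raw database rows" can be made concrete in a few lines. This is a hypothetical sketch, not code from any real project; all names (UserRow, User, UserRepository, rowToUser) are illustrative:

```typescript
// Illustrative convention: raw database rows never leak past the repository.
interface UserRow {
  // Raw database shape: snake_case columns, string timestamps.
  user_id: number;
  full_name: string;
  created_at: string;
}

interface User {
  // Domain object: the only shape callers are allowed to see.
  id: number;
  name: string;
  createdAt: Date;
}

// Mapping happens once, at the boundary.
function rowToUser(row: UserRow): User {
  return { id: row.user_id, name: row.full_name, createdAt: new Date(row.created_at) };
}

class UserRepository {
  constructor(private db: { query: (sql: string) => UserRow[] }) {}

  // Convention-compliant: returns User[], never UserRow[].
  findAll(): User[] {
    return this.db.query("SELECT * FROM users").map(rowToUser);
  }
}
```

No linter can tell you that UserRow must stop at this boundary; that judgment is exactly what a written convention captures for an AI agent.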
At L2 (Guided), these conventions are written down in a form accessible to AI agents, typically in CLAUDE.md, a .cursorrules file, or a separate CONVENTIONS.md referenced from CLAUDE.md. The writing of conventions is itself a team exercise - it surfaces disagreements, forces clarity, and creates alignment. Teams that have done this exercise often discover that their "shared" conventions were not nearly as shared as they assumed.
The scope of written conventions is narrower than a full style guide but deeper than linter rules. It covers the decisions that require judgment - the ones that can't be expressed as a regular expression check. These are precisely the decisions that AI agents get wrong most often at L1.
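To make this concrete, here is a minimal sketch of what such a file might contain. Every rule and name below (the service/controller split, the specific banned libraries) is illustrative, not taken from any real project:

```markdown
# Project conventions (excerpt from CLAUDE.md)

## Architecture
- Business logic lives in services, never in controllers.
- Repositories return domain objects, never raw database rows.
- Modules communicate through domain events, not direct cross-module calls.

## Prohibitions
- Do not add new dependencies without team discussion; lodash and moment are banned.
- Do not bypass the validation layer by writing directly to the database.

See CONVENTIONS.md for the reasoning behind each rule.
```

Note the shape: short, declarative rules an agent can follow, with the "why" kept in a linked document rather than inlined.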
Why It Matters
When AI agents generate code without explicit conventions, they default to the most statistically common patterns in their training data. For widely-used frameworks, this often produces reasonable code. For your specific architectural decisions - which are, by definition, specific to your organization - the defaults are wrong.
- Prevents architecture erosion - every agent that generates code in your codebase either reinforces your patterns or degrades them; conventions tip the balance
- Reduces review burden - reviewers spend less time correcting pattern violations and more time evaluating logic
- Accelerates junior developer onboarding - written conventions answer the questions they'd otherwise need to ask senior engineers
- Aligns team understanding - the exercise of writing conventions surfaces implicit disagreements that would otherwise manifest as inconsistent code
- Creates computable guardrails - at L3, written conventions become the basis for lint rules, automated checks, and agent validation
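The last point can be sketched in code: a written prohibition such as "lodash and moment are banned" is one regular-expression pass away from being an automated check. This is a simplified illustration, assuming a hypothetical banned-library list (a real setup would more likely use an existing lint rule such as ESLint's no-restricted-imports):

```typescript
// Hypothetical banned-library list; the names are illustrative examples.
const BANNED_IMPORTS = ["lodash", "moment"];

// Returns the banned module names imported by a source file.
function findBannedImports(source: string): string[] {
  // Matches both `import x from "mod"` and side-effect `import "mod"` forms.
  const importRe = /import\s+(?:[\w*{},\s]+\s+from\s+)?["']([^"']+)["']/g;
  const hits: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = importRe.exec(source)) !== null) {
    const mod = m[1];
    // Flag the module itself and any subpath import (e.g. "lodash/merge").
    if (BANNED_IMPORTS.some((b) => mod === b || mod.startsWith(b + "/"))) {
      hits.push(mod);
    }
  }
  return hits;
}
```

The point is the progression: a rule that starts as prose in CLAUDE.md at L2 becomes an enforced check at L3, with no reviewer in the loop.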
The relationship between written conventions and AI tools is bidirectional: well-written conventions make AI suggestions better, and bad AI suggestions - when reviewed - reveal which conventions haven't been written down yet. Every time a developer says "the agent keeps suggesting X when we always do Y," that's a convention waiting to be written.
Start by writing down the five most common corrections you make in code review. If you're correcting the same pattern repeatedly, it's a convention that should be explicit. Move it from your head into CLAUDE.md.
Getting Started
6 steps to get from here to the next level
Common Pitfalls
Mistakes teams actually make at this stage - and how to avoid them
How Different Roles See It
Bob has noticed that code review takes longer for PRs that include AI-generated code. Reviewers are spending significant time pushing back on pattern violations that the AI consistently makes. His senior engineers are frustrated that their architecture decisions aren't being respected - by the AI or, increasingly, by junior developers who copy what the AI produces.
What Bob should do - role-specific action plan
Sarah is tracking developer time allocation and discovers that code review is consuming a growing share of engineering time - not because there's more code to review, but because more corrections are needed per PR. When she investigates, the pattern points to AI-generated code that violates architectural conventions that were never written down.
What Sarah should do - role-specific action plan
Victor is exhausted from code review. He writes the same 10 review comments every week - the same architectural corrections, the same pattern guidance, the same "don't do this, do that" feedback. He's started to wonder if there's any point to having AI tools if they're generating more review work than they save.
What Victor should do - role-specific action plan
Further Reading
5 resources worth reading - hand-picked, not scraped
From the Field
Recent releases, projects, and discussions relevant to this maturity level.