Agent-aware coding conventions (explicit > implicit)
How to rewrite your team's coding conventions to be machine-readable - precise, example-driven, and structured for AI consumption rather than human reading.
- CLI agents (Claude Code, Codex) are the primary coding interface for 50%+ of feature work
- Per-team or per-repo rules files exist and are maintained with code review
- Coding conventions are written as explicit, agent-parseable rules (not implicit tribal knowledge)
- Agent usage is tracked per developer and per repository
- Agent instruction files follow a standardized template across the organization
Evidence
- CLI agent session logs or telemetry showing primary usage
- Rules files in repository with commit history showing regular updates
- Coding conventions document cross-referenced from agent instruction files
What It Is
Agent-aware coding conventions are conventions documented with the explicit goal of being consumed by an AI agent, not just read by a human. The principle is simple: AI agents don't pick up implicit conventions through osmosis. They don't learn by sitting in code reviews. They don't absorb culture through Slack. They follow instructions - and if the instructions are vague or assumed, the AI will make confident guesses that violate your team's actual intentions.
At L3 (Systematic), teams recognize this asymmetry and respond by making their conventions explicit. "Explicit > implicit" is not a general principle about documentation - it's a specific claim about what AI agents require. A human developer reading "use consistent naming" can infer what that means from examples in the codebase. An AI agent reading "use consistent naming" has no idea what that means and will apply generic patterns from its training data.
The transformation is practical: take each implicit convention - things everyone "just knows" - and rewrite it as a precise, example-driven rule. "Use camelCase for variables" becomes "Use camelCase for all variable and function names. Use PascalCase for class names and React component names. Use SCREAMING_SNAKE_CASE for module-level constants. Examples: getUserById, UserProfile, MAX_RETRY_COUNT." The second version is actionable for an AI agent; the first is not.
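As a concrete sketch, the rewritten rule might live in a rules file such as CLAUDE.md. The section heading and exact wording below are illustrative, not a prescribed template:

```markdown
## Naming

- Use camelCase for all variable and function names: `getUserById`
- Use PascalCase for class names and React component names: `UserProfile`
- Use SCREAMING_SNAKE_CASE for module-level constants: `MAX_RETRY_COUNT`
```

Because every rule names a case style and shows an example identifier, an agent can apply it mechanically instead of guessing from training-data defaults.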
This work is also valuable independent of AI. Conventions that are precise enough for an AI to follow are clear enough for a new human developer to follow as well. Agent-aware documentation is better documentation, full stop.
Why It Matters
The quality of AI agent output is directly bounded by the quality of explicit conventions. This is the core L3 insight:
- Eliminates the largest source of agent errors - most agent mistakes at L2 are not model failures; they're convention ambiguity failures that better documentation solves
- Scales tribal knowledge - implicit conventions locked in senior engineers' heads are finally captured in a form that benefits every developer and every AI agent
- Reduces review friction - when conventions are explicit and the AI follows them, reviewers stop finding style violations and focus on logic and design
- Makes conventions enforceable - explicit conventions can be referenced in automated checks and lint rules; implicit conventions cannot
- Directly measures context quality - the rate of agent misfires (convention violations in AI-generated code) is a direct measure of how explicit your conventions are; improving explicitness decreases misfire rate
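The enforceability point above can be sketched as a tiny lint-style check. Everything here is a hypothetical illustration, assuming the naming conventions from the text; the function name, categories, and regexes are not any real linter's API:

```typescript
// Hypothetical sketch: turn the explicit naming conventions into a
// mechanical check. Assumes the rules stated in the text:
//   camelCase            -> variables and functions (getUserById)
//   PascalCase           -> classes and React components (UserProfile)
//   SCREAMING_SNAKE_CASE -> module-level constants (MAX_RETRY_COUNT)
type NameKind = "variable" | "class" | "constant";

const patterns: Record<NameKind, RegExp> = {
  variable: /^[a-z][a-zA-Z0-9]*$/,
  class: /^[A-Z][a-zA-Z0-9]*$/,
  constant: /^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$/,
};

// Returns true when `name` follows the convention for its kind.
export function checkName(name: string, kind: NameKind): boolean {
  return patterns[kind].test(name);
}
```

An implicit convention cannot be expressed this way at all; once it is explicit, the same rule text drives both the agent instruction file and the automated check. For example, `checkName("getUserById", "variable")` passes while `checkName("get_user_by_id", "variable")` fails.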
The counterintuitive insight is that the work of making conventions explicit for AI agents benefits humans more than the AI. It forces teams to surface the tacit knowledge that experienced developers have accumulated but never written down - the "everyone knows you don't do it that way" patterns that confuse new joiners and confound agents alike.
Run an "implicit convention audit" by asking a new team member (or using an AI agent with no CLAUDE.md) to implement a small feature from scratch. Every place where their implementation differs from "what we would have done" is an implicit convention that needs to be made explicit.
Getting Started
6 steps to get from here to the next level
Common Pitfalls
Mistakes teams actually make at this stage - and how to avoid them
How Different Roles See It
Bob is frustrated that AI-generated code consistently fails code review on the same issues. Reviewers keep leaving comments like "we don't use this pattern," "this library is deprecated," and "this should use the centralized error handler." The same AI mistakes repeat across developers and across weeks.
What Bob should do - role-specific action plan
Sarah has been tracking code review cycle time and has noticed that PRs with significant AI involvement take longer to merge than expected - not because the code is functionally wrong, but because it consistently violates style and pattern conventions. Reviewers spend time on style corrections that feel avoidable.
What Sarah should do - role-specific action plan
Victor has been doing this informally for months. When he notices the AI making the same mistake twice, he immediately adds an explicit rule to CLAUDE.md. His team's AI misfire rate is dramatically lower than other teams, and his code reviews are mostly about logic rather than style.
What Victor should do - role-specific action plan
Further Reading
5 resources worth reading - hand-picked, not scraped
From the Field
Recent releases, projects, and discussions relevant to this maturity level.