Development · L3 Systematic · Coding Agent Usage

Rules files per-team/per-repo

How to evolve from a single project-level CLAUDE.md to a layered system of context files tailored to each team's tech stack and conventions.

  • CLI agents (Claude Code, Codex) are the primary coding interface for 50%+ of feature work
  • Per-team or per-repo rules files exist and are maintained with code review
  • Coding conventions are written as explicit, agent-parseable rules (not implicit tribal knowledge)
  • Agent usage is tracked per developer and per repository
  • Agent instruction files follow a standardized template across the organization
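The last two criteria above can be checked mechanically. A minimal sketch of such an audit, assuming a hypothetical template (the required section names, the packages/*/ layout, and the function names are all invented for illustration):

```python
# Sketch: audit a repo's rules files against a shared template.
# REQUIRED_SECTIONS is a hypothetical org template, not a real standard.
from pathlib import Path

REQUIRED_SECTIONS = ["# Stack", "# Conventions", "# Testing"]

def audit_rules_file(path: Path) -> list[str]:
    """Return the template sections missing from one rules file."""
    if not path.is_file():
        return ["file missing"]
    text = path.read_text(encoding="utf-8")
    # Simple substring check; a real audit might parse headings properly.
    return [s for s in REQUIRED_SECTIONS if s not in text]

def audit_repo(repo_root: Path) -> dict[str, list[str]]:
    """Check the root CLAUDE.md and every per-package CLAUDE.md."""
    findings = {}
    for rules in [repo_root / "CLAUDE.md", *repo_root.glob("packages/*/CLAUDE.md")]:
        missing = audit_rules_file(rules)
        if missing:
            findings[str(rules.relative_to(repo_root))] = missing
    return findings
```

Run in CI, a script like this turns "files follow a standardized template" from a policy statement into a failing check.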

Evidence

  • CLI agent session logs or telemetry showing primary usage
  • Rules files in repository with commit history showing regular updates
  • Coding conventions document cross-referenced from agent instruction files

What It Is

Per-team and per-repo rules files are the systematic evolution of the L2 CLAUDE.md approach. Instead of one root-level instruction file for the entire codebase, each team and repository has tailored context files that reflect its specific technology, patterns, and constraints. A frontend team's rules differ from a backend team's. A data pipeline repository has different conventions than an API repository. In a monorepo, individual packages have their own CLAUDE.md files that supplement (and can override) the root-level file.

Claude Code supports this natively through hierarchical CLAUDE.md resolution: it reads the root-level file, then the directory-level file for the current working path, building a layered context picture. Cursor supports similar per-directory layering through nested rules directories (.cursor/rules; the older single .cursorrules file applies repo-wide). This layered system means the AI can hold both "this is our company's general approach to TypeScript" and "this specific package uses a different testing pattern because of legacy constraints" simultaneously, without confusion.
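A minimal sketch of the layering (paths, rules, and tool names are invented for illustration, not taken from any real repo): the root file carries company-wide defaults, and a package-level file narrows or overrides them for its own directory.

```markdown
<!-- /CLAUDE.md (root layer, applies everywhere) -->
# Engineering conventions
- TypeScript with strict mode; avoid `any`.
- New code ships with unit tests (Vitest).

<!-- /packages/billing/CLAUDE.md (package layer, supplements and overrides the root) -->
# Billing package
- Legacy constraint: tests in this package use Jest snapshots,
  not the Vitest setup described in the root file.
- Money values are integer cents; never use floating point.
```

An agent working inside packages/billing/ sees both layers; an agent working elsewhere sees only the root and is never confused by billing-specific exceptions.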

At L3 (Systematic), the proliferation of rules files is intentional and governed. There's a policy for who maintains each file, how conflicts are resolved between layers, and how files are reviewed for accuracy. This transforms context engineering from a one-time setup task into an ongoing discipline - like documentation, but with direct, measurable impact on AI output quality.

The move from single CLAUDE.md (L2) to layered per-team files (L3) typically happens when teams start noticing that the root-level file is either too generic (it doesn't capture team-specific patterns) or too specific (it includes rules that only apply to one module and confuse agents working in others).

Why It Matters

Layered rules files are the foundation of systematic context engineering:

  • Precision improves agent behavior - a backend team's CLAUDE.md that says "use PostgreSQL with Prisma ORM, following patterns in /src/db/" produces far better results than a root-level file that says "we use a database"
  • Reduces cross-team interference - without per-team files, the frontend team's rules about React patterns confuse agents working on the Python backend; scoped files eliminate this
  • Scales with organizational complexity - as the codebase grows and teams diverge, layered files grow with it; a single file becomes an unmanageable compromise
  • Creates team ownership - each team owns and maintains their own rules file, creating accountability and keeping context current
  • Enables specialization without fragmentation - the root-level file ensures company-wide standards; team files add specificity; both layers are active simultaneously

Without per-team files, context engineering hits a ceiling at L2. Large codebases have too much diversity to capture in a single file, and the compromise rules that emerge serve no team particularly well.

Tip

When a developer says "the AI keeps getting X wrong in our module even though CLAUDE.md mentions it," that's a signal that the root-level rule is too general to apply correctly in their context. The fix is a per-directory or per-team CLAUDE.md with a more specific version of the rule.
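For instance (a hypothetical rule, with invented paths): a correct but vague root-level rule becomes reliable only once a per-directory file restates it in the module's own terms.

```markdown
<!-- Root CLAUDE.md: correct but too general to apply reliably -->
- Validate all API input.

<!-- services/payments/CLAUDE.md: the specific version agents can follow -->
- Validate API input with the Zod schemas in src/schemas/;
  never hand-roll validation inside route handlers.
```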


How Different Roles See It

Bob - Head of Engineering

Bob has rolled out CLAUDE.md across all repositories (L2 success), but the files are already showing signs of drift. The frontend team's React conventions are mixed into the same file as backend Python patterns, and different teams keep adding conflicting rules. The file has become a 600-line mess that nobody maintains.


Sarah - Productivity Lead

Sarah has data showing that AI quality varies significantly across teams even though they're using the same tools. The frontend team reports much higher satisfaction with AI suggestions than the backend team. She suspects the difference is context quality, but needs to verify.


Victor - Staff Engineer (AI Champion)

Victor's been advocating for per-team CLAUDE.md files for months. He maintains his own team's file meticulously and has noticed that his team's AI quality is consistently better than teams with generic root-level rules. But getting other teams to own their files has been harder than expected.
