Basic linter rules
A shared linter configuration enforced in CI is the fastest, cheapest quality investment a team can make - and the essential foundation for all higher-level quality automation.
- An AI-assisted review tool (CodeRabbit, Qodo, or equivalent) is active on all repositories
- Linter rules are configured and run in CI on every PR
- PRs clearly indicate whether code is AI-generated or AI-assisted (labels, tags, or commit metadata)
- AI review suggestions are triaged (accepted/rejected) rather than ignored
- Linter configuration is committed to the repository and versioned
Evidence
- AI review tool configuration in CI pipeline
- Linter configuration file in repository
- PR labels or commit metadata distinguishing AI-generated code
What It Is
Linting is automated static analysis that enforces code quality rules without running the code. ESLint for JavaScript/TypeScript, Pylint or Ruff for Python, golangci-lint for Go, Checkstyle for Java, RuboCop for Ruby - every major language ecosystem has one or more linting tools. These tools check for a range of issues: syntax errors, undefined variables, unused imports, potential null pointer dereferences, inconsistent formatting, overly complex functions, security anti-patterns, and violations of style conventions.
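A few of these categories can be seen in one hypothetical JavaScript snippet; the comments name the ESLint rule that would flag each line (the snippet itself is illustrative, not from any real codebase):

```javascript
// Hypothetical snippet: each comment names the ESLint rule that flags it.
const retryLimit = 3;       // no-unused-vars: declared but never read

function isZero(n) {
  return n == '0';          // eqeqeq: loose == silently coerces types
}

console.log(isZero(0));     // prints true — the string '0' is coerced to 0
```

The `eqeqeq` example is exactly the kind of real bug a linter catches before any test runs: `isZero(0)` returns `true` even though the argument was compared against a string.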
At L2 (Guided), the team has moved past individual developer choice on linting. A shared linter configuration is committed to the repository, every developer uses the same rules, and - critically - CI fails if linting fails. A PR that introduces lint errors cannot be merged until the errors are resolved. This is the key difference between L1 (where linting might exist on some developers' machines, informally) and L2: the rules are enforced by the build system, not by reviewer diligence.
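As an illustration of "enforced by the build system", a hypothetical GitHub Actions workflow (the equivalent exists in any CI system) makes lint a required check, so a PR with lint errors cannot merge:

```yaml
# Hypothetical CI job: configured as a required check on PRs,
# so the PR cannot merge while the linter exits non-zero.
name: lint
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      # eslint exits with a non-zero status on any error, failing the job
      - run: npx eslint .
```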
Basic linter rules cover the obvious things: code formatting (indentation, line length, spacing), naming conventions, unused variables and imports, unreachable code, obvious potential errors (comparing with == instead of === in JavaScript, for example). The configuration is shared across the team but is deliberately not over-engineered - it catches the things everyone agrees on, without creating friction from overly opinionated rules.
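A baseline along these lines might be sketched in ESLint's flat-config format as follows (the rule selection is illustrative, not a recommendation):

```javascript
// eslint.config.js — a deliberately small shared baseline (illustrative)
module.exports = [
  {
    rules: {
      "no-unused-vars": "error",            // unused variables and imports
      "no-unreachable": "error",            // code after return/throw/break
      "eqeqeq": "error",                    // require === and !== over == and !=
      "camelcase": "warn",                  // naming convention
      "max-len": ["warn", { "code": 120 }], // agreed line-length limit
    },
  },
];
```

Pure formatting concerns (indentation, spacing) are often delegated to a dedicated formatter such as Prettier rather than expressed as lint rules.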
This simplicity is intentional. At L2, the goal is getting the team to agree on and enforce a baseline. The more sophisticated use cases - custom lint rules that enforce architectural decisions - are the domain of L3 (Lint-as-architecture).
Why It Matters
Linting is among the highest-ROI investments in software quality because it catches a predictable class of issues automatically, before any human sees the code:
- Eliminates the lowest-value review comments - "Use const instead of let here," "this import is unused," "missing semicolon" - comments like these waste reviewer attention. Linting handles them automatically, freeing reviewers for substantive feedback.
- Enforces consistency without conflict - Style debates ("tabs vs spaces," "80 vs 120 column limit") are resolved once in the linter configuration, not relitigated in every PR review. The linter is the authority.
- Catches errors before tests - Some lint rules catch real bugs (undefined variables, unreachable code, potential type errors). Finding these during lint is faster and cheaper than finding them during testing or production.
- Makes onboarding easier - New team members don't need to memorize style conventions - the linter tells them when they've violated one. The configuration file serves as machine-readable documentation of team standards.
- Foundation for advanced automation - L3's lint-as-architecture, L4's auto-merge policies, and L5's continuous auto-refactoring all depend on a trustworthy linting infrastructure. You can't build on top of inconsistent quality gates.
The key cultural shift at L2 is accepting that a machine will block your PR. Some developers initially resist having a bot tell them their code is wrong. The right framing is: "The linter is our agreed-upon standards, running automatically. It's what we would have told you in review anyway, just faster."
Start with a well-known shared configuration (Airbnb style guide for JavaScript, Google's Python style guide) and selectively override rules your team disagrees with. This is faster than building from scratch and ensures you're starting from a community-vetted baseline.
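In ESLint's legacy `.eslintrc` format, for example, extending a community baseline and overriding one contested rule looks roughly like this (assuming the `eslint-config-airbnb-base` package is installed):

```json
{
  "extends": "airbnb-base",
  "rules": {
    "max-len": ["warn", { "code": 120 }]
  }
}
```

The team inherits hundreds of community-vetted decisions and only documents its deviations.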
Getting Started
6 steps to get from here to the next level
Common Pitfalls
Mistakes teams actually make at this stage - and how to avoid them
How Different Roles See It
Bob's PRs frequently get comments like "add this import," "unused variable," "use camelCase here." These comments consume reviewer attention and create unnecessary revision cycles. Two of his teams use ESLint, three don't. Codebases from different teams look visibly different in style, which causes friction when developers rotate between them.
What Bob should do - role-specific action plan
Sarah's metrics show that style and convention comments account for about 25% of all PR review comments. She's calculated that eliminating these comments would save approximately 8 minutes per PR reviewed - across 45 PRs per day, that's 6 hours of reviewer time per day. She wants to propose linting as a cost-saving measure but needs to frame it in business terms.
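Sarah's back-of-envelope calculation, spelled out with the numbers from her scenario:

```javascript
// Reviewer-time savings from eliminating style comments (Sarah's figures)
const minutesSavedPerPr = 8;   // estimated review minutes saved per PR
const prsPerDay = 45;          // PRs reviewed per day across teams
const hoursSavedPerDay = (minutesSavedPerPr * prsPerDay) / 60;
console.log(hoursSavedPerDay); // 6
```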
What Sarah should do - role-specific action plan
Victor has been maintaining a personal ESLint configuration that he applies to his code, but it's not enforced on the rest of the team. He keeps leaving comments in review about issues his linter catches automatically on his machine. He's frustrated that the same issues recur every sprint.
What Victor should do - role-specific action plan
Further Reading
6 resources worth reading - hand-picked, not scraped
From the Field
Recent releases, projects, and discussions relevant to this maturity level.