Basic audit: who uses what
Basic audit at L2 means the organization has established visibility into which developers are using which AI tools, at what frequency, and for what purposes.
- Official AI tool policy exists and is communicated to all developers
- Basic audit tracking is in place (which developers use which AI tools)
- EU AI Act awareness training or briefing has been conducted
- AI tool policy is reviewed at least annually
- Approved tool list is maintained and accessible
Evidence
- Published AI tool policy document with distribution records
- AI tool usage tracking dashboard or report
- EU AI Act training completion records
What It Is
Basic audit at L2 means the organization has established visibility into which developers are using which AI tools, at what frequency, and for what purposes. It's not a comprehensive provenance record (that's L3-L4), but it answers the first-order governance question: who is using AI in our delivery pipeline, and are they using approved tools?
This level of audit typically draws on two sources: tooling vendor dashboards (GitHub Copilot usage stats, Claude for Teams analytics) and PR disclosure fields that developers fill in manually. Together, these give a reasonable picture of adoption without requiring custom instrumentation. The picture is incomplete - vendor dashboards show license seat usage and acceptance rates but not the content of AI interactions, and PR disclosures are self-reported and inconsistently filled in - but it is a genuine improvement over the zero visibility of L1.
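The two-source merge can be sketched as a small join keyed on developer identity. This is a minimal illustration, not a real vendor schema: the field names (login, last_activity, author, ai_disclosed) are assumptions to be mapped onto whatever your seat export and PR template actually contain.

```python
# Sketch: combine a vendor seat export with PR disclosure records into
# one per-developer adoption picture. All field names are hypothetical.

def adoption_picture(seat_export, pr_disclosures):
    """seat_export: list of {"login": str, "last_activity": timestamp or None}
    pr_disclosures: list of {"author": str, "ai_disclosed": bool}"""
    picture = {}
    for seat in seat_export:
        picture[seat["login"]] = {
            "has_seat": True,
            "active": seat["last_activity"] is not None,
            "disclosed_prs": 0,
            "total_prs": 0,
        }
    for pr in pr_disclosures:
        # A disclosing author with no seat is a lead worth following up:
        # AI use is being reported outside the approved tooling.
        row = picture.setdefault(
            pr["author"],
            {"has_seat": False, "active": False, "disclosed_prs": 0, "total_prs": 0},
        )
        row["total_prs"] += 1
        if pr["ai_disclosed"]:
            row["disclosed_prs"] += 1
    return picture
```

Even this crude merge surfaces the two interesting populations: seat holders who never show activity, and authors disclosing AI use without holding a seat.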
The audit data at L2 serves three purposes simultaneously. For compliance, it provides evidence of controlled AI use that can be presented in SOC2 or ISO 27001 audits. For management, it shows which teams have adopted AI tools and which haven't, enabling targeted support. For productivity analysis, it creates the first dataset correlating AI tool use with delivery metrics - even imperfect data starts to reveal patterns when collected consistently over time.
The critical discipline at L2 is to define audit questions before collecting data, not after. Organizations that collect everything they can access and then try to find meaning in it get drowned in noise. The useful audit questions at L2 are specific: Is every developer using an approved tool (not a personal subscription)? Are high-risk repositories showing appropriate AI disclosure rates? Is the distribution of AI use consistent with what developers report in surveys? Start with these questions and collect the data that answers them.
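Encoding the audit questions as explicit checks, rather than browsing dashboards for patterns, keeps the collection honest. A minimal sketch follows; the record shapes and the tool identifiers in APPROVED_TOOLS are illustrative assumptions, not a real export format.

```python
# Sketch: the L2 audit questions as checks over collected usage records.
# Record fields (developer, tool, repo, ai_disclosed) are hypothetical.

APPROVED_TOOLS = {"github-copilot-enterprise", "claude-for-teams"}  # example list

def unapproved_tool_use(usage_records):
    """Question 1: is every developer using an approved tool?
    Returns the developers seen on anything outside the approved list."""
    return sorted({r["developer"] for r in usage_records
                   if r["tool"] not in APPROVED_TOOLS})

def disclosure_rate(prs, high_risk_repos):
    """Question 2: are high-risk repositories showing AI disclosure?
    Returns the fraction of their PRs with a disclosure, or None if
    there is no data (which is itself an audit finding)."""
    relevant = [p for p in prs if p["repo"] in high_risk_repos]
    if not relevant:
        return None
    return sum(p["ai_disclosed"] for p in relevant) / len(relevant)
```

The point of the sketch is the shape, not the code: each question becomes one function with a defined answer, so "collect the data that answers them" has a concrete target.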
Why It Matters
- Makes compliance claims defensible - "we audit AI tool use and here is the data" is qualitatively different from "we have a policy" in an audit; actual usage data gives compliance claims evidentiary support
- Identifies shadow AI persistence - when vendor dashboard adoption numbers are lower than survey-reported AI use, the gap is shadow AI; the basic audit makes the gap visible and measurable rather than theoretical
- Reveals adoption patterns that inform support - some teams will adopt quickly, others will lag; the adoption map tells you where to focus enablement efforts and what barriers are preventing adoption in slower teams
- Creates the measurement baseline for ROI analysis - the same data that supports compliance audits also supports the business case for AI investment; correlating AI tool usage with PR throughput is the foundation of the productivity ROI argument
- Enables proactive issue detection - usage patterns that deviate from policy (tools being used that aren't on the approved list, usage patterns that suggest prohibited data types being processed) are visible in audit data before they become audit findings
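The shadow-AI gap described above reduces to a set difference between two populations you already have: developers who report AI use in surveys, and developers active on approved-tool dashboards. A minimal sketch, assuming both inputs are simple collections of developer identifiers:

```python
# Sketch: make the shadow-AI gap visible and measurable. Inputs are
# illustrative: survey respondents who report AI use, and developers
# showing activity in approved-tool dashboards.

def shadow_ai_gap(survey_ai_users, dashboard_active_users):
    """Developers who report AI use but show no approved-tool activity.
    Returns (sorted list of names, gap as a fraction of reporters)."""
    gap = set(survey_ai_users) - set(dashboard_active_users)
    rate = len(gap) / len(survey_ai_users) if survey_ai_users else 0.0
    return sorted(gap), rate
```

The rate, tracked over successive survey rounds, turns "shadow AI is probably shrinking" into a number that can appear in a compliance report.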
Getting Started
6 steps to get from here to the next level
Common Pitfalls
Mistakes teams actually make at this stage - and how to avoid them
How Different Roles See It
Bob has published the official AI tool policy and procured GitHub Copilot Enterprise for the team. Three months later, the CISO asks for evidence that the policy is being followed and that all AI use is through approved tools. Bob has the vendor dashboard but doesn't know how to answer the question "is anyone still using shadow AI?"
What Bob should do - role-specific action plan
Sarah has access to the Copilot Enterprise dashboard and three months of PR disclosure data. She wants to use this to build the first AI productivity correlation report - but she's not sure her data is good enough to make meaningful claims.
What Sarah should do - role-specific action plan
Victor is the heaviest AI user on the team - his acceptance rate is high, his PR throughput is high, and his code quality metrics are strong. But he's also done things the audit doesn't capture: running Claude Code sessions that produce entire modules, using AI for architecture review, using agent workflows that touch multiple repositories. The basic audit doesn't have a good category for his actual workflow.
What Victor should do - role-specific action plan
Further Reading
5 resources worth reading - hand-picked, not scraped