Delivery · L3 (Systematic) · Governance & Compliance

Minimum viable audit trail: model, timestamp, context, approver

The minimum viable audit trail (MVAT) is the smallest set of structured metadata that, when captured consistently for every AI-assisted change, creates a defensible provenance record.

  • Minimum viable audit trail is captured per AI-assisted change: model identifier, timestamp, context description, human approver
  • Policy-as-code enforces compliance rules in CI (OPA or equivalent)
  • Compliance gates run on every PR to in-scope repositories
  • Audit trail fields are validated by CI (missing fields fail the build)
  • Policy exceptions are logged and require follow-up within 48 hours

Evidence

  • Sample commit or PR metadata showing model, timestamp, context, approver fields
  • OPA policy configuration in CI pipeline
  • Compliance gate pass/fail logs

What It Is

The minimum viable audit trail (MVAT) is the smallest set of structured metadata that, when captured consistently for every AI-assisted change, creates a defensible provenance record for compliance and incident investigation purposes. The four core fields are: model (which AI system was used, including version), timestamp (when the AI was invoked and when the change was reviewed), context (what the AI was asked to do and in what codebase context), and approver (which human reviewed and approved the AI-generated output before merge).
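Concretely, the four fields can be recorded as Git commit trailers and parsed programmatically. A minimal sketch in Python, assuming illustrative trailer keys (`AI-Model`, `AI-Timestamp`, `AI-Context`, `AI-Approver`) and sample values - these names are not a mandated standard:

```python
# Illustrative MVAT schema as Git commit trailers.
# The trailer key names are assumptions for this sketch, not a standard.
REQUIRED_FIELDS = ("AI-Model", "AI-Timestamp", "AI-Context", "AI-Approver")

def parse_mvat_trailers(commit_message: str) -> dict:
    """Extract MVAT trailers (``Key: value`` lines) from a commit message."""
    fields = {}
    for line in commit_message.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in REQUIRED_FIELDS:
            fields[key.strip()] = value.strip()
    return fields

# Hypothetical commit message with all four trailers present.
message = """Add retry logic to payment webhook handler

AI-Model: claude-sonnet-4-20250514
AI-Timestamp: 2025-06-01T14:32:00Z
AI-Context: Implement exponential backoff for webhook retries (TICKET-123)
AI-Approver: jane.doe@example.com
"""

record = parse_mvat_trailers(message)
missing = [f for f in REQUIRED_FIELDS if f not in record]
print(record["AI-Model"])  # claude-sonnet-4-20250514
print(missing)             # []
```

Trailers keep the audit data inside the commit itself, so it travels with the history and can be read later with standard tooling rather than living in a separate system.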

At L3 (Systematic), the MVAT is captured consistently - not because developers remember to fill it in, but because the process makes it automatic or near-automatic. The difference between L2 (basic audit, manually filled PR fields) and L3 (minimum viable audit trail) is enforcement and structure: L3 audit trails block merges if the required fields are absent, use structured formats that can be queried programmatically, and are captured at the tool level rather than requiring developer manual entry.
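The enforcement difference can be sketched as a small CI gate. This is an assumption-laden illustration - the field names, record shape, and exit-code convention are hypothetical, and in a real pipeline OPA or an equivalent policy engine would play this role:

```python
# Sketch of an L3 compliance gate: missing MVAT fields fail the build.
REQUIRED_FIELDS = ("AI-Model", "AI-Timestamp", "AI-Context", "AI-Approver")

def gate(commit_fields: dict) -> tuple:
    """Return (passed, missing_fields) for one commit's MVAT record."""
    missing = [f for f in REQUIRED_FIELDS
               if not commit_fields.get(f, "").strip()]
    return (not missing, missing)

# Illustrative record with two fields absent; a real pipeline would parse
# this from commit or PR metadata rather than hard-coding it.
ok, missing = gate({"AI-Model": "claude-sonnet-4",
                    "AI-Timestamp": "2025-06-01T14:32:00Z"})
if not ok:
    print(f"MVAT gate failed, missing: {missing}")
    # In CI, exiting non-zero here (sys.exit(1)) is what blocks the merge.
```

The point of the sketch is the failure mode: the gate does not ask the developer to remember anything; it simply refuses incomplete records.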

The four-field schema is not arbitrary. Each field answers a distinct question that matters in different scenarios. Model answers "what system generated this?" - critical for understanding capability limitations and for tracing systematic AI errors to model versions. Timestamp answers "when was this AI involvement?" - necessary for correlating AI-generated changes with deployment incidents, model version changes, and regulatory windows. Context answers "what was the AI asked to do?" - essential for understanding whether the AI was operating within its intended use case and for reproducing the scenario during investigation. Approver answers "who was responsible?" - preserves human accountability in the chain even as AI automation increases.

The MVAT lives in the commit history and is queryable. A well-implemented MVAT means that in response to "show me every AI-generated change to the payment processing module in the last six months, with the model version, the human approver, and the ticket that prompted the change," you can produce that report in minutes rather than days. This is the operational value of the audit trail that goes beyond compliance theater.
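The "minutes rather than days" claim follows from the structure: once records are structured, the report is a filter. A minimal sketch, with a hypothetical in-memory record shape standing in for whatever store the commit history is indexed into:

```python
# Sketch of querying structured MVAT records by module path and time window.
# The record shape and values are illustrative assumptions.
from datetime import datetime, timedelta, timezone

records = [
    {"path": "payments/webhook.py", "model": "claude-sonnet-4",
     "approver": "jane.doe", "ticket": "TICKET-123",
     "timestamp": datetime(2025, 5, 20, tzinfo=timezone.utc)},
    {"path": "billing/invoice.py", "model": "claude-sonnet-4",
     "approver": "sam.lee", "ticket": "TICKET-456",
     "timestamp": datetime(2024, 9, 1, tzinfo=timezone.utc)},
]

def ai_changes(records, path_prefix, since):
    """Every AI-assisted change under path_prefix since a given date."""
    return [
        (r["model"], r["approver"], r["ticket"])
        for r in records
        if r["path"].startswith(path_prefix) and r["timestamp"] >= since
    ]

six_months_ago = datetime(2025, 6, 1, tzinfo=timezone.utc) - timedelta(days=182)
print(ai_changes(records, "payments/", six_months_ago))
# [('claude-sonnet-4', 'jane.doe', 'TICKET-123')]
```

The same query against free-text PR comments would require reading every PR by hand, which is the gap between L2 and L3 in practice.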

Why It Matters

  • Creates a queryable provenance record - audit trails in structured formats can be queried by security tools, compliance dashboards, and incident investigation scripts; unstructured PR comments cannot
  • Enables incident attribution - when a production issue correlates with AI-generated code, the MVAT lets you identify which model version, which developer, and which task prompt were involved - cutting a forensic investigation from days to minutes
  • Satisfies SOC2 change management controls - SOC2 Trust Service Criteria CC8 (Change Management) requires evidence of approval and testing for changes; the MVAT's approver and timestamp fields directly satisfy this requirement for AI-assisted changes
  • Builds the dataset for model version impact analysis - by recording model versions consistently, you can later analyze whether changes from specific model versions had higher defect rates - a feedback loop that improves AI adoption decisions
  • Scales to automated enforcement - once the schema is defined and enforced in CI, every team and every repository benefits automatically without additional discipline from developers
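The model version impact analysis mentioned above reduces to a group-by once model versions are recorded consistently. A minimal sketch, assuming a hypothetical join of MVAT records with defect data:

```python
# Sketch of model version impact analysis: defect rate per recorded model
# version. The joined record shape and version names are illustrative.
from collections import defaultdict

changes = [
    {"model": "model-v1", "had_defect": True},
    {"model": "model-v1", "had_defect": False},
    {"model": "model-v2", "had_defect": False},
    {"model": "model-v2", "had_defect": False},
]

def defect_rate_by_model(changes):
    """Map each model version to its share of changes linked to a defect."""
    totals = defaultdict(lambda: [0, 0])  # model -> [defects, changes]
    for c in changes:
        totals[c["model"]][0] += int(c["had_defect"])
        totals[c["model"]][1] += 1
    return {m: d / n for m, (d, n) in totals.items()}

print(defect_rate_by_model(changes))  # {'model-v1': 0.5, 'model-v2': 0.0}
```

None of this analysis is possible if the model field is free text or absent, which is why the field is part of the minimum schema rather than a nice-to-have.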

Getting Started

6 steps to get from here to the next level

Common Pitfalls

Mistakes teams actually make at this stage - and how to avoid them

How Different Roles See It

Bob, Head of Engineering

Bob's SOC2 auditors have asked specifically about AI change management controls - they want to see evidence that AI-generated changes are reviewed by a human and that the review is documented. Bob has the PR template disclosure fields from L2, but the auditors want something more structured and queryable than free-text PR comments.

What Bob should do - role-specific action plan

Sarah, Productivity Lead

Sarah wants to use the MVAT data to build model version performance analysis - correlating which Claude model version generated code with downstream defect rates and review round counts. This would be the first data-driven input to AI tool version adoption decisions.

What Sarah should do - role-specific action plan

Victor, Staff Engineer - AI Champion

Victor runs complex multi-session agent workflows where a single feature involves multiple Claude Code sessions: one for the implementation, one for tests, one for documentation updates, and sometimes a separate session for a security review. The four-field MVAT schema covers single sessions well but doesn't naturally capture multi-session work.

What Victor should do - role-specific action plan
