Delivery › Governance & Compliance › L2 (Guided)

EU AI Act awareness

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, entering into force in August 2024 with phased compliance deadlines through 2027.

  • Official AI tool policy exists and is communicated to all developers
  • Basic audit tracking is in place (which developers use which AI tools)
  • EU AI Act awareness training or briefing has been conducted
  • AI tool policy is reviewed at least annually
  • Approved tool list is maintained and accessible
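The "approved tool list" and "basic audit tracking" criteria above can be sketched in a few lines. This is a minimal illustration, not a recommended system: the tool names, `ToolUsageLog` class, and record fields are all hypothetical, and a real implementation would persist entries rather than keep them in memory.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical approved-tool list (the L2 criterion: maintained and accessible).
APPROVED_TOOLS = {"github-copilot", "claude-code", "internal-review-bot"}

@dataclass
class ToolUsageLog:
    """Basic audit tracking: which developers use which AI tools."""
    entries: list = field(default_factory=list)

    def record(self, developer: str, tool: str) -> bool:
        # Log every use; flag whether the tool is on the approved list.
        approved = tool in APPROVED_TOOLS
        self.entries.append({
            "developer": developer,
            "tool": tool,
            "approved": approved,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return approved

    def report(self) -> dict:
        # Aggregate per-developer tool usage for an audit dashboard or report.
        usage: dict = {}
        for e in self.entries:
            usage.setdefault(e["developer"], set()).add(e["tool"])
        return usage

log = ToolUsageLog()
log.record("alice", "claude-code")        # on the approved list
log.record("bob", "unvetted-llm-plugin")  # flagged: not approved
```

Even a sketch like this produces the two evidence artifacts listed below: a usage report and a record of approved versus unapproved tool use.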

Evidence

  • Published AI tool policy document with distribution records
  • AI tool usage tracking dashboard or report
  • EU AI Act training completion records

What It Is

The EU AI Act (Regulation 2024/1689), the world's first comprehensive legal framework for artificial intelligence, entered into force in August 2024 with phased compliance deadlines through 2027. It applies to any organization that places AI systems on the EU market or uses AI systems affecting EU residents - which means any organization with EU customers, EU employees, or EU operations is potentially in scope.

EU AI Act awareness at L2 means the engineering organization understands what the regulation covers, which parts apply to their systems and workflows, and what the compliance obligations are for the AI systems they're building and using. It does not yet mean full compliance infrastructure - that's L3 and beyond. It means the team has done the reading, classified their AI use cases by risk tier, and identified the obligations that apply.

The Act operates on a risk-tiered model. Prohibited AI systems (Article 5) include social scoring, real-time biometric surveillance in public spaces, and systems that exploit psychological vulnerabilities - almost certainly not in scope for an engineering team. High-risk AI systems (Annex III) cover specific use cases in critical infrastructure, employment decisions, education, law enforcement, credit scoring, and health. The most likely engineering relevance here is AI systems used in hiring, code-driven credit decisions, or healthcare applications. General-purpose AI models (Chapter V, Articles 51-56) like GPT-4 and Claude are subject to transparency requirements when integrated into downstream products. Limited-risk systems require transparency disclosures (Article 50): users must know they're interacting with AI. Most AI coding tools fall into the minimal-risk category and carry no specific obligations.
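The tiering above can be expressed as a first-pass triage checklist. The sketch below is purely illustrative: the keyword lists are simplified assumptions drawn from the paragraph, not a legal classification, and real scoping requires counsel reviewing Article 5 and Annex III directly.

```python
# Illustrative first-pass triage of an AI use case into the Act's risk tiers.
# Keyword lists are deliberately simplified; they are not a legal test.
PROHIBITED = {"social scoring", "real-time biometric surveillance"}
HIGH_RISK = {"hiring", "credit scoring", "critical infrastructure",
             "education", "law enforcement", "health"}

def triage(use_case: str, uses_gpai: bool = False,
           user_facing_ai: bool = False) -> str:
    case = use_case.lower()
    if any(term in case for term in PROHIBITED):
        return "prohibited (Article 5)"
    if any(term in case for term in HIGH_RISK):
        return "high-risk (Annex III)"
    if uses_gpai:
        # Integrating a general-purpose model inherits transparency duties.
        return "GPAI transparency obligations (Chapter V)"
    if user_facing_ai:
        return "limited risk: disclose AI interaction"
    return "minimal risk: no specific obligations"

print(triage("AI-assisted hiring screen"))   # lands in the high-risk tier
print(triage("chat assistant", uses_gpai=True))
```

The value of even a rough triage like this is that it forces each AI use case to be named and assigned a tier - the core of what L2 awareness asks for.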

For most engineering organizations, the immediate practical impact of the EU AI Act is in three areas: (1) if you're building AI-powered products that customers use, those products may be subject to the Act's requirements; (2) if you use AI systems in decisions that materially affect EU employees (performance management, hiring, work allocation), those uses require specific safeguards; (3) if you integrate general-purpose AI models (like Claude or GPT-4) into your products, you inherit transparency obligations from the model provider and may need to pass them to your customers.
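Point (3) - passing transparency obligations downstream - can be as simple as attaching a disclosure to every model-generated response. The sketch below is a hypothetical shape, not any real provider's API: `generate_answer` is a stand-in for a model call, and the field names are assumptions.

```python
def generate_answer(prompt: str) -> str:
    # Stand-in for a real call to a general-purpose model (e.g. Claude, GPT-4).
    return f"(model output for: {prompt})"

def answer_with_disclosure(prompt: str) -> dict:
    # Wrap model output with both a machine-readable flag and a human-readable
    # notice, so downstream users know they are interacting with AI.
    return {
        "content": generate_answer(prompt),
        "ai_generated": True,  # machine-readable disclosure flag
        "notice": "This response was generated by an AI system.",
    }

resp = answer_with_disclosure("summarize this contract")
```

Keeping the disclosure in the response payload (rather than only in the UI) means every downstream integration of the product inherits it by default.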

Why It Matters

  • Enforcement timelines are real - the EU AI Act is not theoretical future regulation; prohibited systems were banned in February 2025, high-risk system requirements apply from August 2026, general-purpose AI model obligations from August 2025. Organizations that have not started compliance work are already behind on some provisions
  • Fines are material - penalties of up to 35 million euros or 7% of global annual turnover (whichever is higher) for prohibited AI violations; up to 15 million euros or 3% of turnover for other violations. These are GDPR-scale enforcement risks
  • Compliance requires documentation that takes time to build - high-risk systems require technical documentation, conformity assessments, registration in the EU database, and ongoing monitoring. Building these from scratch after a notice of inquiry is not feasible; they need to be built proactively
  • Customer contracts are already changing - EU enterprise customers are starting to require AI Act compliance representations in their contracts. Engineering organizations without awareness of their obligations cannot make accurate representations
  • The Act affects what AI tools you can use in your delivery pipeline - using AI systems that make or influence employment decisions about EU employees triggers high-risk requirements even if those systems are internal

Getting Started

Six steps to get from here to the next level

Common Pitfalls

Mistakes teams actually make at this stage - and how to avoid them

How Different Roles See It

Bob - Head of Engineering

Bob's company sells software to EU enterprise customers. The sales team has started getting contract addendums from EU customers asking for AI Act compliance representations. Bob has been CCed on several of these and doesn't know how to respond - he doesn't know which parts of the product use AI, which risk tier they fall in, or what obligations apply.

What Bob should do - role-specific action plan

Sarah - Productivity Lead

Sarah uses AI tools internally to help analyze developer productivity data - including data about EU-based team members. She's been asked by the HR team whether the AI-assisted performance analysis she's been running falls under the EU AI Act's employment provisions.

What Sarah should do - role-specific action plan

Victor - Staff Engineer (AI Champion)

Victor is building an internal tool that uses Claude to analyze code quality trends and generate recommendations for which engineers should take on which types of work (matching developers to tasks based on their apparent strengths). He thinks of it as a productivity tool, not an AI employment system.

What Victor should do - role-specific action plan