AI Cookbook

EU AI Act phase 2 enforcement starts today — first fines hit Mistral, Stability AI, three lesser-known startups

Phase 2 of the EU AI Act took effect at midnight Brussels time, and the first enforcement actions arrived before the morning was out. The European AI Office, working with the Spanish AESIA and the French CNIL, opened formal proceedings against five providers — including Mistral and Stability AI — over compliance gaps in the General-Purpose AI obligations. Provisional fines disclosed today total €34.7M; final amounts could climb significantly higher after appeals.

This is the first time the AI Act's teeth have been visible. The provisions everyone debated theoretically through 2024 and 2025 are now operational, and the message from Brussels is unambiguous: the Act is real, the deadlines are not aspirational, and even European-headquartered champions get fined.

What phase 2 actually requires

Phase 2 covers the General-Purpose AI (GPAI) provisions — the Act's core rules for foundation model providers. As of today, GPAI providers must:

  • Maintain technical documentation covering training data sources, compute usage, and known limitations, available to the AI Office on request
  • Publish a copyright-compliance statement demonstrating adherence to the EU Copyright Directive's text and data mining provisions
  • Implement systemic-risk evaluations for models above the 10^25 FLOP training threshold (the "systemic risk" tier) — including red-teaming, adversarial testing, and incident reporting
  • Disclose training data summaries at a level of granularity that the AI Office is still finalizing through a delegated act, but which clearly exceeds "trained on publicly available data"
  • Appoint an authorized representative established in the EU (for non-EU providers), with concrete contact and escalation procedures

The penalty ceiling is the headline-grabbing piece. Top-line fines reach €35M or 7% of global annual turnover, whichever is higher — for the largest providers, that is a meaningful number. Phase 2 fines today are well below that ceiling because they are first-instance compliance failures, not systemic violations.
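The "whichever is higher" rule is simple arithmetic, but it is worth seeing where the crossover sits. A minimal sketch (the function name and example turnover figure are illustrative, not from the Act):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """AI Act GPAI penalty ceiling: EUR 35M or 7% of global annual
    turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# The flat EUR 35M cap applies up to EUR 500M in turnover
# (35M / 0.07); above that, the percentage prong dominates.
print(max_fine_eur(100_000_000))    # small provider: flat cap applies
print(max_fine_eur(2_000_000_000))  # EUR 2B turnover: 7% prong applies
```

The practical upshot: for any provider with more than €500M in annual turnover, the ceiling scales with revenue rather than staying fixed.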

Today's actions

  • Mistral AI (France): provisional fine of €11.2M for incomplete training-data disclosure on Mistral Large 3 and inadequate copyright-compliance statement. Mistral has 30 days to remediate or appeal
  • Stability AI (UK, EU rep in Berlin): provisional fine of €8.4M for failure to file required systemic-risk evaluation on Stable Diffusion 4.0 and inadequate red-team documentation
  • Aleph Alpha (Germany): provisional fine of €5.1M for technical-documentation gaps on Luminous Pro 2
  • Silo AI (Finland, now Cerebras subsidiary): provisional fine of €3.8M for missing EU representative procedures during the transition period
  • Wayve (UK): provisional fine of €6.2M for inadequate disclosure on its driving-domain foundation model

Notable absences: OpenAI, Anthropic, Google, and Meta are not in today's enforcement actions. Two readings. Either the bigger US providers prepared more thoroughly, which is plausible given their compliance budgets, or the AI Office is sequencing its targets and the larger providers are next. Brussels-watchers lean toward the second reading.

The strategic angle

Three observations. First: enforcement starting with European companies is politically clever. The accusation that the AI Act is a protectionist tool aimed at US providers gets harder to make when the first fines hit Paris and Berlin. The AI Office knew exactly what it was doing.

Second: Mistral's position is uncomfortable. The company has positioned itself as Europe's champion frontier lab, lobbied actively against the strictest GPAI provisions, and now finds itself the largest European fine on day one. Internally, the discussion is reportedly less about the €11M (which is survivable) and more about the reputational signal — being out of compliance with the home jurisdiction's flagship AI law is not a story Mistral wanted in front of its Series E investors.

Third: the systemic-risk threshold is going to bite. The 10^25 FLOP threshold was set in 2024 with the assumption that only a handful of frontier labs would clear it. By today, an estimated 14 publicly disclosed models exceed that threshold — and at least 6 more are believed to be over the line but undisclosed. The compliance burden for systemic-risk models is materially higher, and the AI Office has indicated that systematic non-compliance there is the priority enforcement target for Q3.
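For a back-of-envelope sense of who clears the line, the common ~6·N·D rule of thumb (roughly six FLOPs per parameter per training token, covering forward and backward passes) is a useful estimator — though it is an approximation, not the AI Office's official counting methodology, and the model sizes below are purely hypothetical:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOP, the AI Act's GPAI systemic-risk tier

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the ~6*N*D rule of thumb.
    An approximation only; actual accounting may differ."""
    return 6.0 * params * tokens

# Hypothetical: a 400B-parameter model trained on 15T tokens
flops = estimated_training_flops(400e9, 15e12)
print(f"{flops:.1e}", flops > SYSTEMIC_RISK_THRESHOLD)  # well over the line

# Hypothetical: a 7B-parameter model on 2T tokens stays far below it
print(estimated_training_flops(7e9, 2e12) > SYSTEMIC_RISK_THRESHOLD)
```

Under this estimator, almost any current frontier-scale training run lands above 10^25 FLOP, which is exactly why the threshold now captures far more models than the 2024 drafters expected.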

What providers are doing

Compliance investments at the major US labs have ballooned. Reporting from POLITICO Europe and Reuters indicates:

  • OpenAI has roughly 90 staff working on EU AI Act compliance, up from a handful in 2024
  • Anthropic established a Dublin office in Q1 with approximately 40 EU-focused staff
  • Google's combined AI Act team across legal, policy, and engineering exceeds 200 FTE
  • Meta's Llama family compliance team is reportedly the largest at any single provider — over 250 staff between policy, legal, and engineering

The implicit conclusion: large incumbents can afford the compliance overhead. Smaller European providers — exactly the kind of company the Act's authors said they wanted to nurture — face a meaningfully harder time clearing the bar. There is genuine policy tension here, and Brussels acknowledges it but has not yet proposed concrete relief.

What it means for builders

If you ship AI products to EU users, three concrete implications:

  • Provider compliance becomes a procurement question. Enterprise buyers in regulated EU sectors are now asking foundation-model providers for AI Act compliance attestations. Expect this to standardize within months
  • Open-weight strategies face a sharper choice. Releasing weights while disclaiming provider obligations is a hard needle to thread legally. Most open-weight providers will need to clarify their position by Q3
  • Documentation discipline pays off. The single most common failure mode in today's fines is not absence of practice — it is absence of documentation. If you cannot produce evidence of red-teaming, copyright due diligence, or systemic-risk evaluation, you do not have those things in the regulator's view
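The documentation point lends itself to a simple internal check: track which evidence artifacts you can actually produce, and diff that against what the regulator expects. A minimal sketch — the artifact names here are illustrative placeholders, not the AI Office's actual filing categories:

```python
# Hypothetical evidence-manifest check. Artifact names are illustrative.
REQUIRED_ARTIFACTS = {
    "technical_documentation",
    "copyright_compliance_statement",
    "training_data_summary",
    "red_team_report",               # systemic-risk tier only
    "incident_reporting_procedure",  # systemic-risk tier only
}

def compliance_gaps(evidence: set[str], systemic_risk: bool) -> set[str]:
    """Return the required artifacts missing from an evidence bundle."""
    required = set(REQUIRED_ARTIFACTS)
    if not systemic_risk:
        required -= {"red_team_report", "incident_reporting_procedure"}
    return required - evidence

# A provider that has practiced red-teaming but never wrote it down
# still shows a gap -- which is the regulator's view too.
print(compliance_gaps({"technical_documentation"}, systemic_risk=False))
```

The design choice worth noting: the check operates on documents you can hand over, not on activities you performed — mirroring the enforcement pattern in today's fines.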

The bigger frame: the EU's gamble was that a comprehensive regulatory framework, properly enforced, would make the EU a more attractive market for trustworthy AI rather than a hostile one. Today is the first real test of that thesis. Watch the next 90 days closely — the appeals, the systemic-risk enforcement actions, and the response from the largest US providers will tell us whether the gamble is paying off or whether AI Act compliance is going to fragment along provider-size lines.

Sources

  • European Commission press release (May 2, 2026): AI Office opens first phase 2 enforcement actions
  • POLITICO Europe (May 2, 2026): Brussels fines Mistral, Stability AI on day one
  • Reuters (May 2, 2026): EU AI Act enforcement begins with €34.7M in fines
  • Financial Times (May 2, 2026): EU AI Act bites home-grown champions first