“Who do I call if I want to speak to Europe?” — Kissinger
It was something of an accident that the EU was first to regulate AI. Issues tend to drift up to the European level when they are politically uninteresting to national governments, or when they demand an unpleasant solution or a technical implementation. In October 2020, AI seemed to fit the latter description, and so the drafting process for regulation began.1
ChatGPT’s explosive growth interrupted this process. The earlier draft of the AI Act placed all of the regulatory burden on end-deployers of AI products, not anticipating the shift to foundation models, so additional provisions for ‘General-Purpose AI’ had to be added. But the greater change was one of perception: AI was no longer an issue of low political salience. Once countries noticed that foundation models would be the next general-purpose technology, negotiating the additional provisions became much more difficult. Germany, Italy, and France worried the AI Act would hamstring nascent foundation model providers and railed against regulation. A barebones proposal prevailed in the negotiations, with substantial implementation work left to do.
In hindsight, we believe, the introduction of the AI Act will prove to be the high point of the European Commission’s relative importance in AI policy. A set of unassailable macro forces will pull power away from Brussels:
1. The models will get a lot better, quickly.
2. As this happens, access to powerful AI becomes increasingly important for national productive capacity, so EU member states will face mounting pressure to weaken enforcement of the AI Act and/or make bilateral agreements with AI makers to access the most advanced models.
3. Likewise, access to powerful AI becomes increasingly necessary for security; here too, member states will be minded to make bilateral agreements with AI makers for reliable access to state-of-the-art models.
4. In this context, the Trump administration has made clear it will not tolerate overburdensome regulation of American tech companies by the EU.
5. This combination of forces exacerbates existing headwinds: national governments have grown ever more sceptical of the EU’s approach to tech regulation, with the Digital Services Act and GDPR often blamed for the weakness of Europe’s digital economy.
The Commission has assembled a group of experts to draw up a Code of Practice setting out how the AI Act will apply to the most powerful models. But de facto authority over this process has spread beyond Brussels: national economic interest and transatlantic pressure limit its teeth, and foundation model providers can decline to opt into the Code of Practice altogether. Down this path, they would face alternative case-by-case enforcement of the Act, but who is to say whether the Commission would have the political backing to take dissenters to court? Arriving at a strong, politically achievable code that is a blueprint rather than a cautionary tale is a very thin needle to thread. Perhaps the AI Act’s greatest legacy will be its influence on other jurisdictions, for better or for worse.
The next set of AI policy questions will concern the supply chain, encouraging adoption, and governing agents.2 On these issues, we can expect much higher political salience and therefore much stronger national engagement. There will be a looming threat of falling further behind and of labour market disruption, and countries will need to be able to redistribute between ‘winners’ and ‘losers’. Historically powerful labour unions in France and Germany will make demands through their national parties, where they have a much stronger footprint. Service businesses will demand better access to inference compute and support for adoption initiatives. The EU, meanwhile, is already perceived as having a weak track record on competitiveness and supply chain buildup.
These next issues are likely to remain with national governments, which will move faster in areas of clear national interest and local need. Either EU policies will be dead on arrival in the Council, or they will be greatly influenced by existing national approaches: it is no sign of Brussels’ influence if the EU Parliament passes a law already on the French and German books. Even the purportedly ‘European’ approach at present, the Commission President’s announcement of €200 billion of investment in AI infrastructure, draws on a combination of private funding, member state investment, and EU funds that member states have previously restricted, rather than any discretionary Commission funding.3 This is the kind of approach the Commission President could be referring to with the idea of an EU-led ‘CERN for AI’. But wherever an EU megaproject might seem like evidence of a prominent role for Brussels, it often turns out that any major member state can set off a chain reaction to question its merits, demand local favouritism, or choke off its funding at will. Brussels is hardly in the driving seat.
As with economic policy, when the security and geopolitical implications of AI sharpen, national governments will move to make deals with AI makers. Already, some European countries are treated preferentially in the tiers of US export controls on frontier AI chips; in tier-two countries, commercial orders of GPUs are capped at 50,000 per year. A small number of countries (France, Germany, Italy, the Netherlands, the Scandinavian countries, and perhaps Poland) are likely to be treated preferentially by the US for access to models. The incentive for any one of these actors to defect from the EU negotiating as a bloc will only strengthen as the pace of improvement quickens and the dominance of the technology becomes clearer.

This mirrors the joint European initiative to procure COVID vaccines. The EU remained in lockstep, thanks to the actions of Chancellor Merkel in particular, but the delayed and patchy vaccine rollouts damaged the case for collective action in the future. Received wisdom holds that the EU’s most advanced economies paid the price for this. With the benefit of that hindsight, a new security situation, and fewer Europhiles in national governments, it seems hard to imagine an EU-led approach. EU leaders would need to commit to it unequivocally, and Brussels would need to prove itself worthy of that commitment.
So while the AI policy discourse currently has Brussels as a central actor, transatlantic and intra-European political currents will pull away from this unstable equilibrium as AI gets more capable. If, for whatever reason, you want to dial Europe on AI policy in future, you might well have to call Paris, Berlin, and The Hague instead. Maybe Brussels will get to listen in.
1. GPT-3 was released in June 2020.
2. Long-horizon agents are not well covered by the AI Act.
3. Euractiv has a full breakdown.
I think you underestimate institutional persistence. The AI Office was just set up and has sole authority on GPAI models, creating path dependency that's historically difficult to reverse.
For point 2, could you clarify whether you mean labs won't deploy in the EU due to the AI Act, or that they'll give model weights to specific EU countries through bilateral agreements? I believe these macro forces mostly affect the strictness of enforcement rather than institutional authority. Do you have any examples where tech regulation authority actually transferred back from the EU to member states?
Given current AI developments and timelines, I find it hard to understand why you describe such a scenario without clarifying that it's meant for 2+ years in the future. While member states will compete over AI resources, the COVID vaccine procurement example you mention actually contradicts your argument - it was handled centrally despite similar incentives for fragmentation.
Especially regarding Germany, I don't see evidence of willingness to contribute meaningfully to independent AI policy outside the EU framework. Do you have specific information suggesting otherwise?
I agree with Herbie Bradley's point that compliance costs for large AI labs may not be prohibitive enough to trigger your bilateral agreement scenario in the near term. The most likely outcome seems to be labs doing additional evaluation and reporting work, with approvals coming slightly later than US deployments.
I agree with the general thrust of this, but:
> As this happens, access to powerful AI becomes increasingly important for national productive capacity, so EU member states will face mounting pressure to weaken enforcement of the AI Act and/or make bilateral agreements with AI makers to access the most advanced models.
given the political pressure on the EU Commission to go easy on US companies, and that compliance probably doesn't take too much effort for a large AI lab (unless it's releasing open-source) for the foreseeable future, don't you think the likely outcome is just that labs do a little more evals and reporting logistics work than normal and get approved by the AI Office a month or so later than any US deployment? I'm not sure we should expect bilateral agreements under that scenario.