Discussion about this post

Jacob

I think you underestimate institutional persistence. The AI Office was just set up and has sole authority over GPAI models, creating a path dependency that has historically been difficult to reverse.

For point 2, could you clarify whether you mean labs won't deploy in the EU due to the AI Act, or that they'll give model weights to specific EU countries through bilateral agreements? I believe these macro forces mostly affect the strictness of enforcement rather than institutional authority. Do you have any examples where tech regulation authority actually transferred back from the EU to member states?

Given current AI developments and timelines, I find it hard to understand why you describe such a scenario without clarifying that it is meant to play out 2+ years in the future. While member states will compete over AI resources, the COVID vaccine procurement example you mention actually cuts against your argument: procurement was handled centrally despite similar incentives for fragmentation.

Especially regarding Germany, I don't see evidence of willingness to contribute meaningfully to independent AI policy outside the EU framework. Do you have specific information suggesting otherwise?

I agree with Herbie Bradley's point that compliance costs for large AI labs may not be prohibitive enough to trigger your bilateral agreement scenario in the near term. The most likely outcome seems to be labs doing additional evaluation and reporting work, with approvals coming slightly later than US deployments.

Herbie Bradley

I agree with the general thrust of this, but:

> As this happens, access to powerful AI becomes increasingly important for national productive capacity and so EU member states will face mounting pressure to weaken the enforcement of the AI Act, and/or make bilateral agreements with AI makers, to access the most advanced models.

Given the political pressure on the EU Commission to go easy on US companies, and given that compliance probably doesn't take much effort for a large AI lab (unless it is releasing open-source models) for the foreseeable future, don't you think the likely outcome is just that labs do a little more evals and reporting logistics work than normal and get approved by the AI Office a month or so later than any US deployment? I'm not sure we should expect bilateral agreements under that scenario.
