“I insist that you give me the access codes at once…please,” said the British air attaché Lionel Mandrake to the American General Ripper, who had just launched a pre-emptive nuclear strike. The General had become entirely convinced that communists had fluoridated the water supply to reduce the “purity” of the American people. Mandrake, lacking any formal power, must resort to personal charm, shortly followed by downright begging, to avert apocalypse. Much of the humour in Steve Coogan’s recent West End performance of Dr. Strangelove comes from these attempts.
It echoes the Schmittian tenor of present-day US relations. However strongly people in Britain feel, or however erroneous they think US policy is, it holds no sway over events. Mandrake embodies this powerlessness. Rarely does anyone point out this kind of ineffectiveness, whether out of politeness or indifference, but for a moment it has been laid bare again. The Leader of the Free World has told the President of an invaded country, “With us, you have the cards. Without us, you don’t have any cards.”
From this comes the task at hand: to re-evaluate on what basis national sovereignty is upheld. How does AGI, and the ‘compressed 21st century’ it will bring, change this? How much should we value sovereignty anyway? How does one avoid being Lionel Mandrake?
The negotiations to end the war in Ukraine provide a lens. The key question has been to what extent Europe can backstop Ukraine as the US pauses its involvement. The US commitment to any negotiated settlement is uncertain: perhaps it will provide a de facto security guarantee through a minerals deal, but would this hold off a Russian invasion? Perhaps it provides a de jure security guarantee, but it is unclear whether it would remain committed if this were tested again. The incoming Undersecretary for Defence, Elbridge Colby, wrote in his 2021 book, The Strategy of Denial:
[T]he United States might very well not fill the gap in Eastern NATO left by any European unwillingness to strengthen their own defense efforts. Indeed, my argument in this book is that the United States should not plug these gaps. If China succeeds in its focused and sequential strategy in Asia, it can establish hegemony over the world's most important region. If Russia succeeds in a fait accompli in Eastern Europe, it will call NATO into question and open the East to Moscow's predominance, but it will not be able to dominate the wealthiest parts of the continent.1
He could not be clearer about American intentions here!
Without either of these US guarantees, to what extent would a 20,000-strong European peacekeeping force in Ukraine be respected? The French, Germans, and Ukrainians negotiated the Minsk Accords in 2015; Russia later reneged. If a settlement were to fail, to what extent would Europe be able to make up the shortfall in US support?
The EU and the UK could find the money if they had to. Together, they have an annual GDP of more than 20 trillion euros, while over three years the US Congress appropriated $175 billion for Ukraine and provided $65.9 billion in military support.
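A rough back-of-the-envelope check makes the point (treating the euro and dollar as roughly at parity for simplicity, which is an approximation):

\[
\frac{\$175\,\text{bn} \,/\, 3\ \text{years}}{\text{€}20{,}000\,\text{bn per year}} \approx \frac{58.3}{20{,}000} \approx 0.3\%\ \text{of annual GDP}
\]

That is, matching the entire US appropriation would cost Europe on the order of a third of a percent of its output each year.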

One has to wonder, why is there such a weak exchange rate between money and sovereignty? Why does European backing seem so weak by comparison?
The crucial difference is that Europe cannot provide Ukraine with the same state-of-the-art capabilities. The nature of war is changing. Either you need “cheap mass” (lots of inexpensive drones, for example) or exquisite capabilities. Ukraine’s drone manufacturing is larger than that of any other European country, and the US was providing its state-of-the-art capabilities.
Second-rate European capabilities are a poor substitute for the very best. American Patriot missiles have a longer range than alternative European air defences and can neutralise faster-moving missiles. Likewise, American counter-battery artillery has a longer range, and is actually produced at scale. American electronic warfare offers more generalised drone and precision-missile jamming, whereas European countries can only offer point solutions. American intelligence, recently paused, offers real-time visibility of attacks, where Britain can offer only slower, higher-latency alternatives. Starlink continues to run, though if Elon Musk were to turn it off, the alternatives are significantly worse: Starlink has 7,000 satellites, whereas the European replacement has just 600. In sum, if Ukraine continues to fight beyond the next couple of months, it will do so with patchier, shallower, and lower-scale defences.
In this case, Ukraine’s sovereignty rests on deep supply chains for “cheap mass” and guaranteed access to the very best capabilities, without which it has no cards.
How will this change in the future?
The “Compressed 21st Century”
The most important change to national power will be the development of powerful AI systems.
In the most aggressive view, Dario Amodei, the CEO of Anthropic, has written that AI systems with the cognitive capabilities of a Nobel Prize-level scientist in all domains could be created “as early as 2026, though there are also ways it could take much longer”. In some domains, he thinks this could lead to a compressed 21st century—100 years of progress in just a decade. Meta’s Chief Scientist, Yann LeCun, has expressed the most sceptical view of any lab leader: he thinks that human-level AI could take a decade. However, Mark Zuckerberg has also said that AI systems will be able to perform the work of a “mid-level software engineer at Meta” by the end of 2025. We should be preparing for very fast progress.
Already, the public state of the art outperforms human ML engineers on some tasks, may have written 42% of OpenAI’s changes to its codebase, and scores comparably to PhD-level experts on tests of scientific expertise. For introductions to technical AI progress, see “AGI is an engineering problem” and “on o1”. Crucially, even if AI progress plateaued at human level, it would be an enormously important tool. Some have speculated that it will be possible to run millions of copies, each processing information much faster than humans can.
The most critical step is what comes after human-level AI. When AI systems can automate all the steps of the AI research and development process, including re-training improved copies of themselves, there could be a very fast acceleration in AI capabilities. This period of recursive self-improvement has been termed an “Intelligence Explosion”. In our view, this will be bottlenecked on the most aggressive time horizons (~2 years), but it is possible further out.
How does powerful AI affect national power?
Some researchers and AI lab leaders have written that whoever reaches the Intelligence Explosion first might be able to parlay this lead into a decisive strategic advantage over all other countries.2 The thinking goes that this advantage could be used to create a unipolar world order, negotiated or otherwise. This high-level abstraction is useful to keep in mind, but in a more concrete manner there are three ways AI will change national power.
First, AI is dual-use. It can be turned into a weapon much more easily than previous general-purpose technologies, like electricity or computers. In a recent paper, Eric Schmidt, the former Google CEO, and his coauthors suggest AI cyberweapons would be able to “suddenly and comprehensively destroy a state’s critical infrastructure”. AI systems could also be used in drone jamming, targeting, and stealth capabilities.
Second, just as AI systems will be able to automate all steps of the AI research process, they will also be able to augment or take over other R&D processes. Think: drones, robots, sensors, chips, missiles. An essay by a former OpenAI researcher summarised this:
Imagine if we had gone through the military technological developments of the 20th century in less than a decade. We’d have gone from horses and rifles and trenches, to modern tank armies, in a couple years; to armadas of supersonic fighter planes and nuclear weapons and ICBMs a couple years after that; to stealth and precision that can knock out an enemy before they even know you’re there another couple years after that.
That is the situation we will face with the advent of superintelligence: the military technological advances of a century compressed to less than a decade.
For this reason, Eric Schmidt’s paper also suggests that some AI “superweapons” could undermine mutually assured destruction, which keeps the nuclear balance in check. AI could be used to create a “transparent ocean”, in which submarines can no longer operate in stealth; it could enable a nuclear power to find its adversary’s land-based nuclear launchers, or to deceive its adversary about its intentions or capabilities. The delicate equilibrium currently depends on a robust escalation ladder, which AI systems could shake.
Third, AI will boost productivity across almost all industries. In a recent book, Technology and the Rise of Great Powers, Jeffrey Ding makes the case that national power shifts in previous Industrial Revolutions resulted from deep, broad deployment across many sectors, rather than from the eureka moment of discovery. We have written previously about the rate of deployment we expect through R&D and the cognitive economy. The economic advantage from AI could be more important in the short term, as many of the military applications of AI depend on very capable systems. Over time, though, differential adoption and productivity would have a compounding effect on any country’s economic power. (It is important to note, however, that an Intelligence Explosion would reduce the relative importance of this factor.)
Whether the most dangerous capabilities are unlocked in two years or ten, the path is clear: AI will be totally essential for military and economic power.
What does this mean for the world order?
In a far-sighted essay from 2018, AI Nationalism, Ian Hogarth predicted the emergence of a dependency that “would be tantamount to a new kind of colonialism”, whereby the world is split into countries without frontier AI capabilities, who are forced to depend economically and militarily on the countries that have them. This is sometimes summarised as being an “AI taker” or an “AI maker”. Such thinking builds on the work of Kai-Fu Lee, who wrote in his 2018 book AI Superpowers:
I fear this ever-growing economic divide will force poor countries into a state of near-total dependence and subservience. Their governments may try to negotiate with the superpower that supplies their AI technology, trading market and data access for guarantees of economic aid for their population. Whatever bargain is struck, it will not be one based on agency or equality between nations.
At present, capabilities are more widely diffused than the kind of ‘superintelligence-in-a-bottle’ which Ian Hogarth and Kai-Fu Lee seem to have in mind. However, this depends on AI labs near the frontier continuing to make their best capabilities available, whether open-source or through an API. As we have written previously, it seems probable that the gap between the actual frontier and what AI labs make available to the public will grow with capabilities.
While the UK styles itself as an ‘AI superpower’, or at least as wanting to be one, there are no UK companies with state-of-the-art capability in any major step of the production of general-purpose AI. (This would mean capability in energy, chip manufacturing equipment, chip fabrication, AI accelerator design, grid connection, gigawatt-scale datacentre capacity, datacom and telecommunications.) On what basis would the UK negotiate its access to frontier capabilities?
It could look something (slightly) like this:
[Enter scene. The US President and staff, with AI lab leaders, sit across from the UK Prime Minister and staff.]
The US President kicks off: “We’d like to make a deal for your access to our frontier capabilities. For too long America has been taken advantage of by its allies. Would you be able to give us some additional training capacity for our AI labs?”

If the lobbying in 2025 was successful, the Prime Minister would be forced to say, “Unfortunately not, Mr President; we decided to make it illegal to train models under our copyright rules.”

The President: “Not to worry, our American companies will continue to train their models on the work of UK creatives in the US instead; it matters not. Do you have any datacentre capacity for inference they might be able to use instead?”

Again the Prime Minister would be forced to respond: “Alas, it’s ‘no’ again, I’m afraid. When we were deciding whether to build datacentres, we blocked their construction to preserve the view from nearby motorway bridges. However, we can offer you a large population of rare bats, if you need to repopulate the places where you built datacentres.”
“That’s a shame, Prime Minister. I saw you announced reforms to improve planning for datacentres; if not completed datacentres, can you at least offer us your future capacity?”
“Mr President, you must understand that in 2025, our grid people said they were ‘very confident that we can accommodate the increasing power demand that would come from AI’, so unlike you, we did not double our grid.”
Exasperated, the President responds, “In 2025, the projections showed that AI accelerator orders in 2030 could require 300 gigawatts globally. What did you think was going to happen?” He sighs, and moves on: “I am told that the wait for a grid connection in the UK is falling from 10 years to 8 years. Is there any chance we could at least have a grid connection in a few years?”
“Ah, again, unfortunately, the only reason the grid connection queue is shrinking is that our national operator has barred new entries to the queue.”
The President: “Do you have any industrial manufacturing capacity at all; either for chips or for robots?”
“Ah, again, Mr President, we have the highest industrial energy prices in the world, and we chose to become a ‘high-skill, high-wage’ economy that doesn’t focus on low value-added tasks like manufacturing. However, we did become a clean energy superpower, and our economy is focused on high value-added tasks like making films and providing financial services. Do you have any use for these things?”
“Well, Prime Minister, we have trained US models on the entire corpus of British films, so we can now sell the ideal British film back to you. And our models are already extremely good at augmenting financial services in New York, so we expect London to become less important to us over time.”
“What can we offer you then?”
The President pauses, and looks up for a minute, takes a short breath, and says, “It would be great for American tourists who are rich from the AI wealth to be able to land more often at Heathrow. Anything you can do here?”
[End scene. Author’s note: Some artistic license was taken for effect. Also, some readers may note that Google DeepMind is based in London, but since it is a US company this does not seem to provide any leverage; and Arm designs a chip for each NVIDIA H100 server, but it only handles non-core tasks like system management, so it seems reasonable to imagine there is no strategic benefit.]
Sovereignty is a market failure.
To begin to find a solution, it is first worth looking back to ask: how did the UK become so dependent? In 1962, two years before Stanley Kubrick created the ineffectual Lionel Mandrake, the former US Secretary of State Dean Acheson commented that “Great Britain has lost an empire and has not yet found a role”. This question was never really answered; the UK just followed the US course on neoliberalism. In effect, the answer was left to intellectuals at the University of Chicago and the Mont Pelerin Society.
In the neoliberal conception, values and beliefs remain in the private sphere, and in the public sphere there is just a minimal state to uphold the market. The big question, of what we value collectively, was left to the invisible hand. As Thatcher put it, “There is no such thing as society.” Just as in AI research, we picked an objective and hill-climbed towards it.
The UK has done this to the extreme. In investing terms, the UK took on very high factor exposure to globalisation, becoming an exporter of services and making fewer and fewer things.

During the supply chain crunch in 2021, Ryan Petersen wrote that the issues were caused by an obsessive focus on return on equity:
“To show great ROE almost every CEO stripped their company of all but the bare minimum of assets. Just in time everything. No excess capacity. No strategic reserves. No cash on the balance sheet. Minimal R&D. We stripped the shock absorbers out of the economy in pursuit of better short term metrics.”
Britain has “done a Boeing”: outsourced its supply chain and forgotten how to make things. Now the plane is falling apart while we are flying it. In 2008, the UK was richer than the US per head; now it is poorer than all but the poorest US state. The North of England has become poorer than former communist countries like East Germany and Poland. We eked out the gains of financialisation, but we didn’t make anything new in the real world. It turns out that a lot of value exists in the connective tissue between steps in the supply chain, because when you understand the whole process you can innovate. This is how SpaceX and Tesla have done so well.
Emmanuel Macron described the error of the neoliberal consensus in 2019, which applies equally to Britain:
“Europe has forgotten that it is a community, by increasingly thinking of itself as a market, with expansion as its end purpose. This is a fundamental mistake, because it has reduced the political scope of its project, essentially since the 1990s. A market is not a community. A community is stronger: it has notions of solidarity, of convergence, which we’ve lost, and of political thought.”
Hollowing out your industries, in pursuit of better GAAP metrics for quarter-end, is not just a bad economic decision, it is a spiritual hollowing out. There is no longer a political project or direction or values; we are “just individuals” in a fragile, exposed, competitive, global economy. Clearly this is not all there is. And for whatever ‘else’ might be, sovereignty is a necessary precondition. Sovereignty is not priced by the market so it cannot be valued by the market alone.
Sovereignty, to do what?
In some sense, being sovereign is intrinsically good. Even if an AI system could run the world “more optimally”, or exactly as humans would, it would be a disappointing outcome. The option value, the freedom to choose otherwise, is worthwhile. But aside from this, it is useful to reflect on what ends sovereignty will serve, when we think about why it is worth upholding.
One reason that Britain might have struggled to find a role in the second half of the 20th century, as Acheson pointed out, is that there is not clearly a “British project” in the same way there is an American experiment. The United States’ founding was explicitly a project in self-government based on democracy, individual liberty, and the rule of law; in opposition to what it viewed as the tyranny of the Old World. Its self-conception as “the last best hope of earth” is both a useful fallback and self-corrective. The same sense of purpose, or direction, can be found in Britain too; if motivated as a contrast…
Given the UK’s weak position, the economically optimal thing to do would be to become the 51st state, if the US would accept it. But if any politician suggested joining, there would probably be a revolt. One has only to look at the response in Canada to the Trump Administration’s suggestion that it might join the Union. Just this week, in Mark Carney’s first address as Canadian Prime Minister, he said: “Canada will never, ever be part of America”.
What explains this strong reaction, especially when there are so many advantages to joining in economic terms?
The most compelling explanation is that Britain, and others, have a slightly different flavour of the Western project, despite sharing a lot with their American cousins. To make two observations about the distinctiveness of Britain…
First, it has incredible longevity. ‘England’ has been a nation for over a millennium. Only Denmark and Japan can make comparable claims. From this comes a steadier, more rooted culture. Perhaps the aristocracy, with their focus on lineage and the preservation of tradition, were the original longtermists! This is combined with a Whiggish consensus for improvement. The economic historian Anton Howes found that the Industrial Revolution happened in Britain, not elsewhere, because of an “improving mentality”. Joel Mokyr wrote, too, that British workers wanted to accumulate ‘useful knowledge’ and experiment pragmatically. In investing terms, buying Britain is buying a compounder: half a percent of productivity growth a year, over centuries, adds up. (It is possible to tolerate large drawdowns in the long run.)
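The compounding arithmetic is easy to understate. At half a percent a year:

\[
(1.005)^{100} \approx 1.65, \qquad (1.005)^{500} \approx 12.1
\]

That is, output per head grows by roughly two-thirds over a century, and more than tenfold over five.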
Second, across this long period, the people have been unusually immune to extremism. Some have suggested this is because the elite are unusually responsive to the people. In George Orwell’s essay, The Lion and the Unicorn, he wrote, “The nation is bound together by an invisible chain…let popular opinion really make itself heard, let them get a tug from below that they cannot avoid feeling, and it is difficult for them not to respond.” While not strictly popular opinion, the abdication of Richard II, the restoration of the monarchy, and the Glorious Revolution are all unusual cases of a leader giving up power in response to elite views. Likewise, Robert Tombs noted in The English and Their History, “It is hard to think of any major improvement in England since Magna Carta [1215] brought about by violence…[m]any of the things we consider pillars of liberty — the common law, trial by jury, habeas corpus, religious toleration — came not from popular protest but from politics of the Crown developed by royal judges.”
Corruption, by any international standards, is minor. There was a ‘scandal’ when the Prime Minister received suits. Another Prime Minister was criticised for redecorating Downing Street. While it might have gone on too long after the COVID lockdown parties were revealed, there was eventually a cascade of resignations by Conservative ministers and the leader was replaced. The system of informal principles worked.
The compressed 21st century is likely to be an enormously turbulent period. When I think about the things that could go wrong—pressure to deploy models which could be misaligned, power grabs, international conflict, enormous inequality, or gradual disempowerment—it seems clear that the UK has something to offer. That is, to bring to bear its flavour of the Enlightenment project on the development and governance of AI. To be the standard-bearer for reasonableness, patience, and common sense, with a Whiggish eagerness for improvement; to complement the American frontier, dare I say cowboy, spirit. It is extremely important that we get this right: Elon Musk and Geoffrey Hinton have both said there is a 20% chance that AI kills us all.
The UK has a fair-minded tradition of scientific inquiry, has made public goods available for the world before—like common law, the joint stock corporation, and the parliamentary system—as AI should be, and has a different emphasis to America, which is worth having too. Who else will project the spirit of Locke, Hume, and Mill into the lightcone of the universe?
That, or there are two other options: join as the 51st state, or become a cold, wet version of Portugal.
Making technology to uphold sovereignty
Just as the US has used the dollar as a tool of statecraft, so too will countries use state-of-the-art capabilities as a foreign policy tool. The US was able to change the ruler of Iran by leveraging international banks’ access to dollars, and perhaps the war in Ukraine will be “switched off” by the withdrawal of US capabilities. In the future, if you run someone else’s models, on someone else’s servers, made using their tools, you are not in control. As Sam Currie highlights in his excellent recent piece, during the pandemic the US attempted to seize all Moderna vaccines and diagnostic supplies manufactured in the US. Only when Germany threatened to withhold the reagents made by its domestic firms was this avoided.
From this, the goal is clear. A country upholds its technological sovereignty not by trying to produce everything domestically (which would just lead to subpar capabilities) but by having strategic, state-of-the-art leverage in some areas, to guarantee access to all necessary capabilities on good terms.
What are the necessary capabilities? A paper by Jeffrey Ding and Allan Dafoe provides a framework for determining the logic of strategic assets. In their rubric, three features of a technology determine its importance: how valuable it is economically or militarily, to what extent it creates benefits or costs that companies don’t capture (and so would be underinvested in), and to what extent the benefits or costs can be ‘nationalised’ by the country where it is produced. Three ‘logics’ amplify the strategic importance further: the cumulative logic (whether initial advantages grow over time), the infrastructure logic (whether it supports many technologies or sectors), and the dependency logic (whether it is at risk from concentrated supply or potential disruption).
This is why the foundation model layer is so important. It ticks all the boxes for importance. Foundation models will be a central input into all future frontier science and technology progress, into almost all processes with a cognitive element, and will have military applications. The benefits of general-purpose technologies spread far throughout the economy. While countries cannot ‘nationalise’ open-weight models which have already been released, labs can withdraw API access, impose usage limits, or not release models at all. Next, being early to develop foundation models has compounding returns: once the automation of AI R&D has begun, it will be almost impossible to catch up later. It will be like an ‘infrastructure layer’ for cognitive work (“a steam train for the mind”), and the frontier is made in just two countries.
Beyond this layer, there are five questions of vital importance for all countries:
Do you have abundant electrons?
Do you have abundant FLOP?
Do you have the most capable and abundant tokens?
Do you have the cheapest, and most capable, robots and drones?
Do you have the lowest latency and highest throughput communication networks?
To simplify: energy, chips, models, robots, drones, and networking. How secure is the supply chain for each of these? On what terms is your supply guaranteed? We are all believers in the legalistic global order while the sun is shining. Let’s hope our counterparties are too, if the storm comes.
Conclusion
While the overall tone of this essay has been to embrace issues of national power, sovereignty, and defence, this is not the impression I hope to leave. None of these instrumental goals is for its own sake. As I hope to have shown, if the UK has sovereign capability, it is good not just for the UK, but as a counterweight to excess variance in the world. Things are dangerous now, and the development of powerful AI could make the next decade even more turbulent. A sovereign AI effort, I hope, could help to reduce race dynamics between great powers, and shift the emphasis from a potential arms race towards a scientific endeavour that would benefit all humanity.
Optimistically, AI sovereignty for Britain could be the lynchpin of a new pluralistic, tolerant, and peaceful world order.
The Strategy of Denial (2021), Elbridge Colby, p.276
Superintelligence (2014), Nick Bostrom; Situational Awareness (2024), Leopold Aschenbrenner.