The Eagle in Chains: How America is Creating an AI Opening for Britain
America's statist interference in AI development and AI companies means that its long-uncontested tech dominance may now be over. Britain must take advantage of this opportunity.
It is thanks to America’s encouragement of free enterprise that the world’s tech haven - Silicon Valley - sprang up between the Californian mountains. America allowed tech to develop on its own terms; founders were free to compete, with no ceiling imposed on their companies’ growth. That liberty, however, is now under dire threat. Betting on America’s uncontested tech dominance may no longer be so prudent; by the US administration’s own hand, the door has been forced open for other nations to compete in what has long been staunchly America’s field. Britain must take advantage of this opportunity.
Anthropic-Pentagon fallout
The most prominent casualty is Anthropic, the AI company behind the LLM Claude. A few months ago, most outside the tech sphere had never heard of it, yet it had created AI so effective that the Pentagon chose Claude as its LLM of choice. What seemed a workable contract has morphed into a damning exposé of the ailing future of liberty in the US. One would not expect to see, as Dean Ball aptly labelled it, “corporate murder” committed under the star-spangled banner. But in February, the US Department of War attempted to kill Anthropic. Secretary for War Pete Hegseth declared the company a “supply chain risk” - a designation tantamount to branding it a threat to national security, an unhinged label for a service that has enabled the US offensive in Iran. Despite aiding the US in its military endeavours, Anthropic became the first American company ever to receive the designation, whose purpose was to blacklist foreign adversaries - not those who help combat them. Anthropic is currently challenging the designation’s lawfulness in court.
As if the private company were a deadly disease, Hegseth put it in quarantine, declaring that Anthropic could not conduct any commercial activity with any contractor, supplier, or partner doing business with the United States military. This left $180 million worth of private deals out in the cold. The treason the company had committed was refusing a contract stipulating that Claude could be put to “any lawful use” - a phrase that flirted with the potential for mass domestic surveillance and autonomous drone strikes. Why this is deeply unsettling for American business and American life needs no explanation.
The US administration was determined, by any means necessary, not to let Anthropic walk away unharmed. A paradox of approach, typical of a state desperate for control, emerged once it became clear that Anthropic’s compliance could not be secured through negotiation. Days before the “supply chain risk” designation, the administration threatened to invoke the Defence Production Act - a Korean War-era emergency statute - to compel Anthropic’s cooperation. Emergency statutes are seldom repealed; instead they are expanded to fit the bespoke needs of the administration in power - a lesson we have learnt from the economist Robert Higgs. The US administration was prepared to brandish all the knives of the state - and the bells may be tolling for freedom of association where we thought it most secure.
The market, however, came back with a vengeance, proving that it cannot be beaten by the executive. The day after the designation, Claude soared to number one in the App Store, unseating ChatGPT - a sign that consumers equally condemned OpenAI’s quick seizure of the contracts Anthropic vacated. All the efforts of the state cannot beat the invisible hand, which may well wave on the exodus of tech from its long-time home.
The eagle is tagged, and its wings clipped
Indeed, despite parading with the same elephant as Reagan, the Trump administration never was economically liberal, nor even postured as such. But it wouldn’t be sensationalist to deem these actions anti-American, at least by our understanding of what has fostered the nation’s success throughout its lifetime.
This administration has remarkably succeeded in violating the very principles of what America is - or so we have been led to believe: trust in the free market, the ability to exit without the state bending the law against you, and the preservation of a civil society independent of the executive. Its statist behaviour leaves the “land of the free” hobbling in apathy, exiled by the dirigisme that has taken hold of the American right.
America’s difficulty may be Britain’s opportunity
Beyond ideology, there is pragmatic significance to the Anthropic case. An executive fiat that severs a company’s commercial relationships and designates it a national security risk sends a clear message to every tech enterprise in America: any of them may find themselves in the crosshairs of political interference. As Higgs also taught us, amidst an unnerving period of regime uncertainty, perhaps the US can no longer be innovation’s refuge.
Britain has a great AI opportunity. The rule of law, after all, is still intact on this side of the Atlantic, and there is no Defence Production Act to be threatened. That alone could be attraction enough for AI to move. In a rare moment of lucidity, London mayor Sadiq Khan wrote to Anthropic, offering to facilitate the expansion of the company’s presence in London. Google’s Gemini was spearheaded in London by Google DeepMind, which remains in King’s Cross, and the city is considered Europe’s AI capital, boasting hundreds of high-momentum AI startups. In 2025, AI startups captured a third of all British venture capital.
Europe is also vying for Anthropic’s attention. On paper, Anthropic’s focus on responsible AI, which led to the Pentagon’s blacklisting, seems to align with the EU’s strong-arming approach to AI regulation. But this ignores the difference between “regulation” and “responsibility”. Often, regulation is actually inimical to responsibility; this is especially true for a fast-moving technology like AI, where trapping it in outdated rules removes the very possibility of responsible frontier research.
Britain understands this and offers real possibilities for cutting-edge research. The AI Growth Lab places AI in a regulatory sandbox, allowing it to grow without pre-emptive regulation blocking its way. The Advanced Research and Invention Agency (ARIA), independent of ministerial control, avoids the dangers of petty political interference and is perhaps the most promising recent institutional development in British science policy. Its mandate grants the freedom to conduct speculative research - exactly the kind that attracts the best researchers. The proof of Britain’s research prowess is in the pudding: OpenAI has committed to building its largest research presence outside the US in London.
But Britain has built a ceiling
The dynamism of Britain’s AI scene and sandboxed innovation will eventually run into the tomes of regulation hot off the Whitehall press. Though not as forceful as America’s - more pathetic, perhaps - the British government has ushered in its own clandestine creep of statism, one that will turn away those seeking the freedom to build beyond the ceiling imposed by the state. AI research has cut through, but the assumption that any problem can be regulated away still looms over Britain, and must be broken down before it sprawls too far.
AI founders are forced to navigate an expansive and unpredictable obstacle course of regulation before Whitehall jobsworths will deem them compliant. Before their product can touch the market, their funding has to be stretched to cover sprawling legal compliance: the Online Safety Act, the Investigatory Powers (Amendment) Act, data protection obligations, and a patchwork of sector-specific guidance. Britain needs to realise that AI is not merely a subject to be researched; it has unprecedented commercial potential that deserves better than Whitehall’s regulatory ire.
Two other hurdles remain: tax and energy. Ireland is the usual competitor on attractive tax policy, vindicated by OpenAI’s placement of its European HQ in Dublin. Ireland’s offering is the simplest: a 12.5% corporation tax rate, which allows companies to keep significantly more of what they earn than Britain’s 25% (admittedly, OECD Pillar Two raises Ireland’s rate to 15% for companies with annual revenues over €750 million). Ireland chose a low-tax policy and reaped the benefits; Britain didn’t, and has suffered the losses. On energy, it is well known that Britain bears the highest industrial electricity prices among IEA countries. This is a real curb on data centres, which are instead beginning to pop up in France, whose nuclear doctrine has prepared it best for the future.
The solution to tax couldn’t be simpler: cut the corporation tax rate. The solution to the energy bottleneck requires a long-term strategy, which seems almost unthinkable for a British government. But to begin, we must sever unconditional devotion to Net Zero and allow nuclear to compete on its merits.
A holistic AI strategy, moving beyond research, demands a political patience to which a government chasing cheap headlines will never commit. It is tempting for politicians to treat electoral insecurity by demonstrating activity - to constantly prove themselves by accumulating fragments of regulation and tax policy. Avoiding that chase for validation will require confidence in a competitive, lightly governed economy, letting Britain build its AI base while we wait - and hope - for the eagle to be set free.
Jennifer Holly is a student at Magdalen College, University of Oxford. She volunteers with Fighting for a Free Future and is committed to the fight for free markets and civil liberties. She can be followed on X through @jenniferaholly.



