The Battle for Sovereign AI
In an increasingly fractured geopolitical landscape, whose AI is the United Kingdom actually running on? And what happens when access to it is no longer guaranteed?
This month I had the privilege of speaking at CYNAM 25.3: Breaking Point, alongside speakers from GCHQ and the Cabinet Office. The event brought together cybersecurity practitioners, policy professionals, and technology leaders to wrestle with some uncomfortable questions about national resilience.
My talk, "AI Without Borders: The Battle for Sovereign AI in 2025", focused on one question in particular: in an increasingly fractured geopolitical landscape, whose AI is the United Kingdom actually running on? The slides are available here.
We are one executive order away
The most underappreciated systemic risk in the UK’s current AI posture is political, not technical. A single executive order from the US administration could materially restrict access to US-produced foundational AI models for non-US entities. This is not hypothetical.
The same legislative machinery that has progressively tightened semiconductor export controls, restricted GPU sales to specific regions, and constrained access to advanced compute infrastructure provides full authority to extend those restrictions to model weights, APIs, and training data. Washington is treating advanced AI as a strategic asset, not a commodity service. The UK's dependency on a small number of US hyperscalers and foundation model providers for critical public services, defence-adjacent functions, and financial infrastructure makes this a national resilience issue.
The CLOUD Act: the risk inside the contract
A related, and perhaps less well-understood, risk is structural rather than political: the US CLOUD Act.
Passed in 2018, the Clarifying Lawful Overseas Use of Data Act empowers US law enforcement agencies to compel US-based companies to produce data stored anywhere in the world (including EU and UK data centres), regardless of local data protection law and regardless of contractual commitments to the contrary.
The critical point here is that the relevant trigger is not where the data is hosted, but who the hosting company is. If a cloud provider, AI platform, or managed service has a US-registered parent company or topco, the CLOUD Act applies to the data that entity manages or has access to, even if it sits in a London or Frankfurt data centre, and even if your contract says otherwise.
For organisations handling sensitive personal data, commercially sensitive IP, or anything touching national security functions, this creates a structural data sovereignty gap that cannot be closed by contractual insulation alone. The only reliable mitigation is to use a provider whose corporate structure falls entirely outside US jurisdiction, or to operate the infrastructure yourself.
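The decision rule here is simple enough to state precisely. The following sketch, with entirely hypothetical provider names and a deliberately simplified model of corporate structure, illustrates why a supplier-screening process has to ask about the ultimate parent entity rather than the data centre location:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    data_centre_country: str      # where the data physically sits
    ultimate_parent_country: str  # jurisdiction of the topco

def cloud_act_in_scope(provider: Provider) -> bool:
    # The trigger is the hosting company's jurisdiction,
    # not the data centre's location or the contract terms.
    return provider.ultimate_parent_country == "US"

# A London data centre does not help if the topco is US-registered:
print(cloud_act_in_scope(Provider("ExampleCloud", "UK", "US")))   # True
print(cloud_act_in_scope(Provider("AlbionHost Ltd", "UK", "UK"))) # False
```

Real corporate structures are messier than a single country field, of course; the point is only that hosting location is the wrong question to anchor the assessment on.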
This is one of the more significant, and frequently glossed over, reasons why the sovereign AI conversation matters for sectors well beyond defence.
The fork in the road
The UK faces a genuine strategic choice, and it is not binary.
Path One: Continued Dependence. Integrate into US-dominated AI ecosystems, benefit from cutting-edge capability, accept vulnerability to export controls, pricing shifts, and the geopolitical weaponisation of technology access.
Path Two: Sovereign Capability. Invest in domestic AI infrastructure and open-weight models, build resilience against external disruption, retain control over critical national technology and data.
Most organisations will, quite reasonably, occupy a point on the spectrum between these poles rather than one extreme. The practical question is: for which specific functions does sovereignty matter, and what does the right architecture look like for each?
The open-weights opportunity
The infrastructure challenge facing the UK is real. Training frontier models requires compute resources that currently sit predominantly in American hands. The gap in cluster capacity, energy availability, and investment appetite between the UK and the US hyperscalers is substantial.
But there is a credible pathway to meaningful sovereignty that does not require matching hyperscaler investment: open-weight models.
Models like Meta's Llama, Mistral, and Alibaba's Qwen have become sufficiently capable, particularly for reasoning and inference tasks, that they represent viable alternatives to proprietary foundation models for a significant range of use cases. Critically, they can be deployed in UK data centres, run by UK-registered businesses, on infrastructure that sits entirely outside US legal jurisdiction.
The capability gap relative to frontier proprietary models is real (roughly 6 to 18 months, depending on the task domain), and there is a reasonable argument that the gap in multimodal and complex reasoning tasks may be widening as proprietary developers leverage their compute advantages. However, the key question is not whether open models match GPT-4o or Claude on every benchmark. It is whether they are good enough for the specific functions that require data sovereignty and operational resilience.
For many of those functions, they already are, and the operational profile is increasingly attractive. Quantised open-weight models can run effectively on far more modest GPU hardware than they required even 12 months ago, substantially reducing the compute cost of sovereign deployment.
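The arithmetic behind that cost reduction is straightforward: weight memory scales linearly with bits per weight. A rough back-of-the-envelope calculation (ignoring KV cache, activations, and runtime overhead, so real requirements are higher):

```python
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GPU memory for model weights alone
    (ignores KV cache, activations, and runtime overhead)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# A 70B-parameter model:
print(round(weight_vram_gb(70, 16)))  # fp16 weights: ~130 GB
print(round(weight_vram_gb(70, 4)))   # 4-bit quantised: ~33 GB
```

Going from fp16 to 4-bit takes a 70B model from multi-GPU territory down to something a single large accelerator can hold, which is the difference between a hyperscaler bill and a rack a UK business can own.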
A tiered approach: sovereignty where it matters
Blanket sovereignty across all workloads is neither realistic nor necessary. The practical framework is tiered:
1. Critical Sovereignty. Defence, intelligence, critical national infrastructure. Full domestic control, no foreign-jurisdiction providers, mandatory red-teaming of all models regardless of origin.
2. Strategic Capability. Healthcare, finance, essential public services. UK-jurisdiction providers strongly preferred, open-weight models where capability is sufficient, architectural insulation from CLOUD Act exposure required.
3. Selective Independence. Research, education, commercial applications. Mix of open and proprietary tools with clear governance, data residency controls, and model provenance tracking.
4. Pragmatic Integration. Consumer services, non-sensitive applications. Proprietary tools acceptable where capability advantage is clear and data sensitivity is genuinely low.
The error most organisations are currently making is applying tier four thinking to tier one and tier two workloads because proprietary tools are faster to deploy. That calculus changes materially once legal and geopolitical risk is properly priced in.
The window is closing
Three forces are narrowing the window for meaningful action.
Regulatory lock-in. AI governance standards crystallising now around model evaluation, safety certification, and procurement frameworks will persist for years. Late entrants face compliance burdens designed around incumbents.
Ecosystem entrenchment. Every additional integration of a proprietary AI system into UK enterprise or public sector infrastructure raises the cost and disruption of future migration. Path dependency compounds monthly.
Capability acceleration. The performance gap between frontier proprietary models and open-weight alternatives may widen as hyperscalers deepen their compute advantages. The catch-up trajectory becomes harder, not easier, over time.
Procurement decisions and infrastructure investments being made now will determine Britain’s AI trajectory for the next decade. Organisations and policymakers treating sovereign AI as a second-order concern will find that by the time they prioritise it, the choices have already been made for them.
Why these conversations matter
The conversations happening in rooms like CYNAM matter precisely because the people in them understand that technology decisions are not purely technical. They are strategic, legal, and geopolitical.
The UK has genuine strengths to build on: world-class AI research, a mature cybersecurity community, strong financial services and healthcare sectors that create natural demand for sovereign AI capability, and a legal and regulatory environment that, if handled well, could become a differentiator rather than a constraint.
But strengths are not destiny. The question is not whether to engage with this. It is whether to engage before or after the window closes.
Got comments?
We'd love to hear your thoughts on this article. Email us at contact@yellowrad.io.