Trusted Data and AI Sovereignty: Building Europe’s Foundation for the Future of Public Sector Modernization
AI sovereignty is no longer just an infrastructure debate. It’s about how institutions establish and exercise control, governance, and trust in the data that drives critical decision-making.
AI is moving beyond experimentation and into increasingly high-stakes environments. For our customers and partners, one big factor sits above the rest: trust. That trust depends on whether AI governance holds up under real-world scrutiny. Against this backdrop, enterprises and government agencies are seeking to establish AI sovereignty. A 2025 McKinsey survey of 300 executives found that 71% characterized sovereign AI as an “existential concern or strategic imperative.”
Those concerns are increasingly shaped by geopolitical tensions, pushing governments to rethink technology and AI sovereignty. That shift is already visible in the EU AI Act’s classification system and obligations for high-risk AI systems (enforceable from August 2026), Australia’s Sovereign Capability Requirements (December 2025), and a Canadian court ruling (January 2026) asserting jurisdiction over OpenAI despite US-based servers. EU member states such as France, Germany, and the Netherlands are also revisiting their AI strategies explicitly through the lens of economic sovereignty.
Reducing dependency on foreign-controlled AI, strengthening institutions, and building national data assets are ways to build a true competitive advantage. Government agencies that have already built trusted data foundations will be the ones that can deploy explainable AI at scale in the future, faster and more safely.
HMRC has recently begun working with Quantexa to enact a sweeping data modernization program. The £175 million, 10-year project will upgrade HMRC’s data foundation and enable sovereign, governed AI at a national scale in one of the public sector’s largest Decision Intelligence deployments. The initiative will drive efficiency across key workflows, including closing the tax gap, identifying tax at risk, and supporting faster customer service. By unifying fragmented data into a trusted, governed foundation, HMRC is setting the standard for digital transformation while maintaining sovereignty, auditability, and control.
The challenge of true control and governance
Nations need independent, secure AI ecosystems, but the global technology marketplace remains interconnected, making absolute AI sovereignty unrealistic.
However, absolute sovereignty is not truly necessary if government agencies retain control over their data, models, and decisions, wherever AI is deployed. One prevailing narrative holds that technological sovereignty is defined by hardware ownership: to be self-sufficient, countries must buy and house physical compute stacks of GPUs, data centers, and networks. (NVIDIA has committed over $100B with over 40 countries to build “sovereign AI factories.”) We view this zero-sum mindset as incomplete because it ignores the risks that arise when the data used for decision-making lacks the context to be trusted.
The real shift requires moving away from an infrastructure-focused view of sovereign AI, and toward the capabilities that underpin an agency’s AI use and level of control. Instead of questioning whether a system uses cloud, on-prem, or hybrid infrastructure, leaders should focus on whether their agencies retain control over data and authority over decisions.
What’s still missing from the sovereign AI debate is the concept of control under distress. Can a public sector agency maintain trust, oversight, and decision-making authority when systems are under pressure during crisis, conflict, regulatory scrutiny, cyber disruption, or operational failure?
For AI to be truly sovereign, governments must have confidence in the inputs, governance over the models, and ownership of the outputs. They need to know what data is being used, how it has been connected, why a model has reached a conclusion, and whether the resulting decision can be explained, audited, challenged, and defended.
This has been central to Quantexa’s mission since inception. We have focused relentlessly on giving our clients and partners control and transparency over their data inputs, the models applied to them, and the applications and decisions they inform. In the public sector, this matters profoundly.
Sovereign AI must not become a black box at national scale. It must be built on trusted data, contextual intelligence, explainable models, and clear accountability, so that even under distress, institutions remain in control of the decisions that shape outcomes for citizens, economies, and national resilience.
With this in mind, governments are recognizing that modernizing legacy systems and building sovereign data capabilities enable more effective public services, reduce fraud and waste, improve resource targeting, and strengthen public trust.
What capabilities matter for sovereign AI
Modernizing the tech stack so agencies can unify data at scale, put it in context, and then use AI to augment and automate decision-making will deliver measurable operational gains. For instance, HMRC’s investment in data modernization is projected to support billions of pounds in improved tax compliance, while cross-agency data collaboration can help reduce duplicate payments and benefit fraud. Real-time, context-enriched data means faster, more accurate decisions at the frontline. Sovereign data and AI should be viewed as the enabler of a more effective, efficient, and trusted agency.
To gain a clear picture of how your agency’s sovereign approach falls within the matrix of capability and control, it’s important to be able to answer these questions:
Can I maintain control without surrendering flexibility? Technology should support deployment flexibility while ensuring decision governance and accountability remain firmly within the agency, not with the platform provider.
Do I maintain clear ownership and accountability? Prioritize approaches that keep data under the agency’s control, with clarity about where it is stored, how it is processed, and how outputs are generated. This avoids architectures that obscure responsibility or entangle data ownership with vendor choices.
Does my AI framework allow transparent, explainable, and auditable decision-making? For AI-driven decisions to be trusted, data must be accurate, auditable, and explainable. Demand capabilities that ensure decisions remain transparent and defensible, even when compute or models operate in third-party environments. (Note that the requirement for explainable AI becomes legally binding, within the EU AI Act, in August 2026.)
Does my framework support interoperability without dependency? Sovereign AI is strengthened by interoperability, so technologies should integrate with major cloud, data, and AI platforms. Operating across best-of-breed tools, including on-prem and air-gapped environments where required, enables innovation without locking the agency into (or out of) a single ecosystem.
Is my framework modular and reversible? Resilience depends on choice, so prioritize architectures that are modular and reversible, meaning systems that can be updated, replaced, or even removed without disrupting data, pipelines, or governance frameworks. If a provider exits a market, faces sanctions, or changes licensing terms, can you continue operating? Can the model be replaced without resetting governance? Modular frameworks reduce long-term dependency and protect against unforeseen changes.
Does it demonstrate proven impact in high-stakes environments? Decisions about technologies and approaches should favor those that are proven in complex, highly regulated, and data-intensive environments, where auditability and operational resilience are non-negotiable. Experience in these environments is a strong indicator of where sovereign AI can operate effectively under real-world pressure.
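The modularity and auditability questions above can be made concrete with a small sketch. In this hypothetical (the class and function names are illustrative, not any real product API), the scoring model sits behind a narrow callable interface so it can be swapped or rolled back without touching the audit layer that records every decision:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

# A scoring model is just a callable: an on-prem model, a cloud API
# wrapper, or a replacement vendor can each satisfy it, so the model
# is reversible without changing the governance code around it.
ScoreFn = Callable[[dict], float]

@dataclass
class AuditedDecisionService:
    model_name: str
    score: ScoreFn
    audit_log: list = field(default_factory=list)

    def decide(self, record: dict, threshold: float = 0.5) -> bool:
        s = self.score(record)
        decision = s >= threshold
        # Every decision is logged with its inputs, model identity,
        # score, and timestamp, so it can later be explained,
        # audited, and challenged.
        self.audit_log.append({
            "model": self.model_name,
            "inputs": record,
            "score": s,
            "decision": decision,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return decision

# Usage: a simple rules-based scorer today; swapping in a different
# scorer tomorrow leaves the audit trail and decision logic intact.
service = AuditedDecisionService(
    "rules-v1", lambda r: 0.9 if r.get("flagged") else 0.1
)
service.decide({"flagged": True})   # scored, logged, and returned
```

The point of the sketch is the separation of concerns: the agency owns the audit log and the decision threshold, while the model behind `ScoreFn` remains replaceable.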
From principles to practice
Ultimately, we’re seeing that agency leaders are no longer as motivated by abstract AI capabilities as they were a few years ago. Having the newest, fastest AI technology is no longer enough, as governments cannot operationalize AI responsibly on fragmented data foundations. AI frameworks that are flexible, accurate, transparent, and operationally resilient are what give leaders and their constituents confidence in their approach.
Within the scope of HMRC’s wider modernization efforts, for instance, a trusted data foundation with operationalized AI delivers necessary capabilities:
Secure data-sharing frameworks between agencies such as HMRC, Department for Work and Pensions (DWP), and border forces in the UK Home Office to create a more complete intelligence picture.
Entity resolution capabilities that can match individuals and organizations across fragmented datasets, reducing duplication and improving the accuracy of targeting and risk assessment.
Knowledge graph technology that provides contextual intelligence, combining real-time and linked operational data so decisions draw on a complete, up-to-date picture of an entity rather than isolated data points.
Strong governance frameworks, including clear accountability structures, audit trails, explainability standards, and policies that make AI-driven decisions transparent and defensible.
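As a deliberately simplified illustration of the entity resolution capability above (production systems use far richer matching than this; the datasets and field names here are hypothetical), records from two sources can be clustered on a normalized name and date-of-birth key:

```python
from collections import defaultdict

def normalize(name: str) -> str:
    # Crude normalization: lowercase, strip punctuation and common
    # titles, then sort tokens so word order does not matter.
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c == " ")
    tokens = [t for t in cleaned.split() if t not in {"mr", "mrs", "ms", "dr"}]
    return " ".join(sorted(tokens))

def resolve(datasets: dict) -> dict:
    """Group records from multiple sources that share a normalized
    (name, dob) key: one cluster per likely real-world individual."""
    clusters = defaultdict(list)
    for source, records in datasets.items():
        for rec in records:
            key = (normalize(rec["name"]), rec["dob"])
            clusters[key].append((source, rec["id"]))
    return clusters

# Two fragmented views of the same person, written differently.
tax = [{"id": "T1", "name": "Dr Jane A. Smith", "dob": "1980-02-14"}]
benefits = [{"id": "B7", "name": "SMITH, Jane A", "dob": "1980-02-14"}]
clusters = resolve({"tax": tax, "benefits": benefits})
```

Even this toy version shows the payoff described above: once "Dr Jane A. Smith" and "SMITH, Jane A" resolve to one cluster, duplicate payments become visible and risk scoring operates on the whole entity rather than two partial records.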
The result is not only stronger control over national data assets, but measurable operational improvements felt by both the government and its constituents: faster processing times, reduced manual reviews, lower fraud and error rates, and more efficient delivery of public services.
What happens next
As public sector agencies move toward more autonomous systems and real-time decision-making, success will depend on a trusted data foundation and the ability to prove how decisions are made, governed, and defended.
This is why the shift from batch processes to real-time intelligence and decision-making matters. It increases the need for interoperable systems, auditability, and control. At Quantexa, we have built our approach for high-stakes regulatory environments, where accuracy, auditability, accountability, and public trust are non-negotiable. Our focus is to help partners strengthen trust, control, and transparency in their data and AI so they can modernize with confidence.
Learn more about our approach to helping government agencies achieve their data and AI ambitions.
We are committed to helping our partners gain trust and control over their data and AI, ensuring that every decision, big or small, is grounded in transparent, explainable reasoning. This enables agencies to modernize with confidence and build institutional resilience that will sustain their objectives and the nations they serve for years to come.