Quick Facts
- Category: Technology
- Published: 2026-05-14 17:06:57
Introduction
When generative artificial intelligence first moved from research labs into commercial use, organizations struck an implicit deal: prioritize immediate capabilities over long-term governance. Feed proprietary data into third-party AI models, and gain powerful results. However, this arrangement means data traverses infrastructure the company does not own, governed by policies set by external providers. Whatever protections a company assumes are in place are only as reliable as the vendor's next terms-of-service update.

Today, as generative AI becomes embedded in daily operations and sophisticated agentic AI systems advance rapidly, enterprises are reconsidering the terms of that original bargain. The question of who controls data and the models that process it has moved from a back-office concern to a strategic imperative.
The Shift from Capability to Control
"Data is really a new currency; it's the IP for many companies," says Kevin Dallas, CEO of EDB, echoing a sentiment heard repeatedly from customers. "The big concern is, if you're deploying an AI-infused application with a cloud-based large language model, are you losing your IP? Are you losing your competitive position?"
This anxiety is driving a movement toward reclaiming both data and AI systems that have rapidly become core business infrastructure. AI and data sovereignty—breaking dependence on centralized providers and establishing genuine control over models and data estates—is now an urgent priority. According to internal EDB data, 70% of global executives believe they need a sovereign data and AI platform to be successful.
The Data as Currency Argument
The concept of sovereignty extends beyond mere legal compliance. It encompasses the ability to govern data lifecycle, model training, and inference processes within an organization's own boundaries. As Dallas notes, intellectual property built on unique datasets represents a company's competitive advantage. When that data flows through third-party models, the risk of leakage or misappropriation grows.
Consider the implications: if a bank uses a public cloud LLM to analyze customer transactions for fraud detection, does the model provider gain access to sensitive financial patterns? Could that knowledge be used to benefit competitors? Such questions are prompting enterprises to explore private AI infrastructure that keeps data under their direct control.
Global Policy and National AI Infrastructure
The sovereignty conversation is not limited to enterprises; it is a global policy discussion. NVIDIA CEO Jensen Huang recently addressed this at the World Economic Forum's annual meeting in Davos in January 2026: "I really believe that every country should get involved to build AI infrastructure, build your own AI, take advantage of your fundamental natural resource—which is your language and culture—develop your AI, continue to refine it, and have your national intelligence be part of your ecosystem."
Huang's remarks highlight a growing recognition that AI models trained on local languages, cultural contexts, and regulatory frameworks preserve national identity and ensure alignment with regional values. This national sovereignty push parallels the enterprise movement toward reclaiming control over AI assets.

The Enterprise Sovereignty Movement
This report, based on a survey of more than 2,050 senior executives conducted by EDB and a series of expert interviews, confirms that the sovereignty movement at the enterprise level is well underway. Key findings include:
- 70% of executives see a sovereign data and AI platform as essential for success
- Organizations are investing in private cloud and on-premises AI infrastructure to reduce reliance on third-party providers
- The need to protect intellectual property and maintain competitive advantage drives the push for sovereignty
- Regulatory pressures, such as GDPR and local data localization laws, further accelerate the shift
The report also explores practical steps enterprises can take, such as deploying open-source models on owned infrastructure, implementing data governance frameworks, and establishing AI governance boards. The goal is not to abandon cloud AI entirely but to create a hybrid model where sensitive workloads remain under sovereign control.
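The hybrid model described above can be reduced to a simple governance rule: workloads touching sensitive data stay on sovereign infrastructure, while everything else may use an external provider. The sketch below illustrates that routing logic only; the endpoint URLs, sensitivity tags, and workload names are hypothetical assumptions, not details from the report.

```python
from dataclasses import dataclass

# Illustrative endpoints; a real deployment would use its own model URLs.
SOVEREIGN_ENDPOINT = "https://llm.internal.example.com/v1"   # private cloud / on-premises
EXTERNAL_ENDPOINT = "https://api.cloud-provider.example/v1"  # third-party model

# Hypothetical sensitivity tiers an AI governance board might define.
SENSITIVE_TAGS = {"pii", "financial", "trade-secret"}

@dataclass
class Workload:
    name: str
    data_tags: set

def route(workload: Workload) -> str:
    """Route a workload: any sensitive tag keeps it on sovereign infrastructure."""
    if workload.data_tags & SENSITIVE_TAGS:
        return SOVEREIGN_ENDPOINT
    return EXTERNAL_ENDPOINT

# Example: fraud detection on customer transactions stays in-house,
# while a low-stakes drafting task may go to an external model.
fraud = Workload("fraud-detection", {"financial", "pii"})
drafting = Workload("ad-copy-draft", {"public"})
print(route(fraud))     # sovereign endpoint
print(route(drafting))  # external endpoint
```

In practice the classification step would draw on a data governance framework (data catalogs, lineage, and localization rules) rather than hand-assigned tags, but the control point is the same: routing decisions are made inside the organization's own boundary.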
Conclusion
The era of autonomous systems demands a new approach to AI and data sovereignty. What began as a pragmatic trade-off—capability now, control later—is evolving into a strategic necessity as companies recognize that their most valuable asset, data, cannot be entrusted to external platforms without safeguards. By reclaiming sovereignty over models and data estates, enterprises can protect their intellectual property, comply with diverse regulations, and build AI systems that truly serve their unique business needs. The movement is global, the momentum is real, and the time to act is now.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.