Quick Facts
- Category: Startups & Business
- Published: 2026-05-05 15:00:14
In a landmark move, Google, Microsoft, xAI, OpenAI, and Anthropic have agreed to allow the US government to test their artificial intelligence models before public release, aligning with priorities outlined in the Trump administration’s AI Action Plan. The agreement marks a significant escalation in federal oversight of cutting-edge AI technologies.
OpenAI and Anthropic, which had existing evaluation partnerships with the government’s AI testing center since 2024, renegotiated their deals to better match the new directive. Other tech giants signed fresh commitments to submit their models for pre-deployment scrutiny by federal authorities.
Background
The agreement stems from the Trump administration’s AI Action Plan, which calls for a national evaluation framework to mitigate risks from advanced AI systems. The plan emphasizes safety testing before public release to prevent potential misuse or systemic failures.

Until now, AI companies largely self-governed model releases, relying on voluntary safety pledges. The new arrangement turns that informal cooperation into a binding requirement for federal evaluation.
What This Means
Experts say the shift could reshape the global AI landscape. Dr. Rebecca Liu, a former White House technology policy adviser, commented: “This is the first time the US government will have direct technical access to proprietary models before they reach millions of users. It sets a precedent for accountability in a field that has largely operated without external checks.”
For companies, early testing may create delays in product launches but offers a clearer regulatory pathway. John Markoff, a Stanford AI safety researcher, told reporters: “The cost of a few weeks of federal review is minor compared to the brand damage and liability from a catastrophic model failure.”
Details of the Deal
The testing will be conducted by the National Institute of Standards and Technology (NIST), which will establish a dedicated evaluation center. Each company must submit major model updates for review, including core architecture changes and training data modifications that could affect safety.

OpenAI and Anthropic’s renegotiated deals specifically tie evaluation protocols to Trump’s executive orders on AI, which prioritize national security and economic competitiveness. A senior administration official, speaking on condition of anonymity, stated: “We are ensuring American AI remains both innovative and safe for the public.”
Industry Reactions
Reactions from within the AI community have been mixed. Elon Musk, founder of xAI, praised the move on social media, calling it “a necessary step to prevent runaway AI.” However, smaller AI startups worry the testing burden could stifle competition.
Google and Microsoft have yet to publicly detail their specific commitments, but internal sources indicate both companies expect to comply without major operational changes. Andrew Ng, co-founder of Coursera and a prominent AI educator, cautioned: “Pre-release testing is important, but we must avoid over-regulation that pushes development overseas.”
Next Steps
The first evaluations are expected to begin within 90 days. The government plans to publish summary findings after each review, though full technical reports will remain confidential to protect intellectual property.
Lawmakers have already proposed legislation to codify the testing requirement, signaling that this voluntary agreement may become permanent law. The White House confirmed it will host a summit next month to discuss global coordination with allied nations.