US Officials Want Faster Access to Advanced AI, and Big Companies Agree

Microsoft, Google DeepMind and Elon Musk's xAI have agreed to give the US government access to new AI models before their general release, according to a recent report. The move marks a new stage in the often fractious relationship between Silicon Valley and a US government increasingly worried about AI threats: the companies will hand models to US officials for security review, in the hope that risks such as cyberattacks and military misuse can be assessed before the systems reach developers, users and, of course, anyone who has no business getting their hands on a weaponized AI model.
The reviews will be conducted by the Commerce Department's Center for AI Standards and Innovation, or CAISI, which says its agreements with Google DeepMind, Microsoft and xAI give it the ability to test AI models before deployment, conduct research in specific areas, and evaluate models after they enter production.
That might sound boring, but it isn't. This is the government getting to look under the hood before the car is allowed on the road, and these engines run hot.
It remains to be seen how this plays out, but there is an understandable fear that more advanced AI will make cybercriminals even more effective. Reuters reported that US officials have begun to view frontier models with suspicion, noting that some have raised alarm among top government officials.
One AI tool that has raised particular concern is Anthropic's recently revealed Mythos model. The problem isn't simply that AI can detect security flaws humans can't; it's that the same tool that lets defenders find vulnerabilities lets attackers find them too.
Microsoft, for its part, has leaned into the safety side of the debate, promising to “work with US and UK scientists to identify and mitigate the unintended consequences of AI models and contribute to the creation of shared datasets and testing methods for safety and model performance,” according to its press release.
In one example of that collaboration, Microsoft signed an agreement this month with the UK AI Security Institute under which officials from both countries will jointly manage AI risks, a sign that this topic matters well beyond the American capital.
CAISI is not starting from a blank slate. The agency says it has already conducted more than 40 evaluations, including tests of high-end, unreleased models; developers sometimes share versions with safeguards stripped out or scaled back to expose the more serious national security risks. Yes, that sounds scary, and it's meant to: you don't prove a lock works by begging the door to stay closed.
The new agreements also expand on earlier government access to models from OpenAI and Anthropic; separately, OpenAI has given the US government GPT-5.5 to test in national security contexts, according to OpenAI's Chris Lehane. Put those pieces together and a picture emerges: the most capable AI labs are being drawn into a government testing pipeline before their technology goes live.
There are some interesting (and messy) politics at work here. The Trump administration has largely built its AI strategy around acceleration, deregulation and American global dominance. But any forward-looking AI strategy must face the stark reality that frontier models are not just productivity tools.
Trump's AI Action Plan for the United States rests on accelerating innovation, building the infrastructure AI development needs, and advancing US leadership in international AI diplomacy and security. That last piece carries the load here.
There is also a defense component that cannot be ignored. Just days before these model-review agreements were announced, the Pentagon struck deals with leading AI and technology companies to bring their top models into classified networks, according to a report on the military's effort to fold commercial AI into government operations.
Putting AI into military workflows brings a host of new challenges and consequences. There, a mistake isn't just a bug: a wrong output can be more than an inconvenience. It can be operational, and it can be costly.
Naturally, the tension is that all this could slow things down. Tech companies will argue that they need latitude, and they are absolutely right that AI development is currently a knife fight: fast iteration, fierce competition, massive computing infrastructure costs, and a global contest with China.
If every new AI model is held up for months before launch, US tech firms will surely accuse Washington of handing our rivals a gift with a big bow on it.
But it is fair to say the US would prefer that the first demonstration of a threatening or dangerous AI capability not come via a public release, because that is how a government ends up regulating by apology, after the damage is done.
Pre-release testing won't be fun, and it may annoy some or all of the parties involved, which is usually a sign that a policy has landed somewhere in the middle.
The challenge will be keeping the effort focused. Scrutinizing every release of an ordinary chatbot would make no sense, but scrutinizing the most advanced frontier models, especially those with military, cyber, bio or chem implications, is another matter.
This isn't a government official signing off on your everyday automation; it's closer to an engineer inspecting a rocket before launch. Less dramatic, perhaps, but the same principle.
There is also a trust issue here. Regulators have told tech giants they can't be trusted to police themselves, and tech companies have told regulators they have failed to keep pace with rapidly evolving technology.
The result is an uncomfortable middle ground in which companies grant early access to AI models, government researchers run independent tests, and everyone hopes the process filters out the worst outcomes without getting tied up in red tape.
It's hard not to feel like this moment was inevitable. Once AI models became powerful enough to affect areas such as cybersecurity, national security and infrastructure, it was never going to make sense for the companies building them to simply test their own models forever.
The average person may not know the intricacies of an evaluation or a red-team report, but they certainly understand that systems with this much potential to cause real harm are worth checking before they go to market.
And while Big Tech still wants to race ahead and Washington still wants to avoid being caught off guard, both sides seem to have arrived, at least for now, at the same place: pop the hood before the engine roars.


