News: Trump Bans Anthropic From Government Use and Pentagon Labels It a National Security Risk — OpenAI Swoops In With Classified Network Deal


Key Takeaways

  • Trump ordered all federal agencies to immediately stop using Anthropic's AI technology after the company refused to remove safety restrictions on mass surveillance and autonomous weapons.
  • The Pentagon designated Anthropic a supply chain risk to national security — a classification typically reserved for foreign adversaries — effectively blacklisting it from military contracts.
  • OpenAI struck a deal with the Pentagon for classified network access within hours of Trump's ban, claiming its agreement includes the same safety safeguards Anthropic had demanded.
  • Anthropic, valued at $380 billion and planning an IPO, will challenge the designation in court, arguing it sets a "dangerous precedent for any American company that negotiates with the government."
  • Over 70 OpenAI employees signed an open letter supporting Anthropic's safety position, even as their own company moved to fill the vacuum left by the ban.

President Trump ordered all federal agencies to immediately cease using Anthropic's artificial intelligence technology on Friday, capping an increasingly bitter dispute between the AI company and the Pentagon over whether military contractors can set limits on how their technology is deployed in warfare. Defense Secretary Pete Hegseth followed through on his threat to designate Anthropic a supply chain risk to national security — a classification traditionally reserved for foreign adversaries like China's Huawei — effectively blacklisting the $380 billion AI company from military work.

Within hours of Trump's announcement, rival OpenAI struck a deal with the Defense Department to deploy its own AI models on classified networks, positioning itself as the Pentagon's preferred AI partner. The rapid sequence of events marks the most dramatic confrontation between a U.S. technology company and the federal government since the battles over encryption in the 1990s, with profound implications for the AI industry's relationship with government, the trajectory of military AI adoption, and the valuations of companies preparing for public offerings.

At the center of the dispute are two questions that will define AI's role in national defense for decades: whether AI companies can prevent their tools from being used for mass surveillance of American citizens, and whether today's AI models are reliable enough to make lethal targeting decisions without human oversight.

The Breaking Point: A $200 Million Contract and Two Red Lines

The showdown between Anthropic and the Pentagon had been building for months. Anthropic CEO Dario Amodei had consistently maintained two restrictions on the company's AI model, Claude: it must not be used for mass domestic surveillance of Americans, and it must not power fully autonomous weapons systems — meaning AI that selects and engages targets without any human approval.

The Pentagon countered that it had no intention of using Anthropic's tools for either purpose, but insisted that all AI contractors must allow the U.S. government to use their technology "for all lawful purposes." Pentagon officials argued that it is the military's responsibility — not a contractor's — to determine what constitutes lawful use.

"At some level, you have to trust your military to do the right thing," Emil Michael, the Pentagon's chief technology officer, told CBS News. But Anthropic maintained that the new contract language offered by the Pentagon "made virtually no progress" on the company's core concerns, with "legalese that would allow those safeguards to be disregarded at will."

The Pentagon set a hard deadline of 5:01 PM ET on Friday for Anthropic to accept unrestricted terms. When the deadline passed without agreement, the administration moved swiftly. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump wrote on Truth Social, directing every federal agency to cease all use of Anthropic's technology with a six-month phaseout period.

OpenAI's Classified Network Deal Reshuffles the AI Defense Landscape

Perhaps the most consequential development was OpenAI CEO Sam Altman's announcement, made just hours after Trump's ban, that his company had "reached an agreement with the Department of War to deploy our models in their classified network."

In a notable twist, Altman claimed that OpenAI's deal includes the very same safeguards Anthropic had demanded. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement." He called on the Pentagon to offer the same terms to all AI companies.

The timing and terms of OpenAI's deal raise questions about whether the dispute was truly about safety principles or about which company would dominate the lucrative military AI market. Anthropic was the first AI company approved for use on the Pentagon's classified networks through a partnership with Palantir. With Anthropic now blacklisted, OpenAI steps into that vacuum.

Altman's earlier internal memo to staff struck a more conciliatory tone, noting that OpenAI shared Anthropic's red lines and that he wanted the company to "try to help de-escalate things." Over 70 OpenAI employees had signed an open letter titled "We Will Not Be Divided," expressing solidarity with Anthropic's position on AI safety guardrails.

Market Implications: Anthropic's IPO Plans Meet a Government Blacklist

The government's actions arrive at a particularly sensitive moment for Anthropic, which is valued at $380 billion and has been planning an initial public offering this year. While the Pentagon contract is worth up to $200 million — less than 2% of Anthropic's $14 billion in annual revenue — the broader implications of being labeled a national security supply chain risk could ripple through the company's commercial relationships.

Hegseth's order stated that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." If interpreted broadly, this could force defense contractors and their affiliates to choose between the Pentagon and one of the world's most advanced AI platforms.

Anthropic pushed back forcefully, arguing that the Pentagon's designation "would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government." The company contends that the supply chain risk designation can only apply to Claude's use in Pentagon contracts, not to how contractors use Claude for other customers.

CEO Amodei has noted that Anthropic's valuation and revenue have actually grown since the company first pushed back against the Pentagon's demands — suggesting that investors and commercial customers view the stand on principle as a positive signal rather than a liability. But whether that sentiment survives a formal government blacklisting and the uncertainty of a court battle remains to be seen.

A Legal and Constitutional Battle Takes Shape

Anthropic announced it will challenge the supply chain risk designation in court, setting up what could become a landmark case at the intersection of government contracting law, corporate free speech, and AI governance.

The legal questions are substantial. Geoffrey Gertz, a senior fellow at the Center for a New American Security, noted that the supply chain risk designation has "traditionally been used for foreign adversary technology" and called the situation "highly unusual." He pointed out an inherent contradiction in the government's position: "It's this funny mix where they both are such a risk that they need to be kicked out of all systems, and so essential that they need to be compelled to be part of the system no matter what."

The Pentagon also threatened to invoke the Korean War-era Defense Production Act to compel Anthropic to remove its guardrails — a power designed for national emergencies. Combined with Trump's warning of "major civil and criminal consequences" if Anthropic doesn't cooperate during the phaseout, the administration appears willing to use extraordinary legal tools to force compliance.

Democratic Senator Mark Warner of Virginia, vice chair of the Senate Intelligence Committee, accused Trump and Hegseth of "bullying" Anthropic to deploy "AI-driven weapons without safeguards." He warned that the president's directive "raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."

The Broader AI Industry Watches and Calculates

The Anthropic showdown is forcing every major AI company to weigh its relationship with the government against its stated safety commitments. Google, OpenAI, and Elon Musk's xAI all hold Defense Department contracts and have agreed to allow their tools to be used in any "lawful" scenarios. xAI recently became the second company after Anthropic to be approved for classified use.

The dispute highlights a fundamental tension in the rapidly growing AI defense market. AI companies have marketed safety and responsibility as core brand values to attract talent, investors, and commercial customers. But those same guardrails can become friction points with a government customer that demands unrestricted access.

For OpenAI, the immediate prize is clear: access to classified Pentagon networks and the ability to compete for the growing pool of military AI contracts. But Altman's claim that his deal includes the same safeguards Anthropic demanded raises a question: if the Pentagon agreed to those terms with OpenAI, why was it unwilling to offer the same to Anthropic?

The AI defense market is projected to grow substantially in coming years as the Pentagon accelerates its adoption of AI for intelligence analysis, logistics, and operational planning. How this dispute resolves will set the template for government-AI relationships for years to come — determining whether technology companies retain any say in how their most powerful tools are deployed.

Conclusion

The Trump administration's ban on Anthropic and the simultaneous OpenAI deal represent a watershed moment in the relationship between Silicon Valley and the national security establishment. For the first time, a U.S. president has effectively blacklisted an American AI company for refusing to remove safety restrictions, while a rival capitalized on the fallout within hours.

The financial stakes extend well beyond the $200 million Pentagon contract. Anthropic's planned IPO, its $380 billion valuation, and its commercial relationships with defense contractors all face new uncertainty as the company prepares for a legal battle. OpenAI's rapid deal-making positions it as the Pentagon's go-to AI partner, but Altman's claim of identical safety terms undermines the notion that Anthropic's restrictions were the real obstacle.

What unfolds in the courts and in Congress over the coming months will establish the rules of engagement between AI companies and the government — determining whether the builders of the world's most powerful AI systems retain any authority over how their technology is used in matters of life, death, and civil liberties.


Disclaimer: This content is AI-generated for informational purposes only. While based on real sources, always verify important information independently.
