Pentagon Blacklists Anthropic Over AI Safeguards
Key Takeaways
- The Pentagon formally designated Anthropic a supply chain risk after the company refused to remove AI safeguards against mass surveillance and autonomous weapons.
- Amazon, Microsoft, and Google confirmed Claude remains available for non-defense work, but defense contractors like Lockheed Martin are transitioning away.
- Former defense officials and bipartisan lawmakers criticized the move as government overreach that misuses authorities designed for foreign adversaries.
- Consumer adoption of Claude surged past ChatGPT in 20+ countries as the public sided with Anthropic's ethical stance.
- The legal challenge will set a precedent for how much control AI companies retain over their technology once deployed in military settings.
The Pentagon has formally designated Anthropic, the maker of the Claude AI chatbot, as a "supply chain risk" — an unprecedented move that effectively bars defense contractors from using the company's technology. The decision, announced Thursday, escalates a standoff that began when Anthropic refused to remove safeguards preventing its AI from being used for mass domestic surveillance or fully autonomous weapons systems.
The designation marks the first time the U.S. government has used a national security tool — originally designed to counter foreign adversaries like China and Russia — against an American technology company. Former defense and intelligence officials, including former CIA director Michael Hayden, have called it "a dangerous precedent" that could chill innovation across the AI sector. Anthropic CEO Dario Amodei says the company has "no choice" but to challenge the ruling in court.
How the Standoff Escalated
The dispute traces back to Anthropic's $200 million Department of Defense contract, awarded in mid-2025, which made it the first AI lab to integrate its models into classified military networks. Anthropic had partnered with Palantir and Amazon Web Services in late 2024 to provide defense and intelligence agencies access to Claude.
The relationship soured when the Pentagon demanded unrestricted use of Claude for all lawful purposes. Anthropic drew two red lines: no mass surveillance of Americans, and no fully autonomous weapons. The Pentagon's chief technology officer, Emil Michael, said the military offered written acknowledgments of existing federal laws restricting those activities, but Anthropic said the offer was "paired with legalese" that would allow the guardrails to be circumvented.
President Trump and Defense Secretary Pete Hegseth announced the punitive measures last Friday, on the eve of the Iran conflict, accusing Anthropic of endangering national security. Trump gave the military six months to phase out Claude, which is already widely embedded in military and national security platforms.
Cloud Giants Navigate the Fallout
Amazon, Microsoft, and Google — the three largest cloud providers — all moved quickly to reassure customers that Anthropic's Claude models remain available for non-defense work. Amazon, which has invested $8 billion in Anthropic since 2023, said AWS customers can continue using Claude "for all workloads not associated with the Department of War."
The stakes are significant for Amazon in particular. Anthropic committed to using 500,000 of Amazon's custom Trainium2 chips as part of an $11 billion data center project called Project Rainier. Amazon has also won billions in contracts to provide cloud services to more than 11,000 government agencies.
Defense contractor Lockheed Martin said it will follow the Pentagon's direction and look to other large language model providers, adding that it expects "minimal impacts" since it is not dependent on any single AI vendor. The supply chain risk designation requires defense vendors to certify they do not use Anthropic's models in their Pentagon work.
Critics Call It Government Overreach
The Pentagon's decision has drawn criticism from across the political spectrum. Senator Kirsten Gillibrand, a member of both the Senate Armed Services and Intelligence committees, called it "a dangerous misuse of a tool meant to address adversary-controlled technology" and "a gift to our adversaries."
A group of former defense and national security officials, including former CIA director Michael Hayden and retired military leaders, sent a letter to lawmakers expressing "serious concern." They argued the supply chain risk authority was designed to protect against infiltration by foreign adversaries — "from companies beholden to Beijing or Moscow, not from American innovators operating transparently under the rule of law."
Neil Chilson, a Republican former chief technologist at the Federal Trade Commission, called the decision "massive overreach that would hurt both the U.S. AI sector and the military's ability to acquire the best technology." Sarah Kreps, a Cornell professor and former Air Force officer, noted that once AI software is handed to the military, the company loses all leverage over how it is used — a key concern driving Anthropic's resistance.
OpenAI Steps In, Then Steps Back
Hours after the Pentagon announced its measures against Anthropic last Friday, rival OpenAI announced a deal to replace Claude with ChatGPT in classified military environments. The timing deepened the already bitter rivalry between the two companies — Anthropic was founded in 2021 by former OpenAI leaders, including Amodei.
However, OpenAI CEO Sam Altman later acknowledged the deal "looked opportunistic and sloppy," saying he should not have rushed the announcement. OpenAI said it had sought similar protections against domestic surveillance and autonomous weapons but later amended its agreements with the Pentagon.
Amodei also expressed regret, apologizing for a leaked internal memo in which he attacked OpenAI's behavior and suggested Anthropic was being punished for not offering "dictator-like praise" to Trump. He told investors at the Morgan Stanley Technology Conference that Anthropic and the Pentagon "have much more in common than we have differences" and that the company is working to "deescalate the situation."
Consumer Surge Amid Corporate Losses
While the Pentagon designation threatens Anthropic's government and enterprise revenue, it has produced an unexpected windfall in consumer adoption. More than one million people per day signed up for Claude this past week, the company said, lifting it past OpenAI's ChatGPT and Google's Gemini as the top AI app in more than 20 countries on Apple's App Store.
The surge reflects public sympathy for Anthropic's stance on AI ethics in warfare. The company, which has cultivated a reputation as the safety-focused alternative to OpenAI, appears to have converted a government crisis into a consumer brand moment.
However, the longer-term business implications remain uncertain. Two sources familiar with the military's use of AI confirmed to CBS News that the U.S. used Claude for the attack on Iran — underscoring how deeply embedded the technology already is in defense operations. Amodei said ensuring warfighters are not "deprived of important tools in the middle of major combat operations" remains a priority, even as the legal challenge proceeds.
Conclusion
The Anthropic-Pentagon standoff raises fundamental questions about the relationship between technology companies and national security in the AI age. Unlike hardware that can be physically controlled, AI software can be repurposed once deployed — making the question of pre-deployment safeguards particularly urgent.
The case also tests whether the federal government can use national security authorities designed for foreign threats against domestic companies that disagree with policy. If the supply chain risk designation stands, it could deter other AI companies from setting any boundaries on government use of their technology — or alternatively, from engaging with defense work at all.
As AI becomes more deeply integrated into military and intelligence operations during an active conflict, the tension between innovation, ethics, and national security is no longer theoretical. How this dispute is resolved — in the courts, in Congress, or through quiet negotiation — will set the template for every AI company that follows.