
Pentagon Blacklists Anthropic as OpenAI Secures Classified Defense Agreement
An AI Company Said No to the Pentagon. Here's What Happened Next.
Tara Ferguson with AI Wave Content Generator App
3/8/2026 · 7 min read
In the space of a single week, the US defence establishment blacklisted one of the world's most prominent AI companies and handed a classified military contract to its biggest rival.
If you're watching the AI industry and wondering where the power really lies (between the companies building these models and the governments deploying them), this story is a defining moment.
Let me walk you through what happened, what it means, and why the details matter more than the headlines suggest.
The Dispute That Started It All
At the centre of this saga is a deceptively simple question: who gets to decide how military AI is used?
Anthropic, the maker of the Claude AI model, had a $200 million contract with the Pentagon and was the only AI firm whose model was deployed on the Pentagon's classified networks. That's a massive deal: it means Claude was being used for some of the most sensitive intelligence work the US military does.
But when the Pentagon demanded the ability to use Claude for "all lawful purposes," Anthropic pushed back. The company insisted on two explicit contractual guardrails: no domestic mass surveillance of Americans, and no fully autonomous lethal weapons without human oversight.
The Pentagon's position? Those things are already illegal or restricted by existing policy, so writing them into a contract is unnecessary, and amounts to a private company trying to dictate military operations. As Pentagon CTO Emil Michael told CBS News: "At some level, you have to trust your military to do the right thing."
Neither side blinked. A 5:01 PM deadline on Friday, February 27 passed without agreement.
The Blacklisting: Swift and Unprecedented
What happened next was extraordinary, and fast.
On February 27, 2026, Defense Secretary Pete Hegseth declared on X that Anthropic was a "supply chain risk to national security" and that no contractor, supplier, or partner doing business with the US military could conduct any commercial activity with Anthropic.
"America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," Hegseth wrote.
That same day, President Trump ordered all federal agencies to immediately cease using Anthropic's technology, calling it a "radical Left AI company." Trump later told Politico: "Well, I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs."
Here's the thing that makes this unprecedented: according to The Guardian, the "supply chain risk" label has never been used before against a US company. It's a designation designed for foreign adversaries (think companies with ties to hostile governments), not domestic firms having a contract dispute.
The Formal Designation Was Narrower Than the Rhetoric
The Pentagon formally delivered the supply chain risk designation to Anthropic on March 4, but the official terms were significantly milder than Hegseth's initial verbal threats.
According to Anthropic CEO Dario Amodei on the company's website: "The vast majority of our customers are unaffected by a supply chain risk designation. It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts."
In plain English: if you're a defence contractor using Claude for non-military work, you're likely fine. The ban doesn't extend to all commercial activity, despite what Hegseth initially threatened.
Microsoft, which uses Anthropic's Claude in its software suite, confirmed to CNBC that Anthropic products "can remain available to our customers other than the Department of War through platforms such as M365, GitHub, and Microsoft's AI Foundry."
Still, Amodei said the company would challenge the designation in court: "We do not believe this action is legally sound, and we see no choice but to challenge it in court."
OpenAI Steps In, Hours Later
Now here's where the timing gets really interesting.
On February 28, just hours after Anthropic's ban, OpenAI CEO Sam Altman announced on X: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network."
The speed was breathtaking. One AI company out, another one in, practically overnight.
So what did OpenAI agree to that Anthropic wouldn't? On the surface, the principles look remarkably similar. According to OpenAI, the contract includes:
A prohibition on domestic mass surveillance
Human responsibility for the use of force, including for autonomous weapon systems
No high-stakes automated decisions without human approval
Models operating solely on secure cloud infrastructure, not in edge environments like autonomous weapon systems
Cleared OpenAI engineers embedded alongside government personnel to ensure compliance
"The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman said. "We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted."
Same Safety Principles? Not Exactly.
Here's where it pays to read the fine print, and it's where this story gets genuinely complicated.
Both companies claim to oppose mass surveillance and autonomous weapons. But as The Decoder noted, the language differs in meaningful ways:
Anthropic's position: "No fully autonomous weapons without human oversight", meaning a human must be actively involved in decision-making before a weapon is deployed.
OpenAI's position: "Human responsibility for the use of force, including for autonomous weapon systems", with responsibility being a much more flexible term that can be assigned after the fact.
That's a subtle but potentially enormous difference. Oversight implies active, real-time human involvement. Responsibility can mean someone is accountable later if something goes wrong.
Anthropic CEO Amodei wasn't shy about calling this out. In a leaked internal message to employees, he reportedly called Altman "mendacious" and said Altman's claims about Pentagon safeguards amounted to nothing more than "safety theater."
The key structural difference? OpenAI agreed to "all lawful purposes" and negotiated the right to build technical safeguards into its systems. Anthropic wanted contractual restrictions that would legally bind the Pentagon's behaviour. Those are very different things.
The Operational Stakes Are Real
This isn't just a philosophical debate. There are real-world military operations in the balance.
According to CBS News, two sources familiar with the matter said the US military has used Anthropic's Claude in its recent strikes on Iran. Exactly how the AI model was deployed isn't clear, but it highlights just how embedded this technology already is in active military operations.
The Pentagon has a six-month phase-out period for Anthropic's technology, but it's unknown how quickly OpenAI's models (or others) can be spun up to replace Claude's specific intelligence analysis capabilities.
On that note, a Pentagon official confirmed that Elon Musk's xAI (Grok) has also been cleared for classified use on the Pentagon's cloud, so OpenAI won't be the only game in town.
What We Don't Know Yet
There's a lot still up in the air. Honestly, the situation is changing almost daily.
Will the blacklisting survive a legal challenge? Paul Scharre, a former Army Ranger and executive vice president at the Center for a New American Security, told Breaking Defense that "the law was written to keep foreign companies from sabotaging the military supply chain" and doubted the designation would hold up in court.
Are backchannel negotiations still happening? According to The Guardian, citing the Financial Times and Bloomberg, negotiations reportedly restarted between Pentagon CTO Emil Michael and Amodei. But Michael himself publicly denied any active negotiation in an X post.
How will enforcement actually work? The defence industrial base is enormous. Policing which contractors use Claude for what purpose is an enforcement nightmare nobody has fully addressed.
What are OpenAI's enforcement mechanisms? OpenAI says it will embed engineers and build technical safeguards, but the specifics of what happens if the Pentagon pushes past those guardrails remain unclear.
The Consumer Backlash Nobody Expected
Here's something that surprised a lot of observers: regular users had strong opinions about this.
According to Built In, ChatGPT uninstalls surged by nearly 300 percent in the days following the announcement, while US downloads of Claude skyrocketed, temporarily making it the number one app in the country and overtaking ChatGPT.
That's a remarkable signal. It shows that AI safety principles aren't just abstract policy debates; they're becoming something consumers care about deeply enough to vote with their wallets (or at least their app store downloads).
Internally, OpenAI employees were reportedly agitated too. According to Gizmodo, citing leaked transcripts, Altman addressed staff in an all-hands meeting, called the ordeal "painful", and appeared to regret that OpenAI looked "not united with the field."
The Bigger Picture: An AI Gold Rush in Defence
Zoom out, and you can see a much larger pattern forming.
OpenAI isn't just looking at the Pentagon. Altman reportedly told staff at the all-hands meeting that OpenAI is "looking at a contract to deploy on all North Atlantic Treaty Organization classified networks." If confirmed, that would be a massive expansion from one country's military to an entire allied defence bloc.
This is happening in the context of NATO members dramatically increasing their defence budgets, creating what one venture capitalist described as an "AI gold rush" in defence tech.
The message being sent to the entire AI industry is clear: if you want to work with Western governments on defence, accept "all lawful purposes" and negotiate technical safeguards; don't try to impose contractual restrictions on military operations.
Whether that's the right approach for the long-term governance of military AI is a question that will be debated for years to come.
What This Means for You
You might be thinking: I'm not a defence contractor, so why should I care? Here's why this matters far beyond the Pentagon.
If you use Claude or ChatGPT for work: Anthropic's Claude is unaffected for commercial and non-defence use. Microsoft confirmed its integrations remain available. Your tools aren't going anywhere.
If you're in the defence supply chain: Start paying close attention to the enforcement details. The designation technically means you can't use Claude on Department of War contracts, even if the broader commercial ban didn't materialise.
If you care about AI safety and governance: This is a watershed moment. It demonstrates that governments can, and will, use extreme economic pressure to override private AI safety commitments. Every AI company is watching and recalibrating its approach right now.
If you're evaluating AI providers: Consider what each company's relationship with government tells you about their values, their stability, and the regulatory risks you might face by depending on them.
Quick Takeaways
Anthropic was designated a "supply chain risk", the first time this label has ever been applied to a US company, after refusing to remove contractual restrictions on surveillance and autonomous weapons from its Pentagon agreement. The formal designation is narrower than initially threatened, applying only to direct Department of War contracts.
OpenAI secured a classified Pentagon deployment deal hours later, agreeing to "all lawful purposes" while negotiating technical safeguards including cloud-only deployment and embedded engineers. Critics note that OpenAI's language around autonomous weapons ("human responsibility") is weaker than Anthropic's ("human oversight").
This story isn't over. Anthropic plans to sue, backchannel negotiations have reportedly resumed, legal experts doubt the designation will hold up in court, and consumer backlash against OpenAI has been significant. Watch this space β how this resolves will set the template for government-AI relationships globally.
