The Pentagon has threatened to invoke the Defence Production Act against Anthropic, in what would be the first use of the wartime-powers law to compel a software company to alter its product terms
Anthropic has raised $7.3 billion in funding with a valuation above $18 billion, but faces potential designation as a "supply chain risk" that would blacklist it from all government contracts
Two specific uses remain non-negotiable for Anthropic: mass domestic surveillance of Americans and fully autonomous weapons systems without human oversight
Claude AI was already deployed in a sensitive operation to seize Venezuelan President Nicolás Maduro, making the dispute over future use cases immediately concrete
The defence establishment wants full control. Anthropic, the company that built its entire brand on "responsible AI," is saying no. The Pentagon has responded with threats that would have been unthinkable 18 months ago: invoke emergency wartime powers to compel a software company to hand over its technology without restrictions, or brand it a national security risk and banish it from government work entirely.
This confrontation, coming to a head this week after months of private negotiations, marks the first major test of whether artificial intelligence companies can actually maintain ethical boundaries when the US military comes calling. The answer will shape how every other AI firm approaches similar demands.
Anthropic's chief executive Dario Amodei made his position clear on Thursday: his company will not accept Pentagon demands for "any lawful use" of Claude, its flagship AI system. Two uses in particular are non-negotiable. Mass domestic surveillance and fully autonomous weapons remain off the table, even if it means losing government contracts worth millions.
That stance provoked an unusually personal attack from Undersecretary of Defense Emil Michael, who wrote on X that Amodei "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."
The rhetoric suggests the Pentagon views this as more than a contract negotiation.
The Defence Production Act gambit
What makes this confrontation novel isn't the disagreement itself. Tech companies and intelligence agencies have clashed before over encryption backdoors and data access. The difference is the weapon the Pentagon has deployed: the Defence Production Act, a Korean War-era law that lets the government compel private companies to prioritise work it deems essential to national defence.
A former Department of Defense official, speaking on condition of anonymity, described Defence Secretary Pete Hegseth's grounds for invoking the Act as "extremely flimsy." The alternative threat carries more immediate weight: designating Anthropic as a "supply chain risk," which would effectively blacklist the company from any government work and potentially trigger broader scrutiny from other federal agencies.
The timing of these threats tells its own story. Negotiations between Anthropic and the Pentagon have dragged on for months, according to sources familiar with the discussions. What changed was the public revelation that Claude had already been deployed in a sensitive operation to seize Venezuelan President Nicolás Maduro. That disclosure made abstract concerns about surveillance and autonomous weapons suddenly concrete.
The reliability question
Amodei's argument against fully autonomous weapons rests on a technical claim: current AI systems simply aren't reliable enough for life-and-death decisions. "Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgement that our highly trained, professional troops exhibit every day," he wrote in a company blog post.
Defence experts may contest that assessment: the military has been developing autonomous systems for years, from missile defence to drone navigation. What matters for evaluating Amodei's stance is whether Claude specifically is too unreliable for weapons systems, or whether he is drawing a principled line regardless of technical capability.
What's more interesting is what Anthropic offered instead: direct collaboration on research and development to improve system reliability, with proper safeguards built in. The Pentagon declined.
That refusal suggests the Department of Defense wants unfettered access more than it wants a purpose-built military AI.
On domestic surveillance, Amodei's concerns centre on Claude's ability to "assemble scattered, individually innocuous data into a comprehensive picture of any person's life—automatically and at massive scale." He drew a distinction between foreign intelligence operations, which Anthropic supports, and mass surveillance of Americans. The Pentagon's proposed contract language, according to an Anthropic spokeswoman, included "legalese that would allow those safeguards to be disregarded at will."
The credibility test
For Anthropic, this confrontation is existential in ways that transcend any single contract. The company positioned itself from launch as the responsible alternative to OpenAI, with stronger safety commitments and more conservative deployment practices. That brand differentiation attracted a certain type of enterprise customer and investor. Capitulating to Pentagon demands would destroy that positioning overnight.
Whether Anthropic can actually maintain its stance is another question entirely. The company raised $7.3 billion in funding rounds that have valued it above $18 billion, but it remains dependent on major corporate partnerships and cloud computing deals. Being designated a national security risk would send chills through that ecosystem.
The broader precedent extends beyond Anthropic. If the Pentagon succeeds in compelling compliance through the Defence Production Act or blacklist threats, every AI company will understand the cost of saying no. If Anthropic holds firm and survives, it establishes that ethical red lines are commercially viable even under government pressure.
Defence contractors watching this standoff will note how quickly the relationship deteriorated. The two-day turnaround between private meetings and public threats suggests both sides believe they have leverage. The Pentagon clearly thinks Anthropic needs government business more than it needs its principles. Anthropic is betting that its commercial customers value those principles enough to sustain the company through a government freeze-out.
The legal challenges, if Hegseth follows through on the Defence Production Act threat, could take months or years to resolve. Whether other AI companies publicly support Anthropic's position or quietly signal their willingness to be more accommodating will become clear within weeks.
This confrontation will establish whether AI companies can maintain ethical boundaries under government pressure, setting precedent for how every other tech firm handles similar military demands
Watch how other major AI companies respond publicly or privately—their positioning in the coming weeks will reveal whether industry-wide resistance is viable or if Anthropic stands alone
The legal question of whether the Defence Production Act can compel software companies to alter terms of service remains untested, with potential ramifications extending far beyond the defence sector into surveillance, law enforcement, and intelligence operations