Anthropic, valued at $380bn, faces supply chain risk designation after refusing Pentagon demands for "unfettered access" to its Claude AI system
The disputed Pentagon contract is worth approximately $200m, representing a fraction of Anthropic's valuation
President Trump has directed all federal agencies to cease using Anthropic's technology within six months, threatening "major civil and criminal consequences"
The Defense Production Act, previously used against foreign adversaries like Huawei, has been invoked against the San Francisco-based company
A technology company worth $380bn has just been threatened with treatment typically reserved for Chinese telecom giants and Russian cybersecurity firms. The cause? Anthropic, the San Francisco-based AI developer, refuses to grant the Pentagon what the department calls "unfettered access" to its Claude chatbot for any purpose the military deems lawful. The conflict exposes a fundamental question that will define the next decade of American technology policy: can the federal government punish private companies for declining contracts on ethical grounds?
President Trump announced Friday he would direct every federal agency to cease using Anthropic's technology, calling the company uncooperative and warning of "major civil and criminal consequences" if it doesn't facilitate a smooth transition. Defence Secretary Pete Hegseth went further, designating Anthropic a "supply chain risk" and invoking the Defense Production Act, tools previously wielded against Huawei and other foreign adversaries during trade wars.
When commercial leverage meets military demands
Anthropic's position in this standoff is singular. The Pentagon contract at stake is worth roughly $200m, a rounding error against the company's reported $380bn valuation from earlier this month. That figure, based on current revenue and projected future earnings, gives Anthropic something few defence contractors possess: the financial independence to walk away.
The company, led by former OpenAI executive Dario Amodei, has drawn specific lines around mass domestic surveillance and fully autonomous weapons systems. Its concern centres on the Pentagon's insistence that Anthropic agree to "any lawful use" of Claude, language the company interprets as creating legal cover for applications it finds objectionable.
What makes this particularly thorny is that "lawful use" isn't a simple binary. Domestic surveillance programmes have operated for years in legal grey zones, challenged in courts but not definitively struck down.
Autonomous weapons systems remain subject to international debate about compliance with laws of armed conflict. The Pentagon's demand effectively asks Anthropic to surrender its ability to refuse applications that may be technically legal but ethically contentious.
A former Department of Defense official, speaking to the BBC on condition of anonymity, called the legal basis for both the Defense Production Act invocation and supply chain risk designation "extremely flimsy". Courts will ultimately decide whether the administration can compel a private company to provide services it doesn't wish to provide, but the immediate commercial impact arrives first.
The competitive dynamics of compliance
OpenAI CEO Sam Altman sent an internal memo expressing support for Anthropic's "red lines" around domestic surveillance and autonomous offensive weapons. He later announced that OpenAI had finalised its own deal with what Trump now calls the Department of War, allowing the Pentagon to deploy OpenAI models on classified networks. The apparent contradiction isn't lost on observers.
Both companies claim ethical boundaries. Both compete directly for enterprise customers and government contracts. Only one currently faces supply chain risk designation.
This creates what game theorists would recognise as a prisoner's dilemma for AI companies. If your competitor accommodates government demands whilst you refuse, they gain market access and regulatory goodwill whilst you face sanctions. If everyone refuses collectively, the industry might successfully establish boundaries. But collective action requires trust between rivals who are simultaneously fighting for market dominance.
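The incentive structure described above can be made concrete with a toy payoff matrix. The numbers below are purely illustrative utilities invented for this sketch, not real commercial figures:

```python
# Hypothetical payoffs (higher is better) for two rival AI firms, each
# choosing to "refuse" the Pentagon's "any lawful use" terms or "comply".
PAYOFFS = {
    # (my_choice, rival_choice): my_payoff
    ("refuse", "refuse"): 3,  # industry-wide boundaries hold
    ("refuse", "comply"): 0,  # refuser is sanctioned; complier wins contracts
    ("comply", "refuse"): 5,  # complier gains market access and goodwill
    ("comply", "comply"): 1,  # no bargaining power left; government sets terms
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximises a firm's payoff given its rival's move."""
    return max(("refuse", "comply"),
               key=lambda my_choice: PAYOFFS[(my_choice, rival_choice)])

# Complying is the dominant strategy whatever the rival does...
print(best_response("refuse"))  # comply
print(best_response("comply"))  # comply
# ...yet mutual refusal (3 each) beats mutual compliance (1 each),
# which is exactly the collective-action problem the article describes.
```

Under these assumed payoffs, each firm's individually rational move leads both to the outcome both prefer least, the defining feature of a prisoner's dilemma.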
Anthropic's Claude models have been deployed across US government agencies since 2024, including for classified work. The company was the first major AI developer to achieve that level of integration. That early cooperation makes the current rupture more significant. This isn't an activist company that never wanted defence business; it's a firm that worked successfully with the military until the terms became unacceptable.
The six-month phase-out Trump announced Friday will force agencies using Claude to migrate to alternative providers. For Anthropic's private-sector customers who also hold Pentagon contracts, the supply chain risk designation could force them to choose between their preferred AI tool and their defence work.
Presidential power and corporate refusal
The constitutional dimension here cuts deeper than the immediate dispute. Supply chain risk designations exist to protect national security from foreign adversaries who might compromise critical infrastructure or steal sensitive data. Using that authority against a US company for refusing a contract sets a precedent with implications beyond artificial intelligence.
No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.
Anthropic has pledged to challenge the designation in court, arguing it is both legally unsound and a dangerous precedent. Whether courts will agree that the administration has authority to punish a refusal to contract remains uncertain. What's already clear is that the Trump White House is willing to test those limits.
The president's threat of "major civil and criminal consequences" if Anthropic doesn't cooperate during the transition suggests an administration prepared to escalate beyond commercial penalties. The broader AI industry is watching with acute interest. Microsoft, Google, Amazon, and Meta all maintain significant government relationships whilst simultaneously marketing ethical AI frameworks to enterprise customers.
If Anthropic successfully resists and courts limit presidential authority to compel cooperation, it establishes protective boundaries for the sector. If the administration prevails, every AI company will understand that maintaining market access requires accommodating government demands, regardless of internal ethical guidelines.
The legal precedent established here will determine whether US tech companies can refuse government contracts on ethical grounds without facing punitive designation as security risks
AI competitors now face a strategic choice: accommodate Pentagon demands for "any lawful use" and gain market advantage, or maintain ethical boundaries and risk commercial penalties
Watch for court challenges to the supply chain risk designation and whether other major AI providers publicly support or distance themselves from Anthropic's position
Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.