Business Fortitude
Policy & Regulation

US threatens Anthropic with deadline in dispute on AI safeguards

By Ross Williams · 5 min read
    • US Defence Secretary Pete Hegseth has given Anthropic until Friday evening to remove restrictions on military use of its Claude AI model or face forced compliance under the Defence Production Act
    • Anthropic secured a Pentagon contract worth up to £148m last summer alongside OpenAI, Google and xAI, but appears to be the only company resisting blanket military access
    • The dispute reportedly stems from Claude allegedly being deployed during the January operation that captured former Venezuelan President Nicolás Maduro
    • If enforced, the Defence Production Act threat would establish legal precedent for compelling tech compliance on national security grounds

    The confrontation between the Pentagon and Anthropic represents the first major test of whether Silicon Valley can maintain ethical boundaries once it accepts government contracts. With a 72-hour deadline now ticking, the dispute will determine whether AI companies can take Pentagon money whilst controlling how their products are used. The answer could reshape the relationship between tech firms and government agencies for years to come.

[Image: Pentagon building exterior]

    Pete Hegseth issued the ultimatum during a Tuesday meeting at the Pentagon with Anthropic chief executive Dario Amodei, threatening to invoke the Defence Production Act and designate the company a supply chain risk if it refuses to grant unrestricted access to its Claude AI model. According to sources familiar with the discussions, Amodei outlined specific red lines during the meeting, including refusal to participate in autonomous kinetic operations where AI makes final targeting decisions without human involvement. The company also opposes the use of Claude for mass domestic surveillance.

    Pentagon officials, however, insist the current dispute has nothing to do with autonomous weapons or surveillance—a claim that sits uneasily with Anthropic's stated concerns and demands independent scrutiny. That raises an uncomfortable question for an industry that spent years wrapping itself in the language of responsible development: have the others already quietly capitulated?


    When ethical guardrails meet national security demands

    The dispute reportedly stems from what observers describe as a "breach of trust" after Claude was allegedly deployed during the operation that captured former Venezuelan President Nicolás Maduro in January. The AI model was used through Palantir, according to BBC sources, suggesting the Pentagon may already be circumventing Anthropic's restrictions by routing access through third-party contractors.

    The Pentagon's position is blunt: once it buys a product, Anthropic shouldn't have a say in how military officials use it.

    Anthropic was the first tech company approved to work within the Pentagon's classified military networks. Those partnerships now look like a double-edged sword. The company built its brand around safety-first AI development, regularly publishing transparency reports that acknowledged its technology had been "weaponised" by hackers for sophisticated cyber-attacks.

[Image: Artificial intelligence concept with digital technology]

    But brand values and government contracts operate under different rules. Defence department official Emil Michael previously stated the agency expects all four contracted AI companies to allow "any model for all lawful use cases"—a formulation that leaves significant room for interpretation around what constitutes lawful military deployment.

    The Defence Production Act enters the AI era

    The threat to invoke the Defence Production Act represents an extraordinary escalation. Originally designed to compel industrial production during wartime—think manufacturing tanks and ammunition—the law is now being wielded to force AI deployment. If Hegseth follows through, it would establish legal precedent for compelling tech compliance on national security grounds, fundamentally reshaping the relationship between Silicon Valley and government agencies.

    Pentagon officials characterise the Tuesday meeting as cordial, whilst Anthropic described it as "good-faith conversations" about usage policy. These diplomatic platitudes are difficult to square with a 72-hour compliance deadline and threats of forced production orders. When a government official gives you until Friday evening to surrender your principles or face legal compulsion, the conversation has moved well beyond good faith.

    The question facing Anthropic is whether any tech company can maintain meaningful ethical boundaries once it accepts government funding. The company's statement emphasised supporting "the government's national security mission in line with what our models can reliably and responsibly do"—language that suggests it's trying to thread an increasingly narrow needle between cooperation and capitulation.

[Image: Military personnel working with technology equipment]

    Georgetown University's Emelia Probasco framed the dispute as one requiring resolution, arguing that service members deserve "every possible advantage." That perspective reflects a broader view within defence circles: when American lives are at stake, corporate ethics policies become an unaffordable luxury.

    What happens after Friday evening will signal whether AI companies can maintain red lines at all. If Anthropic backs down, it confirms that national security demands trump corporate principles once government contracts are signed. If it holds firm and faces Defence Production Act orders, it tests whether wartime manufacturing laws can legally compel AI deployment—a question that could end up before federal courts and reshape the industry's relationship with government for years. Either way, the fantasy that tech companies could take Pentagon money whilst controlling how their products are used appears to be ending.

    • The outcome will determine whether AI companies can enforce ethical boundaries after accepting government contracts, or whether national security demands automatically override corporate principles
    • Watch for whether other Pentagon AI contractors—OpenAI, Google and xAI—have already quietly removed similar restrictions, making Anthropic's resistance the exception rather than the norm
    • If the Defence Production Act is invoked, expect legal challenges that could establish precedent for government authority over AI deployment and fundamentally reshape Silicon Valley's relationship with defence agencies
Ross Williams

    Co-Founder

    Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.
