
OpenAI's Pentagon Deal Backfires: User Ethics Trump Revenue Ambitions
- ChatGPT uninstalls surged 200% over the weekend following OpenAI's Pentagon deal announcement
- Anthropic's Claude AI seized the top spot on Apple's App Store as users fled OpenAI's platform
- Britain's Ministry of Defence has signed a £240m contract with Palantir for military AI systems
- OpenAI's revised contract now explicitly prohibits domestic surveillance of U.S. persons and nationals
OpenAI admitted on Monday it had cocked up its Pentagon contract, acknowledging that Friday's hastily announced deal was 'opportunistic and sloppy' after watching ChatGPT uninstalls spike by 200% over the weekend. Chief executive Sam Altman has scrambled to add explicit prohibitions on domestic surveillance, and fresh restrictions on intelligence agency access, to what was supposed to be a straightforward military services agreement. The debacle offers a rare glimpse into what happens when a tech giant tries to capitalise on a competitor's principled stance whilst underestimating its own users' ethical red lines.
According to data from Sensor Tower, the daily uninstall rate for ChatGPT surged to triple its normal level after the Pentagon partnership was announced. Meanwhile, Anthropic's Claude AI seized the top spot on Apple's App Store, where it remained through Tuesday. That represents an unusual moment in tech platform dynamics.
Users of dominant platforms rarely organise meaningful revolts over corporate policy decisions, yet AI appears to attract a more ethically engaged user base. Whether this reflects the technology's novelty or genuine concerns about autonomous weapons systems is harder to parse.
When ethics meets opportunism
The backstory makes the misstep worse. Anthropic had just been blacklisted by the Trump administration after refusing to drop what it called a corporate 'red-line' principle: its Claude AI would not be deployed in fully autonomous weapons systems. OpenAI moved swiftly to fill the void, announcing its own Pentagon deal on Friday with assurances it contained 'more guardrails than any previous agreement for classified AI deployments, including Anthropic's'.
By Monday, Altman was walking that back. The revised contract now explicitly states OpenAI's systems cannot be 'intentionally used for domestic surveillance of U.S. persons and nationals'. Intelligence agencies including the National Security Agency would require a 'follow-on modification' to gain access.
We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.
Altman's mea culpa on X was unusually candid for a tech executive defending a commercial decision. The admission is telling. OpenAI saw an opening when a competitor took a principled stand and rushed to position itself as the pragmatic alternative.
The calculation appears to have been that users would either not notice or not care. OpenAI was wrong on both counts. Professor Mariarosaria Taddeo of Oxford University told the BBC that Anthropic's exit meant 'the most safety-conscious actor' was now 'out from the room'.
Yet even that framing deserves scrutiny. CBS News reported this week that Claude remains in use in US-Israel operations against Iran, despite the blacklisting. The Pentagon declined to comment on its ongoing relationship with Anthropic, raising uncomfortable questions about how corporate red lines function when government customers have already deployed the technology.
The UK's £240m bet on lethal efficiency
Britain is making its own calculations in this space. The Ministry of Defence recently signed a £240m contract with Palantir for military AI systems. The company's Maven platform aggregates satellite data, intelligence reports, and battlefield information, analysed by commercial AI systems to enable what Louis Mosley, head of Palantir's UK operations, described as 'faster, more efficient, and ultimately more lethal decisions where that's appropriate'.
That phrase 'where that's appropriate' does substantial work. Large language models are known to hallucinate, generating plausible but false information when confronted with gaps in their training data. The prospect of such systems contributing to lethal targeting decisions sits uneasily alongside assurances of human oversight.
It would never be the case that an AI would make a decision for us.
Lieutenant Colonel Amanda Gustave, chief data officer for Nato's Task Force Maven, stressed that operators were 'always introducing a human in the loop' and that it 'would never be the case' that an AI would 'make a decision for us'. The language is carefully chosen. Current policy requires human approval.
Whether that remains true as autonomous systems advance and battlefield tempo increases is a different question entirely. Palantir's position differs from Anthropic's outright ban on autonomous weapons. The company supports keeping 'a human in the loop' rather than prohibiting the technology altogether, a stance that allows continued government contracts whilst gesturing towards ethical boundaries.
What the backlash reveals
The OpenAI reversal demonstrates that AI companies face reputational risks their social media predecessors largely avoided. Facebook and Google built surveillance-adjacent business models with minimal user revolt. Yet ChatGPT users voted with their feet within 48 hours of a military contract announcement, even one wrapped in assurances about oversight and guardrails.
Whether this reflects genuine ethical concern or performative anxiety about AI's darker applications matters less than the commercial impact. OpenAI has invested heavily in positioning itself as thoughtful about safety and alignment. Watching a competitor get blacklisted for refusing military applications, then rushing to fill that gap, punctured that carefully constructed image.
The revised contract may restore some trust, though the speed of the U-turn suggests brand management rather than reconsidered principles. OpenAI still wants Pentagon revenues. It simply needs those revenues to come with enough restrictions to satisfy users who prefer that their AI tools not contribute to battlefield decisions.
The competitive dynamics here deserve attention. As AI capabilities expand and military applications multiply, companies face a choice between lucrative defence contracts and maintaining civilian user trust. The market may be signalling that those goals are harder to reconcile than anticipated. What remains unclear is whether any contractual guardrails, however carefully worded, can bridge the gap between commercial ambition and the messy reality of algorithmic systems deployed in warfare.
- AI companies cannot assume their users will tolerate military contracts the way social media users tolerated surveillance capitalism—the technology attracts a more ethically engaged audience willing to switch platforms over principle
- Corporate 'red lines' on weapons systems may prove meaningless when government clients have already deployed the technology, raising questions about whether any guardrails can survive contact with defence procurement reality
- The gap between lucrative Pentagon revenues and civilian user trust is widening, forcing AI firms to choose between markets in ways that Facebook and Google never faced