
OpenAI's Ad Plans for ChatGPT: Ethics Experts Are Walking Away
- Zoe Hitzig, an OpenAI policy researcher focused on AI ethics, resigned over plans to introduce advertising into ChatGPT
- ChatGPT became the fastest-growing consumer application in history after launching in late 2022
- OpenAI's operating expenses have been reported in the tens of millions monthly
- Multiple senior safety researchers have departed from OpenAI, Google DeepMind, Anthropic and xAI over the past year
A policy researcher employed specifically to worry about the ethical implications of artificial intelligence has walked away from OpenAI. Her reason? The company plans to introduce advertising into ChatGPT, a chatbot that has quietly become a confessional booth for millions of users sharing their deepest anxieties about health, relationships, money and mental wellbeing. This isn't search history. This is something far more intimate.
Zoe Hitzig, writing in the New York Times after her resignation, pointed to what she described as 'an archive of human candour that has no precedent'. Users don't just query ChatGPT the way they might Google something. They confide in it. The distinction matters enormously when advertising enters the equation, particularly when the revenue pressures on firms like OpenAI have reached what can only be described as fever pitch.
What's particularly striking is that Hitzig isn't alone. Several safety researchers have departed from leading AI laboratories in recent months, including Mrinank Sharma, who led safeguards research at Anthropic. In explaining his exit, Sharma noted he had 'repeatedly seen how hard it is to truly let our values govern our actions'. When the people hired specifically to pump the brakes start walking away, the question isn't whether to be concerned. It's why more of us aren't alarmed already.
The commercial reality behind the research
The transformation has been swift and brutal. ChatGPT launched in late 2022 as what was essentially a research demonstration. Within months, it had become the fastest-growing consumer application in history, triggering an arms race amongst Microsoft, Google and Amazon to integrate generative AI into everything from search to cloud infrastructure.
That acceleration has come with eye-watering costs. Training and operating large language models requires vast data centres packed with expensive graphics processing units, with frontier systems costing billions to develop and run. OpenAI's operating expenses have been reported in the tens of millions monthly, forcing a reckoning between research ideals and commercial survival.
The company has pledged that any advertisements would be clearly labelled and that private conversations would remain protected from advertiser access. Yet the specifics remain notably vague, and the tension is obvious: these systems need to generate revenue at a scale commensurate with their costs, whilst simultaneously protecting the intimate data users have entrusted to them. According to public statements from departing researchers, that balance is proving harder to maintain than the laboratories would like to admit.
When the internal brakes fail
The departures aren't isolated incidents. Over the past year, senior figures working on alignment and safety have left OpenAI, Google DeepMind, Anthropic and Elon Musk's xAI. Ethics hasn't always been explicitly cited as the driving factor, but multiple researchers have hinted at disagreements over deployment speed and the prioritisation of commercial objectives over safety protocols.
This presents a structural problem that extends well beyond individual companies. The roles these researchers occupied were created precisely because the technology they were developing carried significant risks: misinformation, bias, copyright infringement, potential misuse in healthcare or financial decision-making, and the wholesale scraping of creative work to train systems that now compete with their creators.
Safety teams were meant to be the internal brake pedal, the people asking uncomfortable questions about transparency, oversight and long-term consequences. Their exodus suggests that brake pedal is being overridden by the accelerator, just as these systems become embedded in critical infrastructure across sectors. Anthropic, to its credit, has invested in what it calls 'constitutional AI', designed to guide how its Claude model responds to sensitive queries. But the commercial incentives pulling in the opposite direction are immense, and capital allocation tends to follow revenue potential rather than ethical reassurance.
The UK's uncomfortable mirror
Britain's position in this landscape reflects the same tension these researchers faced internally. The government has positioned the country as a global leader in AI safety, establishing the UK AI Safety Institute and claiming to have 'long studied' the risks these systems pose. Simultaneously, ministers are courting billions in AI investment, promoting the sector as an engine for economic growth.
That claim of long study deserves scrutiny. The policy environment has shifted rapidly in recent months, with investment attraction often appearing to take precedence over regulatory caution. The result is a balancing act that looks increasingly precarious as the technology evolves faster than oversight mechanisms can adapt.
Investors are deploying capital into AI startups at remarkable velocity whilst regulators struggle to understand systems that change materially between one quarter and the next. The pressure this creates for roles like the one Hitzig held is immense: asking difficult questions about fast-moving technology whilst your employer faces existential pressure to monetise and scale.
The departures from major AI laboratories should prompt serious questions about what happens when commercial imperatives override the safety mechanisms built into these organisations. These weren't junior employees frustrated with corporate politics. They were senior researchers hired specifically to consider long-term risks, who concluded they could no longer perform that function effectively.
As these systems become further embedded in search, healthcare diagnostics, financial services and enterprise software, the absence of robust internal safety oversight becomes more consequential. The ethics experts are leaving the room. Whether anyone fills their seats with people empowered to say no when it matters most will likely determine whether this technology serves the public interest or simply the bottom line.
- The wave of departures from senior safety researchers signals that commercial pressures are overriding ethical safeguards at precisely the moment AI systems become embedded in critical infrastructure
- Watch whether AI laboratories replace departing ethics experts with equally empowered roles, or whether safety functions become increasingly ceremonial as revenue imperatives intensify
- The introduction of advertising into intimate AI interactions like ChatGPT represents a fundamental shift in how these platforms monetise user trust, with implications that extend far beyond traditional digital advertising models