The government has set a clock ticking that most tech platforms aren't equipped to beat. Under amendments to the Crime and Policing Bill, companies will face penalties of up to 10% of global revenue—or outright UK bans—if they fail to remove non-consensual intimate images within 48 hours of a report. For Meta, that's a potential £9.4 billion fine based on 2023 revenues. For Google, £22 billion.
The question isn't whether platforms want to comply. The question is whether they actually can.
The amendments outline an ambitious vision: a victim reports an image once, and it vanishes not just from the original platform but from every service, with automatic blocking to prevent re-uploads. In theory, this creates a cross-industry removal system that treats revenge porn with the same severity as child sexual abuse material or terrorist content—categories where hash-matching technology and international databases already enable rapid, coordinated takedowns.
In practice, the technical infrastructure to achieve this doesn't exist at scale. At least not yet.
The enforcement mechanics problem
A 48-hour countdown sounds straightforward until you consider how content moderation actually works across global platforms. Most major tech companies operate tiered review systems where reports enter queues, get assessed by automated tools, then escalate to human moderators. Average response times vary wildly—Meta's transparency reports show turnaround times ranging from under 24 hours for priority violations to several days for lower-tier complaints.
The new rules would require platforms to fast-track non-consensual intimate image reports above almost everything except CSAM and terrorist material. That means building separate reporting pipelines, training moderators to prioritise these cases, and potentially hiring significantly more staff. For smaller platforms operating in the UK market without the compliance budgets of tech giants, this represents a genuine barrier to entry.
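To make the prioritisation concrete, here is a minimal sketch of what such a fast-track queue might look like: reports carry a fixed priority by category, and non-consensual intimate image reports carry a statutory 48-hour clock. The category names, priority values and class structure are illustrative assumptions, not any platform's actual pipeline.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical priority tiers: CSAM and terrorist material first, then
# non-consensual intimate images (NCII), as the amendments would require.
PRIORITY = {"csam": 0, "terrorism": 0, "ncii": 1, "harassment": 2, "spam": 3}
SLA = {"ncii": timedelta(hours=48)}  # the statutory removal window

@dataclass(order=True)
class Report:
    priority: int
    received_at: datetime = field(compare=False)
    category: str = field(compare=False)
    content_id: str = field(compare=False)

    @property
    def deadline(self) -> datetime | None:
        window = SLA.get(self.category)
        return self.received_at + window if window else None

class TriageQueue:
    """Single queue that always surfaces the most urgent report first."""

    def __init__(self) -> None:
        self._heap: list[Report] = []

    def submit(self, category: str, content_id: str) -> Report:
        report = Report(PRIORITY[category], datetime.now(timezone.utc),
                        category, content_id)
        heapq.heappush(self._heap, report)
        return report

    def next_report(self) -> Report:
        return heapq.heappop(self._heap)

q = TriageQueue()
q.submit("spam", "post:1")
r = q.submit("ncii", "img:2")
print(q.next_report().category, "due by", r.deadline)  # ncii jumps the spam report
```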
What's particularly aggressive here is the scope of the penalty. The government has structured fines as a percentage of worldwide revenue, not UK revenue. This mirrors the European Union's approach under GDPR and the Digital Services Act—Brussels-style enforcement mechanisms now being deployed in a specific content category by a post-Brexit Britain that spent years promising a lighter regulatory touch than the continent. The irony isn't lost on tech policy observers.
The government's stated aim—one report triggering removal across multiple platforms—requires something that currently doesn't exist in commercial practice: a shared database of non-consensual intimate image hashes accessible to all major platforms in real time.
Tech companies already use PhotoDNA and similar perceptual hashing tools to identify and block known CSAM across their services. These systems create unique digital fingerprints of images that persist even when files are cropped, filtered, or slightly altered. Platforms check uploaded content against databases maintained by organisations like the National Center for Missing & Exploited Children in the US, enabling automatic blocking of previously flagged material.
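PhotoDNA itself is proprietary, but the underlying idea is simple to illustrate. The sketch below uses a much cruder "difference hash" to show how perceptual fingerprinting and database matching work in principle; the threshold value, function names and file paths are illustrative assumptions, not an operational standard.

```python
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Toy 'difference hash': encodes whether each pixel is brighter than its neighbour.

    Stands in for the general idea that visually similar images produce
    similar fingerprints even after resizing, recompression, or light filtering.
    """
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 10  # illustrative tolerance, not an operational standard

def should_block(upload_path: str, flagged_hashes: set[int]) -> bool:
    """Upload-time check against a reference database of previously flagged hashes."""
    candidate = dhash(upload_path)
    return any(hamming(candidate, h) <= MATCH_THRESHOLD for h in flagged_hashes)

# Example usage (paths are placeholders):
# flagged = {dhash("reported_image.jpg")}
# should_block("new_upload.jpg", flagged)  # True if perceptually similar
```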
Extending this model to non-consensual intimate images presents both technical and policy challenges. CSAM databases work because there's international consensus that such material is illegal everywhere, with clear legal frameworks for sharing information between platforms and law enforcement. Non-consensual intimate images occupy murkier territory—what constitutes consent varies by jurisdiction, and false reports could weaponise such a system for censorship.
Ofcom, which would oversee enforcement, hasn't detailed how this cross-platform architecture would work or who would maintain the reference database. The regulator is still "considering" plans to treat non-consensual intimate images with CSAM-level severity, according to government statements—language that suggests policy development rather than implementation-ready systems.
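To make concrete what has not been specified, here is one possible shape such a shared registry could take. It is purely an illustrative sketch under assumed names and methods, not a description of any planned system; the closest existing analogue, StopNCII, relies on voluntary hash submission by victims and voluntary participation by platforms.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Protocol

@dataclass
class HashRecord:
    phash: int              # perceptual hash of the reported image
    reported_at: datetime
    source_platform: str    # where the first report was made
    human_verified: bool    # has a reviewer confirmed non-consent?

class SharedNCIIRegistry(Protocol):
    """Interface a cross-platform reference database might expose.

    Every method here is hypothetical: who operates the registry, how
    platforms authenticate, and how disputes are resolved are exactly
    the open questions Ofcom has yet to answer.
    """

    def submit(self, record: HashRecord) -> str:
        """Register a hash after one platform confirms a report; returns a case ID."""
        ...

    def lookup(self, phash: int, max_distance: int) -> list[HashRecord]:
        """Called by every platform at upload time to block known images."""
        ...

    def revoke(self, case_id: str) -> None:
        """Remove an entry, for example after a false or withdrawn report."""
        ...
```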
The deepfake dimension
These amendments arrive amid escalating concern about AI-generated sexual imagery. The government criminalised the creation of sexually explicit deepfakes earlier this year, and multiple UK regulatory bodies are currently investigating X over its Grok chatbot's ability to generate such images.
The 48-hour removal rule would apply to AI-generated non-consensual content as well as real photographs. This is where the technical challenge intensifies. Deepfakes can be generated and distributed faster than platforms can identify and hash them, particularly when bad actors deliberately modify outputs to evade detection. Each variation requires a new hash entry, turning the database into a game of whack-a-mole at industrial scale.
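A toy simulation makes that dynamic visible. The sketch below models an image edit as flipping bits of a 64-bit perceptual hash: small changes stay inside the matching threshold, a deliberate manipulation drifts outside it, and each successful evasion forces a new database entry. The threshold and the bit-flip model are assumptions for illustration; real hash drift depends on the algorithm and the edit.

```python
import random

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def perturb(h: int, flips: int, bits: int = 64) -> int:
    """Crudely model an image edit as flipping some bits of its perceptual hash."""
    for pos in random.sample(range(bits), flips):
        h ^= 1 << pos
    return h

MATCH_THRESHOLD = 10                 # illustrative tolerance
original = random.getrandbits(64)    # stand-in for the hash of a flagged deepfake
database = {original}

def matches(candidate: int) -> bool:
    return any(hamming(candidate, h) <= MATCH_THRESHOLD for h in database)

light_edit = perturb(original, flips=4)     # recompression-level change
evasive_edit = perturb(original, flips=24)  # deliberate manipulation

print(matches(light_edit))    # True: still caught by the existing entry
print(matches(evasive_edit))  # False: slips past until someone reports it again
database.add(evasive_edit)    # every successful evasion means another entry
```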
The platforms themselves acknowledge these limitations, though typically in the careful language of policy submissions rather than public statements. During consultations on the Online Safety Act, several major tech companies flagged that automated detection tools for non-consensual intimate images have higher error rates than CSAM detection, with significant risks of both false positives and false negatives.
What happens next
The Crime and Policing Bill is making its way through Parliament, with these amendments likely to pass given cross-party support for measures targeting online violence against women. Implementation timelines haven't been confirmed, but platforms should expect an enforcement regime operational within 12 to 18 months.
Between parliamentary approval and enforcement, Ofcom needs to issue detailed guidance on reporting mechanisms, set standards for hash-matching technology, and clarify how the "one report, universal removal" system will function in practice. That guidance will determine whether this becomes a genuine shift in how platforms handle image-based abuse or an aspirational policy that proves unenforceable at scale.
The broader implication is clear: the UK is testing whether aggressive, category-specific content moderation requirements backed by revenue-based penalties can force behavioural change at tech giants without comprehensive EU-style platform regulation. If it works—and that's a significant if given the technical hurdles—expect other categories of harmful content to receive similar treatment. If platforms can't meet the 48-hour deadline consistently, those potential multi-billion-pound fines may need to be levied before the policy gains real teeth.