Business Fortitude

    UK Regulators Demand Age Checks. Silicon Valley Holds the Cards.

    By Ross Williams · 5 min read
    • 86% of 10- to 12-year-olds in Britain have created social media accounts, according to Ofcom research
    • Ofcom and the ICO have jointly requested age verification from seven major platforms including Facebook, Instagram, TikTok and YouTube
    • Current law only mandates rigorous age checks for adult content sites, not general social media platforms
    • TikTok removed over 90 million suspected under-13 accounts between October 2024 and September 2025

    Two of Britain's most powerful watchdogs have just issued a remarkable demand to Silicon Valley: introduce rigorous age verification to stop children under 13 accessing your platforms. The problem? They cannot actually force the companies to comply. What follows is either a spectacular regulatory failure or a spectacular corporate one—perhaps both.

    Ofcom and the Information Commissioner's Office sent a joint letter to Facebook, Instagram, Snapchat, TikTok, YouTube, Roblox and X this week, calling for them to abandon the self-declaration systems that currently allow the vast majority of pre-teens to create accounts. The appeal for "highly effective age checks" comes with a telling caveat. Under the Online Safety Act, such measures are only legally required for services providing adult content.

    Child using smartphone and social media

    Pornography sites, in other words, face a stricter legal obligation to verify users than the platforms British children actually spend hours scrolling each day.


    The enforcement gap

    What makes this intervention unusual is not just the joint approach from two separate regulators, but the admission embedded within it. By requesting voluntary adoption of stronger controls rather than mandating them, Ofcom and the ICO have exposed a critical gap in Britain's new online safety framework.

    Melanie Dawes, Ofcom's chief executive, said services were "failing to put children's safety at the heart of their products". True enough. But the regulator she leads lacks the statutory power to compel them to do so.

    Technology secretary Liz Kendall insisted no platform would get a "free pass" and declared that "no company should need a court order to act responsibly to protect children". The irony is thick: the government cannot issue such an order because the legislation does not provide for it.

    The companies responded with the expected combination of defensiveness and deflection. Google, which owns YouTube, said it was "surprised" by Ofcom's move away from a risk-based approach, describing its youth safety work as "industry-leading". That characterisation sits awkwardly alongside the reality that nearly nine in ten British pre-teens have managed to bypass platforms' age restrictions.

    Social media applications on mobile phone screen

    Meta pointed to existing measures including AI-based age detection and facial estimation technology. Snapchat said it was testing verification tools. TikTok highlighted that it had removed more than 90 million suspected under-13 accounts between October 2024 and September 2025. Whether this represents diligent enforcement or evidence of a system fundamentally unable to prevent children signing up in the first place is left as an exercise for the reader.

    What works, what doesn't

    The current model relies almost entirely on users honestly reporting their birth date during sign-up. Children lie. This is not a profound insight.

    What Ofcom and the ICO are requesting mirrors the age verification technology now being deployed for adult services under legal mandate. Methods range from credit card checks and identity document scanning to facial analysis and third-party verification services. None are perfect. All raise privacy concerns. But they demonstrably work better than a dropdown menu asking someone to select their birth year.

    Professor Amy Orben, a digital mental health expert at Cambridge University, welcomed the intervention but warned it must be only the beginning of stronger regulation: safety, she argued, must be "built into products by design rather than treated as an afterthought".

    Social media analyst Matt Navarra went further, arguing that age verification is merely "step one". Designing platforms that do not exploit children's attention represents a far harder challenge. The question regulators appear reluctant to confront is why this gap exists at all.

    The Online Safety Act received Royal Assent in October 2023 after years of consultation and debate. Parliament understood that children were accessing age-restricted platforms. The 86% figure may be recent, but the phenomenon is not.

    Regulatory oversight and digital safety concept

    One explanation is political squeamishness about mandating intrusive verification for services used by hundreds of millions, including adults who value anonymity or simply do not wish to upload their passport to a social media company. Another is the lobby power of platforms that would prefer the flexibility of self-regulation to the certainty of legal requirements.

    What happens next

    Ofcom has said it will monitor how platforms respond to this request. Without enforcement powers specific to under-13 age verification, that monitoring amounts to strongly worded follow-up letters if companies drag their feet. The companies themselves now face a choice.

    They could implement robust verification, accepting the friction this creates for genuine users and the privacy concerns it raises. They could continue with incremental improvements to existing systems, pointing to AI tools and transparency reports as evidence of good faith. Or they could simply wait out the regulators, gambling that public pressure will force legislative change rather than corporate action.

    Parliament could, of course, amend the Online Safety Act to mandate age verification for all services with minimum age limits. Whether it will do so depends partly on how this voluntary phase plays out, and partly on whether the political appetite exists to pick another fight with Silicon Valley. The platforms are betting it does not.

    • Britain's online safety framework contains a critical enforcement gap: regulators can request but not mandate age verification for social media platforms, leaving child protection dependent on voluntary corporate compliance
    • The success of this initiative hinges on whether platforms implement meaningful verification or continue incremental improvements while awaiting legislative action
    • Watch for parliamentary appetite to amend the Online Safety Act—if voluntary measures fail, the question is whether politicians will force another confrontation with Silicon Valley or accept continued self-regulation
    Ross Williams

    Co-Founder

    Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.


