
Meta's Smart Glasses Privacy Breach: A Regulatory Reckoning Looms
- UK's Information Commissioner's Office has written to Meta after reports contractors in Nairobi are reviewing intimate footage from Ray-Ban smart glasses
- Workers at outsourcing firm Sama told Swedish newspapers they regularly view sensitive content including people using toilets, having sex, and undressing
- Meta's face-blurring protections 'sometimes failed' according to workers, leaving faces clearly visible
- The footage is used to train Meta's AI systems through manual labelling work that underpins machine learning
The UK's data watchdog has written to Meta after reports emerged that contractors in Nairobi are reviewing intimate footage captured by the company's Ray-Ban smart glasses — including recordings of people using the toilet, having sex, and undressing in bedrooms. The revelation raises uncomfortable questions about what consumers actually consent to when they buy AI-powered wearables.
According to an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, workers employed by outsourcing firm Sama in Kenya told reporters they regularly view highly sensitive content. The footage is meant to train Meta's AI systems to interpret images more accurately, part of the labelling work that underpins machine learning. Users manually activate recording or use voice commands, but may have little idea their bedroom videos could end up on a contractor's screen thousands of miles away.
'We see everything — from living rooms to naked bodies'
Meta insists the data is 'filtered to protect people's privacy', including face-blurring systems. Yet workers described to the Swedish papers how these protections 'sometimes failed' and faces remained clearly visible. When the BBC asked Meta to point to specific terms covering human review, the company provided a link to its Supplemental Meta Platforms Terms of Service but couldn't identify which sections actually addressed contractor access.
Meta's UK AI terms of service do state that the company 'will review your interactions with AIs' and that this 'may be automated or manual (human)', but there's a significant gap between that vague language and the reality of Kenyan data annotators watching you in the bathroom.
The consent theatre problem
What's striking here is the disconnect between consumer expectations and operational reality. Most people purchasing Ray-Ban Meta glasses likely assume any AI training happens through automated systems, perhaps with some anonymised sampling. The idea that a human being in Nairobi might watch footage of your spouse undressing — as one worker described happening when a man's glasses were left recording in a bedroom — probably doesn't feature in that mental model, regardless of what's buried in the privacy policy.
The Information Commissioner's Office described the claims as 'concerning' and confirmed it would request information on how Meta is meeting UK data protection law obligations. Under GDPR and UK data protection standards, companies must provide appropriate transparency and put users in control. Whether directing users to expansive terms of service documents counts as 'clearly explaining' will be central to any investigation.
This isn't Sama's first brush with controversy over working conditions and content exposure. The company previously faced legal action from former employees over content moderation contracts with tech firms, work it has since abandoned. Sama began as a non-profit focused on creating tech employment opportunities and holds B-corp certification for ethical business practices, which adds a layer of irony to workers describing constant workplace surveillance whilst themselves reviewing deeply private moments from strangers' lives.
The wearables blindspot
Meta announced an expanded partnership with Ray-Ban and Oakley in September, pushing further into AI-powered wearables at precisely the moment regulatory frameworks struggle to keep pace. These devices offer genuine utility: real-time translation, assistance for blind and partially sighted users, hands-free information access. But they also create a new category of intimate surveillance, one where the line between the wearer's consent and bystanders' privacy becomes dangerously blurred.
The glasses include a recording light that activates when capturing images or video. Meta advises against recording in private spaces and suggests users show others when the light is on. That guidance seems almost quaint given workers' testimony about the content they've reviewed, which reportedly included glasses-wearers watching pornography.
Women have previously told the BBC they were filmed without consent by smart glasses users, highlighting how quickly these devices enable covert recording. Combined with this latest reporting about where that footage might end up, the technology starts to look less like a convenience and more like a privacy catastrophe waiting to scale.
What happens next
The ICO's inquiry will test whether current data protection law is adequate for AI wearables. Meta's inability to clearly identify which terms cover human review suggests the company may struggle to demonstrate meaningful consent under UK transparency requirements. If the regulator finds violations, fines could follow — though Meta has absorbed substantial penalties before without fundamentally changing its data practices.
More broadly, this case may force clearer disclosure requirements across the AI wearables sector before these devices become ubiquitous. Consumers deserve to know, in plain language, that buying AI-enabled glasses could mean outsourced workers in Kenya, the Philippines, or elsewhere might watch their most private moments. Whether that knowledge would actually stop people buying them is another question entirely, but at minimum the choice should be informed rather than obscured in legal boilerplate.
The AI gold rush has consistently prioritised speed over safeguards. What the Meta glasses episode demonstrates is that surveillance creep doesn't require government overreach or dystopian legislation. Sometimes all it takes is a fashionable product, ambitious AI training needs, and terms of service nobody reads until journalists start asking uncomfortable questions. EU lawmakers are also confronting Meta over the alleged privacy breaches as the issue gains international attention.
- Current data protection frameworks may be inadequate for AI wearables that create new categories of intimate surveillance requiring urgent regulatory clarity
- The disconnect between vague terms of service and operational reality of human review demands plain-language disclosure about where private footage actually ends up
- Watch for the ICO's findings on whether Meta demonstrated meaningful consent — this could set precedent for transparency requirements across the entire AI wearables sector before these devices become ubiquitous
Co-Founder
Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.



