Meta’s Ray-Ban smart glasses are at the center of yet another controversy. A Kenyan AI training firm called Sama, which Meta used to help train its AI, saw its contract abruptly terminated shortly after its workers came forward with deeply troubling allegations (via BBC).
The workers claim they were repeatedly exposed to graphic content captured through Meta’s glasses, and now more than a thousand of them have lost their jobs.
The disturbing footage behind Meta’s AI training
Sama’s workers were data annotators, a role that involves manually labeling video content to teach Meta’s AI how to interpret images. They also reviewed transcripts of Meta AI conversations to make sure the chatbot was giving accurate responses.
What they didn’t sign up for, allegedly, was reviewing footage of people having sex or using the toilet, all filmed through Meta’s glasses without users’ knowledge. In one account, a man’s glasses were left recording in a bedroom, capturing his wife undressing.
Meta’s glasses do have a small indicator light that turns on when the camera is active, though that clearly hasn’t prevented misuse. The company admitted that contracted workers may occasionally review content shared with Meta AI, framing it as standard practice for improving user experience.
Why did Meta pull the contract?
Less than two months after those accounts surfaced, Meta terminated its agreement with Sama, leaving 1,108 workers without jobs. Sama says it met every standard Meta required and was never told otherwise; Meta counters that Sama fell short of its expectations.
A Kenyan workers’ organization believes the real reason was to silence staff who had gone public about humans reviewing smart glasses footage.
The UK’s Information Commissioner’s Office called the situation “concerning” in a letter to Meta. Additionally, Kenya’s data protection authority opened a formal investigation.
This isn’t Sama’s first difficult encounter with Meta. An earlier Facebook content moderation contract ended in similar controversy, with former employees describing exposure to traumatizing content.
Sama later said it wished it had never taken that work on. With regulators now circling and a legal case ongoing, the pressure on Meta to explain its decision is only growing.
Meta’s smart glasses have a much bigger privacy problem

Meta’s smart glasses are moving deeper into controversy as reports suggest they could soon identify people in real time. That has intensified privacy and civil rights concerns around face recognition in everyday public spaces.
Civil rights groups are pushing back against the idea, arguing that always-on identification could happen without people's clear consent.
Apps like Godsend are emerging in response to that threat, warning people when nearby smart glasses might be secretly recording them, a sign of how uneasy people have become about being filmed without their knowledge.
The technology is also showing up in less flattering ways, including reports of students using smart glasses to cheat in exams. That has added a new layer to the debate around misuse.
That said, it’s not all bad. The glasses have found genuinely good uses too, particularly in helping visually impaired people navigate spaces with assistance from strangers.