A coalition of over 70 civil liberties, domestic violence, reproductive rights, and LGBTQ+ organizations, including the ACLU, Fight for the Future, and Access Now, has sent a letter to Meta CEO Mark Zuckerberg demanding that the company kill a rumored facial recognition feature for its Meta Ray-Ban smart glasses before it ever reaches consumers.
According to a Wired report, the feature, internally called “Name Tag”, would allow wearers to point their glasses at a stranger and pull up information about them using Meta’s AI assistant. Engineers are reportedly weighing two versions: one that identifies people you’re already connected with on Meta platforms, and a broader version that could recognize anyone with a public Facebook or Instagram account.
The coalition argues that no amount of design tweaks or opt-out mechanisms can make the feature safe. Bystanders on the street have no way to consent to being identified, and the groups warn the technology could be weaponized by stalkers, abusers, and federal law enforcement agencies.
Why is the timing so suspicious?
What makes this story particularly troubling is a leaked internal Meta memo from May 2025. As reported by The New York Times, the memo noted that the company planned to launch during a “dynamic political environment,” when civil society groups would have their attention pulled elsewhere. The coalition has called this “vile behavior,” and rightly so.
Meta Ray-Bans were already in hot water after an investigation revealed that the smart glasses were sending video recordings of users’ most personal moments to Meta for AI training. The new facial recognition feature would be another slap in the face for its customers’ privacy.
Should you be worried?
If you own a pair of Ray-Ban Meta glasses, the existing hardware can secretly record video. Adding facial recognition on top of that would mean anyone you walk past could, in theory, be silently identified and matched to a trail of personal data, and other Meta Ray-Ban users could do the same to you.

I don’t have high hopes for privacy and security from a company like Meta, but this feature ventures into uncharted waters and could cause physical harm to people in the real world.
Meta has responded by saying it does not currently offer this feature and would take a “very thoughtful approach” before rolling anything out. Whether that promise holds remains to be seen.