“We see everything”: Meta’s Ray-Ban glasses are recording your most private moments
By Cassie B. // Apr 09, 2026

  • Civil society groups warn Meta's planned facial recognition for Ray-Ban glasses creates a dangerous surveillance tool.
  • Internal documents show Meta acknowledged the feature's privacy risks while planning a strategic launch.
  • Investigations reveal sensitive user footage is routinely reviewed by overseas contractors for AI training.
  • The glasses' discreet design enables covert filming and harassment, with limited legal recourse for victims.
  • Regulators face urgent calls to intervene as millions of the smart glasses are sold globally.

A broad coalition of civil society groups is sounding the alarm over Meta’s plan to embed facial recognition into its popular Ray-Ban smart glasses, warning the feature would create a new frontier for surveillance and abuse. The company’s push comes as investigations reveal a hidden pipeline where sensitive user footage, including scenes of nudity and private conversations, is routinely reviewed by overseas contractors, often without the knowledge of the people being recorded. This convergence of advanced biometric tracking and intimate data handling has triggered urgent calls for regulators to intervene before the technology becomes ubiquitous.

Last week, 64 consumer advocacy groups sent a letter to Meta, Ray-Ban parent company EssilorLuxottica, the White House, the Federal Trade Commission, and the Department of Justice demanding a halt to the facial recognition rollout. The coalition, led by the Consumer Federation of America and Ultraviolet Action, called the plan “dangerous and reckless.” Their letter stated, “This move will endanger us all, and particularly give ammunition to scammers, blackmailers, stalkers, child abusers, and authoritarian regimes.”

A feature designed for stealth

Internally dubbed “Name Tag,” the feature would allow glasses wearers to identify individuals with public Meta accounts and retrieve information about them using Meta’s AI assistant. According to internal documents reviewed by The New York Times, Meta knew the feature carried “safety and privacy risks.” The company reportedly planned an initial release at a conference for blind attendees. An internal memo from Meta’s Reality Labs noted the company would launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

The commercial stakes are high. EssilorLuxottica reported selling more than 7 million pairs of the smart glasses last year. This push follows a history of regulatory penalties for Meta, including a $5 billion FTC settlement in 2019 for privacy violations and billions more to settle lawsuits in Illinois and Texas over unauthorized facial data collection.

The human cost of AI training

A separate investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten peeled back the curtain on how Meta’s AI systems are trained. It found that contractors in Nairobi, Kenya, hired through outsourcing firms like Sama, routinely annotate sensitive images, videos, and audio collected through the glasses. Interviews with more than 30 employees revealed the material often includes highly intimate content. “In some videos you can see someone going to the toilet, or getting undressed,” one worker said. “I don’t think they know, because if they knew they wouldn’t be recording.”

Other annotators described reviewing clips showing nudity, sexual activity, and financial information like visible bank cards. One worker summarized, “We see everything — from living rooms to naked bodies.” They also noted users can record themselves without realizing it. As one annotator put it, “You think that if they knew about the extent of the data collection, no one would dare to use the glasses.”

Meta states that sensitive data are not intended for AI training and claims safeguards like face blurring are in place. However, former employees and current annotators say these protections frequently fail. “The algorithms sometimes miss,” one former Meta worker said, noting faces and bodies can remain visible. The company’s privacy terms state that some data may be reviewed by humans, but experts argue the line between voluntary sharing and automatic collection is dangerously unclear.

A tool for harassment and a legal gray zone

The privacy invasion extends beyond data centers. The glasses’ discreet design, resembling ordinary eyewear, makes covert filming simple. A small LED indicator light is meant to signal recording, but online tutorials show how to disable it, and special patches are sold to trick the sensor. This has created a new tool for harassment, with women disproportionately targeted. Kassy Zanjani of Vancouver was secretly recorded by a stranger using the glasses; the video was posted online and viewed tens of thousands of times. “Now, this is on the forefront of my mind,” Zanjani said. “There’s fear, I’m not able to enjoy public spaces.”

Legal recourse is limited. When Zanjani contacted police, she was told her case did not meet the criteria for existing harassment or intimate image laws in Canada. Law professor Wayne MacKay argues legislation must focus on the harm rather than the specific technology. “If the harm is the taking of the information or the posting of the images, that should be the main focus,” he said.

U.S. Senators Ron Wyden and Jeff Merkley have separately demanded that Meta explain its facial recognition plans, setting an April 6 deadline for a response. It remains unclear whether Meta has replied; neither the senators’ offices nor the company has answered media requests for comment.

The story of Meta’s smart glasses is a familiar one in the digital age: a rush to innovate and monetize, followed by belated scrutiny of the human consequences. It reveals a world where a casual conversation in a café or a private moment in a home can become training data for a corporate AI, reviewed by a stranger thousands of miles away. As these glasses sell by the millions, the debate is no longer about a speculative future but about the privacy standards we are willing to accept today. The question for regulators and the public is whether the convenience of a wearable camera is worth the erosion of our collective right to be unseen and unknown.

Sources for this article include:

ChildrensHealthDefense.org
EFF.org
CBC.ca
CNET.com


