2026 Deceptive Ad Trends
A closer look at what we’ll be monitoring in the new year.
Meta markets its AI smart glasses with the slogan “designed for privacy, controlled by you.”
“You’re in control of your data and content,” Meta says on its website, adding that the AI glasses are “built for your privacy and others’ too.”
But according to a recent spate of class-action lawsuits, users’ most private moments are not only being captured through the glasses but viewed by strangers halfway around the world to train Meta’s AI models.
Specifically, the complaints allege, Meta feeds consumer data, including videos captured through the glasses, to a subcontractor in Kenya where thousands of people working as “data annotators” manually review and label the data with tags like “cars,” “lamps” and “people” to improve Meta’s AI.
“These workers report seeing everything,” alleges the first of eight lawsuits filed against Meta in March, citing whistleblower accounts. “People changing clothes, using the bathroom, engaging in sexual activity, handling financial information, and conducting other private activities inside their homes that no reasonable consumer would ever expect a stranger to watch.”
The complaints also cite workers in Kenya who reported that faces were sometimes still visible, despite Meta’s claims that it removes “key identifiable information” and the company’s policy stating that it automatically blurs faces that appear in annotated data.
According to the lawsuits, all of which were filed in California and are pending, millions of people paid a premium for Meta’s Ray-Ban- and Oakley-branded AI glasses, and now they’re unable to use the touted AI features without having their personal information exposed. As the initial complaint alleges:
Any consumer who does not wish to have their most intimate moments transmitted to Meta’s servers and reviewed by offshore human contractors is left with a $299 to $799 pair of sunglasses frames with no AI or “smart” functionality whatsoever.
In other words, a regular pair of sunglasses.
In response to a request for comment, a Meta spokesperson said the company disagrees with the allegations and intends to fight them in court, adding:
When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.