Meta contractors say they can see Facebook users sharing private information with their AI chatbots

Fortune
August 6, 2025
By Dave Smith, Editor, U.S. News

Dave Smith is a writer and editor who previously has been published in Insider, Newsweek, ABC News, and USA TODAY.

Mark Zuckerberg, CEO of Meta Platforms, in September 2024. (David Paul Morris, Bloomberg/Getty Images)

People love talking to AI; some, a bit too much.

And according to contract workers for Meta, who review people’s interactions with the company’s chatbots to improve its artificial intelligence, people are a bit too willing to share personal, private information, including their real names, phone numbers, and addresses, with Meta’s AI.

Insider spoke with four contract workers whom Meta hires through Alignerr and Scale AI–owned Outlier, two platforms that enlist human reviewers to help train AI, and the contractors noted that “unredacted personal data was more common for the Meta projects they worked on” compared with similar projects for other clients in Silicon Valley.

And according to those contractors, many users on Meta’s various platforms, such as Facebook and Instagram, were sharing highly personal details.

Users would talk to Meta’s AI as if they were speaking with friends, or even romantic partners, sending selfies and even “explicit photos.” To be sure, people getting too close to their AI chatbots is well-documented, and Meta’s practice of using human contractors to assess the quality of AI-powered assistants for the sake of improving future interactions is hardly new.

Back in 2019, the Guardian reported how Apple contractors regularly heard extremely sensitive information from Siri users even though the company had “no specific procedures to deal with sensitive recordings” at the time.

Similarly, Bloomberg reported how Amazon had thousands of employees and contractors around the world manually reviewing and transcribing clips from Alexa users.

Vice and Motherboard also reported on Microsoft’s hired contractors recording and reviewing voice content, even though that meant contractors would often hear children’s voices via accidental activation on their Xbox consoles.

But Meta is a different story, particularly given its track record over the past decade when it comes to its reliance on third-party contractors and its lapses in data governance.

Meta’s checkered record on user privacy

In 2018, the New York Times and the Guardian reported on how Cambridge Analytica, a political consultancy group funded by Republican hedge-fund billionaire Robert Mercer, exploited Facebook to harvest data from tens of millions of users without their consent, and used that data to profile U.S. voters and target them with personalized political ads to help elect President Donald Trump in 2016.

The breach stemmed from a personality quiz app that collected data not just from participants, but also from their friends.

It led to Facebook getting hit with a $5 billion fine from the Federal Trade Commission (FTC), one of the largest privacy settlements in U.S. history.

The Cambridge Analytica scandal exposed broader issues with Facebook’s developer platform, which had allowed for vast data access but had limited oversight.

According to internal documents released in 2021 by whistleblower Frances Haugen, Meta’s leadership often prioritized growth and engagement over privacy and safety concerns.

Meta has also faced scrutiny over its use of contractors: In 2019, Bloomberg reported how Facebook paid contractors to transcribe users’ audio chats without knowing how they were obtained in the first place. (Facebook, at the time, said the recordings only came from users who had opted into the transcription services, adding it had also “paused” that practice.)

Facebook has spent years trying to rehabilitate its image: It rebranded to Meta in October 2021, framing the name change as a forward-looking shift in focus to “the metaverse” rather than as a response to controversies surrounding misinformation, privacy, and platform safety.

But Meta’s legacy in handling data casts a long shadow.

And while using human reviewers to improve large language models (LLMs) is common industry practice at this point, the report about Meta’s use of contractors, and the information those contractors say they’re able to see, does raise fresh questions around how data is handled by the parent company of the world’s most popular social networks.

In a statement to Fortune, a Meta spokesperson said the company has “strict policies that govern personal data access for all employees and contractors.”

“While we work with contractors to help improve training data quality, we intentionally limit what personal information they see, and we have processes and guardrails in place instructing them how to handle any such information they may encounter,” the spokesperson said. “For projects focused on AI personalization … contractors are permitted in the course of their work to access certain personal information in accordance with our publicly available privacy policies and AI terms. Regardless of the project, any unauthorized sharing or misuse of personal information is a violation of our data policies, and we will take appropriate action,” they added.
