Big Tech’s standard for fighting AI fakes puts privacy on the line


Why This Matters

C2PA content credentials promise a way to detect deepfakes and other AI-generated content. The risk: new privacy liabilities, identity exposure, and a global system controlled by Big Tech.

September 18, 2025
02:46 PM

By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has covered digital media and enterprise technology for over a decade.

Welcome to Eye on AI, with AI reporter Sharon Goldman.

In this edition…a new report says a growing standard for fighting AI fakes puts privacy on the line…Nvidia and Intel announce a sweeping partnership to co-develop AI infrastructure and personal computing products…Meta raises its bets on smart glasses with an AI assistant…China’s DeepSeek says its hit model cost just $294,000 to train.

Last week, Google said its new Pixel 10 phones will ship with a feature aimed at one of the biggest questions of the AI era: Can you trust what you see?

The devices now support the standard of the Coalition for Content Provenance and Authenticity (C2PA), which is backed by Google and other heavyweights including Adobe, Microsoft, Amazon, OpenAI, and Meta.

At its core is something called Content Credentials—essentially a digital nutrition label for photos, videos, or audio.

The metadata tag, which can’t easily be tampered with, shows who created a piece of media, how it was made, and whether AI played a role.
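The idea of a tamper-evident label can be sketched in a few lines of Python. This is an illustrative simplification, not the real C2PA format: actual Content Credentials are embedded manifests signed with X.509 certificates, and every field name below is hypothetical. What the sketch does show is the key mechanism: binding the claim to a hash of the exact media bytes, so any edit to the file breaks the credential.

```python
import hashlib

def make_manifest(media_bytes, creator, tool, ai_used):
    """Build a simplified, illustrative content-credential manifest.

    Real C2PA manifests are embedded in the file and cryptographically
    signed; this sketch only shows the general shape of the claim.
    """
    return {
        "claim_generator": tool,            # software that produced the claim
        "author": creator,                  # who created the media
        "ai_used": ai_used,                 # whether AI played a role
        # Binding the claim to the exact bytes is what makes tampering
        # detectable: change the media, and the hash no longer matches.
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify_manifest(media_bytes, manifest):
    """Check that the media still matches the hash recorded in the claim."""
    return manifest["content_hash"] == hashlib.sha256(media_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes..."
m = make_manifest(photo, creator="Jane Doe", tool="ExampleCam 1.0", ai_used=False)
assert verify_manifest(photo, m)             # untouched media verifies
assert not verify_manifest(photo + b"x", m)  # any edit breaks the binding
```

Note that a hash binding alone only proves the file is unchanged since the claim was made; it says nothing about who made the claim. That is what the signatures and trust lists discussed below are for.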

Over a year ago, I reported that TikTok would automatically label all realistic AI-generated content created using TikTok Tools with Content Credentials.

And the standard actually predates the current generative AI boom: the C2PA was founded in February 2021 by a group of technology and media companies to create an open, interoperable standard for digital content provenance (the origin and history of a piece of content) to build trust in online information.

But a new report from the World Privacy Forum, a data-privacy nonprofit, warns that this growing push for trust could put privacy on the line.

The group argues C2PA is widely misunderstood: it doesn’t detect deepfakes or flag potential copyright infringement.

Instead, it’s quietly laying down a new technical layer of media infrastructure—one that generates vast amounts of traceable data about creators and can link to commercial, government, or even biometric identity systems.

Because C2PA is an open framework, its metadata is designed to be replicated, ingested, and analyzed across platforms. That raises thorny questions: Who decides what counts as “trustworthy”?

For example, C2PA relies on “trust lists” and a compliance program to verify participants.

But if small media outlets, indie journalists, or independent creators don’t make the list, their work could be penalized or dismissed.

In theory, any creator can attach credentials to their work and apply to C2PA to become a trusted entity.

But to get full “trusted” status, the creator often needs a certificate from a recognized certificate authority, must meet criteria that are not fully public, and has to navigate a verification process.
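The gatekeeping dynamic described above can be sketched as a simple check: a verifier grants the “trusted” badge only if the certificate issuer that signed a credential appears on the coalition’s list. The issuer names and the two-way classification below are hypothetical; real C2PA validators walk an X.509 certificate chain against a published trust list rather than doing a set lookup.

```python
# Illustrative only: issuer names are invented, and real validation
# involves certificate chains, not a flat set membership test.
TRUSTED_ISSUERS = {"ExampleCam CA", "BigPlatform CA"}  # hypothetical trust list

def credential_status(signing_issuer):
    """Return how a verifier might classify a validly signed credential."""
    if signing_issuer in TRUSTED_ISSUERS:
        return "trusted"
    # A valid signature from an unlisted issuer isn't rejected outright,
    # but it doesn't earn the "trusted" badge either -- the report's
    # worry for small outlets and independent creators.
    return "signed, but issuer not on trust list"

print(credential_status("ExampleCam CA"))    # trusted
print(credential_status("Indie Studio CA"))  # signed, but issuer not on trust list
```

The design choice at issue is exactly this asymmetry: content outside the list isn’t invalid, it just never reaches the top tier, and whoever maintains `TRUSTED_ISSUERS` decides who does.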

According to the report, this risks sidelining marginalized voices, even as policymakers — including a New York state lawmaker — push for “critical mass” adoption.

But inclusion on these “trust lists” isn’t the only concern.

The report also warns that C2PA’s openness cuts the other way: the framework can be too easy to manipulate, since so much depends on the discretion of whoever attaches the credentials—and there’s little to stop bad actors from applying them in misleading ways.

“A lot of people think, oh, this is a content labeling system, they’re not necessarily cognizant of all of the layers of identifiable information that might be baked in here,” said Kate Kaye, deputy director of the World Privacy Forum and co-author of the report.

She emphasized that C2PA isn’t just a simple label on a piece of media — it creates a trail of data that can be ingested, stored, and linked to identity information across countless systems.

All of this matters for both corporate entities and consumers.

For example, Kaye stressed that businesses might not realize that C2PA data falls under privacy and data governance rules and requires policies around how it’s collected, stored, and secured.

Also, researchers have already shown it’s possible to cryptographically sign forged images.

So while companies may embrace C2PA to gain credibility, they also assume new obligations, potential liabilities, and dependence on a trust system controlled by Big Tech players.

For consumers, there are definitely privacy and identity exposure issues.

C2PA metadata can include timestamps, geolocation, details on editing, and even connections to identity systems (including government IDs), but consumers may have little control or awareness that this is being captured.

It’s technically opt-in—but if you don’t opt in, your content could be marked less trustworthy.

And in the case of TikTok, for example, users are automatically opted in (other platforms like Meta and Adobe are adopting C2PA, but generally as opt-in for creators).

Overall, there are a lot of power dynamics at play, Kaye said.

“Who is trusted and who isn’t and who decides – that’s a big, open-ended thing right now.” But the burden to figure it out isn’t on consumers, she emphasized: instead, it’s on businesses and organizations to think carefully about how they implement C2PA, with appropriate risk assessments.

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

FORTUNE ON AI

Exclusive: Former Google DeepMind researchers secure $5 million seed round for new company to bring algorithm-designing AI to the masses – by Jeremy Kahn

Big companies pledge $42 billion in U.K. investments as U.S. President Donald Trump begins state visit – by Beatrice Nolan

Nvidia shares drop, China surges as Beijing tries to push homegrown AI chips – by Nicholas Gordon

Why OpenAI’s $300 billion deal with Oracle has set the ‘AI bubble’ alarm bells ringing – by Beatrice Nolan

AI IN THE NEWS

Nvidia and Intel announce a sweeping partnership to co-develop AI infrastructure and personal computing products.

The deal, which includes Nvidia taking a $5 billion stake in Intel, brings together two longtime rivals at a moment when demand for AI computing is exploding.

“This historic collaboration tightly couples NVIDIA’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem — a fusion of two world-class platforms," Nvidia CEO Jensen Huang said.

“Together, we will expand our ecosystems and lay the foundation for the next era of computing.”

Meta raises its bets on smart glasses with an AI assistant.

According to the New York Times, Meta is doubling down on smart glasses after selling millions since their debut four years ago.

At its annual developer conference this week, the company unveiled three new models — including the $799 Meta Ray-Ban Display, which features a tiny screen in the lens, app controls via a wristband, and a built-in AI voice assistant.

Meta also introduced an upgraded Ray-Ban model and a sport version made with Oakley.

But the rollout wasn’t flawless: onstage, Mark Zuckerberg’s demo faltered when the glasses failed to render a recipe and place a call.

China's DeepSeek says its hit model cost just $294,000 to train.

Reuters reported today that Chinese AI startup DeepSeek is back in the spotlight after months of relative quiet, with new details on how it trained its reasoning-focused R1 model.

A recent Nature article co-authored by founder Liang Wenfeng revealed the system cost just $294,000 to train using 512 of Nvidia’s China-only H800 chips — a striking contrast with U.S. firms like OpenAI, whose training runs cost well over $100 million. But questions remain: U.S. officials said that DeepSeek has had access to large volumes of restricted H100 chips, despite export controls, and the company has now formally acknowledged it also used older A100s in early development.

The revelations may reignite debate over AI "scaling laws" and whether massive clusters of the most advanced AI chips are really necessary to train cutting-edge AI models.

It also highlights geopolitical tensions over access to Nvidia's chips.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco. Apply to attend here.
Nov. 10-13: Web Summit, Lisbon
Nov. 26-27: World AI Congress, London
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

50%. Half of Americans are now more worried than excited about AI’s growing role in daily life — up from just 37% in 2021, according to a new Pew Research study.

Only 10% say they’re more excited than concerned, while 38% feel both equally. A majority say they want more control over how AI shows up in their lives.

Larger shares believe AI will erode — not enhance — people’s creativity and relationships. Still, many are fine with AI lending a hand on everyday tasks.

Americans draw a line: most reject AI in personal domains like religion or matchmaking, but are more open to its use in data-heavy fields like weather forecasting or medical research.

And while most say it’s important to know whether images, videos, or text come from AI or humans, many admit they can’t reliably tell the difference.

Fortune Global Forum returns Oct. 26–27, 2025 in Riyadh.

CEOs and global leaders will gather for a dynamic, invitation-only event shaping the future of business. Apply for an invitation.
