Zuckerberg says Meta needs to be ‘careful about what we choose to open source,’ citing risks from superintelligence
Fortune

July 31, 2025
12:46 PM
4 min read

Key Takeaways

Zuckerberg makes a case for a type of “personal superintelligence” that people can use to achieve their individual goals.


By Beatrice Nolan, Reporter

Beatrice Nolan is a reporter at Fortune covering AI. She previously worked as a reporter at Insider, covering stories about AI and Big Tech. She’s based in Fortune’s London office and graduated from the University of York with a bachelor’s degree in English.

Mark Zuckerberg has laid out his vision for “personal superintelligence” in a new blog post

In it, he acknowledged that the company may need to be “careful about what we choose to open source” to mitigate the risks of advanced AI

The shift suggests Meta may be preparing to scale back its open-source approach as the company moves closer to “superintelligence,” a hypothetical form of artificial intelligence that surpasses human intelligence across all domains

Mark Zuckerberg has published his AI manifesto, making a case for a type of “personal superintelligence” that people can use to achieve their individual goals

In a new blog post, the Meta CEO said he wanted to build a personalized AI that helps you “achieve your goals, create what you want to see in the world, be a better friend, and grow to become the person that you aspire to be.” However, the company’s new aims come with a caveat: this powerful AI may soon be too powerful to be left open to the world. “We believe the benefits of superintelligence should be shared with the world as broadly as possible

That said, superintelligence will raise novel safety concerns,” Zuckerberg wrote. “We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source

Still, we believe that building a free society requires that we aim to empower people as much as possible.” Among those risks: That AI could become “a force focused on replacing large swaths of society,” he wrote

Zuckerberg has traditionally positioned Meta as a proponent of open-source AI, especially compared to rivals like OpenAI and Google

While many argue the company’s Llama models don’t meet the strict definition of “open source,” the company has leaned more toward open-sourcing its frontier models than most of its Big Tech peers

In a blog post last year, Zuckerberg made an impassioned case for open source, heralding Meta as taking the “next steps towards open source AI becoming the industry standard.” “I believe that open source is necessary for a positive AI future,” Zuckerberg wrote last year. “Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.” The CEO has left himself some wiggle room, saying in a podcast last year that if there was a significant change in AI capabilities, it may not be safe to “open source” it

Closed models give companies more control over monetizing their products

Zuckerberg pointed out last year that Meta’s business isn’t reliant on selling access to AI models, so “releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research the way it does for closed providers.” In contrast to competitors like OpenAI, Meta makes most of its money from selling internet advertising

Closed vs. open-source AI

AI safety experts have long debated whether open- or closed-source models are the more responsible path for advanced AI development

Some argue that open-sourcing AI models democratizes access, accelerates innovation, and allows for broader scrutiny to improve safety and reliability

But others say that releasing powerful AI models openly could increase the risk of misuse by bad actors, including for misinformation, cyberattacks, or biological threats

There’s a commercial argument against open source as well, which is why most leading AI labs keep their models private

Open-sourcing powerful AI models can erode a company’s competitive edge by allowing rivals to copy, fine-tune, or commoditize its core technology

Meta is in a different position here than some of its rivals, as Zuckerberg said last year that Meta’s business isn’t reliant on selling access to AI models. “Releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research the way it does for closed providers,” he said

Representatives for Meta did not immediately respond to a request for comment from Fortune, made outside normal working hours
