OpenAI launches its first open model in years so it can stop being on the ‘wrong side of history’—while still keeping its most valuable IP under wraps
Business News
Fortune


August 5, 2025
05:00 PM
5 min read

Key Takeaways

For the first time since GPT-2, OpenAI is releasing models with open weights—but key details like architecture and training methods remain secret.


By Sharon Goldman, AI Reporter

Sharon Goldman is an AI reporter at Fortune and co-author of Eye on AI, Fortune’s flagship AI newsletter.

She has covered digital and enterprise tech for over a decade.

OpenAI CEO Sam Altman. Nathan Laine—Bloomberg/Getty Images

Despite what its name suggests, OpenAI hadn’t released an “open” model—one that includes access to the weights, or the numerical parameters often described as the model’s brains—since GPT-2 in 2020.

That changed on Tuesday: The company launched a long-awaited open-weight model, in two sizes, that OpenAI says pushes the frontier of reasoning in open-source AI. “We’re excited to make this model, the result of billions of dollars of research, available to the world to get AI into the hands of the most people possible,” said OpenAI CEO Sam Altman of the release. “As part of this, we are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products.” He emphasized that he is “excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.”

Altman had teased the upcoming models back in March, two months after admitting, in the wake of the success of China’s open models from DeepSeek, that the company had been “on the wrong side of history” when it came to opening up its models to developers and builders.

But while the weights are now public, experts note that OpenAI’s new models are hardly “open.” By no means is it giving away its crown jewels: The proprietary architecture, routing mechanisms, training data, and methods that power its most advanced models—including the long-awaited GPT-5, widely expected to be released sometime this month—remain tightly under wraps.

OpenAI is targeting AI builders and developers

The two new model names—gpt-oss-120b and gpt-oss-20b—may be indecipherable to non-engineers, but that’s because OpenAI is setting its sights on AI builders and developers seeking to rapidly build for real-world use cases on their own systems.

The company noted that the larger of the two models can run on a single Nvidia 80GB chip, while the smaller one fits on consumer hardware like a Mac laptop.

Greg Brockman, cofounder and president of OpenAI, acknowledged on a press pre-briefing call that “it’s been a long time” since the company had released an open model.

He added that it is “something that we view as complementary to the other products that we release” and, along with OpenAI’s proprietary models, the open models “combine to really accelerate our mission of ensuring that AGI benefits all of humanity.”

OpenAI said the new models perform well on reasoning benchmarks, which have emerged as the key measurements for AI performance, with models from OpenAI, Anthropic, Google, and DeepSeek fiercely competing over their abilities to tackle multistep logic, code generation, and complex problem-solving.

Ever since the open-source DeepSeek R1 shook the industry in January with its reasoning capabilities at a much lower cost, many other Chinese models have followed suit—including Alibaba’s Qwen and Moonshot AI’s Kimi models.

While OpenAI said at a press pre-briefing that the new open-weight models are a proactive effort to provide what users want, the release is also likely a strategic response to ramped-up open-source competition.

Notably, OpenAI declined to benchmark its new open-weight models against Chinese open-source systems like DeepSeek or Qwen—despite the fact that those models have recently outperformed U.S. rivals on key reasoning benchmarks.

In the press briefing, the company said it is confident in its benchmarks against its own models and that it would leave it to others in the AI community to test further and “make up their own minds.”

Avoiding the leak of intellectual property

OpenAI’s new open-weight models are built using a mixture-of-experts (MoE) architecture, in which the system activates only the “experts,” or subnetworks, it needs for a specific input, rather than using the entire model for every query.

Dylan Patel, founder of research firm SemiAnalysis, pointed out in a post on X before the release that OpenAI trained the models using only publicly known components of the architecture—meaning the building blocks it used are already familiar to the open-source community.

He emphasized that this was a deliberate choice—that by avoiding any proprietary training techniques or architectural innovations, OpenAI could release a genuinely useful model without actually leaking any of the intellectual property that powers its proprietary frontier models, like GPT-4o.

For example, in a model card accompanying the release, OpenAI confirmed that the models use a mixture-of-experts (MoE) architecture with 12 active experts out of 64, but it does not describe the routing mechanism, which is a crucial and proprietary part of the architecture.

“You want to minimize risk to your business, but you [also] want to be maximally useful to the public,” Aleksa Gordic, a former Google DeepMind researcher, told Fortune, adding that companies like Meta and Mistral, which have also focused on open-weight models, have similarly not included proprietary information.

“They minimize the IP leak and remove any risk to their core business, while at the same time sharing a useful artifact that will enable the startup ecosystem and developers,” he said. “It’s by definition the best they can do given those two opposing objectives.”
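The MoE idea described above—scoring all experts but running only a handful per input—can be sketched in a few lines. This is a generic top-k softmax gating scheme from the open-source MoE literature, with made-up names (`moe_forward`, toy linear experts) and toy dimensions; OpenAI has not disclosed its actual routing mechanism:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy mixture-of-experts layer: route input x to its top-k experts.

    Illustrative only -- this is generic top-k gating, not OpenAI's
    undisclosed router. Only k of the experts run for a given input;
    the rest stay idle, which is what keeps MoE inference cheap.
    """
    logits = x @ gate_w                 # one gating score per expert
    topk = np.argsort(logits)[-k:]      # indices of the k best-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()            # softmax over the selected experts only
    # Combine the chosen experts' outputs, weighted by the gate.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 64
gate_w = rng.normal(size=(d, n_experts))
# Each "expert" here is just a small linear map, for demonstration.
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [(lambda m: (lambda x: x @ m))(m) for m in expert_mats]

x = rng.normal(size=d)
y = moe_forward(x, gate_w, experts, k=12)   # 12 of 64 active, per the model card
print(y.shape)                              # prints (8,)
```

The proprietary part in a production system is how `gate_w` (and the load-balancing logic around it) is trained—which is exactly the detail the model card omits.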
