AGI talk is out in Silicon Valley’s vibe shift, but worries about superpowered AI remain

By Sharon Goldman, AI Reporter. Sharon Goldman is an AI reporter at Fortune and co-writes Eye on AI, Fortune’s flagship AI newsletter.
She has written about digital and enterprise technology for over a decade.

Sam Altman, the CEO of OpenAI. As recently as earlier this year, Altman was eager to say that AGI appeared imminent.
Now he says the term AGI itself is not useful.
Nathan Laine—Bloomberg via Getty Images

Once upon a time—meaning, um, as recently as earlier this year—Silicon Valley couldn’t stop talking about AGI. OpenAI CEO Sam Altman wrote in January that “we are now confident we know how to build AGI.” This was after he told a Y Combinator podcast in late 2024 that AGI might be achieved in 2025, and tweeted in 2024 that OpenAI had “AGI achieved internally.” OpenAI was so AGI-entranced that its head of sales dubbed her team “AGI sherpas” and its former chief scientist Ilya Sutskever led fellow researchers in campfire chants of “Feel the AGI!” OpenAI’s partner and major financial backer Microsoft put out a paper in 2023 claiming OpenAI’s GPT-4 AI model exhibited “sparks of AGI.” Meanwhile, Elon Musk founded xAI in March 2023 with a mission to build AGI, a development he said might occur as soon as 2025 or 2026.
Demis Hassabis, the Nobel-laureate co-founder of Google DeepMind, told reporters that the world was “on the cusp” of AGI.
Meta CEO Mark Zuckerberg said his company was committed to “building full general intelligence” to power the next generation of its products and services.
Dario Amodei, the cofounder and CEO of Anthropic, while saying he disliked the term AGI, said “powerful AI” could arrive by 2027 and usher in a new age of health and abundance—if it didn’t wind up killing us all.
Eric Schmidt, the former Google CEO turned prominent investor, said in a talk in April that we would have AGI “within three to five years.” Now the AGI fever is breaking—in what amounts to a wholesale vibe shift toward pragmatism as opposed to chasing utopian visions.
For example, at a CNBC appearance this summer, Altman called AGI “not a super-useful term.” In the New York Times, Schmidt—yes, that same guy who was talking up AGI in April—urged Silicon Valley to stop fixating on superhuman AI, warning that the obsession distracts from building useful technology.
Both AI pioneer Andrew Ng and U.S. AI czar David Sacks called AGI “overhyped.”

AGI: under-defined and over-hyped

What happened? Well, first, a little background.
Everyone agrees that AGI stands for “artificial general intelligence.” And that’s pretty much all everyone agrees on. People define the term in subtly but importantly different ways.
Among the first to use the term was physicist Mark Avrum Gubrud, who in a 1997 research article wrote that “by advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.” The term was later picked up and popularized by AI researcher Shane Legg, who would go on to co-found Google DeepMind with Hassabis, and by fellow computer scientists Ben Goertzel and Peter Voss in the early 2000s.
They defined AGI, according to Voss, as an AI system that could learn to “reliably perform any cognitive task that a competent human can.” That definition had some problems—for instance, who decides who qualifies as a competent human?
And, since then, other AI researchers have developed different definitions that see AGI as AI that is as capable as any human expert at all tasks, as opposed to merely a “competent” person.
OpenAI was founded in late 2015 with the explicit mission of building AGI “for the benefit of all,” and it added its own twist to the AGI definition debate.
The company’s charter says AGI is an autonomous system that can “outperform humans at most economically valuable work.” But whatever AGI is, the important thing these days, it seems, is not to talk about it.
And the reason why has to do with growing concerns that progress in AI development may not be galloping ahead as fast as industry insiders touted just a few months ago—and growing indications that all the AGI talk was stoking inflated expectations that the technology itself couldn’t live up to.
One of the biggest factors in AGI’s sudden fall from grace seems to have been the roll-out of OpenAI’s GPT-5 model in early August.
Just over two years after Microsoft’s claim that GPT-4 showed “sparks” of AGI, the new model landed with a thud: incremental improvements wrapped in a routing architecture, not the breakthrough many expected.
Goertzel, who helped coin the phrase AGI, reminded the public that while GPT-5 is impressive, it remains nowhere near true AGI—lacking real understanding, continuous learning, or grounded experience.
Altman’s retreat from AGI language is especially striking given his prior position.
OpenAI was built on AGI hype: AGI is in the company’s founding mission, it helped raise billions in capital, and it underpins the partnership with Microsoft.
A clause in their agreement even states that if OpenAI’s nonprofit board declares it has achieved AGI, Microsoft’s access to future technology would be restricted.
Microsoft—after investing more than $13 billion—is reportedly pushing to remove that clause, and has even considered walking away from the deal.
Wired also reported on an internal OpenAI debate over whether publishing a paper on measuring AI progress could complicate the company’s ability to declare it had achieved AGI.
A ‘very healthy’ vibe shift

But whether observers think the vibe shift is a marketing move or a market response, many, particularly on the corporate side, say it is a good thing.
Shay Boloor, chief market strategist at Futurum Equities, called the move “very healthy,” noting that markets reward execution, not vague “someday superintelligence” narratives.
Others stress that the real shift is away from a monolithic AGI fantasy, toward domain-specific “superintelligences.” Daniel Saks, CEO of agentic AI company Landbase, argued that “the hype cycle around AGI has always rested on the idea of a single, centralized AI that becomes all-knowing,” but said that is not what he sees happening.
“The future lies in decentralized, domain-specific models that achieve superhuman performance in particular fields,” he told Fortune.

Christopher Symons, chief AI scientist at digital health platform Lirio, said that the term AGI was never useful: Those promoting AGI, he explained, “draw resources away from more concrete applications where AI advancements can most immediately benefit society.” Still, the retreat from AGI rhetoric doesn’t mean the mission—or the phrase—has vanished.
Anthropic and DeepMind executives continue to call themselves “AGI-pilled,” which is a bit of insider slang.
Even that phrase is disputed, though; for some it refers to the belief that AGI is imminent, while others say it’s simply the belief that AI models will continue to improve.
But there is no doubt that there is more hedging and downplaying than doubling down.

Some still call out urgent risks

And for some, that hedging is exactly what makes the risks more urgent.
Former OpenAI researcher Steven Adler told Fortune: “We shouldn’t lose sight that some AI companies are explicitly aiming to build systems smarter than any human.
AI isn’t there yet, but whatever you call this, it’s dangerous and demands real seriousness.” Others accuse AI leaders of changing their tune on AGI to muddy the waters in a bid to avoid regulation.
Max Tegmark, president of the Future of Life Institute, says Altman calling AGI “not a useful term” isn’t scientific humility, but a way for the company to steer clear of regulation while continuing to build more and more powerful models.
“It’s smarter for them to just talk about AGI in private with their investors,” he told Fortune, adding that “it’s like a cocaine salesman saying that it’s unclear whether cocaine is really a drug,” because it’s just so complex and difficult to decipher.
Call it AGI or call it something else—the hype may fade and the vibe may shift, but with so much on the line, from money and jobs to security and safety, the real questions about where this race leads are only just beginning.