Sam Altman now says AGI, or human-level AI, is 'not a super useful term' — and he's not alone
Investment
CNBC

August 11, 2025
11:18 AM
4 min read

Key Takeaways

Computer science experts say it's better to focus on the more specialized use cases of AI.


OpenAI CEO Sam Altman speaks during the Snowflake Summit in San Francisco on June 2, 2025. Justin Sullivan | Getty Images

OpenAI CEO Sam Altman said artificial general intelligence, or "AGI," is losing its relevance as a term as rapid advances in the space make it harder to define the concept. AGI refers to a form of artificial intelligence that can perform any intellectual task that a human can.

For years, OpenAI has been working to research and develop AGI that is safe and benefits all humanity.

"I think it's not a super useful term," Altman told CNBC's "Squawk Box" last week, when asked whether the company's GPT-5 model moves the world any closer to achieving AGI.

The AI entrepreneur has previously said he thinks AGI could be developed in the "reasonably close-ish future."

The problem with AGI, Altman said, is that there are multiple definitions being used by different companies and individuals.

One definition is an AI that can do "a significant amount of the work in the world," according to Altman. However, that has its issues because the nature of work is constantly changing.

"I think the point of all of this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things," Altman said.

Altman isn't alone in raising skepticism about "AGI" and how people use the term.

Difficult to define

Nick Patience, vice president and AI practice lead at The Futurum Group, told CNBC that though AGI is a "fantastic North Star for inspiration," on the whole it's not a helpful term.

"It drives funding and captures the public imagination, but its vague, sci-fi definition often creates a fog of hype that obscures the real, tangible progress we're making in more specialised AI," he said.

OpenAI and other startups have raised billions of dollars and attained dizzyingly high valuations with the promise that they will eventually reach a form of AI powerful enough to be considered "AGI." OpenAI was last valued by investors at $300 billion, and it is said to be preparing a secondary sale at a valuation of $500 billion.

Last week, the company released GPT-5, its newest large language model, to all ChatGPT users.

OpenAI said the new system is smarter, faster and "a lot more useful," especially when it comes to writing, coding and providing assistance on health care queries. But the launch led to criticisms from some online that the long-awaited model was an underwhelming upgrade, making only minor improvements on its predecessor.

"By all accounts it's incremental, not revolutionary," Wendy Hall, professor of computer science at the University of Southampton, told CNBC.

AI firms "should be forced to declare how they measure up to globally agreed metrics" when they launch new products, Hall added. "It's the Wild West for snake oil salesmen at the moment."

A distraction?

For his part, Altman has admitted OpenAI's new model misses the mark of his own personal definition of AGI, as the system is not yet capable of continuously learning on its own.

While OpenAI still maintains artificial general intelligence as its ultimate goal, Altman has said it's better to talk about levels of progress toward this state of general intelligence rather than asking whether something is AGI or not.

"We try now to use these different levels ... rather than the binary of, 'is it AGI or is it not?' I think that became too coarse as we get closer," the OpenAI CEO said during a talk at the FinRegLab AI Symposium in November 2024.

Altman still expects AI to achieve some key breakthroughs in specific fields, such as new math theorems and scientific discoveries, in the next two years or so.

"There's so much exciting real-world stuff happening, I feel AGI is a bit of a distraction, promoted by those that need to keep raising astonishing amounts of funding," Futurum's Patience told CNBC. "It's more useful to talk about specific capabilities than this nebulous concept of 'general' intelligence."

Watch: CNBC's full interview with OpenAI CEO Sam Altman on "Squawk Box."