
Anthropic CEO Dario Amodei escalates war of words with Jensen Huang, calling out ‘outrageous lie’ and getting emotional about father’s death
Key Takeaways
Amodei’s impassioned defense was rooted in a personal revelation about his father's death, which he says fuels his urgent pursuit of beneficial AI while also driving his warnings about its risks.
August 1, 2025, 10:05 AM
Originally published by Fortune
By Nick Lichtenberg, Fortune Intelligence Editor
The doomers versus the optimists
The techno-optimists and the accelerationists
The Nvidia camp and the Anthropic camp
And then, of course, there’s OpenAI, which opened the Pandora’s Box of artificial intelligence in the first place
The AI space is driven by debates over whether it’s a doomsday technology or the gateway to a world of future abundance, or even whether it’s a throwback to the dotcom bubble of the early 2000s
Anthropic CEO Dario Amodei has been outspoken about AI’s risks, even famously predicting it would wipe out half of all white-collar jobs, a much gloomier outlook than the optimism offered by OpenAI’s Sam Altman or Nvidia’s Jensen Huang in the past
But Amodei has rarely laid it all out in the way he just did on journalist Alex Kantrowitz’s Big Technology podcast on July 30
In a candid and emotionally charged interview, Amodei escalated his war of words with Nvidia CEO Jensen Huang, vehemently denying accusations that he is seeking to control the AI industry and expressing profound anger at being labeled a “doomer.” Amodei’s impassioned defense was rooted in a deeply personal revelation about his father’s death, which he says fuels his urgent pursuit of beneficial AI while simultaneously driving his warnings about its risks and his belief in strong regulation
Amodei directly confronted the criticism, stating, “I get very angry when people call me a doomer … When someone’s like, ‘This guy’s a doomer
He wants to slow things down.'” He dismissed the notion, attributed to figures like Jensen Huang, that “Dario thinks he’s the only one who can build this safely and therefore wants to control the entire industry” as an “outrageous lie
That’s the most outrageous lie I’ve ever heard.” He insisted that he has never said anything like that
His strong reaction, Amodei explained, stems from a profound personal experience: his father’s death in 2006 from an illness whose cure rate jumped from 50% to roughly 95% just three or four years later
This tragic event instilled in him a deep understanding of “the urgency of solving the relevant problems” and a powerful “humanistic sense of the benefit of this technology.” He views AI as the only means to tackle complex issues like those in biology, which he felt were “beyond human scale.” As he continued, he explained how he’s actually the one who’s really optimistic about AI, despite his own doomsday warnings about its future impact
Who’s the real optimist?
Amodei insisted that he appreciates AI’s benefits more than those who call themselves optimists. “I feel in fact that I and Anthropic have often been able to do a better job of articulating the benefits of AI than some of the people who call themselves optimists or accelerationists,” he asserted
In bringing up “optimist” and “accelerationist,” Amodei was referring to two camps, even movements, in Silicon Valley, with venture-capital billionaire Marc Andreessen close to the center of each
The Andreessen Horowitz co-founder has embraced both, issuing a “techno-optimist manifesto” in 2023 and often tweeting “e/acc,” short for effective accelerationism
Both terms stretch back to roughly the mid-20th century, with techno-optimism appearing shortly after World War II and accelerationism appearing in the science fiction of Roger Zelazny’s classic 1967 novel “Lord of Light.” As Andreessen helped popularize and mainstream these beliefs, they roughly add up to an overarching belief that technology can solve all of humanity’s problems
Amodei’s remarks to Kantrowitz revealed much in common with these beliefs, with Amodei declaring that he feels obligated to warn about the risks inherent in AI, “because we can have such a good world if we get everything right.” Amodei claimed he’s “one of the most bullish” on AI capabilities improving very fast, saying he’s repeatedly stressed how AI progress is exponential in nature, where models rapidly improve with more compute, data, and training
This rapid advancement means issues such as national security and economic impacts are drawing very close, in his opinion
His urgency has increased because he is “concerned that the risks of AI are getting closer and closer” and he doesn’t see the ability to handle risk keeping up with the speed of technological advance
To mitigate these risks, Amodei champions regulations and “responsible scaling policies” and advocates for a “race to the top,” where companies compete to build safer systems, rather than a “race to the bottom,” with people and companies competing to release products as quickly as possible, without minding the risks
Anthropic was the first to publish such a responsible scaling policy, he noted, aiming to set an example and encourage others to follow suit
He openly shares Anthropic’s safety research, including interpretability work and constitutional AI, seeing it as a public good
Amodei also addressed the debate about “open source,” as championed by Nvidia and Jensen Huang
It’s a “red herring,” Amodei insisted, because large language models are fundamentally opaque, so there can be no such thing as open-source development of AI technology as currently constructed
An Nvidia spokesperson, who provided a similar statement to Kantrowitz, told Fortune that the company supports “safe, responsible, and transparent AI.” Nvidia said thousands of startups and developers in its ecosystem and the open-source community are enhancing safety
The company then criticized Amodei’s stance of calling for increased AI regulation: “Lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic
That’s not a ‘race to the top’ or the way for America to win.” Anthropic reiterated its statement that it “stands by its recently filed public submission in support of strong and balanced export controls that help secure America’s lead in infrastructure development and ensure that the values of freedom and democracy shape the future of AI.” The company previously told Fortune in a statement that “Dario has never claimed that ‘only Anthropic’ can build safe and powerful AI
As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models’ capabilities and risks and can prepare accordingly.” Kantrowitz also brought up Amodei’s departure from OpenAI to found Anthropic, years before the drama that saw Sam Altman fired by his board over ethical concerns, with several chaotic days unfolding before Altman’s return
Amodei did not mention Altman directly, but said his decision to co-found Anthropic was spurred by a perceived lack of sincerity and trustworthiness at rival companies regarding their stated missions
He stressed that for safety efforts to succeed, “the leaders of the company … have to be trustworthy people, they have to be people whose motivations are sincere.” He continued, “if you’re working for someone whose motivations are not sincere, who’s not an honest person, who does not truly want to make the world better, it’s not going to work; you’re just contributing to something bad.” Amodei also expressed frustration with both extremes in the AI debate
He labeled arguments from certain “doomers” that AI cannot be built safely as “nonsense,” calling such positions “intellectually and morally unserious.” He called for more thoughtfulness, honesty, and “more people willing to go against their interest.” For this story, Fortune used generative AI to help with an initial draft
An editor verified the accuracy of the information before publishing