
‘Artificial stupidity’ made AI trading bots spontaneously form cartels when left unsupervised, Wharton study reveals
Key Takeaways
AI bots told to act as trading agents in simulated markets engaged in pervasive collusion, raising new questions about how financial regulators have previously addressed this tech.
August 1, 2025, 11:05 AM · Originally published by Fortune
By Sasha Rogelberg, Reporter

[Image: AI trading agents engaged in price-fixing behaviors when put in simulated markets, a Wharton study found. Spencer Platt—Getty Images]

A study from the University of Pennsylvania’s Wharton School and the Hong Kong University of Science and Technology found that when placed in simulated markets, AI trading bots did not compete with one another, but rather began colluding in price-fixing behaviors.
According to the study’s authors, research on how AI behaves in market environments can help regulators understand gaps in existing rules and statutes.
Artificial intelligence is just smart—and stupid—enough to pervasively form price-fixing cartels in financial market conditions if left to its own devices.
A working paper posted this month on the National Bureau of Economic Research website from the Wharton School at the University of Pennsylvania and the Hong Kong University of Science and Technology found that when AI-powered trading agents were released into simulated markets, the bots colluded with one another, engaging in price fixing to make a collective profit.
In the study, researchers let bots loose in market models, essentially a computer program designed to simulate real market conditions and train AI to interpret market-pricing data, with virtual market makers setting prices based on different variables in the model.
These simulated markets can have various levels of “noise,” referring to the amount of conflicting information and price fluctuation in the various market contexts.
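As a rough illustration only (not the paper’s actual model), a minimal simulated market of this kind might look like the Python sketch below; the drift size, the `noise` parameter, and the quoting rule are assumptions made for the example.

```python
import random

def simulate_market(n_steps=100, noise=0.5, seed=42):
    """Toy market model: a virtual market maker quotes prices around a
    drifting fundamental value; `noise` controls how much conflicting
    information (random fluctuation) enters each quote."""
    random.seed(seed)
    fundamental = 100.0
    prices = []
    for _ in range(n_steps):
        fundamental += random.gauss(0, 0.2)                   # slow drift in the "true" value
        prices.append(fundamental + random.gauss(0, noise))   # noisy quoted price
    return prices

calm_market = simulate_market(noise=0.1)   # low-noise market context
noisy_market = simulate_market(noise=2.0)  # high-noise, conflicting signals
```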
While some bots were trained to behave like retail investors and others like hedge funds, in many cases the machines engaged in “pervasive” price-fixing behaviors by collectively refusing to trade aggressively—without being explicitly told to do so.
In one algorithmic model examining a price-trigger strategy, AI agents traded conservatively on signals until a large enough market swing triggered them to trade very aggressively.
The bots, trained through reinforcement learning, were sophisticated enough to implicitly understand that widespread aggressive trading could create more market volatility.
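A hypothetical sketch of that price-trigger logic follows; the trigger threshold and order sizes are illustrative assumptions, not values from the study.

```python
def price_trigger_order(signal, price_swing, trigger=0.05,
                        small_size=1, large_size=20):
    """Trade conservatively on ordinary signals; switch to aggressive
    trading only when the recent price swing exceeds the trigger."""
    size = large_size if abs(price_swing) > trigger else small_size
    direction = 1 if signal > 0 else -1  # buy on a positive signal, sell on a negative one
    return direction * size

print(price_trigger_order(signal=0.8, price_swing=0.01))  # small order: 1
print(price_trigger_order(signal=0.8, price_swing=0.09))  # aggressive order: 20
```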
In another model, AI bots exhibited an over-pruning bias: they were trained to internalize that if any risky trade led to a negative outcome, they should not pursue that strategy again.
The bots traded conservatively in a “dogmatic” manner, even when more aggressive trades were seen as more profitable, collectively acting in a way the study called “artificial stupidity.”

“In both mechanisms, they basically converge to this pattern where they are not acting aggressively, and in the long run, it’s good for them,” study co-author and Wharton finance professor Itay Goldstein told Fortune.
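A minimal sketch of that over-pruning bias, assuming a simple bandit-style learner (the reward distributions and the one-strike pruning rule are invented for illustration and are not the paper’s specification):

```python
import random

def overpruned_learner(n_rounds=1000, seed=0):
    """Agent with an over-pruning bias: any action that ever produces a
    negative reward is dropped forever, even if its long-run average
    payoff is higher than the 'safe' alternative."""
    random.seed(seed)
    actions = {
        "aggressive": lambda: random.gauss(0.5, 2.0),     # higher mean payoff, but risky
        "conservative": lambda: random.uniform(0.0, 0.2), # lower mean payoff, never negative
    }
    pruned, total = set(), 0.0
    for _ in range(n_rounds):
        available = [a for a in actions if a not in pruned]
        choice = random.choice(available)
        reward = actions[choice]()
        total += reward
        if reward < 0:  # a single bad outcome permanently rules the action out
            pruned.add(choice)
    return total, pruned

profit, dropped = overpruned_learner()
print(dropped)  # typically {'aggressive'}: the riskier, more profitable action is abandoned
```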
Financial regulators have long worked to address anti-competitive practices like collusion and price fixing in markets.
But in retail, AI has taken the spotlight, particularly as legislators call on companies to address algorithmic pricing.
Sen. Ruben Gallego (D-Ariz.) called Delta’s practice of using AI to set individual airfare prices “predatory pricing,” though the airline previously told Fortune its fares are “publicly filed and based solely on trip-related factors.”

“For the [Securities and Exchange Commission] and those regulators in financial markets, their primary goal is to not only preserve this kind of stability, but also ensure competitiveness of the market and market efficiency,” Winston Wei Dou, Wharton professor of finance and one of the study’s authors, told Fortune.
With that in mind, Dou and two colleagues set out to identify how AI would behave in a financial market by putting trading agent bots into various simulated markets based on high or low levels of “noise.” The bots ultimately earned “supra-competitive profits” by collectively and spontaneously deciding to avoid aggressive trading behaviors.

“They just believed sub-optimal trading behavior as optimal,” Dou said. “But it turns out, if all the machines in the environment are trading in a ‘sub-optimal’ way, actually everyone can make profits because they don’t want to take advantage of each other.”

Simply put, the bots didn’t question their conservative trading behaviors because they were all making money, and therefore they stopped engaging in competitive behaviors with one another, forming de facto cartels.
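The collective logic can be pictured with a stylized, prisoner’s-dilemma-like payoff table; the numbers below are invented purely to illustrate why reinforcement learners can settle on mutual restraint, and do not come from the study.

```python
# Stylized per-bot profit as a function of (own style, everyone else's style).
# Numbers are illustrative only.
PAYOFF = {
    ("conservative", "conservative"): 5,  # tacit cartel: everyone earns more
    ("aggressive",   "conservative"): 8,  # one-off gain from defecting
    ("conservative", "aggressive"):   1,
    ("aggressive",   "aggressive"):   2,  # fully competitive outcome: thin profits
}

# Learners that weigh long-run value can converge on mutual conservatism
# (5 each) rather than mutual aggression (2 each), even though a single
# defection looks better in isolation.
print(PAYOFF[("conservative", "conservative")], PAYOFF[("aggressive", "aggressive")])
```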
Fears of AI in financial services

With the ability to increase consumer inclusion in financial markets and save investors time and money on advisory services, AI tools for financial services, like trading agent bots, have become increasingly appealing.
Nearly one-third of U.S. investors said they felt comfortable accepting financial planning advice from a generative AI-powered tool, according to a 2023 survey from financial planning nonprofit CFP Board.
A report last week from cryptocurrency exchange MEXC found that among 78,000 Gen Z users, 67% of those traders activated at least one AI-powered trading bot in the previous fiscal quarter.
But for all their benefits, AI trading agents aren’t without risks, according to Michael Clements, director of financial markets and community investment at the Government Accountability Office (GAO).
Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have a real impact on markets.

“A lot of AI models are trained on the same data,” Clements told Fortune. “If there is consolidation within AI so there’s only a few major providers of these platforms, you could get herding behavior—that large numbers of individuals and entities are buying at the same time or selling at the same time, which can cause some price dislocations.”

Jonathan Hall, an external official on the Bank of England’s Financial Policy Committee, warned last year of AI bots encouraging this “herd-like behavior” that could weaken the resilience of markets.
He advocated for a “kill switch” for the technology, as well as increased human oversight.
Exposing regulatory gaps

Clements explained many financial regulators have so far been able to apply well-established rules and statutes to AI, saying, for example, “Whether a lending decision is made with AI or with a paper and pencil, rules still apply equally.”

Some agencies, such as the SEC, are even opting to fight fire with fire, using AI tools to detect anomalous trading behaviors.

“On the one hand, you might have an environment where AI is causing anomalous trading,” Clements said. “On the other hand, you would have the regulators in a little better position to be able to detect it as well.”

According to Dou and Goldstein, regulators have expressed interest in their research, which the authors said has helped expose gaps in current regulation around AI in financial services.
When regulators have previously looked for instances of collusion, they’ve looked for evidence of communication between individuals, with the belief that humans can’t really sustain price-fixing behaviors unless they’re corresponding with one another.
But in Dou and Goldstein’s study, the bots had no explicit forms of communication.

“With the machines, when you have reinforcement learning algorithms, it really doesn’t apply, because they’re really not communicating or coordinating,” Goldstein said. “We coded them and programmed them, and we know exactly what’s going into the code, and there is nothing there that is talking explicitly about collusion. Yet they learn over time that this is the way to move forward.”

The difference in how human and bot traders communicate behind the scenes is one of the “most fundamental issues” where regulators can learn to adapt to rapidly evolving AI technologies, Goldstein argued.

“If you used to think about collusion as emerging as a result of communication and coordination,” he said, “this is really not the way to think about it when you’re dealing with algorithms.”