
Data centers are eating the economy — and we’re not even using them
Originally published by Fortune on August 11, 2025
By Baris Saydag

Baris Saydag is CEO of Kinesis Network, a platform that transforms underutilized computing resources into scalable, on-demand compute services. He has extensive experience in enterprise technology and infrastructure optimization.

As tech giants announce hundreds of billions in new data center investments, we're witnessing a fundamental misunderstanding of our compute shortage problem.
The industry's current approach, throwing money at massive infrastructure projects, resembles adding two more lanes to a congested highway. It might offer temporary relief, but it doesn't solve the underlying problem.
The numbers are staggering. Data center capital expenditures surged 53% year-over-year to $134 billion in the first quarter of 2025 alone. Meta is reportedly exploring a $200 billion investment in data centers, while Microsoft has committed $80 billion for 2025. OpenAI, SoftBank, and Oracle have announced the $500 billion Stargate initiative. McKinsey projects that data centers will require $6.7 trillion worldwide by 2030.
Yet here's the uncomfortable truth: most of these resources will remain dramatically underutilized. The average server utilization rate hovers between 12% and 18% of capacity, while an estimated 10 million servers sit completely idle, representing $30 billion in wasted capital. Even active servers rarely exceed 50% utilization, meaning the majority of our existing compute infrastructure is essentially burning energy while doing nothing productive.
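To put those figures in perspective, the cited numbers imply roughly $3,000 of stranded capital per idle server. Here is a minimal back-of-the-envelope check that uses only the figures quoted above; the per-server value is derived from them, not taken from any source.

```python
# Back-of-the-envelope check of the waste figures cited in the article.
# All inputs come from the article; the per-server value is derived, not sourced.

idle_servers = 10_000_000             # servers reported as completely idle
wasted_capital_usd = 30_000_000_000   # capital said to be tied up in those idle servers

implied_cost_per_server = wasted_capital_usd / idle_servers
print(f"Implied capital per idle server: ${implied_cost_per_server:,.0f}")  # ~$3,000

# Utilization of 12%-18% means 82%-88% of paid-for capacity goes unused on average.
for utilization in (0.12, 0.18):
    print(f"At {utilization:.0%} utilization, {1 - utilization:.0%} of capacity sits unused")
```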
The highway analogy holds true

When faced with traffic congestion, the instinctive response is to add more lanes. But transportation researchers have documented what's known as "induced demand," a counterintuitive finding: additional capacity temporarily reduces congestion until it attracts more drivers, ultimately returning traffic to previous levels.
The same phenomenon applies to data centers. Building new data centers is the easy solution, but it's neither sustainable nor efficient.
As I've witnessed firsthand in compute orchestration platforms, the real problem isn't capacity. It's allocation and optimization. There's already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand.
The environmental reckoning

Data center energy consumption is projected to triple by 2030, reaching 2,967 TWh annually. Goldman Sachs estimates that data center power demand will grow 160% by 2030. While tech giants are purchasing entire nuclear power plants to fuel their data centers, cities across the country are hitting hard limits on energy capacity for new facilities.

This energy crunch highlights the significant strains on our infrastructure and is a subtle admission that we've constructed a fundamentally unsustainable system. The fact that companies are now buying their own power plants rather than relying on existing grids reveals how our exponential appetite for computation has outpaced our ability to power it responsibly.
The distributed alternative

The solution isn't more centralized infrastructure. It's smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools.
This distributed approach offers several advantages:

Immediate availability: Instead of waiting years for new data center construction, distributed networks can utilize existing idle capacity instantly.

Cost efficiency: Leveraging underutilized resources costs significantly less than building new infrastructure.

Environmental sustainability: Maximizing existing hardware utilization reduces the need for new manufacturing and energy consumption.

Resilience: Distributed systems are inherently more fault-tolerant than centralized mega-facilities.
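To make the orchestration idea concrete, here is a minimal, illustrative sketch of pooling idle capacity and matching jobs against it. It is a toy first-fit scheduler with invented node and job names, not Kinesis Network's or any vendor's actual system.

```python
# Toy sketch of distributed orchestration: pool idle capacity from many sources
# and place incoming jobs on whichever node has room. Illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str        # e.g. a data center rack, an enterprise server, a consumer device
    free_cores: int  # currently unused CPU cores reported by the node

@dataclass
class Job:
    name: str
    cores_needed: int

def schedule(jobs: list[Job], pool: list[Node]) -> dict[str, str]:
    """Greedy first-fit: place each job on the node with the most free cores."""
    placements: dict[str, str] = {}
    for job in sorted(jobs, key=lambda j: j.cores_needed, reverse=True):
        pool.sort(key=lambda n: n.free_cores, reverse=True)
        best = pool[0]
        if best.free_cores >= job.cores_needed:
            best.free_cores -= job.cores_needed
            placements[job.name] = best.name
        else:
            placements[job.name] = "unscheduled"  # no single node has enough room
    return placements

if __name__ == "__main__":
    pool = [Node("dc-rack-17", 48), Node("office-server-3", 16), Node("idle-workstation", 8)]
    jobs = [Job("model-training", 32), Job("batch-render", 12), Job("etl-nightly", 8)]
    print(schedule(jobs, pool))
```

A production orchestrator would also have to handle pricing, data locality, failures, and security isolation, but the core allocation step is essentially this kind of matching problem rather than a construction problem.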
The technical reality

The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is just the industry's willingness to embrace a fundamentally different approach.
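As one illustration of the portability point, the sketch below uses the Docker SDK for Python to run a containerized job. Because the same image runs wherever a container daemon is reachable, the identical call could target a remote idle machine. This assumes the docker-py package and a running Docker daemon, and the image and command are placeholders rather than a real workload.

```python
# Illustrative sketch: containers make a workload portable across borrowed capacity,
# since the same image runs unchanged wherever a container runtime exists.
# Requires the Docker SDK for Python (pip install docker) and a reachable Docker daemon.
import docker

def run_portable_job(image: str, command: str) -> str:
    """Pull a public image, run a short job, and return its stdout."""
    client = docker.from_env()   # connects to whichever daemon DOCKER_HOST points at
    output = client.containers.run(
        image,
        command,
        remove=True,             # clean up the container once the job exits
    )
    return output.decode().strip()

if __name__ == "__main__":
    # Pointing DOCKER_HOST at a remote idle machine would run the very same job there,
    # which is the portability property distributed orchestration relies on.
    print(run_portable_job("alpine:3.20", "echo 'job ran on borrowed capacity'"))
```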
Companies need to recognize that most servers are idle 70% to 85% of the time. This is not a hardware problem requiring more infrastructure, nor is it a capacity issue. It's an orchestration and allocation problem requiring smarter software.
Instead of building our way out with increasingly expensive and environmentally destructive mega-projects, we need to embrace distributed orchestration that maximizes existing resources. This requires a fundamental shift in thinking. Rather than viewing compute as something that must be owned and housed in massive facilities, we need to treat it as a utility, available on demand from the most efficient sources, regardless of location or ownership.
So, before asking ourselves whether we can afford to build $7 trillion worth of new data centers by 2030, we should ask whether we can pursue a smarter, more sustainable approach to compute infrastructure. The technology exists today to orchestrate distributed compute at scale. What we need now is the vision to implement it.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.