Could DePIN be the answer to AI’s compute problem?

Digital Frontier

https://digitalfrontier.com/articles/depin-ai-compute-problem

As AI use intensifies, the limitations of centralised resources are starting to show. Some are banking on decentralised networks to power the technology’s next phase

By Isabelle Castro

April 30, 2025

“WHOEVER LEADS IN AI COMPUTE POWER will lead the world,” wrote AI strategist Mark Minevich in January 2025, capturing the frenzy then unfolding around AI infrastructure. Compute, the raw processing power needed to train and run AI models, has become the new oil: a resource many now consider essential for technological supremacy. 

Governments and tech giants alike are scrambling to secure their place in this race. From the moment of his inauguration, Donald Trump has made AI infrastructure a centrepiece of his economic strategy, pushing for aggressive deregulation and investment to ensure America remains the global leader. In the UK, Prime Minister Keir Starmer has doubled down on government-backed AI investments, promising to turn Britain into an “AI superpower.” China, meanwhile, has ramped up its own efforts, with state-backed enterprises securing massive supplies of GPUs despite US sanctions, determined to outpace Western rivals. 

Google, Microsoft, Meta, and Amazon plan to spend a collective $325bn in 2025 to expand AI infrastructure, a 46% increase on the previous year. OpenAI announced in January its ambition to spend up to $500bn over the next four years on its Stargate Project AI infrastructure. Meanwhile, Elon Musk quietly builds out xAI’s compute empire, pouring billions into data centres across the US South. 

But as AI usage becomes “parabolic”, will all this investment be enough? Already, AI giants are feeling the strain. Outages are becoming a common occurrence, and in the wake of thousands flocking to OpenAI’s image generation tool to “Ghiblify” themselves, Sam Altman took to X to lament that ChatGPT’s “GPUs are melting”. A GPU shortage is already raising concern over AI infrastructure’s ability to scale, and AI adoption is becoming dependent on how much companies can spend. 

However, citing the millions of idle computers worldwide, some people believe the efficient redirecting of compute power might be the key to solving AI’s scaling problem. This may be DePIN’s moment to shine.  

AI’s compute problem 

Despite the flood of capital, there is a hard limit to how fast AI infrastructure can scale. Data centres require immense amounts of land, energy, cooling infrastructure and chips. The global semiconductor industry faces continuing shortages, and AI accelerators like Nvidia’s H100s are being snapped up by the industry’s biggest players before smaller competitors even get a chance to bid.  


Carmelo Giuliano, general partner at Arcanum Ventures, explains that as daily AI usage increases, and more people use “vibe coding” to build even more digital solutions, the demand for compute power is going to go “parabolic”. 

Over the past six years, AI adoption within organisations had hovered around 50%. However, a McKinsey survey found that in 2024 the adoption rate jumped to 78%. Use of generative AI is also rising rapidly, climbing from 65% to 71% between the beginning and end of the year. Only 5% of individuals surveyed at the end of 2024 said they had never used generative AI, down from 18% in 2023.  

“We’re definitely going to need more compute,” says Giuliano. “There’s still a chip shortage, so I don’t know how AI can scale without fixing that issue.”  

A survey conducted by Flux, a platform for decentralised cloud computing, found that demand for AI is already close to surpassing the capacity of existing data centres. Flux’s co-founder and CEO, Daniel Keller, says that by 2027 demand will be far beyond what planned data centre construction can deliver. “Even if they wanted to add a data centre a day, they could never add enough data centres to facilitate the growth that AI has coming,” he says.  

Aggregating compute power 

Keller is one of a few founders who have turned to repurposing the power of existing idle computers, using blockchain to create an efficient network of aggregated compute power. “The only solution is a decentralised model,” he continues.  

Tory Green, CEO of GPU-power aggregator io.net, is another founder who believes that the limited availability of compute power is the result of a “significant underutilisation of the resources we already have.” 

“There is, in fact, no need for Big Tech hyperscalers to build football fields full of data centres to meet AI demand,” he says.  

A recent report from MIT Technology Review found that despite China investing billions of dollars in AI infrastructure in 2023, 80% of the newly built data centres remain unused. While the report cites “inexperienced players jumping on the hype train” as the cause of the underutilisation, earlier reports have estimated that, globally, servers sit idle 88-94% of the time on average.  

Keller explains that a few days before talking to Digital Frontier he had met with a person who had bought 20,000 Nvidia H100s to run AI training models. Now, they were all sitting idle. “The problem is that if you’re not associated with the big players in the space, they tend not to mess with you,” he says. “So what we do is aggregate those enterprise projects back into Flux.”  


Flux and other aggregators like io.net use blockchain and middleware to connect GPUs and other hardware to a distributed cloud computing network. Contributors install software to offer their idle resources, earning cryptocurrency for their compute power. Users submit tasks that are split and processed across the network, with smart contracts managing task allocation and verification. In this way, they unlock access to vast levels of compute power, which has the potential to dwarf the capacity of existing data centres.  
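The flow described above — contributors register idle capacity, jobs are sharded into tasks, and contributors earn rewards per task completed — can be sketched in a few lines of Python. This is a purely illustrative toy model: the `Node` class, `allocate` function and per-task rewards are hypothetical simplifications, not Flux’s or io.net’s actual protocol, and a real DePIN network would perform allocation and verification on-chain via smart contracts.

```python
from dataclasses import dataclass


@dataclass
class Node:
    """A contributor offering idle compute (hypothetical, simplified)."""
    name: str
    capacity: int        # task slots this node can accept
    earned: float = 0.0  # token rewards accrued


def allocate(tasks: list[str], nodes: list[Node],
             reward_per_task: float = 1.0) -> dict[str, list[str]]:
    """Split a job's tasks across nodes, crediting each node per task.

    In a real network, smart contracts would record these assignments
    and verify the returned results before releasing rewards.
    """
    assignments: dict[str, list[str]] = {n.name: [] for n in nodes}
    pool = sorted(nodes, key=lambda n: n.capacity, reverse=True)
    i = 0
    for task in tasks:
        # round-robin over nodes that still have free slots
        for _ in range(len(pool)):
            node = pool[i % len(pool)]
            i += 1
            if len(assignments[node.name]) < node.capacity:
                assignments[node.name].append(task)
                node.earned += reward_per_task
                break
        else:
            raise RuntimeError("network capacity exhausted")
    return assignments


# A job of 8 shards spread over three contributors of varying size
nodes = [Node("gpu-rig", 4), Node("laptop", 1), Node("workstation", 3)]
plan = allocate([f"shard-{k}" for k in range(8)], nodes)
```

In this toy run, every shard lands on a node with spare capacity, and each node’s `earned` balance reflects exactly how many shards it processed — the core economic loop that makes contributing idle hardware worthwhile.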

Scaling up  

While the networks are growing, they don’t yet have the capacity to fully support ChatGPT-level demand. Currently they are mainly used by small and medium-sized operations, such as academic AI projects and companies adding AI to systems like supply chain management, that want to modernise without incurring heavy costs.  

“Right now, you’ve got the haves. The big players in the AI space. They have a market monopoly on all H100s, H200s and they’re getting first dibs on new processing chips that come out,” says Keller. “And then you have the have-nots. The middle-tier to lower-tier companies who want to participate but don’t have the resources.” 

“They’re problems that are desperately looking for a solution.”  

As AI infiltrates every sector, the ability to integrate it is becoming a necessity for business survival. For these smaller enterprises that are primarily looking for a way to compete, DePIN could provide an affordable solution.   

However, Keller also sees DePIN networks as an underutilised resource for larger systems, with the ability to absorb hyperscalers’ large spikes in demand.  

Green goes one step further, noting that due to DePIN’s decentralisation, it has the capacity to scale rapidly without the need for extensive investment. “Networks grow organically by adding new nodes, leveraging dormant resources, and dynamically responding to demand without centralized intervention,” he says. “This borderless scalability ensures that DePIN can accommodate big players while maintaining resilience, efficiency, and cost-effectiveness.” 

The test of resilience 

In addition, the AI sector faces the question of how long GPUs will suffice to meet the compute needs of ever more powerful AI, a question decentralised compute networks may be well positioned to answer. While Nvidia’s H100s are considered the industry gold standard for now, they are power hungry, generate a lot of heat and aren’t efficient for every AI task. Engineers are working on alternatives like TPUs, quantum computing and neuromorphic chips, which may render the billions of dollars currently being spent on data centres full of GPUs obsolete.    

Many of the current compute-focused DePIN networks, like the AI hyperscalers, concentrate on aggregating GPU and CPU power. However, as decentralised networks, they could also connect new kinds of hardware as it emerges, adding to the available supply. 

Danny O’Brien, senior fellow at the Filecoin Foundation, sees decentralised networks potentially forming a longer-term infrastructure for the evolving needs of AI. While the source of the compute power may change, the networks themselves are built to allow for this evolution without collapsing. Many of the platforms, like Flux, have leveraged their networks to aggregate other resources, such as storage, which Filecoin has specialised in and which could support the needs of scaling AI.  

“If OpenAI suddenly realises data centres aren’t what they need, suddenly their capital investment is worthless,” he says. “A decentralised network has a bunch of people quitting that network because they can’t make money. But other providers come into it.” 

A flexible future 

While today’s investments are pouring into centralised data centres, the industry’s future may belong to those who embrace a more flexible approach. The rapid rise in AI adoption is pushing existing infrastructure to its limits, exposing vulnerabilities in the traditional model of compute power centralisation. As demand surges, the race to secure high-performance chips and energy-intensive data centres is becoming unsustainable, raising concerns about accessibility, efficiency, and long-term scalability.  

DePIN networks, while still in their infancy, have the potential to scale and adapt rapidly, following the evolution of AI tech. By aggregating underutilised computing resources worldwide, they offer a way to redistribute processing power efficiently, reducing reliance on monolithic data centres and AI giants. This decentralised model not only alleviates pressure on compute supply but also democratises access to AI infrastructure, allowing smaller players to compete in an industry increasingly dominated by a few hyperscalers. 

This shift could redefine the AI landscape, breaking the dominance of hyperscalers and enabling broader participation in the AI economy.