Hyperscalers Tap Neoclouds to Run Internal AI Workloads
By Reece Hayden |
10 Sep 2025 |
IN-7935
Hyperscalers Tap Neoclouds to Support Internal AI Workloads |
NEWS |
The development of neoclouds over the last 2 years has been rapid, supported by enormous Venture Capital (VC), debt, and government funding. This infrastructure build-out and platform expansion have placed them alongside hyperscalers in the conversation for Artificial Intelligence (AI) infrastructure. However, the market is going one step further, with hyperscalers tapping neoclouds for AI infrastructure capacity. This is not new, but the frequency of these deals is increasing, which will have an impact on market dynamics. An early example was the partnership between Meta and Fluidstack (the Meta deal accounted for around 66% of Fluidstack’s 2024 revenue), but since then, and particularly in the last couple of months, we have seen a significant uptick:
- NVIDIA/Lambda: NVIDIA recently signed a US$1.5 billion deal to lease 18,000 of its own Graphics Processing Units (GPUs) over 4 years from Lambda. This contract would make NVIDIA Lambda’s biggest customer, and the timing is ideal ahead of the Initial Public Offering (IPO) expected in 1H 2026.
- Microsoft AI/Nebius: Recently signed, this US$17.4 billion (potentially increasing to US$19.4 billion) deal will see Nebius supply infrastructure for Microsoft’s internal AI team to support model development. This new capacity will mostly come from dedicated, “bare metal” infrastructure in Vineland, New Jersey, commencing later this year. The data center was recently built by DataONe and Nebius with 300 Megawatts (MW) of power capacity. The deal triggered a rapid rise in Nebius’ share price, lifting its market cap by over 30%.
- CoreWeave/Google: CoreWeave will play a significant role in Google/OpenAI’s new capacity agreement, with CoreWeave providing infrastructure via Google’s platform to OpenAI.
This trend is driven by hyperscaler legacy capacity, market conditions, and neocloud capabilities. The hyperscaler installed base is not optimized for frontier AI development, instead mostly supporting traditional compute, storage, and networking. Market conditions, including power scarcity, data center lead times, and GPU-infrastructure supply chain friction, mean that building greenfield capacity is harder than ever. Hyperscalers look to leverage neocloud infrastructure and power at a slight premium to shorten deployment cycles and reduce shortfalls today. By handing this off, hyperscalers transfer Capital Expenditure (CAPEX) to Operational Expenditure (OPEX) and reduce their exposure to GPU depreciation. Hyperscalers see little medium- to long-term value in this infrastructure, as it will mostly support internal AI model development, and they expect it to become obsolete in favor of inference-focused AI infrastructure.
Neocloud capabilities and operational excellence are another reason why hyperscalers are looking to outsource to neocloud market leaders. These leaders are highly proficient at quickly building out new capacity, with strong operational skills and internal talent. The competitive dynamics in this saturated market mean that they have significantly reduced time-to-market expectations, and they also provide technical support grounded in a long-standing understanding of AI infrastructure. One example is Fluidstack, which has been building AI infrastructure for 13 years and has a clear track record of deploying large clusters in a very short period of time (e.g., 48 hours to deploy 2,500+ GPUs for Poolside in a private cloud). The allure of neoclouds does not stop there: they also have a global footprint with ambitious plans to build out sovereign infrastructure in Europe, and they offer an easy route to hand off responsibility to a third party and alleviate further global infrastructure headaches.
Hyperscalers as Customers Bring Massive Opportunities, but Also Substantial Risks |
IMPACT |
On the face of it, massive customers and soaring stock prices are reason enough to get excited about hyperscalers signing long-term infrastructure contracts with neoclouds. However, these “ultra” large customers bring further opportunities, but also risks:
Opportunities:
- Credible Long-Term Partner De-Risks Debt Raising, Which Will Likely Lead to More Preferential Financing Terms for Future CAPEX: Long-term contracts, especially with hyperscalers, provide multi-year visibility into neocloud financial stability, which de-risks capacity expansion and will open up cheaper debt funding for CAPEX today. This will support a much faster and broader AI infrastructure build-out.
- Decreased Demand for New Managed Services, as Hyperscalers Want Resources to Support Training and Inference for Internal, Highly Skilled AI Teams: AI-competent companies like hyperscalers or model developers require far less hand-holding, demanding autonomy and the use of their own software stack on bare metal infrastructure. This reduces the pressure on neoclouds to build out a managed services ecosystem, lowering investment demands today and freeing them to focus on the “long-tail”/enterprise opportunity.
- Leverage Customer Relationships to Develop Technology/Commercial Partnerships That Help Develop Platform Capabilities: Hyperscaler customers can be leveraged to build out solutions or services that enhance neoclouds’ appeal to the long-tail market.
Risks:
- Too Big, Too Soon: Although these neoclouds are very proficient in AI infrastructure, they are certainly not (with some exceptions) operating at a scale comparable to hyperscalers. This means that their supply chains may not be as reliable, and they may not have the operational capacity to meet companies’ demands (scale, latency, availability, reliability).
- Risk of Crowding Out Other Customers and Becoming Too Reliant on One Vendor: Hyperscaler infrastructure demands will most likely make up the majority of neocloud revenue and will, therefore, start to take priority. This creates risks for existing customers, who may face reliability and availability challenges.
- Public Funding May Slow Down, as Governments View Hyperscaler Exposure as Running Counter to Sovereign AI Initiatives: Funding through initiatives like InvestAI and EuroHPC will be a necessity for continued CAPEX and customer acquisition, especially in Europe. Hyperscaler exposure may undermine the perceived value of neoclouds to the ecosystem and lower funding.
- Demand for Leading-Edge Hardware Is Easy to Promise, but Harder to Deliver: AI Independent Software Vendor (ISV) and enterprise demand is mostly focused on price/performance optimization, with many happy to use legacy hardware (NVIDIA H100, A100, etc.). However, hyperscaler AI groups focused on model training at the frontier will require access to the bleeding edge, with GB200 NVL72 and beyond, to meet requirements. This not only brings supply chain challenges for neoclouds, but it also requires much more advanced cooling infrastructure and engineering expertise.
- Customer Concentration: Hyperscaler contracts will dominate neocloud revenue, creating additional risks, especially if these customers have latitude to increase/decrease spending per year. For example, Microsoft accounted for about 62% of CoreWeave’s 2024 revenue, which exposed the IPO to an early shock when Microsoft decided to drop specific services within the contract.
Neocloud Customer Type Alters Supply Chain Requirements |
RECOMMENDATIONS |
Neoclouds will not say no to hyperscaler checks, but this trend will likely cause further strategic divergence in the market: neocloud leaders will target the top of the market, supporting a few companies with bigger clusters, while followers with regionally specific infrastructure will service the long-tail market with a greater focus on Inference-as-a-Service and abstracted solutions. This will create new dynamics within the supply chain, and should inform ecosystem Research and Development (R&D) and Go-to-Market (GTM) decisions:
- Server Original Equipment Manufacturers (OEMs) Should Expect Neocloud Customer Demand to Change Quickly Depending on Customer Type: AI leaders will want access to top infrastructure, new cooling techniques, etc. Long-tail neoclouds will look to optimize price/performance, while still servicing customer requirements.
- Neocloud Investment in Software and Managed Services Will Change: Leading vendors will likely slow investment in managed services, preferring bare metal offerings targeted at AI-competent leaders. This influences how the ecosystem will develop and what vendors will need.
- Vendor Diversification Demand Will Change: Vendors exposed to hyperscalers will likely remain within the NVIDIA universe, reducing demand for diverse hardware solutions, as NVIDIA infrastructure remains best-in-class for R&D and model trainers. Long-tail neoclouds will probably step up their diversification as they look to differentiate and build a value proposition completely distinct from the leaders’. The type of workload will also affect this decision, and will largely depend on how hyperscalers intend to use neocloud facilities. One thing is for sure: hyperscaler demand will dictate what neocloud leaders procure and deploy.
- Expand or Invest in AI Capacity: Hyperscalers may want to expand their own AI capacity, but by throwing billions of dollars at neoclouds, they will also gain soft power over strategy, access, and even insight into customer dynamics. This will be reinforced by hyperscaler employees being seconded to neocloud providers.