Data centers in space are computing facilities deployed in orbit that process and store data using solar-powered infrastructure outside Earth’s atmosphere. This Analyst Q&A reflects ABI Research’s current understanding of how space-based data centers are evolving, including their use cases, commercial viability, limitations, key developers, and long-term impact on global compute infrastructure.
ABI Research forecasts that up to 18,600 data centers will be active in space by 2035, with effective orbital compute power reaching 1.5 Gigawatts (GW). Total financial backing exceeded US$3 billion as of April 2026, supported by leading investment firms and global conglomerates, including Benchmark, EQT Ventures, NFX, Soma Capital, Saudi Aramco Energy Investors, and GCC.
Once considered a futuristic sci-fi concept, space-based data centers (also referred to as Orbital Data Centers (ODCs)) are approaching the commercial feasibility stage. Momentum mostly stems from two sources: 1) organizations seeking alternatives to terrestrial compute options, and 2) a broader push to lower the cost per Watt (US$/W) of space-based compute.
While this is one of the hottest topics in tech, it still raises many questions among industry stakeholders. To better understand the state of data centers in space, ABI Research Principal Analyst Andrew Cavalier answers four critical questions. This Q&A provides expert analysis on why data centers in space are needed, which companies are leading, and the key challenges to scaling them.
Why do we need data centers in space?
Andrew: The short answer is that we are facing an energy crisis. Growth in Artificial Intelligence (AI) data center demand, alongside other energy-intensive applications, is straining the electric grid. For example, ABI Research forecasts that AI workloads will require 26.4 Gigawatts (GW) of active Information Technology (IT) capacity in the United States by 2031, up from 8.2 GW in 2026. Similar trends are expected in other regions.
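As a quick sanity check, the two figures above imply a compound annual growth rate of roughly 26% over the 5-year span:

```python
# Implied growth rate of US AI data center IT capacity, using the
# forecast figures quoted above (8.2 GW in 2026 -> 26.4 GW in 2031).
start_gw, end_gw, years = 8.2, 26.4, 5

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 26% per year
```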
The central challenge is ensuring energy gets sent to the right place at the right time. With data center grid interconnection timelines stretching to 10 years in some markets, enterprises and governments are beginning to set their sights on alternatives such as ODCs to meet surging compute demand.
Energy generation from Earth-based infrastructure is inherently constrained. Grid interconnection queues are broken, transmission infrastructure to load centers is lagging, physical equipment supply chains are seizing up, and demand is outstripping available capacity.
Data centers in space are continuously exposed to the sun, with no atmospheric losses or weather dependencies. As a result, ODCs can harvest 10X to 40X more solar energy per square meter than terrestrial solar alternatives, fundamentally changing the energy economics of compute.
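A rough illustration of where a multiple in that range comes from, assuming near-continuous illumination at the solar constant (~1,361 W/m²) in orbit versus typical terrestrial annual insolation values (the specific site figures below are assumptions, not ABI Research data):

```python
# Annual solar energy per square meter: orbit vs. ground.
# Orbit: solar constant ~1.361 kW/m^2, assumed near-continuous exposure.
# Ground: assumed annual insolation for three illustrative site types.
HOURS_PER_YEAR = 8766  # 365.25 days

orbital_kwh_per_m2 = 1.361 * HOURS_PER_YEAR  # ~11,900 kWh/m^2/yr
for site_kwh in (2500, 1000, 400):  # sunny desert, mid-latitude, poor site
    print(f"{site_kwh:>5} kWh/m^2/yr on the ground -> "
          f"{orbital_kwh_per_m2 / site_kwh:.0f}X advantage in orbit")
```

The exact multiple depends heavily on site quality, panel tilt, and weather, which is why the advantage is quoted as a range rather than a single number.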
What are the top demand drivers for data centers in space?
Andrew: Right now, the largest drivers for data centers in space are defense and national security. Governments increasingly need resilient compute infrastructure for applications like missile warning and Intelligence, Surveillance, and Reconnaissance (ISR). ODCs offer persistent coverage, reduced exposure to ground-based threats, and the ability to process data closer to where it’s collected—compressing the data pipeline and improving decision-making windows.
In the near term (2026 to 2029), ABI Research anticipates that new use cases will emerge across Earth Observation (EO)/Synthetic Aperture Radar (SAR), Kilowatt (kW)-scale Compute-as-a-Service (CaaS), and space traffic management. By the 2030s, the commercial opportunity for data centers in space will expand to commercially viable AI inference, secure data backup, hyperscale CaaS, and AI training.
Which companies are building data centers in space?
Andrew: More than 35 companies currently make up the ODC ecosystem, a number ABI Research expects to double by 2027. The landscape spans several tiers: early-stage edge deployments from companies like Axiom Space, ADA Space, and Kepler Communications, and hyperscale ambitions from SpaceX, Starcloud, and Amazon.
The earliest deployments include hosted compute nodes aboard the International Space Station (ISS) via Axiom Space, alongside purpose-built kW-scale platforms from ADA Space and Kepler. These represent what ABI Research classifies as "kW-scale edge" deployments, a fundamentally different category from the Megawatt (MW)-scale, and eventually GW-scale infrastructure the hyperscalers are pursuing.
Among those hyperscalers, Starcloud and SpaceX are expected to move the fastest, with test ODC satellite launches planned by 2027. Blue Origin is another prominent contender, but two headwinds suggest slower progress: Federal Communications Commission (FCC) approval is still pending, and Amazon's Kuiper program shows that the gap between approval and operational deployment should not be underestimated. Finally, Google's Project Suncatcher, a research moonshot pairing Tensor Processing Unit (TPU)-equipped satellites with free-space optical links, is targeting a two-satellite learning mission in partnership with Planet by early 2027. However, commercial services remain a longer-term ambition.
SpaceX has the most ambitious and heavily scrutinized plans of all these systems. The company’s FCC filing indicates it plans to launch up to 1 million satellites for edge AI compute workloads. For context, Starcloud’s own 88,000-satellite constellation, also designed to run AI workloads in the same way hyperscale data centers do, will still be dwarfed by SpaceX’s stated ambition.
What are the key challenges of data centers in space?
Andrew: The first challenge is Mother Nature. Compute hardware needs to withstand harsh conditions: chips must tolerate cosmic radiation, and data must travel long distances over high-speed optical links. Cooling is also tricky in space because there is no air or liquid to carry heat away; it must be radiated, which requires an enormous surface area. For perspective, a single NVIDIA H100 Graphics Processing Unit (GPU) needs about 1.1 square meters of radiator space, and a full DGX H100 system needs roughly 16 square meters of radiators and 33 square meters of solar panels.
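The radiator figures above can be sanity-checked with the Stefan-Boltzmann law. A minimal sketch, assuming an emissivity of 0.9, a radiator temperature of 330 K, approximate heat loads of ~700 W (H100) and ~10.2 kW (DGX H100), and ignoring absorbed sunlight and Earth infrared:

```python
# Stefan-Boltzmann sizing: P = emissivity * sigma * A * T^4, solved for A.
# Emissivity, radiator temperature, and heat loads are assumptions chosen
# to roughly reproduce the figures cited in the text.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(heat_w, emissivity=0.9, temp_k=330.0):
    """One-sided radiator area needed to reject heat_w to deep space."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

print(f"H100 (~700 W):       {radiator_area_m2(700):.1f} m^2")
print(f"DGX H100 (~10.2 kW): {radiator_area_m2(10_200):.1f} m^2")
```

The results land close to the ~1.1 m² and ~16 m² figures cited above; running the radiator hotter shrinks the required area sharply (the T⁴ term), which is why radiator operating temperature is a key design variable.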
High costs add further pressure for ODC companies. Launching hardware into space is expensive, and orbit selection affects both performance and economics. Sun-Synchronous Orbit (SSO) offers strong solar availability and low latency, but that is already highly congested. As a rough estimate, ABI Research’s Total Cost of Ownership (TCO) analysis suggests that an ODC can cost upward of 78X more than a terrestrial equivalent. However, the full picture is considerably more nuanced, as explored in our detailed analysis.
Weight is the third major constraint. A 2,000-Kilogram (kg) satellite generating 100 kW of power may allocate around 670 kg to solar panels alone, leaving limited capacity for compute payloads and thermal management. And once in orbit, hardware cannot be easily upgraded, forcing operators to generate value within a 3- to 5-year window.
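The mass budget above can be sketched as follows; the ~150 W/kg array specific power is an assumption inferred from the figures quoted (100 kW from roughly 670 kg of panels):

```python
# Back-of-envelope mass budget for the 100 kW satellite described above.
# The ~150 W/kg solar array specific power is an assumed value consistent
# with the ~670 kg array mass quoted in the text.
total_mass_kg = 2_000
power_w = 100_000
array_mass_kg = power_w / 150  # ~150 W/kg -> ~670 kg of panels

remaining_kg = total_mass_kg - array_mass_kg
print(f"Solar array: ~{array_mass_kg:.0f} kg "
      f"({array_mass_kg / total_mass_kg:.0%} of launch mass)")
print(f"Left for bus, compute, and radiators: ~{remaining_kg:.0f} kg")
```

Roughly a third of the launch mass goes to power generation before a single server or radiator is accounted for, which is the crux of the constraint.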
These combined challenges explain why most deployments remain small and why hyperscale space data centers are still a longer-term goal.
The Next Phase for Data Centers in Space
ODC network deployment costs are prohibitively high, meaning market accessibility is primarily reserved for companies with significant financial resources. This creates a premium around the US$/W of these systems and restricts adoption to users with mission-critical requirements and deep pockets.
However, by 2035, ABI Research forecasts that orbital compute US$/W will converge with terrestrial benchmarks, driven by declining launch costs, improved power-per-kg ratios, manufacturing scale, and component innovation in photovoltaics and cooling. Orbital hyperscalers like SpaceX and Starcloud will be the key enablers of this convergence.
As space-based compute democratizes, there will be a growing reliance on ecosystem partnerships. In fact, as part of a May 2026 compute agreement, Anthropic expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity. This signals that hyperscale ODC demand is beginning to materialize from AI-native companies.
Everyone from satellite operators and launch providers to telcos and chipset providers will team up to fill the niche technological gaps involved in building space-based data centers. Nobody can do everything alone; SpaceX will likely be the closest to self-reliance, especially if the Terafab concept comes to fruition. But even SpaceX needs the hardware and software specialization from other technology organizations to succeed.
Gain a more comprehensive evaluation of data centers in space in the following ABI Research deliverables:
- Orbital Data Center Market Outlook
- Orbital Data Centers Economics Outlook
- Orbital Data Center Deployments