Data Centers in Space

Thu Apr 02 2026 · Nitin Bansal


What You Need to Know

The idea of putting data centers in orbit has moved from science fiction into early engineering sketches, mostly because terrestrial data center power demand is exploding (projected to rise 165% by 2030 versus 2023 [13]), launch costs are falling, and someone at Google found that their TPUs can handle more radiation than expected. But here's the blunt truth: this is still a TRL 3–4 concept, meaning we're somewhere between "analytical proof of concept" and "components validated in a lab," nowhere near a flight-proven, integrated system. The first orbital compute demo carrying an NVIDIA H100 GPU launched in late 2025 [13].

The core engineering constraint is thermal management. In a vacuum there is no convection — you can't blow air over a chip. Radiation is the only way to shed heat, and it requires enormous radiator surfaces. Experimental data shows electronics run up to 66% hotter in vacuum than in atmosphere [11]. Over 55% of electronics failures are temperature-related [7]. Rejecting megawatt-class heat via radiation alone means radiator areas in the thousands of square meters and masses in the tens of thousands of kilograms. No existing space system has solved this.

The economics are rough. A modeled 81-satellite constellation has a 10-year total cost of ownership of about $380 million versus $81.2 million for an equivalent terrestrial facility — a 4.7× premium [12]. One company's pitch compares $175 million in terrestrial electricity against "tens of millions" in launch and solar costs, but that framing conveniently omits radiator mass, structural mass, insurance, ground stations, and replacement cycles [13].

So what you're really looking at is a long-horizon R&D bet with genuine strategic appeal — sovereignty, grid independence, potentially abundant solar power — that might yield niche applications before it ever becomes a general-purpose alternative to terrestrial infrastructure.


Why Would Anyone Put Data Centers in Space?

The motivations are real even if the math is hard:

  • The Sun delivers ~1,360 W/m² at 1 AU [4], [13]. A dawn-dusk sun-synchronous orbit at 650 km gets >95% illumination with eclipses under 5 minutes [12], effectively eliminating intermittency.
  • There's no grid to connect, no permitting delays measured in years, and no land to acquire.
  • Terrestrial constraints are severe: the US has 150 GW of data center capacity in the pipeline [13]; $6.7 trillion in projected global data center investment by 2030 [13]; nuclear-powered campuses go for $650 million for 960 MW [13].
  • Orbital compute could enable data jurisdiction avoidance and neutral-zone processing, letting nations without domestic hyperscale capacity access strategic compute [13].

The vacuum also provides a cold sink at ~2.7 K [8], making radiative heat rejection fundamentally available without power input. But the Stefan-Boltzmann T⁴ relationship means radiator area requirements grow rapidly with power, and space stressors — temperature cycling, UV radiation, atomic oxygen — degrade radiative surfaces over time [8]. Source 8 explicitly flags "fundamental and material-requirement differences between terrestrial and space-based radiative cooling," so you can't just port terrestrial cooling research into orbit [8].

Carbon accounting arbitrage is speculative at this point — no source quantifies the full lifecycle emissions of an orbital data center versus a terrestrial one [13].


Can Commercial Hardware Survive Up There?

Partially. Google's internal testing shows TPU HBM subsystems tolerate >2 krad(Si), and LEO exposure is ~150 rad(Si)/year, giving a 2.67× safety margin over a 5-year mission [12]. That suggests standard aluminum spacecraft structure is enough shielding for at least Google's Trillium architecture. But this is architecture-specific — different chips, memory types, and newer process nodes (3nm, 2nm) may respond very differently. Single-event upsets aren't addressed [12].
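
The margin arithmetic is simple enough to reproduce. A minimal sketch using only the figures reported in [12]; nothing here is new data, just the multiplication:

```python
# Total ionizing dose margin for a COTS accelerator in LEO, using figures from [12].
TOLERANCE_RAD_SI = 2_000      # >2 krad(Si) tolerance reported for the TPU HBM subsystem
LEO_DOSE_RATE = 150           # ~150 rad(Si)/year behind standard spacecraft structure
MISSION_YEARS = 5

accumulated_dose = LEO_DOSE_RATE * MISSION_YEARS       # 750 rad(Si) over the mission
safety_margin = TOLERANCE_RAD_SI / accumulated_dose    # ~2.67x

print(f"5-year accumulated dose: {accumulated_dose} rad(Si)")
print(f"Safety margin: {safety_margin:.2f}x")
```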


Is It Economically Viable?

No, not today. Even under optimistic assumptions (launch at $200/kg, COTS hardware, 5-year operations), space-based TCO exceeds terrestrial equivalents by roughly 4.7× [12]. The viability threshold requires launch costs at or below $200/kg [12], a dramatic drop from current Falcon 9 costs of ~$2,700/kg [12].


How Ready Is This Technology Really?

TRL 3–4 overall [13]. Individual components like solar arrays, heat pipes, and deployable radiators are at higher TRLs (5–9), but no integrated orbital data center has been demonstrated. Starcloud-1, launched November 2025, carried an NVIDIA H100 described as ~100× more compute than anything previously operated in orbit [13].


Thermal Management Is the Hard Problem

In vacuum, convection is gone entirely [2], [4], [6], [7], [9], [11]. Radiative heat transfer via Stefan-Boltzmann ($Q = \sigma \varepsilon A [T^4_{rad} - T^4_{space}]$) is the sole rejection mechanism [1], [4]. The spacecraft heat balance: $q_{solar} + q_{albedo} + q_{planetshine} + Q_{gen} = Q_{stored} + Q_{out,rad}$ [6].
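
To make the T⁴ relationship concrete, here is a minimal sketch of the radiator-sizing calculation it implies. The emissivity and radiator temperatures are illustrative assumptions, not values from the sources:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SPACE = 2.7       # deep-space sink temperature, K [8]

def radiator_area(q_watts: float, t_rad_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject q_watts purely by radiation at t_rad_k.

    Solves Q = sigma * eps * A * (T_rad^4 - T_space^4) for A.
    """
    flux = SIGMA * emissivity * (t_rad_k ** 4 - T_SPACE ** 4)   # W/m^2
    return q_watts / flux

# Illustrative: rejecting 1 MW at plausible panel temperatures.
for t in (280, 300, 320):
    print(f"T_rad = {t} K -> {radiator_area(1e6, t):,.0f} m^2 for 1 MW")
```

Running the panels 40 K hotter cuts the required area by roughly 40%, but only if the electronics can tolerate the hotter sink, which is exactly the junction-temperature trade discussed below.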

Experimental evidence is sobering:

  • Electronics in vacuum run 32.8% hotter than in atmosphere [10].
  • At high vacuum (0.00025 Pa), temperatures exceeded atmospheric results by up to 66% for a 3D-printed heat sink at 3.5–7 W [11].
  • Adding just 6g of paraffin wax PCM reduced temperatures by up to 18°C in vacuum (vs. 12.3°C in atmosphere) [10].
  • PCM doubled operating time under both conditions [10].
  • A 2024 CubeSat experiment confirmed that microgravity does not disrupt wax PCM behaviour in heat sinks [3].

But the scale gap is enormous. Experiments test watts. Models cover kilowatts. Business plans target megawatts to gigawatts. No validated analysis bridges these gaps.

A modeled satellite uses 4.0 m² of radiator to reject 1,200 W at 362.5 W/m² [12]. At 1 MW that's ~2,759 m² of radiator, ~9,100 kg just for the panels. At Starcloud's 40 MW target, you're looking at ~110,000 m² of radiator, ~363,000 kg of panels — comparable to the entire ISS mass [13].
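
The scaling above is linear in power, so it is easy to reproduce. A minimal sketch using the 362.5 W/m² rejection flux from [12] and the ~3.3 kg/m² panel areal density implied by the mass figures just quoted:

```python
# Scaling the modeled satellite's radiator figures [12] to data-center power levels.
RADIATOR_FLUX = 362.5    # W/m^2, modeled rejection flux [12]
AREAL_DENSITY = 3.3      # kg/m^2, implied by ~9,100 kg over ~2,759 m^2 at 1 MW

def radiator_budget(power_w: float) -> tuple[float, float]:
    """Flux-limited radiator area (m^2) and panel mass (kg) for a given heat load."""
    area = power_w / RADIATOR_FLUX
    return area, area * AREAL_DENSITY

for label, p in [("Modeled satellite", 1_200), ("1 MW", 1e6), ("Starcloud 40 MW", 40e6)]:
    area, mass = radiator_budget(p)
    print(f"{label:>18}: {area:>10,.0f} m^2, {mass:>10,.0f} kg")
# The modeled satellite actually carries 4.0 m^2, i.e. margin above the 3.3 m^2 minimum.
```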

Available cooling technologies [1], [2], [4], [6], [7], [9], [10], [11], [12]:

| Technology | Heritage | Key Data |
| --- | --- | --- |
| Radiative cooling panels | Highest — standard on all satellites | ISS: 371 W/m²; modeled: 362.5 W/m² [12] |
| Constant conductance heat pipes | High — standard satellite tech | ~0.15 kg/m; 0.05–0.4 °C/W resistance [4], [7] |
| Loop Heat Pipes | High — flight heritage | More flexible than CCHPs [2], [9] |
| Vapor chamber heat pipes | Modeled for orbital compute | 50,000 W/(m·K) effective conductivity [12] |
| Pumped fluid loops | Moderate-high — ISS heritage | Mini-MPL: >20 W cooling in <1U; >200% improvement [6] |
| PCMs (wax/salt hydrate) | Low-moderate — CubeSat tested 2024 | 6 g reduced temp 18°C in vacuum; doubled operating time [10] |
| Variable-emissivity radiators | Early research [9] | |
| Thermoelectric coolers | Moderate | Low COP, no moving parts [2], [5], [7] |

Advanced materials [9]:

  • Annealed pyrolytic graphite: 1,700 W·m⁻¹·K⁻¹ in-plane; 10 W·m⁻¹·K⁻¹ through-plane
  • Carbon nanotubes: 6,000 W·m⁻¹·K⁻¹ theoretical (not manufacturable at scale)
  • Diamond-like carbon: 850–1,050 W·m⁻¹·K⁻¹
  • Boyd k-Core flexible strap: 1,200 W·m⁻¹·K⁻¹
  • Copper nanospring TIMs: R_th < 0.01 cm²·K·W⁻¹

Microgravity complications: vapor bubbles don't buoyantly rise, potentially causing vapor lock [7]. Two-phase components can have startup problems [9]. TIMs degrade under gamma radiation over 5–10 years, causing increased contact resistance, cracking, or delamination [9].

The modeled satellite maintains junction temperature at 111.4°C — 13.6°C from the 125°C limit [12], with ±20% TDP uncertainty. That's operating near the thermal edge.


Power in Orbit

Solar is the only viable energy source discussed. Key numbers [4], [12], [13]:

| Parameter | Value |
| --- | --- |
| Solar irradiance at 1 AU | 1,360 W/m² |
| Triple-junction cell efficiency | 32% BOL, 27% EOL |
| On-orbit efficiency | 20–25% |
| Dawn-dusk SSO illumination | >95%, eclipses <5 min |
| Modeled satellite generation | 2,420 W from 8.0 m² array |
| Power for 100 MW | ~330,000 m² (0.33 km²) of solar array |
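
The array-area row follows directly from irradiance and efficiency. A minimal sketch, where the 22% combined efficiency is an assumption within the 20–25% on-orbit range above, chosen to roughly reproduce the table's 100 MW figure:

```python
SOLAR_IRRADIANCE = 1_360   # W/m^2 at 1 AU [4], [13]
ON_ORBIT_EFF = 0.22        # assumed combined efficiency, within the 20-25% range above

def array_area_m2(power_w: float) -> float:
    """Solar array area needed to supply power_w continuously in full sunlight."""
    return power_w / (SOLAR_IRRADIANCE * ON_ORBIT_EFF)

area = array_area_m2(100e6)
print(f"100 MW -> {area:,.0f} m^2 ({area / 1e6:.2f} km^2)")   # ~334,000 m^2, ~0.33 km^2
```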

Energy storage for eclipses is barely addressed. Batteries for <5-minute eclipses in the modeled orbit are mentioned, but longer storage needs aren't discussed.

Starcloud frames orbital power as: $175 million in terrestrial electricity over 5 years vs. "tens of millions" in launch and solar infrastructure [13]. But this omits radiator mass, structural mass, shielding, ground stations, insurance, and replacement cycles [12].


Radiation and Reliability

LEO TID is ~150 rad(Si)/year [12]. Google's TPUs tolerate >2 krad(Si) with a 2.67× safety margin over 5 years [12]. Encouraging but architecture-specific, with ±20% TDP uncertainty and no SEU analysis [12].

Reliability data is grim:

  • 55% of electronics failures are temperature-related [7]
  • Silicon reliability drops ~10% for every 2°C above 80–90°C [9]
  • A 10°C increase can double failure rates [11]
  • Each 1°C reduction improves reliability ~4% [11]
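
These are separate rules of thumb rather than one validated model, and they do not quite agree with each other. A minimal sketch of what each implies; the functional forms are my reading of the bullets above, not formulas from the sources:

```python
# Two temperature-reliability heuristics quoted above, expressed as functions.

def failure_rate_multiplier(delta_t_c: float) -> float:
    """Rule of thumb from [11]: failure rate doubles for every 10 degC increase."""
    return 2 ** (delta_t_c / 10)

def reliability_improvement(delta_t_c: float) -> float:
    """Rule of thumb from [11]: each 1 degC reduction improves reliability ~4%."""
    return 1.04 ** delta_t_c

for dt in (5, 10, 20):
    print(f"{dt:>2} degC hotter: failure rate x{failure_rate_multiplier(dt):.2f};  "
          f"{dt:>2} degC cooler: reliability x{reliability_improvement(dt):.2f}")
```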

In orbital context where hardware replacement costs are orders of magnitude higher than on Earth, maintaining junction temperatures well below threshold is an economic imperative, not an optimization.


Connectivity Is a Big Unknown

This is a significant gap. No source quantifies downlink bandwidth or round-trip latency for AI inference workloads from orbit. Biswas assumes optical ground links (NASA TBIRD heritage) but doesn't quantify achievable bandwidth [12]. Starlink v3 is described as adding AI processing with inter-satellite laser links, but latency comparisons versus terrestrial fiber aren't analyzed [13]. LEO at 650 km gives ~2–4 ms one-way, but cumulative hop latency isn't addressed.
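
The ~2–4 ms figure is just geometry. A minimal sketch of one-way free-space propagation delay from a 650 km orbit; the elevation angles are illustrative assumptions, and processing, queuing, and inter-satellite hops are ignored:

```python
import math

C_KM_PER_S = 299_792.458   # speed of light, km/s
R_EARTH = 6_371.0          # mean Earth radius, km
ALTITUDE = 650.0           # orbit altitude, km

def one_way_delay_ms(elevation_deg: float) -> float:
    """Propagation delay to a satellite at ALTITUDE km, seen at a given elevation angle."""
    e = math.radians(elevation_deg)
    slant_km = math.sqrt((R_EARTH + ALTITUDE) ** 2 - (R_EARTH * math.cos(e)) ** 2) \
               - R_EARTH * math.sin(e)
    return slant_km / C_KM_PER_S * 1_000

for elev in (90, 30):
    print(f"elevation {elev:>2} deg: {one_way_delay_ms(elev):.1f} ms one-way")
# ~2.2 ms at zenith, ~3.9 ms at 30 deg elevation, matching the ~2-4 ms quoted above
```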

The viability of orbital inference depends entirely on data transfer economics between space and Earth, and this is unquantified in the evidence base.


The Economics Don't Work Yet

| Metric | Orbital (Biswas) | Terrestrial |
| --- | --- | --- |
| 10-year TCO (81-sat constellation / equivalent facility) | $380M | $81.2M |
| Premium | 4.7× | |
| Launch cost target | $200/kg | N/A |
| Current Falcon 9 launch cost | ~$2,700/kg | |
| PUE | 1.42 claimed (1.17 "optimized") | 1.58 average (1.10–1.20 best hyperscale) |

The PUE comparison is questionable — using a terrestrial average of 1.58 when hyperscale routinely hits 1.10–1.20 eliminates the claimed space advantage [12].

Launch cost trajectory [12]:

  • Current: ~$2,700/kg
  • Target: $200/kg via Starship reusability
  • Viability threshold: $200/kg
  • Modeled satellite: 415 kg → $1.86M per satellite at $200/kg

Modeled per-satellite 5-year ops: $0.3M, implying highly autonomous operations [12].


Who's Actually Building This?

| Entity | What | Timeline | Source |
| --- | --- | --- | --- |
| Starcloud | Starcloud-1: NVIDIA H100 in orbit | Launched Nov 2025 | [13] |
| Starcloud | Starcloud-2: AWS Outposts hardware | Scheduled Oct 2026 | [13] |
| Starcloud | Long-term: 40 MW modules, GW clusters | | [13] |
| Google | Project Suncatcher: thermal analysis of orbital TPU | Conceptual/research | [12] |
| SpaceX | Starlink v3: AI processing on LEO satellites | Ongoing | [13] |
| UTS / UIUC / USyd / Mawson Rovers | PCM cooling CubeSat (Waratah Seed) | Launched Aug 2024 | [3] |
| Xi'an Jiaotong | Thermal management review | Published 2024 | [9] |

The field is bifurcating into two models [13]:

  1. Large centralized orbital platforms (Starcloud): 40 MW modules, gigawatt clusters, favors latency-tolerant training
  2. Distributed edge compute via existing constellations (Starlink v3): kW-class across thousands of satellites, favors low-latency inference

Starcloud argues these are complementary — terrestrial for training and bulk inference, orbital platforms for sovereign compute and space-based sensing, distributed constellations for edge inference [13].


The Big Contradictions and Debates

1. Economic viability is deeply disputed. Biswas models a 4.7× premium and says the standalone business case is "not viable" [12]. Starcloud compares $175M in electricity against "tens of millions" in launch/solar, suggesting orbital is cheaper — but omits most non-power costs [13]. The truth likely leans toward Biswas, but neither analysis is fully rigorous: Biswas's model is a LinkedIn article, and Starcloud's framing is investor-facing.

2. PUE claims are contested. 1.42 space vs. 1.58 terrestrial [12], but best hyperscale hits 1.10–1.20, which would eliminate the advantage.

3. Radiation tolerance may not generalize. Google's TPU results are architecture-specific; ±20% TDP uncertainty and no SEU analysis limit confidence [12].

4. Passive vs. active cooling. Small spacecraft favor passive methods [5], [7], but active systems (pumped loops, variable-emissivity radiators) offer >200% improvement [6]. For data center power levels, active cooling is almost certainly required.

5. Thermal scaling direction is unclear. Starcloud argues larger platforms benefit from shielding scaling (surface area ∝ r² vs. volume ∝ r³) [13]. But radiator area scales with heat rejection (∝ power), and structural mass accumulates linearly. Which argument dominates at scale is unresolved.
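
A toy model makes the tension in point 5 explicit: treat a platform as a sphere whose compute (and heat) scales with enclosed volume and whose shielding scales with hull area. The rejection flux is the modeled value from [12]; the power density and shielding areal density are purely illustrative assumptions:

```python
import math

RADIATOR_FLUX = 362.5   # W/m^2 rejection flux [12]
POWER_DENSITY = 1_000   # W of heat per m^3 of enclosed volume, illustrative assumption
SHIELD_AREAL = 10       # kg of shielding per m^2 of hull, illustrative assumption

for r in (1, 5, 25):    # platform radius, m
    volume_m3 = 4 / 3 * math.pi * r ** 3
    hull_m2 = 4 * math.pi * r ** 2
    power_kw = volume_m3 * POWER_DENSITY / 1_000
    shield_kg_per_kw = hull_m2 * SHIELD_AREAL / power_kw    # falls roughly as 1/r
    radiator_m2_per_kw = 1_000 / RADIATOR_FLUX              # constant, ~2.76
    print(f"r = {r:>2} m: shielding {shield_kg_per_kw:6.1f} kg/kW, "
          f"radiator {radiator_m2_per_kw:.2f} m^2/kW")
```

Shield mass per kilowatt falls roughly as 1/r, while radiator area per kilowatt never improves, which is why the two scaling arguments point in different directions.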


Scaling From Watts to Megawatts

This is the most critical gap. Current thermal tech is designed for watts to hundreds of watts:

| Scale | Power | Evidence |
| --- | --- | --- |
| CubeSat PCM | 3.5–7 W | 32.8–66% temp increase in vacuum [10], [11] |
| SmallSat cryocooler | 1–2 W | MICRO1-1/2: 0.350–0.475 kg [6] |
| Modeled satellite | 1,200 W | 4.0 m² radiator, 111.4°C junction [12] |
| Starcloud target | 40,000,000 W | ~110,000 m² radiator [13] |

The jump from the modeled satellite to Starcloud's target is ~33,000×. From the experimental heat sink study to Starcloud is roughly 6 million-fold. No validated engineering analysis bridges these scales. Terrestrial rack power densities have risen to 30–50 kW (100 kW+ for frontier AI), and air cooling fails above 10–15 kW per rack [13], driving immersion cooling and direct-to-chip liquid loops — but no orbital equivalent knowledge base exists.
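
For concreteness, the scale ratios above fall straight out of the table; this minimal sketch reproduces them, taking the upper end of the heat sink study's 3.5–7 W range:

```python
# The scale ladder from the table above, expressed as multiples needed to reach 40 MW.
steps = [
    ("CubeSat heat sink experiment [11]", 7),
    ("Modeled satellite [12]", 1_200),
    ("Starcloud 40 MW target [13]", 40e6),
]
target_w = steps[-1][1]
for name, watts in steps:
    print(f"{name:<34} {watts:>12,.0f} W  (x{target_w / watts:,.0f} to reach 40 MW)")
```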


View Factor Degradation and Cluster Problems

When multiple compute satellites operate in proximity — as Starcloud's gigawatt-scale clusters would require [13] — adjacent radiators partially view each other instead of cold space, degrading radiative efficiency. This "view factor degradation" is a known spacecraft thermal engineering problem that no source addresses [12], [13]. Tightly clustered modules could partially self-heat, creating a thermal death spiral that limits achievable compute density per unit orbit volume.
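
A first-order gray-body sketch shows why this matters: if a fraction of a radiator's view is filled by a neighboring panel at a similar temperature rather than by 2.7 K space, the net rejected flux drops roughly in proportion. This ignores reflections and assumes both panels run at the same temperature, an illustrative simplification rather than a model from the sources:

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9     # illustrative assumption
T_RAD = 300.0        # radiator temperature, K, illustrative assumption
T_SPACE = 2.7        # deep-space sink, K
T_NEIGHBOR = 300.0   # neighboring radiator at the same temperature

def net_flux(view_fraction_blocked: float) -> float:
    """Net rejected flux when part of the radiator's view is another warm panel."""
    to_space = (1 - view_fraction_blocked) * (T_RAD ** 4 - T_SPACE ** 4)
    to_neighbor = view_fraction_blocked * (T_RAD ** 4 - T_NEIGHBOR ** 4)  # ~zero net
    return SIGMA * EMISSIVITY * (to_space + to_neighbor)

for f in (0.0, 0.2, 0.5):
    print(f"{f:.0%} of view blocked -> {net_flux(f):.0f} W/m^2 rejected")
```

Every square meter that looks at a warm neighbor instead of cold space rejects essentially nothing, so packing density trades directly against thermal capacity.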

Environmental and debris considerations are thin: launch emissions aren't quantified, the <5% radiator area loss assumption from debris [12] doesn't address aggregate constellation impact, and lifecycle carbon footprints aren't compared [12], [13].

Legal and regulatory coverage is minimal. Orbital data would likely fall under the launching state's jurisdiction [12], but data jurisdiction arbitrage and spectrum allocation aren't analyzed. The Outer Space Treaty applies, and launching states are liable for debris [12].


What This Means for Different Audiences

For investors and strategists: The 4.7× TCO premium means orbital data centers aren't viable as drop-in replacements today. Theses must rest on continued launch cost declines toward $200/kg [12], premium pricing for specialized use cases (sovereign compute, defense), or structural advantages not captured in simple TCO. Starcloud's H100 launch is a milestone but not commercial viability. The terrestrial competition is massive: $6.7 trillion projected investment by 2030 [13].

For engineers: Thermal management is the primary constraint. The 66% temperature increase in vacuum [11] and the 13.6°C margin to TPU limits [12] indicate designs operate near feasibility edges. The 55% failure attribution to temperature [7] and exponential temperature-failure curves make thermal reliability the single biggest determinant of uptime. TIM degradation over 5–10 years [9] creates impractical maintenance cycles without robotic servicing.

For policymakers: Orbital compute raises novel jurisdiction questions, debris policy implications as constellations scale, and geopolitical questions about which nations can deploy orbital data processing — none of which are addressed in the evidence base.


Where This Could Go

Optimistic: Launch costs hit $200/kg by 2030–2035 [12]. Thermal breakthroughs scale from kW to multi-MW. Starcloud demonstrates 40 MW modules within Starship volumes [13]. Specialized use cases (sovereign compute, space-domain awareness, latency-tolerant AI training) generate premium revenue. Orbital data centers capture 1–3% of global compute by 2040. Starlink v3 creates a distributed edge layer [13].

Base case: Launch costs decline to $500–1,000/kg by 2035, short of viability threshold [12]. Small-scale demonstrations succeed but the 4.7× TCO gap persists. Terrestrial alternatives (nuclear SMRs, grid-scale batteries, improved PUE) absorb most demand. Orbital compute stays niche — demonstrations, defense, specialized sovereign compute where cost is secondary. Starlink edge compute succeeds modestly by embedding inference in existing satellites [13].

Pessimistic: Launch costs plateau above $1,500/kg. Radiation degradation proves worse than TPU tests suggest. Thermal scaling proves intractable at MW+ levels. A high-profile failure erodes confidence. Regulatory uncertainty persists. Terrestrial alternatives capture all demand. The concept remains perpetually "5–10 years away."


What We Still Don't Know

  1. Downlink bandwidth and latency for orbital inference workloads — arguably the most critical unanswered question.
  2. Thermal scaling to MW/GW — experimental data covers watts, modeling covers kilowatts, plans target megawatts.
  3. Microgravity effects on two-phase cooling at scale — one CubeSat experiment and one numerical reference exist.
  4. Radiation effects on advanced process nodes (3nm, 2nm).
  5. Cost comparison of space-based vs. terrestrial cooling per kW of compute.
  6. View factor degradation in clustered configurations.
  7. Long-duration (10+ year) radiator surface degradation from VUV, atomic oxygen, and micrometeorites.
  8. Robotic servicing feasibility — the $0.3M/satellite ops budget implies full autonomy but this isn't demonstrated.
  9. Regulatory and legal framework for orbital data processing.
  10. Lifecycle carbon footprint comparison.
  11. Terrestrial competition trajectory (nuclear SMRs, fusion, advanced cooling).
  12. Variable-emissivity radiator maturity.
  13. Carbon nanotube manufacturability at scale.

References

  1. Advanced Cooling Systems for Space - https://numberanalytics.com/blog/advanced-cooling-systems-for-space
  2. Cooling techniques for satellite systems - https://thermal-engineering.org/cooling-techniques-for-satellite-systems
  3. Space electronics cooling experiment tests wax-based heat sinks on orbiting satellite - https://mechse.illinois.edu/news/76332
  4. ENAE 691 Satellite Design: Thermal Control - https://ntrs.nasa.gov/api/citations/20230001953/downloads/ENAE%20691%20Spring23%20Thermal%20Cottingham.pdf
  5. Satellite Thermal Control System Design Tutorial - https://east-space.com/satellite-thermal-control-system-design-tutorial
  6. State-of-the-Art of Small Spacecraft Technology — Chapter 7: Thermal Control - https://nasa.gov/smallsat-institute/sst-soa/thermal-control
  7. Review of Electronic Cooling and Thermal Management in Space and Aerospace Applications - https://mdpi.com/2673-4591/89/1/42
  8. Radiative Cooling Materials for Spacecraft Thermal Control: A Review - https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.202506795
  9. Review on Thermal Management Technologies for Electronics in Spacecraft Environment - https://sciencedirect.com/science/article/pii/S277268352400013X
  10. Experimentally investigating phase change material behaviour in satellite electronics thermal control under vacuum and atmospheric pressure - https://sciencedirect.com/science/article/pii/S0017931024012134
  11. Investigating the performance of a heat sink for satellite avionics thermal management - https://sciencedirect.com/science/article/pii/S0017931025004788
  12. Space-Based Data Center Infrastructure: A Multi-Physics Thermal Analysis for AI Computing in Low Earth Orbit - https://linkedin.com/pulse/space-based-data-center-infrastructure-multi-physics-biswas-phd-xfipc
  13. BSV Insights 0002: Kilowatts to Compute — The Convergence of Data Centers and Power on Earth and in Orbit - https://balerionspace.substack.com/p/bsv-insights-0002-kilowatts-to-compute