Data Centers in Space
Is One Small Step for Technology a Big Step for Mankind?
Introduction
Every infrastructure boom produces a frontier. The railroad era produced land speculation schemes in territories that didn’t yet exist as states. The fiber-optic buildout of the 1990s produced transoceanic cable routes to cities with insufficient demand to justify them. The logic in each case was the same: if the core story is big enough, the outer edges seem inevitable.
The AI data center story is big enough. The four largest hyperscalers — Amazon, Google, Microsoft, and Meta — spent roughly $350 billion on capital expenditure in 2025, a year-over-year increase in the mid-30% range. McKinsey’s central scenario projects $5.2 trillion in data center investment through 2030. Goldman Sachs expects demand to grow 50% to 92 gigawatts by 2027. These are the market’s working assumptions.
And yet these assumptions tend to smooth over the gap between announced capacity and built capacity, between projected power access and actual grid connection, between GPU purchases and sustained utilization. That gap is where infrastructure booms do most of their speculative work. It is also where the frontier opens.
This edition tracks three developments in sequence: what the terrestrial data center story looks like when the hype is separated from the operational evidence; what orbiting data centers might do and for whom; and what it means that a company has filed with the FCC for authority to launch one million satellites to process AI workloads in low Earth orbit. That last development reveals something about the distance the demand narrative has traveled — and how far it still intends to go.
Study 1: The Ground Story — Announced Capacity and the Inflation Gap
Sources: McKinsey, “The Cost of Compute: A $7 Trillion Race to Scale Data Centers” (2025); S&P Global, “Global AI Power Demand: Challenges and Opportunities” (2025); Ed Zitron, The Tech Report, “The AI Boom Is a Lie: Fake Data Centers and Unused GPUs” (2025); Innovation Endeavors, “The AI Data Center Gold Rush: What’s Next Beyond Power?” (2025)
The terrestrial data center story divides cleanly into two layers. The first layer is real: AI workloads require compute, compute requires power, and power availability has become the operative constraint on where and how fast data centers get built. Dominion Energy Virginia reported in February 2025 that data center firms had requested 40.2 gigawatts of new power connections, up from 21.4 gigawatts six months earlier. Grid interconnection queues in Northern Virginia, California, and Germany now stretch seven to twelve years. These are bottlenecks, not projections.
The second layer is where the accounting gets harder. Announced capacity and financed, built capacity are different quantities that market narratives routinely treat as interchangeable. Ed Zitron, in a widely discussed 2025 interview with The Tech Report, put figures to the gap: roughly 4 gigawatts of data centers were built globally in the preceding year, while more than 200 gigawatts were described as “in progress.” Of that 200 gigawatts, only about 5 gigawatts were actually under construction, and power supply remains uncertain even for those. The rest exists in letters of intent, pre-permits, and press releases.
Five pressure points define where the inflation is concentrated:
Announced vs. financed construction. A project that appears in a headline as “$10 billion committed” may be at any stage from conceptual planning to breaking ground. The numbers aggregate without distinguishing.
Projected power access vs. actual grid readiness. Building a data center shell is faster than connecting it to the grid. Typical construction timelines run twelve to fourteen months; permitting and building a gas-fired power plant commonly take six years. The data center can be built before the power to run it exists.
GPU purchases vs. sustained utilization. Most colocation providers operate at 30–50% utilization. Even best-in-class hyperscalers sustain utilization above 60–70% with difficulty. Transparent, high-quality utilization data is scarce; a 2025 Association for Computing Machinery study found that energy consumption is rarely linked to compute capacity or workload type in any consistent public way. GPU clusters that appear in acquisition figures may not appear in productive output.
Training demand vs. cheaper inference and smaller models. The dominant AI workload is shifting from training — which is capital-intensive, episodic, and concentrated — to inference, which is ongoing, distributed, and increasingly served by smaller, more efficient models. DeepSeek’s V3, released in early 2025, achieved substantial training and inference efficiency gains at a fraction of the cost of comparable Western models. Each efficiency jump reduces the compute required per unit of output, recalibrating the denominator in every demand forecast that preceded it.
Strategic hoarding vs. productive deployment. Competitive pressure among hyperscalers creates incentives to acquire GPU capacity before rivals do, independent of near-term deployment plans. That dynamic produces announced demand that reflects competitive positioning as much as operational need. The forecast and the strategy are not always the same document.
None of this establishes that AI data center demand is illusory. S&P Global and KKR both note that current US absorption rates show no signs of acute overbuilding in active markets, and that the fiber overbuild of the 1990s, which looked catastrophic at the time, ultimately became the backbone of the internet economy. The disagreement is not about whether demand is real. It is about whether demand forecasts are treating announced intent as bankable fact — and whether that gap, historically, is where infrastructure booms locate their bubbles.
Editor’s Note: The inflation in AI data center demand estimates does not require bad faith to operate. Competition, investor pressure, and genuine uncertainty about how quickly AI adoption will compound all push forecasts upward. The gap between announced and built is a structural feature of how large capital commitments get made before operational evidence exists. Identifying that gap is the beginning of the analysis, not the conclusion.
Study 2: Orbital Data Centers — Real Niche, Stretched Narrative
Sources: Axiom Space, Orbital Data Center program documentation (2025–2026); Data Center Knowledge, “ISS Data Center Launch Tests Edge Computing in Space” (September 2025); Space.com, “Axiom Space to Launch Its 1st Orbiting Data Centers” (2025); Axiom Space / Spacebilt announcement (September 2025)
On January 11, 2026, Axiom Space launched the first two operational orbital data center nodes to low Earth orbit. Designated ODC 1 and 2, the nodes are aboard Kepler Communications satellites in Kepler’s optical relay constellation, operating at 2.5 gigabits per second via optical intersatellite links. The launch followed a progression of tests: in 2022, Axiom flew an AWS Snowcone device to the International Space Station; in August 2025, it launched AxDCU-1, a prototype running Red Hat Device Edge, to the ISS for operational testing.
Axiom’s stated use cases are niche applications for which orbit confers genuine operational advantages: lower latency for satellite-to-satellite communication, independence from terrestrial ground station windows, and data sovereignty for defense customers who cannot route traffic through foreign server infrastructure.
The technical logic is sound within those limits. The ISS orbits every 90 minutes, producing intermittent and unpredictable communication windows with ground stations. Processing data locally before transmitting results to Earth is a meaningful efficiency gain.
Starcloud, a competitor, launched Starcloud-1 in November 2025 carrying an NVIDIA H100 GPU, became the first company to train a language model in space, and demonstrated that orbital compute is operationally viable, not merely theoretical.
The in-orbit data center market is projected to reach $1.77 billion by 2029 and $39 billion by 2035, at a 67.4% compound annual growth rate. Those figures come from the same analytical tradition that produces the terrestrial AI demand forecasts described in Study 1 — they project from current activity at maximum growth rates and assume no efficiency improvements, no competitive substitutes, and no friction.
The actual Axiom launch involves two satellite payloads on Kepler’s constellation and a prototype on the ISS. The distance between that operational footprint and $39 billion in 2035 is not prediction. It is extrapolation presented as prediction.
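To see how mechanical that extrapolation is, the cited figures can be reproduced with nothing more than a compound-growth formula. The sketch below checks only the arithmetic, not the forecast; the function name and structure are illustrative.

```python
# Sanity check: the $39B-by-2035 figure is a pure compound-growth
# extrapolation of the $1.77B 2029 figure at the cited 67.4% CAGR,
# with no allowance for efficiency gains, substitutes, or friction.
def extrapolate(base, cagr, years):
    """Project a value forward at a constant compound annual growth rate."""
    return base * (1 + cagr) ** years

base_2029 = 1.77   # $B, projected in-orbit data center market, 2029
cagr = 0.674       # cited compound annual growth rate
projected_2035 = extrapolate(base_2029, cagr, 6)
print(f"2035 projection: ${projected_2035:.1f}B")  # ≈ $38.9B, matching the cited $39B
```

The numbers are internally consistent, which is exactly the point: the 2035 figure contains no information beyond the 2029 figure and an assumed growth rate held constant for six years.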
The category distinction matters. Orbital compute addresses real constraints for specific workloads: satellite operations, defense processing, Earth observation, and lunar or deep-space missions where terrestrial infrastructure is unavailable by definition. It does not address the mainstream “train and serve large models cheaply at scale” problem that drives the bulk of hyperscaler capital expenditure on Earth.
Orbit adds cost, latency, maintenance complexity, and hardware obsolescence risk; it removes power and cooling constraints that, on the terrestrial side, are real but not yet intractable.
Editor’s Note:
The substitution case — orbit instead of terrestrial cloud — is weak for most current AI workloads.
What orbital data centers accomplish is different and narrower than what the promotional narrative suggests. That narrowness does not diminish the accomplishment. It clarifies what is actually being built.
Axiom’s January 2026 launch is a real infrastructure event, not a press release. The nodes are in orbit and operational. What they represent is a working solution to a specific set of problems — not the leading edge of a replacement for terrestrial infrastructure.
The 67% CAGR projection is a forecast made by people with financial interests in the outcome. The nodes are hardware built by engineers to operational specifications. These are different animals.
Study 3: The SpaceX FCC Filing — One Million Satellites and the Shape of Belief
Sources: SpaceX FCC filing SAT-LOA-20260108-00016 (filed January 30, 2026); FCC Space Bureau Public Notice DA 26-113 (February 4, 2026); American Astronomical Society action alert and reply comments (March 23, 2026); Futurism, “SpaceX’s One Million Orbital Data Centers Would Be Debilitating for Astronomy Research” (2026); SpaceNews, “SpaceX Files Plans for Million-Satellite Orbital Data Center Constellation” (January 2026); The Register, “FCC Opens Musk’s 1M-Satellite DC Plan for Public Comment” (February 2026); Geekwire, “SpaceX Seeks Go-Ahead from the FCC to Put Up to a Million Data Center Satellites in Orbit” (January 2026)
On January 30, 2026, SpaceX filed an application with the FCC’s Space Bureau for authority to launch and operate a constellation of up to one million satellites in low Earth orbit to function as an orbital AI data center system. The FCC accepted the application for filing on February 4 — which means it entered the regulatory process for comment and review, not that it was approved or that approval is imminent.
The filing’s own language establishes the tone in which this proposal operates. “Launching a constellation of a million satellites that operate as orbital data centers is a first step towards becoming a Kardashev II-level civilization,” SpaceX wrote — a civilization capable of harnessing the full energy output of its star.
The Kardashev scale is a theoretical framework from a 1964 Soviet astronomy paper. Its appearance in an FCC regulatory filing is striking: infrastructure filings typically discuss orbital mechanics, interference mitigation, and deployment schedules.
This one invokes civilization-scale ambition, then requests multiple waivers of standard regulatory requirements: processing-round procedures, milestone requirements, deployment obligations, and surety bond requirements.
Equally notable is what the filing omits: no deployment schedule, no cost estimate, and no satellite dimensions, mass specifications, or detailed orbital parameters.
The public comment period ran from February 4 through March 23, 2026. Approximately 1,500 comments were submitted. The organizational filings by parties with standing to challenge the application ran nearly unanimously against the proposal as filed.
The opposition is organized into four categories.
Commercial competitors and infrastructure stakeholders. Amazon/Kuiper filed a petition to deny, arguing the application was incomplete and competitively distortive. Viasat filed a petition to deny on standing, completeness, and interference grounds. Blue Origin argued the sector may develop real use cases but authorizing one million satellites could structurally crowd out future competitors. WISPA and Cambium raised interference concerns affecting terrestrial and fixed-service operators.
Scientific institutions. The American Astronomical Society, the International Astronomical Union, the Royal Astronomical Society, and the European Southern Observatory all filed comments raising light pollution, radio interference, and night-sky access concerns. The astronomers noted that SpaceX’s proposed constellation would place satellites in high-inclination orbits fully illuminated by sunlight even at midnight — a configuration worse for ground-based observation than existing Starlink deployments, over which astronomers had spent years negotiating mitigations. The filing itself contained no discussion of the dark-sky coordination agreement that FCC rules require.
Government and public-interest bodies. NASA filed technical concerns about mission impacts and requested coordination. The Secure World Foundation raised orbital sustainability and governance questions. DarkSky International, which had mobilized 193,000 supporters to file comments, focused on the irreversibility of light pollution at this constellation scale. PEER raised environmental review concerns. Harvard astrophysicist Jonathan McDowell, who tracks all active payloads in orbit, noted that the 14,518 active satellites currently in orbit are already generating significant debris and astronomy management challenges — and that a million-satellite system would require a dedicated fleet of tow-truck satellites to manage failed units and prevent Kessler cascade scenarios.
Individual and smaller organizational challengers. William Stewart filed a formal protest on collision risk grounds. Nickolai Bakken filed an environmental objection. DashAstro Astronomical Society and MCCI Corporation filed comments in the record.
SpaceX filed a consolidated opposition to petitions and response to comments on March 16. The opposition argued that objectors lacked standing, that interference concerns were speculative, and that SpaceX’s track record with Starlink brightness mitigation demonstrated good faith toward the astronomy community. It also suggested that AI tools enabled by the constellation could “accelerate” scientific research — a claim that did not address the specific interference concerns raised.
Editor’s Note: The political context is relevant as a structural condition, not a prediction. Elon Musk’s relationship with the current FCC and the broader regulatory apparatus is a documented feature of the environment in which this filing will be judged. What that means for the outcome is not established. What is established is that the application requested multiple standard waivers, provided minimal technical detail, and received essentially no substantive support from the institutional record.
What the record shows is an application that asked to be exempted from the rules that normally govern applications of this kind, while simultaneously invoking civilizational ambition as its organizing rationale. Those two features, taken together, describe a proposal designed to establish a position rather than to demonstrate a plan.
One can only hope that approval still requires a plan rather than a dream.
Study 4: What This Tells Us About the AI Demand Story
Sources: McKinsey demand scenarios (2025); S&P Global medium-term overbuilding risk assessment (2025); SpaceX FCC filing narrative (2026); Axiom Space ODC program documentation; Morgan Stanley orbital compute analysis
The three studies in this edition trace a single narrative from its grounded form to its orbital form. In the grounded form, real AI compute demand meets real power and permitting constraints, producing real capital expenditure — and also producing announced capacity that exceeds built capacity by factors large enough to matter. In the orbital form, those same constraints are reframed as reasons to leave the planet.
The structural logic of that reframe is not irrational. Power bottlenecks are real. Solar energy in orbit is genuinely more abundant — approximately five times more productive per square meter than ground-based panels, with near-continuous exposure in sun-synchronous orbits. Cooling in the vacuum of space requires no water. Land-use and permitting conflicts do not exist at 1,000 kilometers altitude. These are genuine advantages for a narrow set of workloads. Axiom’s nodes exist because those advantages are real for satellite operations.
The inflation enters when the niche advantages are presented as solutions to the mainstream problem. The mainstream AI compute problem — training and serving large models at scale, cheaply, with low latency to end users on Earth — is not well-served by orbital infrastructure at current or near-term orbital economics. Launch costs, even on an optimistic Starship trajectory, create a per-kilogram capital cost that terrestrial data center hardware does not face. Hardware failure in orbit is not a technician dispatch; it is a resupply mission. Latency to end users on Earth is inherently constrained by orbital mechanics. The workloads that justify the capital cost are sovereign, defense, and satellite-native — not mainstream cloud.
McKinsey’s own demand modeling illustrates the uncertainty the promotional narrative absorbs without acknowledgment. Its three investment scenarios for 2025–2030 range from $3.7 trillion to $7.9 trillion in required capital expenditure — a spread of more than $4 trillion driven by assumptions about AI adoption, efficiency improvements, and semiconductor supply.
S&P Global’s medium-term assessment explicitly flags overbuilding risk for 2027–2030 as efficiency innovations reduce infrastructure requirements per unit of compute.
DeepSeek’s efficiency gains, arrived at on a fraction of the capital cost of comparable Western models, demonstrated that the denominator in demand forecasts is not fixed.
Against that uncertainty, the SpaceX filing is useful as a diagnostic rather than a plan. It reveals what happens at the end of the demand narrative chain. When terrestrial projections are taken as given — when 200 gigawatts “in progress” is treated as demand rather than announcement — the logical next move is to find a larger stage. Orbit is that stage. A million satellites is the number that corresponds to a demand story that has already assumed its own conclusion.
The Kardashev II reference in the FCC filing is the tell. A civilization that can harness its star’s full energy output is, by definition, a civilization that has already solved every intermediate problem — energy production, launch economics, hardware reliability, regulatory governance, orbital debris management, and economic demand sufficient to justify the investment. The filing did not discuss any of those intermediate problems in detail. It invoked the destination and asked for the waivers.
That pattern — conclusion first, intermediate steps deferred — is recognizable from the terrestrial buildout. Announced capacity treats the destination as given. The power connection, the construction financing, the utilization rate, and the paying customer are intermediate steps. The pattern does not mean the destination is wrong. It means the distance between here and there is being presented as shorter than it is.
Editor’s Note: Orbital data centers will find real customers. The niche is genuine and technology is advancing. The question the SpaceX filing raises is not whether orbital compute can exist — Axiom’s nodes demonstrated that it can — but what demand story makes one million satellites necessary. That story requires AI demand to be not merely large but effectively unbounded, not merely growing but capital-insensitive at civilizational scale. When the demand story reaches that level of abstraction, the market is no longer discussing infrastructure requirements. It is pricing belief in a future it has not yet built the intermediate steps to reach.
The overarching question remains: Does the AI industry have a sustainable business model?
Sources available on request. FCC proceeding SAT-LOA-20260108-00016 remains open for agency review as of publication. Axiom Space ODC nodes 1 and 2 are operational in low Earth orbit as of January 11, 2026.



My big question: will the ever-increasing thinking power of computers eventually reach a point where the need for such large amounts of hardware (and the resources it consumes) is eliminated? We may not even be talking about orbital data centers in five years if the hardware shrinks enough.