SpaceX S-1 warns that orbital AI data centers may never be viable, just months after Musk called space-based AI a "no-brainer."
Summary: SpaceX's private S-1 pre-IPO filing states that its plans for orbital AI data centers "involve significant technical complexity and unproven technologies, and may not achieve commercial viability," contradicting Elon Musk's earlier assertion at Davos that space-based AI was a "no-brainer" achievable within two to three years. The filing emerges as SpaceX targets a $1.75 trillion IPO valuation and has applied to the FCC for one million data center satellites, while rivals Starcloud, Google (Project Suncatcher), and Blue Origin pursue similar orbital computing initiatives.
In its confidential S-1 pre-IPO filing, SpaceX cautioned potential investors that its plans for orbital AI data centers "involve significant technical complexity and unproven technologies, and may not achieve commercial viability." The company emphasized that any future space-based computing infrastructure will operate "in the harsh and unpredictable environment of space, exposing it to a range of unique space-related risks that could lead to malfunction or failure." This disclosure, first highlighted by Reuters on Monday, is a standard legal requirement for a company approaching what may be the largest IPO in history. It is also a notable moment of legally mandated candor from a company whose CEO called orbital data centers a "no-brainer" just three months earlier.
At the World Economic Forum in Davos in January, Elon Musk said the most cost-effective location for AI would be space "within two years, maybe three at the most." He claimed that space-based solar energy would be "10 times cheaper than terrestrial solar" since "there's no need for batteries," suggested cooling could be solved simply by pointing a radiator away from the sun toward deep space at about three kelvin, and predicted that within five years more AI capacity would be in orbit than on Earth. In February, SpaceX filed with the Federal Communications Commission to launch and operate up to one million satellites as the "SpaceX Orbital Data Center system" at altitudes between 500 and 2,000 kilometers, describing satellites that would "directly harness near-constant solar power with minimal operating or maintenance costs." The S-1 filing now cuts against those assertions.
The physics explains the gap between Musk's public comments and SpaceX's legal disclosures: the engineering constraints have not changed since Davos. In a vacuum, heat dissipates only through radiation; there is no convection, no liquid cooling loop rejecting heat to air, no fans. To radiate one megawatt of heat at 20 degrees Celsius, an orbital data center would need approximately 1,200 square meters of radiator surface, roughly the area of four tennis courts. For comparison, the entire electrical system of the International Space Station produces just 0.2 megawatts, while ground-based hyperscale data centers are pushing toward gigawatt scale. The three-kelvin temperature of deep space is irrelevant if the radiators needed to exploit it outweigh the servers they are meant to cool.
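The radiator figure above can be checked with the Stefan-Boltzmann law. The sketch below assumes an ideal two-sided black-body radiator (emissivity 1) at 20 °C facing deep space and ignores solar and Earth-albedo heating, so real panels would need even more area; the function name and parameters are illustrative, not from any SpaceX design.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_c: float, sides: int = 2) -> float:
    """Area needed to radiate `heat_watts` at surface temperature `temp_c`."""
    temp_k = temp_c + 273.15
    flux_per_side = SIGMA * temp_k ** 4  # radiated power per m^2 per side
    return heat_watts / (flux_per_side * sides)

area = radiator_area_m2(1_000_000, 20.0)  # 1 MW of waste heat at 20 °C
print(f"{area:.0f} m^2")  # about 1,200 m^2, matching the figure in the text
```

At 293 K each square meter radiates roughly 420 W per side, so even this idealized case lands near 1,200 m² for a single megawatt.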
Power supply faces similar limitations. Solar panels in orbit can collect roughly five times more energy than on Earth, with no atmosphere or weather, and no night in certain orbits. Even so, it would take about one square mile of solar array in Earth orbit to generate one gigawatt at 30% cell efficiency. The ISS generates its 0.2 megawatts from solar arrays that stretch the length of a football field. To reach the gigawatt levels consumed by a single hyperscale data center on Earth, solar infrastructure far larger than anything ever built in space would need to be launched and maintained.
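The one-square-mile estimate can likewise be sanity-checked from the solar constant. The sketch assumes roughly 1,361 W/m² above the atmosphere, 30% cell efficiency, and continuous full illumination, and ignores packing factor, degradation, and eclipse losses; the names are illustrative.

```python
SOLAR_CONSTANT = 1361.0      # W/m^2 at 1 AU, above the atmosphere
EFFICIENCY = 0.30            # cell efficiency assumed in the text
SQ_M_PER_SQ_MILE = 2_589_988.0

def array_area_m2(power_watts: float) -> float:
    """Solar array area needed to produce `power_watts` of electricity."""
    return power_watts / (SOLAR_CONSTANT * EFFICIENCY)

area_m2 = array_area_m2(1e9)  # one gigawatt
print(f"{area_m2 / SQ_M_PER_SQ_MILE:.2f} square miles")  # about 0.95
```

The result is just under one square mile, consistent with the figure quoted above, and that is before any real-world losses are applied.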
Hardware obsolescence may be the most overlooked constraint. GPUs depreciate quickly as new architectures ship every two to three years. On Earth, servers are swapped out continuously; in orbit, every hardware upgrade requires a launch, a docking, or a robotic servicing mission. Radiation causes bit flips and permanent circuit damage, and radiation-hardened chips typically lag commercial processors by several generations. Triple modular redundancy, running three parallel systems and taking the majority vote, triples the hardware required. AI's energy demands are real: the IEA projects that data center electricity consumption will exceed 1,000 terawatt-hours by the end of 2026. The question is whether meeting those demands in orbit creates more problems than it solves.
SpaceX is not alone in its pursuit of orbital computing, which makes the S-1 disclaimer more critical than a typical risk factor. Starcloud, previously Lumen Orbit, sent the first high-performance GPU to orbit in November 2025, an Nvidia H100 that represented a 100-fold increase in computational power in space. In December, Starcloud became the first company to operate a large language model, Google’s Gemma, and the first to conduct in-orbit LLM training. By March 2026, it had raised $170 million at a valuation of $1.1 billion, becoming the fastest unicorn in
