The emergence of AI has led to tremendous demand for compute. Today, hyperscalers report near-full utilization of the chips in their datacenters and expect demand to grow exponentially as GPU-intensive workloads cement themselves across every vertical of the economy. However, providing the compute needed to support such a world requires enormous infrastructure development, especially considering cloud providers’ current struggle to meet existing capacity requirements. Wall Street analysts project $500B-$600B in capex (primarily from companies like Amazon, Microsoft, Alphabet, Meta, and Oracle) in 2026 alone to build datacenters. Some dub this massive spend the “AI arms race” because big players pressure each other into pouring investment into new capacity out of fear of falling behind. Despite these ambitious buildout plans, doubts remain about whether infrastructure deployment can keep pace with future requirements. Datacenters take years to build and outfit with the necessary hardware, networking, and cooling, and the GPU supply chain is complex and subject to delays. Nor does that account for the multi-year lag involved in connecting something as power-hungry as a massive datacenter to the power grid.


Market consensus holds that rapid datacenter buildout is crucial to keeping pace with AI growth over the coming decades. However, expansion faces significant challenges. Facilities need three core inputs: GPUs, power, and cooling, each of which brings its own constraints.


Datacenters need access to vast quantities of reliable (and cost-efficient) electricity to power the chips, and water to cool them. Geography matters too: hyperscalers look for cool climates; flat, stable topography; minimal natural-disaster risk; and sometimes proximity to densely populated areas for delay-sensitive inference. These requirements limit the amount of land suitable for datacenters.


Datacenters also impose substantial environmental costs, raising questions about their long-term sustainability. Compute operations strain power grids and drive increased fossil fuel consumption. Hyperscalers cool GPUs by evaporating massive amounts of fresh water; the vapor escapes into the atmosphere, removing that water from the local watershed and forcing datacenters to continuously source more. Rivers dry up, agriculture withers, and droughts intensify.


Building data centers in space might sound like a CEO’s empty PR promise, but we're rapidly approaching the point where it becomes the economically viable choice. With SpaceX slashing launch costs and Starcloud already operating real hardware in orbit, what once sounded like sci-fi fantasy could become reality in the next 5–10 years.

Bull Case


Let’s zoom in on the latter two inputs: power and cooling.


Electricity in space is harnessed through solar energy, which in orbit is abundant, free, and unobstructed by atmosphere or weather. Satellites can follow orbital paths (such as dawn-dusk sun-synchronous orbits) that keep them in sunlight up to 99% of the time. Furthermore, because the atmosphere scatters and absorbs roughly half of incoming solar energy, and because terrestrial panels sit idle at night, the same panel in space can produce around 8x more energy per square meter than it would on Earth.
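
A back-of-the-envelope calculation shows where that ~8x figure comes from (the ~1,360 W/m² solar constant is standard; the ~170 W/m² terrestrial year-round average is an assumed mid-latitude value, not a figure from any specific site):

```python
# Back-of-the-envelope comparison of orbital vs. terrestrial solar yield.
# Assumed values; real numbers vary by orbit, site, and panel technology.

SOLAR_CONSTANT = 1361      # W/m^2, irradiance above the atmosphere
SUNLIT_FRACTION = 0.99     # dawn-dusk sun-synchronous orbits avoid nearly all eclipse

# Terrestrial panels lose output to night, weather, atmosphere, and sun angle.
# A good mid-latitude site averages roughly 150-200 W/m^2 over a full year.
TERRESTRIAL_AVERAGE = 170  # W/m^2, assumed year-round average

orbital_average = SOLAR_CONSTANT * SUNLIT_FRACTION  # ~1,347 W/m^2

print(f"Orbital average:     {orbital_average:,.0f} W/m^2")
print(f"Terrestrial average: {TERRESTRIAL_AVERAGE} W/m^2")
print(f"Advantage:           ~{orbital_average / TERRESTRIAL_AVERAGE:.0f}x")  # ~8x
```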


Orbital datacenters require radiative cooling, since traditional methods based on conduction and convection fail in space’s vacuum. Heat pipes, sealed tubes containing a working fluid such as water or ammonia, act as conduits that carry heat from the chips to large external radiator panels, where it radiates into the vacuum. The fluid then returns to the chips to absorb more heat, and the cycle continues. The ISS uses radiative cooling as well, but because its systems consume far less power, it gets by with much smaller panels than a datacenter would need. Engineers have developed clever origami-like folding techniques to pack the maximum radiator surface area into the limited space available on rockets.
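
To see why the radiators end up so large, consider a rough sizing sketch using the Stefan-Boltzmann law (assuming a panel emissivity of 0.9 and a 300 K radiator temperature; these are illustrative values, not any published design):

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = e * sigma * A * T^4.
# Assumed values; a real design must also account for absorbed sunlight,
# Earth's infrared emission, and radiating from both faces of each panel.

SIGMA = 5.67e-8      # W/m^2/K^4, Stefan-Boltzmann constant
EMISSIVITY = 0.9     # assumed high-emissivity radiator coating
RADIATOR_TEMP = 300  # K, assumed panel operating temperature

heat_load = 1_000_000  # W, a hypothetical 1 MW orbital datacenter

flux = EMISSIVITY * SIGMA * RADIATOR_TEMP**4  # ~413 W/m^2 per radiating face
area = heat_load / flux

print(f"Radiating flux: {flux:.0f} W/m^2")
print(f"Required area:  {area:,.0f} m^2")  # ~2,400 m^2 of radiating surface
```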


Space offers what Earth cannot: infinite room to scale and boundless, reliable energy without consuming land, water, or fossil fuels.

Key Players


Starcloud and SpaceX have partnered and emerged as early leaders in this market. On November 2, 2025, Starcloud launched an NVIDIA H100 GPU into orbit aboard a SpaceX Falcon 9 rocket. The fridge-sized system weighed in at 60 kg and marked a milestone: the H100 is 100x more powerful than any computing system previously deployed in space. Starcloud plans to test the platform’s model training and fine-tuning capabilities, as well as inference on Google’s Gemma model. The next iteration is planned for sometime in 2026: a micro datacenter housing multiple H100s and at least one NVIDIA Blackwell GPU.


While Starcloud’s progress is encouraging, the ultimate determinant of space datacenter viability is launch cost. SpaceX’s ability to rapidly produce and reuse rockets has given it overwhelming market share; competitors like Blue Origin and Rocket Lab trail far behind in launch frequency, reusability, and manufacturing capability. Elon Musk and his team have reduced the cost of launching to space by 95%, from $50,000 per kilogram in the 1970s to $2,500 today, and aim to eventually reach $50 per kilogram with Starship, their soon-to-be flagship vehicle.


Rapid production and refurbishment timelines drive SpaceX's cost advantage. Their factory in Hawthorne, California assembles a Falcon 9 booster in weeks, and a recovered booster can be turned around and relaunched in under nine days. Musk plans to mass-produce Starships with 1,100-cubic-meter cargo bays capable of delivering 100-ton payloads to orbit.
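
To put those figures in context, a quick sketch (using the cost-per-kilogram milestones above and the 60 kg mass of Starcloud’s demonstrator) shows how launch price reshapes the economics:

```python
# Launch cost per payload at the cost-per-kilogram milestones cited above.

COST_PER_KG = {
    "1970s expendable rockets": 50_000,  # $/kg
    "Falcon 9 today":            2_500,  # $/kg
    "Starship target":              50,  # $/kg
}

SATELLITE_MASS_KG = 60         # Starcloud's H100 demonstrator
STARSHIP_PAYLOAD_KG = 100_000  # planned ~100-ton payload to orbit

for era, price in COST_PER_KG.items():
    print(f"{era:>25}: ${price * SATELLITE_MASS_KG:>12,.0f} per 60 kg satellite")

# At the $50/kg target, launching a full 100-ton Starship payload costs ~$5M.
print(f"Full Starship payload at $50/kg: ${50 * STARSHIP_PAYLOAD_KG:,.0f}")
```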

Challenges & Limitations


Despite the current optimism, questions about the practicality and feasibility of these datacenters persist. In orbit, radiation that Earth's atmosphere would normally absorb interferes with chips' circuits and memory, and long-term exposure will significantly shorten hardware lifespans. Radiation-resistant components exist but command steep premiums. Moreover, how will hyperscalers replace or upgrade chips in orbit? Operating existing datacenters is extremely labor-intensive; failures occur frequently and require on-site troubleshooting.


Will hardware systems become more robust over time? Will advancements in robotics enable automated maintenance and eliminate the need to return faulty systems to Earth or send personnel into space? These questions remain unanswered but will be central to the viability of space-based datacenters.


Lastly, routing compute to an orbital datacenter increases latency for most users on Earth. Inference loads, like a self-driving car deciding how to avoid a pedestrian, tend to be delay-sensitive and comprise around 70% of current AI compute demand. That said, while orbital routing typically adds tens of milliseconds of latency, inference requests originating far enough from any terrestrial datacenter can actually complete faster through orbital infrastructure, because a satellite overhead can be physically closer than the nearest ground facility. Nonetheless, the marginally higher latency should not matter for the majority of use cases.
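
A speed-of-light sketch explains why the penalty lands in that range (assuming a ~550 km orbital altitude, similar to Starlink’s; real latency adds ground-station hops, routing, and processing on top of these physical floors):

```python
# Minimum round-trip time to a LEO satellite, limited by the speed of light.
# Assumed altitude and slant ranges; actual latency is higher once ground
# stations, routing, and queuing are included.

C = 299_792  # km/s, speed of light in vacuum

ALTITUDE_KM = 550  # assumed LEO altitude, similar to Starlink

for slant_km in (ALTITUDE_KM, 1000, 2000):  # directly overhead vs. near-horizon
    rtt_ms = 2 * slant_km / C * 1000
    print(f"Slant range {slant_km:>4} km -> minimum RTT {rtt_ms:.1f} ms")
```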

Ornn Adds the Financial Layer


Ornn is cementing its position in the AI ecosystem by offering instruments that allow compute consumers, providers, and the lenders financing infrastructure to hedge against price volatility. Partnerships with cloud providers give Ornn access to extensive GPU pricing data unavailable to the public, enabling it to precisely price futures that let AI consumers lock in future costs. Given the scale of corporate compute spending, disruptions like energy price spikes or semiconductor supply chain issues could significantly erode profit margins. These instruments also provide datacenters and hyperscalers with revenue certainty, which in turn secures the more favorable financing terms crucial to the infrastructure expansion needed to meet AI demand.
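
To illustrate the mechanics, here is a hypothetical cash-settled compute future; the numbers and contract design are illustrative assumptions, not Ornn’s actual products:

```python
# Hypothetical cash-settled compute future: the buyer's futures gain or loss
# offsets moves in the spot price, locking in an effective cost per GPU-hour.
# Illustrative numbers only; not Ornn's actual contract design.

STRIKE = 2.00       # $/GPU-hour agreed today
QUANTITY = 100_000  # GPU-hours to be purchased at settlement

for spot in (1.50, 2.00, 2.75):  # possible spot prices at settlement
    futures_pnl = (spot - STRIKE) * QUANTITY  # long future pays spot - strike
    spot_cost = spot * QUANTITY               # cost of buying at the spot price
    effective = (spot_cost - futures_pnl) / QUANTITY
    print(f"Spot ${spot:.2f}: effective cost ${effective:.2f}/GPU-hour")
# Effective cost is $2.00/GPU-hour in every scenario.
```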


Ornn's derivatives will become a necessity as orbital datacenters deploy. The massive capital expenditure required to develop infrastructure in space makes Ornn's role as the leader in compute downside protection crucial. Ornn can also hedge terrestrial hyperscalers against the risk of their datacenters becoming obsolete if compute migrates to space and legacy hardware proves incompatible with orbital operations.


Looking Forward


Major innovations from SpaceX and Starcloud make orbital datacenters feasible within the next 5-10 years, though significant uncertainty remains around how they would ultimately manifest. Specific datacenter architectures and operational designs are still undetermined. Given Elon Musk's track record of vertical integration, SpaceX could build orbital compute infrastructure independently rather than partnering with Starcloud. Edge cases also exist where chip efficiency gains could keep ground-based compute economically and environmentally superior. Overall, orbital datacenters represent a promising development worth monitoring over the next few years, but several variables and technical challenges must be resolved before they become an operational reality.