Orbital data center
  • Axiom Space + Red Hat flew a prototype “orbital data center” (AxDCU‑1) to the ISS in Aug. 2025, using Red Hat Device Edge/MicroShift to run containerized workloads in orbit—part of a roadmap to deploy multiple Orbital Data Center (ODC) nodes by 2027. TechRadar
  • ESA’s ASCEND study (led by Thales Alenia Space) says space data centers look technically feasible and could be eco‑competitive—but only if launch emissions drop by ~10x; space DCs would use abundant solar and no water for cooling. Thales Alenia Space
  • Launch costs keep falling: SpaceX publicly lists smallsat rideshare at $325k for 50 kg to SSO and $6.5k/kg thereafter, making compute/storage payloads more affordable to orbit. SpaceX
  • In‑orbit edge computing is real today: HPE’s Spaceborne Computer‑2 (COTS servers) returned to the ISS in 2024 with 130 TB flash; past workloads showed ~30,000× downlink reduction by sending insights instead of raw data. Hewlett Packard Enterprise
  • AWS ran Snowcone edge hardware on the ISS (Ax‑1) and updated a 7 GB model on‑orbit, validating maintenance and ML in space. Amazon Web Services, Inc.
  • Connectivity is maturing: Kepler is building an optical inter‑satellite relay with on‑orbit compute capacity; Axiom plans to fly initial ODC nodes on Kepler’s network. Kepler
  • Specialist vendors now sell “space‑resilient” compute/storage: Unibap (SpaceCloud iX5), Ramon.Space (NuPod/NuStream), OrbitsEdge (Edge1), Ubotica (SPACE:AI), SkyServe (STORM on D‑Orbit). dorbit.space
  • GPU‑class on‑orbit AI is already in service: OroraTech flies NVIDIA Jetson for real‑time wildfire detection. NVIDIA Blog
  • Lunar experiments are underway: Lonestar Data Holdings flew data‑storage tests in cislunar space (2024) and integrated a hardware “Freedom” payload for Intuitive Machines’ IM‑2 mission in 2025 (targeting disaster‑recovery/archival). spaceflorida.gov
  • Latency isn’t always worse: For long‑haul routes, LEO constellations with laser links can beat fiber because light travels faster in vacuum than in glass (classic result from Handley 2018). ACM Digital Library
  • But heat rejection is hard: The ISS external thermal system can radiate ~70 kW with large ammonia‑cooled radiators—illustrating the size/complexity of dumping even modest data‑center‑scale heat in space. NASA
  • Regulation is tightening: The FCC 5‑year deorbit rule is now policy; space activities by private firms remain under Article VI “authorization & continuing supervision” of a State. Federal Communications Commission
  • Data sovereignty is complicated: The GDPR’s territorial scope and the U.S. CLOUD Act can both apply regardless of server location—including if those “servers” are in orbit. GDPR
  • Environmental externalities matter: Studies warn expanding launches/re‑entries inject black carbon and metals into the stratosphere, with potential ozone and climate impacts. Agupubs
  • Bottom line: The first “servers in space” exist as prototypes and edge nodes; MW‑class orbital data centers are a 2027–2035 story—if thermal, power, legal, and environmental hurdles are met. Axiom Space

The future will be servers in space: a definitive 2025 roundup

The idea sounds like sci‑fi clickbait until you look at what launched this year. In August 2025, Axiom Space delivered a Kubernetes‑managed data‑processing unit (AxDCU‑1) to the ISS to evaluate “orbital edge” workloads with Red Hat Device Edge/MicroShift—container orchestration, delta updates, rollback, and self‑healing in a low‑bandwidth, high‑radiation environment. Axiom’s public roadmap calls for multiple Orbital Data Center (ODC) nodes by 2027, with two initial nodes riding on Kepler’s optical data‑relay network to add in‑space compute and storage closer to satellites that generate the data. Kepler

Why now? Three converging trends

  1. Cheaper lift. Falcon 9 rideshare pricing publicly starts at $325k for 50 kg to SSO (then $6.5k/kg)—a clear, published price many startups can plan around. Starship aims to push costs further down, but the actionable price signal is Falcon rideshare today. SpaceX
  2. On‑orbit compute is credible.
    HPE Spaceborne Computer‑2 uses COTS servers and big flash on the ISS, showing massive downlink savings (compute the science in orbit, send only results). Hewlett Packard Enterprise
    AWS Snowcone ran in space (Ax‑1), including a live 7 GB model update—demonstrating remote ops and ML maintenance off‑planet. Amazon Web Services, Inc.
    OroraTech and others already run GPU‑class inference on satellites for real‑time wildfire alerts. NVIDIA Blog
  3. Backbone is coming together. Kepler has validated optical inter‑satellite links and is building a “real‑time optical backbone,” exactly the sort of fabric an orbital cloud needs to talk to itself and downlink selectively. Kepler

Who’s doing what (by category)

Orbital data center prototypes

  • Axiom Space (ODC) — AxDCU‑1 on ISS (2025); plans for multiple ODC nodes and ISS‑class deployments around 2027; uses Red Hat Device Edge/MicroShift. ISS National Lab
  • HPE Spaceborne Computer‑2 — third iteration to ISS (2024), 130 TB flash, prior experiments cut downlink by ~30,000×. Hewlett Packard Enterprise
  • AWS — ran Snowcone in orbit (Ax‑1) for edge ML and workflow validation. Amazon Web Services, Inc.

Space compute & storage vendors

  • Unibap (SpaceCloud iX5) — radiation‑tolerant edge computers; flew AWS services in space in 2022 tests. Unibap Space Solutions
  • Ramon.Space — space‑resilient compute/storage (NuPod/NuStream) and software‑defined comms (NuComm) aimed at building “cloud‑like” in‑space services. RAMON.SPACE
  • OrbitsEdge — ruggedized high‑performance “orbital edge” platforms; first Edge1 mission slated after a 2025 reveal. OrbitsEdge
  • Ubotica (SPACE:AI) — onboard AI cameras/computers for live EO insights. ubotica.com
  • SkyServe — deploys edge‑AI (STORM) on D‑Orbit carriers for in‑orbit EO analytics. dorbit.space

Data relay / “Internet for space”

  • Kepler — building an optical data‑relay network with on‑orbit compute; first tranche targeted for service kickoff with launches around 2025. Kepler

Secure storage / sovereignty concepts

  • Cloud Constellation (SpaceBelt) — long‑standing plan for a LEO “data vault” constellation dedicated to space‑based cloud storage. NewSpace Index

Lunar experiments

  • Lonestar Data Holdings — cislunar data‑storage tests in 2024; “Freedom” hardware payload integrated for IM‑2 (2025) focusing on disaster‑recovery/archival services. (Note: operations on the lunar surface remain an active, high‑risk demonstration.) spaceflorida.gov

Why put servers in space at all?

  • Energy & water. Solar is continuous in many orbits, and space DCs don’t require water for cooling—a big deal as terrestrial sites hit water constraints. (ASCEND’s caveat: to be net‑green, launcher emissions must drop ~10×.) Thales Alenia Space
  • Bandwidth triage. Processing at the “source” (a satellite or an in‑orbit hub) lets you discard noise and transmit only insights, which HPE and AWS demoed on the ISS. Hewlett Packard Enterprise
  • Latency (for the right routes). For intercontinental paths, LEO laser backbones can beat fiber’s refractive‑index penalty (light is ~31% slower in glass), yielding competitive end‑to‑end delays. That’s the classic Handley 2018 result many designs chase. ACM Digital Library
  • Resilience & sovereignty. Space‑hosted archives/DR sites are physically remote from regional disasters and—depending on legal architecture—may offer unique jurisdictional separation (with important caveats below). NewSpace Index
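The refractive-index penalty in the latency bullet is easy to sanity-check. A minimal sketch using an idealized great-circle distance (real fiber routes are longer, and real LEO paths add altitude and hops, so treat these as best-case bounds):

```python
# Sketch: one-way propagation delay, vacuum vs. silica fiber, over an
# idealized great-circle path. Ignores routing detours, switching delay,
# and the extra path length of an actual LEO constellation.

C = 299_792.458        # speed of light in vacuum, km/s
FIBER_INDEX = 1.47     # typical refractive index of silica fiber

def one_way_ms(distance_km: float, refractive_index: float = 1.0) -> float:
    """Propagation delay in milliseconds at c / refractive_index."""
    return distance_km / (C / refractive_index) * 1000.0

london_ny_km = 5_570   # approximate great-circle distance
print(f"fiber:  {one_way_ms(london_ny_km, FIBER_INDEX):.1f} ms")  # ≈ 27.3 ms
print(f"vacuum: {one_way_ms(london_ny_km):.1f} ms")               # ≈ 18.6 ms
```

That gap, compounded over many hops, is what the Handley-style LEO backbone designs are chasing.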

The hard engineering truths

  • Thermal rejection is the bear. In vacuum, radiators are your only practical sink. The ISS’s external system rejects ~70 kW with vast ammonia‑cooled panels; scaling to MW‑class AI in LEO implies football‑field‑scale radiators or very high rejection temperatures—both heavy and complex. NASA
  • Radiation & reliability. Single‑event upsets and total‑dose effects drive hardened designs (ECC/TMR) or restart‑tolerant software. That’s why suppliers like Ramon.Space and Unibap exist. RAMON.SPACE
  • Operations & upgrades. Launch mass/volume limits, on‑orbit servicing, and software‑only refresh cycles complicate lifecycle economics versus swapping a rack on Earth.
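To get a feel for why thermal rejection dominates, the Stefan-Boltzmann law gives the radiator area needed at a given rejection temperature. A rough sketch that ignores solar and Earth heat loads and assumes one-sided, high-emissivity panels (so real areas would be larger):

```python
# Sketch: radiator area needed to reject P watts in vacuum via thermal
# radiation (Stefan-Boltzmann law). Illustrative only: ignores incoming
# solar/albedo heat and assumes a single radiating face.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Area required to radiate power_w at surface temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW at 300 K takes roughly half a football field of panel:
print(round(radiator_area_m2(1e6, 300.0)))  # ≈ 2419 m^2
# The T^4 term is why high rejection temperatures are so attractive:
print(round(radiator_area_m2(1e6, 400.0)))  # ≈ 765 m^2
```

The ISS's ~70 kW system already needs large deployable ammonia-loop radiators; the numbers above show why MW-class AI in orbit forces either huge structures or hot (and harder) thermal designs.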

Economics & the environment

  • Capex to orbit has a spreadsheet now. With a public $6.5k/kg rideshare rate, you can price a 100 kg edge node to SSO at ~$650k for launch alone (ex‑integration/insurance). Starship could change the curve later; plan with the rate you can actually buy today. SpaceX
  • Externalities aren’t free. Rocket exhaust (especially kerosene BC) and re‑entry metals (Al, others) accumulate in the stratosphere; recent work in JGR, PNAS, and Nature flags warming/ozone risks if launch/re‑entry counts keep rising. Any “green” space DC story must squarely address this. Agupubs
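The published rideshare pricing really is spreadsheet-simple. A sketch of the cost model described above (launch only; integration, insurance, and the spacecraft bus itself are all extra):

```python
# Sketch of the published smallsat rideshare pricing cited above:
# $325k covers the first 50 kg to SSO, each additional kilogram is
# $6.5k. Launch cost only; excludes integration, insurance, and bus.

BASE_PRICE_USD = 325_000
BASE_MASS_KG = 50
EXTRA_RATE_USD_PER_KG = 6_500

def rideshare_launch_cost(payload_kg: float) -> int:
    """Launch cost in USD for a payload of payload_kg to SSO."""
    extra_kg = max(0.0, payload_kg - BASE_MASS_KG)
    return int(BASE_PRICE_USD + extra_kg * EXTRA_RATE_USD_PER_KG)

print(rideshare_launch_cost(100))  # 100 kg edge node → 650000
```

That $650k figure for a 100 kg node is the launch line item only; budget the payload itself, qualification, and operations on top.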

Law & policy you can’t ignore

  • Debris rules: The FCC 5‑year deorbit requirement is now on the books for U.S.‑licensed LEO satellites; design your ODC nodes with post‑mission disposal from day one. Federal Communications Commission
  • Who’s in charge: Under Outer Space Treaty Article VI, private orbital DCs operate only with authorization and continuing supervision by a State Party—there is no “lawless orbit.” UNOOSA
  • Data laws follow you: The GDPR (Art. 3) and the U.S. CLOUD Act can both reach data held outside their territories when certain criteria are met. Cloud‑in‑space ≠ outside compliance. GDPR

What will actually run in space (first)?

  • Filter/triage near sensors: EO/SAR/IR/RF satellites offload raw streams to an ODC for denoising, compression, detection, and tasking—forward only what matters. (This is already happening at smaller scale with Unibap, Ubotica, SkyServe, OroraTech.) NVIDIA Blog
  • Long‑haul acceleration: For specific city‑pairs, space backbones with optical crosslinks can shave milliseconds; put caches, brokers, or inference endpoints in space where the network is fastest. ACM Digital Library
  • Cold archives & DR: Lunar or highly resilient LEO/MEO stores for disaster recovery and tamper‑resistant time capsules (à la Lonestar) will rise before anyone hosts latency‑sensitive web apps off‑planet. PR Newswire
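In code, the filter/triage pattern from the first bullet is just a scoring loop in front of the downlink. A toy sketch in which `detect()` is a hypothetical stand-in for a real onboard model (a wildfire or change detector, say):

```python
# Toy sketch of onboard triage: score each raw frame with an onboard
# detector and downlink only frames above a confidence threshold.
# detect() is a hypothetical placeholder, not a real model.

from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Frame:
    frame_id: int
    data: bytes

def detect(frame: Frame) -> float:
    """Placeholder detector: returns a confidence score in [0, 1]."""
    return 0.9 if frame.frame_id % 100 == 0 else 0.01  # fake "hits"

def triage(frames: Iterable[Frame], threshold: float = 0.5) -> Iterator[Frame]:
    """Yield only the frames worth spending downlink budget on."""
    for frame in frames:
        if detect(frame) >= threshold:
            yield frame

frames = [Frame(i, b"") for i in range(1000)]
kept = list(triage(frames))
print(f"downlinked {len(kept)} of {len(frames)} frames")  # 10 of 1000
```

Everything else (compression, tasking, forwarding) layers on top of this same discard-early structure.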

Realistic timeline

  • 2025–2027: ISS‑hosted prototypes (Axiom, HPE) and hosted payloads mature; first ODC nodes ride on relay networks; more satellites adopt onboard AI. ISS National Lab
  • 2027–2030: Multi‑node ODC clusters with optical crosslinks appear; routine “process‑in‑orbit” for EO, plus initial DR/archive services with legal/compliance wrappers. Axiom Space
  • 2030+: If launch emissions fall, on‑orbit assembly (robotics) and larger radiators make higher‑power DCs credible; ASCEND envisions GWh‑scale capacity by mid‑century. Thales Alenia Space

How to think about “servers in space” if you’re an architect today

  1. Workload fit: Prioritize bandwidth‑bound (not latency‑bound) jobs where in‑situ processing saves $$ and time (e.g., change detection, encryption/signing, smart discard). Hewlett Packard Enterprise
  2. Data governance: Map GDPR/CLOUD Act reachability and export‑control constraints before you move a single byte off‑planet. GDPR
  3. Thermal/power realism: Assume radiator‑dominated envelopes and tight power budgets; MW‑scale AI inference/training belongs on Earth until you can prove heat rejection. (The ISS’s ~70 kW is a sobering benchmark.) NASA
  4. Ops model: Favor containerized, OTA‑updatable payloads with strong fault tolerance (A/B images, delta updates, automatic rollback), as Axiom/Red Hat are demonstrating. ISS National Lab
  5. Sustainability: Account for launch + re‑entry externalities in your carbon model; look for lower‑emission launchers and long‑life, serviceable platforms. Agupubs

Companies, projects & proofs you should watch

  • Axiom Space ODC / AxDCU‑1—Kubernetes‑style ops on orbit; multi‑node roadmap. ISS National Lab
  • Kepler Network—optical backbone + on‑orbit compute for a true “internet for space.” Kepler
  • HPE Spaceborne Computer‑2—COTS servers proving practical value in bandwidth‑starved regimes. Hewlett Packard Enterprise
  • Unibap, Ramon.Space, OrbitsEdge, Ubotica, SkyServe—the vendor ecosystem that will supply your flight‑qualified compute. RAMON.SPACE
  • Lonestar (lunar DR)—archival semantics and legal frameworks for off‑planet backups. PR Newswire
  • ASCEND—policy‑and‑engineering pathfinder for “green” space DCs in Europe. Thales Alenia Space

Takeaway: The first servers in space are not hypothetical—they’re racked in ISS lockers and riding hosted payload slots today. Over the next 2–5 years, expect orbital “edge hubs” to spread across LEO relay networks and commercial stations, specializing in processing near the source, relaying via lasers, and archiving far from harm. The leap to mega‑watt orbital clouds hinges on solving heat rejection, launch emissions, and compliance—hard problems, but now on engineering roadmaps rather than in science fiction.

Artur Ślesik

I have been fascinated by the world of new technologies for years – from artificial intelligence and space exploration to the latest gadgets and business solutions. I passionately follow premieres, innovations, and trends, and then translate them into language that is clear and accessible to readers. I love sharing my knowledge and discoveries, inspiring others to explore the potential of technology in everyday life. My articles combine professionalism with an easy-to-read style, reaching both experts and those just beginning their journey with modern solutions.
