Paul Mercina, head of innovation at Park Place Technologies, asks whether we might soon see data centres floating in space
In space exploration, it’s easy to answer the ‘why’ question as mountaineer George Mallory once did about climbing Everest: ‘Because it’s there.’ At first, the practical applications emerging from astronauts’ missions were almost a side benefit to the pursuit of knowledge itself.
Now, we’re in the era of commercialisation. Space is already home to considerable computing action, from satellite telecommunications to emerging cryptocurrency storage. And there are those who see opportunities for more – even the potential for putting rather quotidian data centre capabilities far above us.
When pursued for profit, however, sending up large quantities of hardware needs to survive a detailed cost-benefit analysis, and some technological hurdles remain. The concept of the space data centre is not here yet, but it feels like it’s approaching at the speed of light.
The space equation
The cost outlook for larger-scale, space-based compute and storage is improving. Private spaceflight is driving prices down. Innovations like simplified rocket design, interoperable parts and, flashiest of all, reusable self-landing rockets are slashing costs. SpaceX’s Elon Musk claims launching a satellite has become $300 million cheaper, and the price of sending up equipment has fallen to less than $2,500 per pound to orbit.
At the same time, the miniaturisation of technology – the reason we can carry 128 GB of storage on a key ring – makes it possible to launch more capacity within an ever-shrinking footprint. (We’ll leave aside the problems with Moore’s Law here.) In fact, sending up an iPhone today would put a more powerful computer in space than anything flown during the Apollo era of US spaceflight.
Nonetheless, burning hundreds of thousands of kilograms of fuel to fire an already costly data centre pod more than 35,000 km into geosynchronous orbit is no small undertaking. Only significant advantages would make it worthwhile. Here are the upsides the hopefuls are looking for:
Energy – It’s estimated that data centres consume up to 10% of the world’s energy. This makes the plentiful solar power available in space-based facilities highly attractive.
Cooling – Space is cool, physically. Colder temperatures hold the promise of reducing failure rates in IT equipment and enabling higher processing speeds.
Consistency – The factors making space a harsh environment for humans can be great for computers. There are no storms or other weather extremes, and almost no humidity or dust to gum up the works. What’s more, spinning drives love zero-g.
Access – Putting independent space pods in orbit above developing nations or inhospitable locations could bring a level of service not possible through ground-based systems.
From low maintenance to maintenance free
Space data centre pods would be entirely sealed, much as satellites are today, and likely filled with nitrogen. Because it’s not practical to have systems administrators stop by to swap failed drives, these facilities would need to be entirely self-sufficient and the hardware incredibly reliable.
Significant work already under way across the data centre sector will fuel the capabilities required to take the industry into space. We’re finding that getting human hands off the machines can be incredibly beneficial: the Uptime Institute claims that more than 70% of data centre outages are caused by human error. Data centre infrastructure management (DCIM) is now being pushed toward fully automated, lights-out facilities, which could one day be packaged up and launched. Additionally, high-level, ‘plug and play’ modular data centres are being designed, outfitted and delivered to sites such as Dubai International Airport absolutely ready to go. Why not into orbit?
Microsoft is pushing the envelope with its undersea data centres. Earlier this year, the company sank a functional prototype off Scotland’s Orkney Islands to take advantage of the free cooling of those waters while situating the pod close to about half the world’s computing population.
They acknowledge that lack of maintenance is an issue, but also a savings opportunity. As research project director Ben Cutler said: ‘We’re going to drop it down there, and we’re done. We’re never going to come back to the vendor and say “replace this disk drive.”’
Needless to say, that does away with the three-year OEM support contract. But the failure rates common to off-the-shelf equipment would be enough to compromise the investment in a space-based facility, while bespoke, space-ready hardware presents barriers of its own.
The hard problem
Among the core challenges of putting IT equipment in orbit (or beyond) are the violent forces of take-off and the subsequent exposure to radiation and solar flares. The IT equipment in space today, whether in satellites, the International Space Station (ISS) or a Mars rover, has been hardened to withstand these assaults.
The process is effective, as any of us could surmise based on the reliability of GPS, but it’s expensive. Developing hardened equipment also takes time. That’s why the ISS command and control operates on technology from the late 1980s, and even the astronauts’ laptops are 2007-era.
These issues are manageable when you’re talking about a handful of weather satellites, for example, but would not be viable for true space data centres expected to compete with land-based offerings. The hope is that a cheaper, software-based approach to hardening could allow the use of current, off-the-shelf equipment instead.
One concept is being tested by HPE. The company sent two servers in water-cooled enclosures to the space station. The liquid provides a shell of protection, but the interior workings of the Spaceborne Computer are standard. The company is investigating whether throttling speed or shutting down in the event of a solar flare or radiation hazard could prevent data corruption.
Other approaches rely on redundancy. ConnectX, for example, is introducing a proprietary solution for distributed resiliency to deal with events rendering a system inoperable. The bottom line – we have ideas but not yet a product ready to fill our space data centre dreams.
Actually, this is the hard problem
Manufacturing hardened IT equipment at a reasonable cost and production scale may be a hurdle, but it’s a manageable one. More concerning for the space data centre concept is networking. Sending data from Earth to orbit, or vice versa, is relatively slow.
In 2016, NASA’s space network was capable of 300 Mbps connectivity speeds and was transmitting about 28 TB of information per day. It has since gone through an upgrade, but transmission bottlenecks remain a challenge in many situations, such as when sending the large image files collected by satellite-based telescopes. If we’re to move Google’s server farms to geosynchronous orbit, scientists will need to increase data transmission rates by orders of magnitude.
There are companies with interesting solutions in the works, such as intertwined beams that maximise radio-frequency throughput. NASA itself is testing and improving laser downlink technology, which is faster, smaller, lighter and more secure. Is it enough for widespread consumer use, when most of us are already frustrated with 4G? We’ll find out.
If we can solve this last big problem and accelerate data transmission, edge computing across the Earth’s surface might one day be paired with storage and processing in the final frontier.
This post originated at Data Centre Management magazine, from the same publisher as The Stack.