Defining Mission Critical
Fifty years later: Apollo 11 — from the Earth to the moon — and back.
In my last column I discussed the new space race by Amazon and SpaceX to launch tens of thousands of satellites to provide internet access from anywhere on Earth. This being the 50th anniversary of the Apollo 11 mission, I felt the need to step back to July 20, 1969, when Apollo 11 landed on the moon and Neil Armstrong became the first person to set foot there. By now you have probably seen or read the many highly detailed stories of the mission's other astronauts, Buzz Aldrin and Michael Collins, as well as the massive efforts, personal sacrifices, and successes (and failures) of the hundreds of thousands of people who worked on the Apollo project: building the Saturn launch rockets, the Apollo spacecraft, and the Lunar Lander. But what about the computers and software that solved the ultimate mission critical challenge behind that historic event: how to safely get astronauts to the moon and back? I thought we might take a brief look at how they accomplished this milestone journey without access to the internet or the multitude of computing and communications resources we take for granted today.
The 1960s became the decade of the U.S. space race to the moon, driven by national pride and massive government funding. As a result, the computing hardware available for NASA's lunar mission was cutting-edge, state-of-the-art technology, and leading firms such as IBM, Honeywell, Raytheon, and many others saw this as their priority mission and brought their best technology and talent to the project.
Even educational institutions such as MIT became involved with the development of the computer and software. Nonetheless, the computers had a tiny fraction of the capacity and capabilities of today's smartphones. According to NASA, the onboard Apollo Guidance Computer (AGC) had memory (stored in so-called rope memory modules) measured in kilobytes (yes, kilobytes, not megabytes or gigabytes) and a 1 MHz clock speed; it weighed 70 lbs (on Earth), operated on 28 VDC generated from fuel cells, and required only 70 watts. To put things into further technology time perspective, the program code was loaded into the AGC from punch cards.
Meanwhile, back at Mission Control, mainframes such as the venerable IBM 360, UNIVAC, and other "Big Iron" of the time did the heavy number crunching for "real time" trajectory tracking and calculated any guidance updates, which were then uploaded into the onboard AGC.
According to NASA, twin IBM 360s were used in the Goddard Real-Time System (GRTS). In May 1969, during Apollo 10, the predecessor mission that flew around the moon, operators made checkpoint tapes of current data every 1.5 hours. A failure of the Mission Operations Computer occurred on May 20, 1969, at 12:58. By 13:01, the standby had been brought up using a checkpoint tape made at 12:00. No significant problems resulted, a real-world demonstration of the hardware redundancy, employed throughout the Apollo era, that backed up the motto "Failure is Not an Option."
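That checkpoint-and-failover pattern is still a staple of fault-tolerant design. A minimal sketch of the idea in Python (the class and field names are illustrative, not taken from the GRTS):

```python
import copy

class MissionComputer:
    """Toy model of checkpoint-based redundancy (names illustrative)."""
    def __init__(self):
        self.state = {"hours": 0.0, "events": []}
        self.last_checkpoint = None

    def checkpoint(self):
        # The equivalent of writing a checkpoint tape: a full state snapshot.
        self.last_checkpoint = copy.deepcopy(self.state)

    def advance(self, hours, note):
        self.state["hours"] += hours
        self.state["events"].append(note)

primary = MissionComputer()
primary.advance(12.0, "nominal ops")
primary.checkpoint()               # e.g., the 12:00 tape
primary.advance(0.97, "more ops")  # work done after the checkpoint

# Primary fails; the standby restores from the most recent checkpoint.
standby = MissionComputer()
standby.state = copy.deepcopy(primary.last_checkpoint)
print(standby.state["hours"])  # 12.0 — at most one checkpoint interval is lost
```

The trade-off is exactly the one Apollo 10's operators accepted: anything done after the last checkpoint is lost, so the checkpoint interval bounds both the recovery time and the data at risk.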
The astronauts interfaced with the AGC via the display and keyboard unit (DSKY) (Figure 1). The DSKY was not a virtual alphanumeric keyboard on the high-resolution graphic touchscreen display we take for granted today. It had only numeric keys, a handful of other related input keys, and a five-line numeric display, which required the astronauts to use an attached code reference card or memorize the code numbers for the more common command sequences (ever try to use a touchscreen keyboard wearing spacesuit gloves?). (Interesting side note: someone is trying to sell a new DSKY on eBay for $2,995.)
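The flavor of that all-numeric interface can be mimicked with a simple lookup table. In the sketch below, the code pairs and their meanings are invented for illustration; they are not actual AGC assignments:

```python
# Toy sketch of a DSKY-style interface: commands are entered as short
# numeric codes instead of typed text. These code assignments are
# placeholders invented for illustration, not real AGC codes.
COMMANDS = {
    ("16", "36"): "display mission elapsed time",
    ("37", "00"): "select new program",
}

def dsky_key_in(verb: str, noun: str) -> str:
    # An unrecognized code pair lights the operator-error indicator.
    return COMMANDS.get((verb, noun), "operator error")

print(dsky_key_in("16", "36"))  # display mission elapsed time
print(dsky_key_in("12", "34"))  # operator error
```

With only a numeric keypad, the entire command vocabulary has to live either in the operator's memory or on a reference card, which is exactly the constraint the astronauts worked under.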
The onboard computers were used not only to calculate positional guidance and read critical system functions; they also provided direct control of the onboard course-correction rockets of the Apollo Command Module, named Columbia, as well as the main landing and return thrusters of the Lunar Excursion Module (LEM or LM), named Eagle. Much was accomplished with the relatively limited computing and communications technologies that supported, controlled, and guided the Apollo flights.
Of course, voice, video, and data communication technology for telemetry and computer data existed in 1969, and live video of the first step on the moon was broadcast from a slow-scan camera mounted on the exterior of the LEM. It produced only 320 scan lines at 10 frames per second, which could be transmitted using just 500 kHz of bandwidth. Nevertheless, while the resolution was low, the images are historic (and they did not require any emojis for emphasis).
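Those numbers hang together on the back of an envelope. Assuming a roughly square raster (about 320 samples per line) and the common analog rule of thumb that bandwidth is about half the sample rate, both of which are approximations I am supplying here, the quoted figures line up:

```python
# Back-of-envelope check of the slow-scan video figures quoted above.
# Assumptions (mine, not NASA's): ~320 samples per line (square raster)
# and analog bandwidth ≈ half the sample rate.
lines = 320
samples_per_line = 320
frames_per_second = 10

sample_rate = lines * samples_per_line * frames_per_second  # samples/s
bandwidth_khz = sample_rate / 2 / 1000
print(bandwidth_khz)  # 512.0 kHz, close to the quoted 500 kHz
```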
While edge data centers, along with 5G networks, have recently become the latest trend for providing the lowest latency to deliver HD video to end users and support smart devices, the Eagle lunar module and the Columbia Apollo Command Module were effectively the earliest and most extreme examples of edge computing. They were mostly able to fulfill the mission's technical requirements despite the roughly 2,500-millisecond round-trip latency imposed by the 238,855-mile average distance from Earth to the moon.
There were only a few more trips to the moon after Apollo 11, including the almost as famous, aborted, but fortunately non-fatal Apollo 13 lunar mission. We have not gone back to the moon for many reasons: social, financial, and political. Moreover, by the early '70s most major TV stations had stopped covering live launches. The NASA space program's research and engineering produced a huge amount of useful technology, far beyond bigger rockets; however, much of the public did not see the benefits. Apollo 17, in December 1972, was the last human lunar landing. NASA ended the Space Shuttle program in 2011 and no longer has its own launch vehicle program. Nonetheless, spacecraft and booster rocket technology has slowly advanced and is now relatively mature. It has become a market for commercial enterprises: in February 2018, Elon Musk's SpaceX launched a Falcon Heavy rocket toward Mars carrying his old Tesla Roadster with a dummy astronaut named Starman.
Now, 50 years after Apollo 11, NASA is getting ready to send astronauts back to the moon. It recently announced a series of awards as part of its 2024 moon-return ambitions. The awards include SpaceX and Jeff Bezos' Blue Origin, alongside traditional contractors such as Boeing and Lockheed Martin. Moreover, in July, Virgin Galactic (the space tourism company founded by billionaire Richard Branson) announced it is preparing to enter the stock market by the end of 2019.
Besides sending people to the moon again, NASA has Mars on its agenda, and it is going to need a continuous power supply that can keep humans alive in space for years at a time without refueling. Small nuclear reactors seem like a logical choice. Los Alamos National Laboratory, in conjunction with NASA, has come up with a potential solution, known as Kilopower: a small nuclear reactor that may one day power a colony on the moon, Mars, or beyond. At present, initial designs envision 10 kW modules, which could be tied together for greater capacity or redundancy.
According to NASA, Kilopower is a small, lightweight fission power system capable of providing up to 10 kilowatts of electrical power continuously for at least 10 years. The prototype power system tested in 2018 used a solid, cast uranium-235 reactor core about the size of a paper towel roll. The experiment culminated with a 28-hour, full-power test that simulated a mission, including reactor startup, ramp to full power, steady operation, and shutdown (https://go.nasa.gov/2JKBCO8).
However, there are already more than 500,000 pieces of debris, or "space junk," being tracked by NASA as they orbit the Earth. And, as I initially indicated, it is apparently going to get a lot more crowded up there pretty soon, with thousands of Amazon and SpaceX satellites and whoever else jumps into the game (Facebook, or perhaps Lyft or Uber?).
While so far none of these announcements seems to indicate that the purpose of these satellites is anything more than providing global internet access, as AI and ML processors (CPUs, GPUs, TPUs) and solid-state storage technology progress, why not include meshed edge computing functionality in larger satellites? Obviously, that would require a lot of power, so small nuclear reactors again seem like a logical choice (of course, if you are paranoid, or even if you are not, this might sound vaguely like Skynet…).
The bottom line
Meeting the challenges of going to the moon gave rise to the term "moonshot" for more mundane tasks. Here on Earth, more and more gigawatt-level data center sites are being developed and built at an astounding rate (especially in areas that offer tax and other economic development incentives); however, in some cases, data center builders are beginning to see pushback. Amsterdam, which had been offering incentives to bring in new data centers, was apparently too successful: in July it announced that it is halting new sites until the end of the year while it reevaluates the situation.
Google was granted a patent in 2009 for a floating data center cooled by seawater and powered by waves. A prototype was rumored to have been built in 2013 but so far has not seen production. Microsoft CEO Satya Nadella has called out-of-the-box ideas such as Project Natick "relevant moonshots." Natick, which is intended to accommodate exponential growth in demand for cloud computing infrastructure near population centers, has produced several experimental submerged data centers. Others, such as HPE, have applied the "moonshot" name to some of their servers as well.
While interesting, of course, there is still the issue of generating and delivering the power, not to mention that all of that power will still be turned into waste heat. While dumping waste heat into the water instead of the air is more energy efficient than traditional compressor-based mechanical cooling, waterborne data centers still do not help climate change. So perhaps space may be the better eco-friendly choice for computing, using Kilopower or other nuclear power.
We have all seen the benefits of the outgrowth of the computing and communications technologies originally developed and used for the Apollo missions. However, the rate of progress in space travel technology over the past 50 years has not matched the accelerated rate of development in computer and communications technology over the last 25 years. To most of the general population, developments such as the internet and smartphones with facial recognition, as well as "smart speakers" that let users say "Hey Google" or "Alexa" to search for movies or TV programs or to control lights and appliances, have more direct, tangible benefits.
Many other benefits have come from research and development in science, medicine, and other technologies that were directly or indirectly the result of the last 50 years of space programs, some of which have improved the human condition. In most cases, however, they are not as closely associated with Apollo 11 or other space missions, nor as widely appreciated.
As we enter the next decade, artificial intelligence and machine learning are available as commodity cloud services from Google, IBM's Watson is providing health care-related services, and Facebook is launching its own cryptocurrency, Libra. So while the words "one small step for man, one giant leap for mankind" aptly state the significance of the Apollo 11 milestone, combined with renewed ambitions to go to Mars, it begs the question: what will the next 50 years bring humanity?