In my last column, I wrote about my predictions for 2014. While it is all well and good to look ahead, I thought that before we race too eagerly into the future, we should not forget the adage “Those who fail to learn from the mistakes of their predecessors are destined to repeat them,” which in one form or another has been attributed to Winston Churchill.

I have recently been reviewing several older data center properties that need to be refreshed in order to compete in the exploding colocation market. Most were built 20 to 25 years ago, which in computer years roughly equates to 1,000 years past the Paleolithic era of human development. A prime example of that era is the original ENIAC (Electronic Numerical Integrator And Computer), built for the U.S. Army in 1946 to perform artillery ballistic firing calculations, just after “WWII” (World War II, not the Internet 2.0). I originally thought that ENIAC was the first “electronic” computer (built using vacuum tubes), unlike its predecessors, which were electromechanical units (consisting of relays, solenoids, index wheels, gears, and cogs). Those electromechanical units were essentially highly developed mechanical adding machines used to perform somewhat more advanced math functions.

Since my knowledge (even as a self-appointed know-it-all) was a bit limited on events before 1950, I did a Google search, which led me to Wikipedia and a link to a “list of vacuum tube computers” (while I hate to quote Wikipedia as an accurate source, it does make for some interesting reading). It turns out that there was a predecessor vacuum tube computer, built in 1943 and used in England during WWII to break encrypted German military traffic (sent by telegraph!). Even more interestingly, it was named “Colossus” (the first rule of government: you can’t do or build anything without a good code name).

At the risk of digressing a bit early on, I could not help but think how this super-secret Colossus machine offers a bit of a parallel to the recent stories, based on leaked information, about the presumed development of a “quantum” computer purportedly being built by No Such Agency and specifically designed to decrypt virtually any encrypted data. However, in case you want to get past the media hype, the concept and goal of a “quantum computer” is not primarily code-breaking; it is a sort of holy grail for quantum mechanics researchers, based on utilizing the behavior of subatomic particles. Apparently, there is even a bit more mystery behind the more recent history of the “quantum computer.” According to an IEEE article, a company called D-Wave claims to have built a quantum computer called “D-Wave One” that was sold to Lockheed-Martin in 2011. Even more interestingly, it recently built the “D-Wave Two,” which was jointly purchased and shared by NASA and Google in 2013 for research purposes (note that in my last column I did mention that NASA was going to need to adjust to the new budget realities).

Now, getting back to the main “archeological dig” thread of my column: the ENIAC was shortly followed by the more commercially focused “UNIVAC 1” (UNIVersal Automatic Computer), which used over 5,000 tubes and burned out an average of one or more tubes every few days. Its continuous “uptime” was therefore measured only in hundreds of hours. The UNIVAC 1 also had a clock speed of 2.25 MHz, which at the time was tens of thousands of times faster than any mechanical calculating device. It was enormously expensive, which greatly limited its sales before production finally ended in the 1960s. However, “timesharing” via remote access was eventually developed (oddly, no one had yet thought to call it cloud computing) and made available to those who did not have their own mainframe. Nonetheless, despite its limitations, it was the best computing technology of its time, and the race for faster, more powerful, more reliable, and cheaper computing was on.

By the 1960s, mainframes had evolved from vacuum tubes to “solid state” (discrete transistors and then microprocessors). However, unlike the dinosaurs that ruled the Jurassic period, mainframes were not allowed to roam freely and were kept in “glass houses,” AKA the data center. In fact, they shared the space with their keepers: the keypunch, printer, and tape operators, as well as other operations and support personnel (and for those of you who may never have been in an older site, some even had carpet tiles on the raised floor, since personnel complained that the hard floor tiles hurt their feet).

Unlike today, heat densities were relatively low (25 to 35 W/sq ft), since the equipment itself used much less power and a significant portion of the space was occupied by people, lowering the overall average density. However, environmental stability was critical for several reasons: the hardware was fairly delicate, and the manufacturers required a stable 68°F, 50% relative humidity (rh) environment or they would void the support contracts (a paper chart recorder, in the form of a wheel, recorded conditions as proof of compliance or violation).
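As a quick back-of-the-envelope illustration of how those low averages come about, here is a minimal sketch; the load and floor-area figures are hypothetical round numbers, not measurements from any actual site:

```python
# Back-of-the-envelope average power density for a 1960s-style "glass house."
# Both figures below are hypothetical round numbers for illustration only.

it_load_watts = 150_000     # assumed total equipment load for the room (W)
room_area_sqft = 5_000      # assumed raised-floor area, including operator space (sq ft)

average_density = it_load_watts / room_area_sqft
print(f"Average density: {average_density:.0f} W/sq ft")  # prints 30 W/sq ft
```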

The other reason for the tight environmental control was the presence of, and dependence on, paper: punch cards and paper tape for input, and the many impact dot-matrix line printers loaded with continuous “green bar” paper, all of which were located within the data center. Paper was fairly sensitive to environmental changes and could jam the machines. This practice continued for more than 30 years, and more significantly, 68°F and 50% rh became ingrained in the collective memory of the data center industry.

Even today, some people have trouble moving past those deeply entrenched numbers, despite the ever-broadening environmental envelope defined by the ASHRAE TC 9.9 Thermal Guidelines. These guidelines were first issued in 2004 and, as of 2011, are now in their third edition. They now reference a new A4 class of IT equipment capable of operating with intake temperatures of up to 113°F, with the ultimate goal of building some data centers without mechanical cooling wherever climatically possible.
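For readers who want a feel for just how much wider that envelope has become, here is a small sketch that checks an intake temperature against the approximate allowable ranges of the 2011 equipment classes; the limits below are rounded figures for illustration, so consult the Thermal Guidelines themselves for the authoritative values:

```python
# Approximate allowable intake temperature ranges (°F) for the 2011 ASHRAE
# TC 9.9 equipment classes; rounded figures for illustration only.
ALLOWABLE_F = {
    "A1": (59, 90),    # legacy enterprise-class equipment
    "A2": (50, 95),
    "A3": (41, 104),
    "A4": (41, 113),   # the new class referenced above
}

def classes_supporting(intake_f: float) -> list:
    """Return the classes whose allowable range covers the given intake temperature."""
    return [c for c, (lo, hi) in ALLOWABLE_F.items() if lo <= intake_f <= hi]

print(classes_supporting(68))    # the old 68°F setpoint: all four classes
print(classes_supporting(104))   # a 104°F intake: only A3 and A4
```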

Originally, “availability” had very few “9s.” Mainframes required regular and frequent planned downtime for maintenance, as did the support systems (power and cooling). In fact, early models had only one power supply, and most subsequent systems (until the most recent generations) had only a single power path, effectively making even the “best” data center a “Tier 1” design (of course, the concept of tiers did not yet exist). It was not until the 1990s and the advent of dual power supplies in IT equipment that the underpinnings were in place for the Tier System of Availability, created by the late Ken Brill and the Uptime Institute in 1995.

We have clearly moved way beyond the mainframe days of scheduled downtime and delayed batch processing to our 7x24 world of five 9s of uptime and instantaneous response. That being said, we have reached the stage where the data center facility and its physical infrastructure are nearly as redundant and available as they can ever be (assuming a true Tier 4 design). However, in some cases that is no longer enough, and relying on the redundancy of the physical site alone for “continuous” availability is no longer the only, or the ideal, solution. Software has transcended the physical limitations, and even the very definition, of the data center. Over the last 10 years or so, virtualization has expanded beyond its initial target, the server, to include storage and the network, and ultimately the concept of the data center itself with the advent of the software-defined data center.
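Since we toss around “five 9s” so casually, it is worth remembering what each additional 9 actually buys. The short sketch below converts an availability percentage into the downtime it allows per year:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

def downtime_minutes_per_year(availability_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, availability in [("two 9s", 99.0), ("three 9s", 99.9),
                            ("four 9s", 99.99), ("five 9s", 99.999)]:
    print(f"{label} ({availability}%): "
          f"{downtime_minutes_per_year(availability):,.1f} minutes/year")
# Five 9s works out to roughly 5.3 minutes of downtime per year.
```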

The rate of change in what constitutes computing has been accelerating rapidly; however, it is taking a while for the new paradigm (cloud, virtualized, software defined, and whatever comes next) to become part of the collective psyche of some of the designers, builders, and operators of the modern data center as it continues to evolve. The dinosaurs, as large and powerful as they were, became extinct because they could not adapt to changing environmental conditions quickly enough. While we as humans have so far apparently adapted to changing conditions, some in the data center industry are far less comfortable with change.

Besides the general trends toward greater computing performance at lower cost and better energy efficiency, we must also be prepared to make a “quantum leap” in our data center thinking. For example, what I did not mention in my earlier reference to quantum computing is its extreme environmental requirements, which are unlike anything demanded by the silicon-based microprocessor systems we use today. The quantum computer’s “Vesuvius” processor, made by D-Wave, is composed of SQUIDs (Superconducting QUantum Interference Devices), each analogous to a “quantum transistor” and made with niobium. The processor needs to operate as close to absolute zero (-459.67°F) as possible and must be totally magnetically shielded, since a SQUID processes qubits magnetically. It also requires total electromagnetic isolation and is housed in a dedicated room fully enclosed by a Faraday shield. This is essential to prevent “quantum decoherence” (which occurs when a system interacts with its environment in a thermodynamically irreversible way). Moreover, the Vesuvius is a 512-qubit processor. By the way, a “qubit” (for those of you who, like me, do not hold dual Ph.D.s in quantum physics and computing) is a “quantum bit.” Unlike the conventional bits of binary computing, a qubit can be in the on state, the off state, or a state of superposition (while it decides which to “choose”), according to the D-Wave Systems website.
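For the curious, here is a toy numerical sketch of that superposition idea in ordinary Python (no quantum hardware involved): a single qubit’s state is a pair of amplitudes, and “measuring” it forces the 0-or-1 choice with probabilities given by the squares of those amplitudes:

```python
import math
import random

# A toy single-qubit state: amplitudes for the |0> and |1> basis states.
# An equal superposition has amplitude 1/sqrt(2) for each.
amp0 = 1 / math.sqrt(2)
amp1 = 1 / math.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = amp0 ** 2, amp1 ** 2
assert abs((p0 + p1) - 1.0) < 1e-9  # the state must be normalized

def measure() -> int:
    """'Measuring' collapses the superposition to a definite 0 or 1."""
    return 0 if random.random() < p0 else 1

samples = [measure() for _ in range(10_000)]
print("fraction of 1s:", sum(samples) / len(samples))  # roughly 0.5
```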

If following all this seems a bit confusing, you are not alone. To put it in perspective, I thought you might want to ponder this quote attributed to the late Nobel laureate Richard Feynman, who is widely regarded as a pioneer of quantum computing: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Perhaps one day in the not-so-distant future, quantum computing may (or may not) be the new normal. Nonetheless, we will need to increase our rate of adaptation to meet the challenge, perhaps by looking back, as well as ahead, when designing the data center of the future (with processors cooled by air or water, immersed in mineral oil, or perhaps even bathed in liquid nitrogen).

THE BOTTOM LINE

So, with apologies to any real archeologists (or quantum physicists) for my taking literary license with the history of man, dinosaurs, and quantum theory, what is it that we need to learn from the past to avoid repeating its mistakes? Our predecessors built fixed, monolithic data centers with inflexible infrastructure systems, on the assumption of a long-term 15- to 20-year service life. While they accomplished their design goals, as witnessed by the facilities I reviewed and the numerous sites still operational (or at least capable of running, if restarted) even 25 or more years after they were built, they are functionally obsolete in many ways.

We are now at the next juncture in the ongoing evolution of computing, and organizations and their CIOs and CTOs must begin to fully comprehend, adopt, and ultimately embrace this newest paradigm as the future direction, or perhaps face premature obsolescence. Otherwise, they risk repeating the mistake of those in “data processing” who clung too long to the old “big iron” mainframe mentality, rejected the concept of a PC-based server as not worthy of the data center, and originally relegated it to the wiring closet (or, in some cases, the supply closet).

We now need to accept that whatever we do today will be rapidly superseded, as evidenced by the latest generations of IT hardware, whose refresh cycles seem to grow shorter and shorter. However, as we become more aware of long-term sustainability issues, we will need to weigh this more carefully. We should therefore adapt our thinking from a long-term technical and sustainability viewpoint, as well as from a financial perspective covering both initial CAPEX and overall TCO.

And finally, as an example of developments and new challenges in the ever-changing cyber world, I was recently asked to design a data center for a Bitcoin mining operation. So if anyone has any spare Bitcoins that I could borrow, please let me know. I was told that each “Bitcoin Mining Rig” can produce several hundred dollars a day. Therefore (since I like to look back before moving ahead), I would like to see a Bitcoin before designing the necessary Bitcoin collection chutes for the racks and deciding whether to use an underfloor or overhead conveyor belt system.