The idea behind this is to judiciously manage the airflow through the server, basing thermal management decisions for fan speed on the temperatures of the critical components rather than on intake air temperature alone. By monitoring component conditions and workload, and adjusting the airflow to keep each component in its safe operating range, the servers can control airflow far more efficiently than a simple fan speed map driven only by intake air temperature. HP is certainly not the only server vendor trying to improve this, but it seems to have the more advanced system at the moment.
In general, most servers begin to speed up their fans at about 77°F, which aggressively increases server fan energy. This practice seems to be a legacy of the original 2004 ASHRAE TC 9.9 “Recommended” temperature range of 68–77°F. While this is a “safe” approach that protects the IT equipment, it does raise the fan energy of the IT equipment and its airflow requirements at higher temperatures, and it begins to negate some of the cooling energy savings being promoted by the new ASHRAE 2011 Expanded Thermal Guidelines.
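To make the contrast concrete, here is a minimal sketch of the two control approaches described above. All thresholds, duty cycles, and component limits are invented for illustration; they are not HP's actual firmware values.

```python
def legacy_fan_speed(intake_f: float) -> float:
    """Simple fan map keyed to intake air temperature alone.

    Fans idle at a baseline duty cycle until ~77°F (the old ASHRAE
    'Recommended' ceiling), then ramp aggressively regardless of how
    cool the components actually are.
    """
    if intake_f <= 77.0:
        return 0.30  # 30% duty-cycle baseline (illustrative)
    # Ramp linearly from 30% at 77°F to 100% at 95°F
    return min(1.0, 0.30 + (intake_f - 77.0) * (0.70 / 18.0))


def component_aware_fan_speed(component_temps_f: dict[str, float],
                              limits_f: dict[str, float]) -> float:
    """Drive the fans only as hard as the hottest component, measured
    against its own safe limit, requires -- ignoring intake temperature.
    """
    # Fraction of each component's thermal headroom currently used
    worst = max(t / limits_f[name] for name, t in component_temps_f.items())
    if worst <= 0.8:
        return 0.30  # plenty of headroom: stay at baseline
    # Ramp from 30% to 100% as the hottest component nears its limit
    return min(1.0, 0.30 + (worst - 0.8) * (0.70 / 0.2))


# With a warm 80°F intake but cool components, the legacy map ramps up
# while the component-aware scheme stays at baseline.
temps = {"cpu": 55.0, "dimm": 45.0, "vrm": 60.0}    # current temps (illustrative)
limits = {"cpu": 85.0, "dimm": 95.0, "vrm": 115.0}  # safe limits (illustrative)
print(legacy_fan_speed(80.0))                  # above baseline
print(component_aware_fan_speed(temps, limits))  # 0.3
```

The point of the sketch: under the same warm intake air, the component-aware controller spends no extra fan energy as long as every component still has thermal headroom.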
By monitoring the critical component temperatures, HP claims it improves fan energy efficiency by not blindly raising fan speed, while still ensuring the safe operation of all critical components. This not only lowers the fan energy of the servers themselves; theoretically, it would also allow data centers to dynamically lower the speed and energy of variable-speed fans in the CRACs or CRAHs, if the cooling system manufacturers develop and implement a proper airflow-based control scheme.
This mismatch in airflow is becoming a more significant issue as data center designers and operators strive to improve their energy efficiency. The fan energy issue becomes even more pronounced as “free cooling” becomes more prevalent and mechanical cooling energy is reduced or eliminated. Fan energy will then represent a more significant portion of the cooling energy (expect to see more partial PUE “pPUE” claims approaching 1.01x).
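A quick back-of-envelope calculation shows why fan energy dominates once the compressors are off. The figures below are illustrative, not measured data; pPUE here is taken as (IT power + cooling subsystem power) / IT power.

```python
def cooling_ppue(it_kw: float, mechanical_kw: float, fan_kw: float) -> float:
    """Partial PUE attributable to the cooling subsystem:
    (IT load + mechanical cooling + cooling fans) / IT load."""
    return (it_kw + mechanical_kw + fan_kw) / it_kw


# Conventional cooling: compressors dominate, fans are a small slice.
print(round(cooling_ppue(1000, 250, 50), 3))   # 1.3

# Full "free cooling": compressors off, so the fans ARE the cooling energy.
print(round(cooling_ppue(1000, 0, 50), 3))     # 1.05

# Now shaving fan power is the only lever left toward the 1.01x range.
print(round(cooling_ppue(1000, 0, 15), 3))     # 1.015
```

With mechanical cooling running, a 10% fan saving barely moves pPUE; with free cooling, it is essentially the whole game.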
HP projects that in a full rack of HP G8 servers, the fan energy savings would be enough to power one additional server (perhaps not quite a free lunch, but at least it could qualify as a free dessert).
And while the HP announcement covered many other computing performance improvements, one interesting new feature is HP’s Discovery Services, which “automatically maximizes the use of space, power, and cooling” and is part of HP’s “Automated Energy Optimization” management strategy. Ordinary servers, while capable of performing millions of computations per second, are apparently not very smart about their surroundings and are totally clueless as to where they sit in a data center (yes, I can ping a server, but it seems to have no idea where it is). So, like a car left in an airport parking lot, if you forget where your new HP server is, HP’s location services can tell you, but only if you also buy an HP cabinet and management software.
So, if you are interested in improving data center energy efficiency and computing performance, consider looking at HP G8 servers, but make sure you know where you left your old servers first.