ASHRAE’s 2011 Expanded Thermal Guidelines have caused considerable controversy and confusion in the data center industry. Although ASHRAE released the guidelines a year ago, in May 2011, they are still hotly debated and sometimes misinterpreted.
Administrators constantly assess data center and network capacity needs in preparation for a potential disaster, yet when disaster actually strikes they are often unsure how best to balance capacity against performance.
I recently participated in a LinkedIn discussion about the best, most energy-efficient way to control temperature: the classic return-air sensor in each individual CRAC/CRAH, or supply-air control using either under-floor sensors or sensors in the cold aisles.
In the early years of data centers, when computing technology was costly and temperamental, protecting hardware with critical power and cooling capabilities was the top priority.
In the Jan/Feb issue of Mission Critical, Zinc Whiskers focused on how the “cloud” operates as an automatically virtualized environment that creates very high power densities in processing areas.