
Water Cooling vs. Air Cooling Comparison Essay

Discussions on the preferred fluid for removing heat from data center equipment have historically been short enough to have been conducted through the rolled-down windows of cars passing each other from opposite directions. The conversation has basically boiled down to a couple of decades of “liquid is the only choice,” followed by a couple of decades of “no water in the data center.” While such simplicity of consensus has not yet devolved into roadside fisticuffs, many of us have pulled over into the parking lot for an extended howdy-do. Today’s blog is intended to contribute to those parking lot conversations and cut through some of the sales and marketing rhetoric from the competing air-cooling and liquid-cooling camps. Spoiler alert: air cooling and liquid cooling are both more effective than the proponents of the competing technology would have you believe.

When data center managers are asked what one thing they would most like to change in their data centers, it probably doesn’t surprise many of us that the top two responses are to be more energy efficient and to have better cooling/HVAC systems. However, when these same respondents are asked to identify the most promising new technology for data centers, it may be a little more surprising that liquid cooling tied with DCIM as the top choice.1 Interestingly, only a painfully small minority of data centers today come even close to exploiting the full capability of air cooling, thereby leaving the door obviously ajar through which liquid cooling proponents can launch claims of ten-fold increases in density and 75-90% reductions in cooling operational costs. While such advantages for liquid cooling definitely exist in comparison to average air-cooling deployments, and especially in comparison to most legacy air-cooled data center spaces, those efficiency and density gaps are much narrower when the comparison is to air-cooled data centers that more fully exploit industry best practices. Nevertheless, there are other benefits derived from liquid cooling, in combination with its density and efficiency capabilities, that can make liquid cooling particularly attractive for some applications.

The Benefits of Liquid Cooling

The ability to support extreme high density has been a differentiator for liquid cooling proponents since some 10-12 years ago, when 8kW was pegged as a generally accepted maximum rack threshold for air-cooled data centers, prior to the maturation of airflow management as a science. At that time, the liquid cooling banner was carried by solutions that were not really liquid cooling at all, but which today are more accurately specified as close-coupled cooling – row-based cooling and top-of-rack cooling. As more complete airflow management techniques provided a path to 20kW and even near 30kW per rack, the density advantage for liquid cooling subsided a bit. More recently, the folks at Intel have been running an 1100 watts per square foot air-cooled high-density data center for a couple of years with rack densities up to 43kW. Does that mean there is no density advantage for liquid cooling? Not necessarily. For starters, the Intel folks needed to come up with special servers and racks that allowed them to pack 43kW of compute into a 60U-high reduced footprint, and they built the data center in an old chip fab with a ceiling high enough to accommodate the massive volume of supply and exhaust air.2 Second, direct-touch liquid cooling solutions can now effectively cool upwards of 60kW per rack footprint, and 80-100kW solutions are available, basically waiting for chip sets that will actually stack to that density. Pat McGinn, Vice President of Product Marketing at CoolIT Systems, has advised me that these configurations are in the works.

But liquid cooling solutions should not be pigeonholed to that niche of high density somewhere above the 30-40kW that can be air-cooled, up to the 80-100kW that can be feasibly configured and deployed. For example, if space is at a premium for both the white space and the mechanical plant, then 100kW of IT could be packed into two or three racks instead of four to fifteen, and the supply water could be cooled through a heat exchanger coupled to the building return side of the chilled water loop. This use of liquid cooling can provide a path to an on-site enterprise data center as an alternative to colocation where there is, in fact, no space or mechanical facility for a data center. In these cases, density becomes the solution rather than the problem.
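As a rough back-of-the-envelope illustration of that rack math, here is a minimal sketch in Python; the per-rack densities are assumptions chosen only to mirror the ranges discussed above, not vendor specifications.

```python
import math

# Hypothetical rack counts for a 100 kW IT load at assumed per-rack densities.
it_load_kw = 100
densities_kw_per_rack = {
    "legacy air cooling": 8,            # the old 8 kW/rack ceiling cited above
    "best-practice air cooling": 30,    # upper end of well-managed air cooling
    "direct-touch liquid cooling": 50,  # within the 40-60+ kW range cited above
}

for label, kw_per_rack in densities_kw_per_rack.items():
    racks = math.ceil(it_load_kw / kw_per_rack)
    print(f"{label:>28}: {racks} racks at {kw_per_rack} kW/rack")
```

Under these assumed densities the same 100kW load lands in roughly thirteen legacy racks, four well-managed air-cooled racks, or two liquid-cooled racks, which is the space argument made above.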

Similarly, the latest Intel production data center experiment involves cooling coils built into the roof of hot aisle containment.3 Those coils are coupled to a highly efficient adiabatic cooling mechanism, resulting in an annual savings of over 100 million gallons of water versus tower cooling, with 1100 watts per square foot density achieved with all commodity IT equipment. While the footprint of this adiabatic cooling system consumes around 3X the real estate of a tower and chiller plant, the concept does include a path for smaller data centers. Since Intel is meeting its server inlet temperature targets while allowing the “chilled” water to the coil to reach 80˚F, this same 330kW coil capacity could be plumbed to the return side of a building chilled water loop, resulting in a moderate-sized data center with an air-cooled PUE even lower than the 1.06 that Intel is seeing on their 3MW row.

Additional Considerations

Scalability and flexibility are also considerations when evaluating the relative merits of liquid cooling versus air cooling. Phil Hughes, CEO of Clustered Systems Company, cites not needing a custom building and not needing to “re-characterize airflow prior to moves, adds or changes” as some of the less obvious benefits of liquid cooling that can still have an impact on total cost of ownership.

Another consideration would be the relative homogeneity or heterogeneity of the IT equipment being deployed in a space. As a general rule of thumb, economies of scale will favor liquid cooling solutions with more homogeneous IT equipment, especially when a rather likely customization project is required to configure the IT equipment with individual cold plates on each major heat source or with conductors between every heat source and a single large cold plate. An extreme example of this difference might be a colocation data center with rack-level customers versus a research lab data center where all the IT is used to run simulations on some model, whether that be a new chip design, an intergalactic weather system or a cardiovascular system. The heterogeneity of the colocation space would normally be better served by air cooling with all the disciplines of good airflow management, whereas the research data center might be a more reasonable candidate for liquid cooling.

Liquid, Air, or Hybrid Solution?

Finally, there are subsets of liquid cooling and air cooling that should be considered as part of an overall assessment of cooling alternatives. For air cooling, there are deployments with very tight airflow management, in conjunction with some form of economization and allowances for operating within a wider band of the ASHRAE TC9.9 allowable server inlet envelope, and then there is bad air cooling. There really is no defensible reason for bad air cooling. On the liquid cooling side, there are systems that remove all the heat by liquid, and there are hybrid systems that remove most of the heat by liquid and require some air cooling for whatever remains. In theory, it would seem that a fully liquid cooling solution makes more sense than absorbing the capital expense for both a liquid cooling solution and an air cooling solution, but that economic analysis needs to be part of basic project due diligence.

In addition, there are situations where hybrid solutions might make sense. For example, if an existing data center is woefully inadequate for planned future IT requirements, then rather than building an entirely new data center to accommodate business growth, an existing data center at 100 watts per square foot could be converted to 500 watts per square foot in the same footprint without adding any mechanical facility. Another possibly cost-effective hybrid solution might be a fully integrated product, such as cold plates supplied by return water from an integrated rear-door heat exchanger. If that rear-door heat exchanger required its own mechanical plant, it might suffer on capital investment versus a fully liquid cooling solution. However, if the rear-door heat exchanger itself operated off a building return water loop, then this hybrid approach could make a lot more sense. A final element worth serious consideration would be access to pre-engineered IT equipment configurations, or your own comfort zone for working with engineered-to-order IT.

PUE Implications

There is one final caveat to evaluating the appropriateness of air cooling versus liquid cooling for a specific application. As noted above, all the real economic benefits require, to one degree or another, exploring wider thresholds of the ASHRAE allowable temperature envelope or the true server OEM operating temperature specifications. Under those conditions, PUE will not be the final determiner of the difference between liquid cooling and air cooling. When server inlet air exceeds 80˚F, in most cases the server fans will ramp up and consume a nonlinear increase in energy. That fan energy goes into the divisor of the PUE equation, resulting in higher total energy consumption and a lower PUE; whereas liquid cooling will either eliminate or greatly reduce the fan energy element, thereby lowering the PUE divisor and potentially producing a higher PUE even while total energy consumption is lower. This caveat does not necessarily mean one technology is superior to the other; it is merely another factor that needs to be considered alongside all the other variables.
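To make the arithmetic concrete, here is a minimal sketch with purely hypothetical power figures; it only illustrates how moving fan energy out of the PUE divisor can raise the reported PUE even as total consumption falls.

```python
# Hypothetical numbers illustrating the PUE caveat above; none of these
# values come from a real measurement.
def pue(facility_overhead_kw: float, it_kw: float) -> float:
    """PUE = total facility energy / IT energy (server fans count as IT load)."""
    return (facility_overhead_kw + it_kw) / it_kw

# Air-cooled with warm inlet air: server fans ramp up, adding ~50 kW to the IT divisor.
air_total = 120 + 1050          # cooling plant + IT (fan energy included in IT)
air_pue = pue(120, 1050)        # ~1.114

# Liquid-cooled: fan energy largely removed from the divisor, with pump/CDU
# energy assumed (for illustration) to land on the facility side.
liquid_total = 125 + 1000
liquid_pue = pue(125, 1000)     # 1.125

print(f"air:    PUE {air_pue:.3f} on {air_total} kW total")
print(f"liquid: PUE {liquid_pue:.3f} on {liquid_total} kW total")
# The liquid case draws less total power (1125 kW vs. 1170 kW) yet reports the
# higher PUE, which is exactly the caveat described above.
```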

Conclusion

In conclusion, everything is better than air cooling with bad or absent airflow management, period. Liquid cooling at this time appears necessary for rack densities of 50kW and up. However, liquid cooling should not be restricted to high density applications, as it could help overcome a variety of site constraints on air cooling. Rumors of the imminent demise of air cooling are a bit premature, as illustrated by Intel’s successful deployment of 43kW racks with an airside economizer. When an air-cooled data center can achieve a PUE of less than 1.1, straight economics will not always be a significant differentiator between liquid cooling and air cooling. Nevertheless, a full assessment of the mission of the data center, the applications and variety of IT equipment planned, building constraints and the stage of life of the data center should reveal a preferred path.

1 Mortensen Construction, “Trends in Data Centers,” Spring 2014, p. 12
2 “Intel IT Redefines the High-Density Data Center: 1100 watts/Sq Ft,” Intel IT Brief, John Musilli and Paul Vaccaro, July 2014
3 “Intel IT: Extremely Energy-Efficient, High-Density Data Centers,” IT@Intel White Paper, Krishnapura, Musilli and Budhai, December 2015

 



About the Author

Ian Seaton is an independent Critical Facilities Consultant and serves as a Technical Advisor to Upsite Technologies. He recently retired as the Global Technology Manager of Chatsworth Products, Inc. (CPI).

 


Is Liquid Always Best?

The largest coolers available to fit most large cases are typically based on liquid thermal transfer, and the flexible hoses also allow some builders (depending on case design or modification) to take advantage of the cool intake air by mounting the radiator on the front panel. This method ejects CPU heat into the case, but the large volume of air passing through the radiator diminishes its effect on other components.

Top mounting is still the most common option though, and radiators placed there usually work best with the fans underneath, blowing upward. Yet a problem arises when the heat of an excessively hot graphics card is blown into the case below the radiator, as the warmer air is less effective at reducing coolant temperature. Planning ahead is key to the solution, as most high-end GPUs (graphics processors) are available in both internally and externally venting designs.

Concerns about graphics card heat being ejected through the top-panel radiator have been addressed in my own builds by using graphics cards that expel most of their heat through a rear-panel expansion slot. Yet graphics card reviewers frequently recommend dual-fan or triple-fan cards, focusing entirely on the improved graphics temperature-to-noise ratio without any concern for the impact that waste heat has on every component above the graphics card. Because I review both cases and CPU coolers, I consider graphics coolers that blow heat into the case to be defective.

The debate between graphics and processor priorities need not get heated, however, as liquid cooling both the CPU and GPU is another option.

An alternative to liquid cooling, Big Air CPU coolers affix radiator fins to a heatsink base, usually by way of heat pipes. Some of our tests have even shown these massive coolers outpacing a few of their liquid rivals. And while liquid systems usually have an edge in CPU temperature, a comparison of cooling and noise shows these designs trading blows; note that the Kraken X61 liquid cooler is roughly the same size as the NH-D15 Big Air cooler.

The lack of a pump allows Big Air designs to be priced lower than liquid coolers, yet both have drawbacks, and these begin with size. First of all, a big air cooler sits directly over the CPU, which usually blocks access to memory slots and some cable connectors. Liquid coolers move the radiator to one of the case’s outer panels, leaving only the CPU socket covered by a water block or a combined water block and pump. On the other hand, “closed loop” coolers that have no fill port have been known to dry out over several years through microscopic leaks. Furthermore, Big Air coolers don’t have a pump to wear out or growl constantly. Modern pumps are usually very quiet, but they do make noise.

Perhaps the biggest reason to choose liquid over air isn’t the convenience of reaching RAM and cable connectors, but the fact that Big Air designs are usually large and heavy. These can weaken motherboards over the course of years, cause instant damage if the system is even slightly mishandled, or even bend down the CPU contacts of Intel’s Land Grid Array (LGA) design. We’ve even seen big-air coolers break off from their mounts and destroy the graphics cards in systems that were being shipped cross-country.

In summary, liquid coolers are often better than air, but not always for reasons of CPU temperature. We generally use Big Air when a system is intended to remain stationary, and switch to liquid when the system is designed to be moved around or requires something much larger than the compact coolers we’ve consistently recommended to first and even second-time builders. Armed with enough information to make sense of our cooling reviews, we hope that you’ll stick around to engage in the discussion below.


About the author

Thomas Soderstrom (@hardware_tom)

Thomas Soderstrom is a Senior Staff Editor at Tom's Hardware US. He tests and reviews cases, cooling, memory and motherboards.