Higher Rack Density Requires Liquid-Cooled Servers
Most of today's computers are cooled with forced airflow across the server chips. Air-based cooling systems can be efficient and effective when good rack-row layout, containment, and control sequences are included in the design, at power densities in the 10 to 20 kW per rack range. Beyond that, air-cooled servers run so hot that liquid cooling becomes a requirement. With the broader adoption of artificial intelligence, machine learning, and other high performance computing, data center power densities can exceed 20 kW per rack and can even approach 75 kW to 100 kW per rack. These densities far exceed what air as a cooling medium can handle.
Liquid as a Cooling Medium
There’s a limit to how much you can cool with air. Air by its very nature has a low capacity to absorb and transport heat compared to liquid. Consider the specific heat of air: 0.24 BTU/lb-°F. This tells us it takes only 0.24 BTU of heat to raise one pound of air by one degree Fahrenheit. In contrast, the specific heat of water, 1.0 BTU/lb-°F, tells us it takes about four times more heat to raise one pound of water by one degree. Specific heat measures a substance's capacity to absorb heat, so comparing these values reveals that, pound for pound, water can absorb roughly four times more heat than air. Now consider the difference in density. At 62.4 lb/ft³, water is over 800 times denser than air at 0.075 lb/ft³. Combining these properties makes it clear why a liquid medium is much more effective at removing heat than air: water is 832 times denser and has 4.17 times the heat-absorbing capacity, and 832 x 4.17 = 3,469. That makes water nearly 3,500 times more effective, per unit volume, at carrying away heat than air!
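The arithmetic above can be sketched in a few lines of code, using the round figures from the text (illustrative values, not precise thermophysical data):

```python
# Volumetric heat-capacity comparison of water vs. air.
# Constants are the round numbers quoted in the article.

SPECIFIC_HEAT_AIR = 0.24     # BTU/lb-°F
SPECIFIC_HEAT_WATER = 1.0    # BTU/lb-°F
DENSITY_AIR = 0.075          # lb/ft³
DENSITY_WATER = 62.4         # lb/ft³

# Water is ~832 times denser than air.
density_ratio = DENSITY_WATER / DENSITY_AIR

# Pound for pound, water absorbs ~4.17 times more heat per degree.
specific_heat_ratio = SPECIFIC_HEAT_WATER / SPECIFIC_HEAT_AIR

# Per unit volume, for the same temperature rise, water carries away
# roughly 3,500 times more heat than air.
volumetric_ratio = density_ratio * specific_heat_ratio

print(f"Density ratio:        {density_ratio:.0f}x")
print(f"Specific-heat ratio:  {specific_heat_ratio:.2f}x")
print(f"Volumetric advantage: {volumetric_ratio:.0f}x")
```

Run at full precision the product comes out just under 3,470; the text's figure of 3,469 uses the rounded 4.17 multiplier.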
For air systems to compete, this means moving more air: larger ducts, larger fans, and yes, larger coils too. Ultimately the air system (unless it’s in economizer mode) must reject this heat to a chilled-water coil or a direct-expansion refrigerant coil. Server fans increase their speed to provide the needed airflow. High server fan speeds result in high fan noise, which can make work difficult for IT staff in the data center and can even exceed OSHA limits for sustained noise levels. Higher server fan speeds also draw more UPS power. In the end, there is a limit to how much cooling an air system can provide.
So how does liquid cooling affect the building’s cooling plant? The capacity of the cooling plant is the same and must be designed for the total planned tonnage no matter which medium is chosen to cool the IT equipment. The advantage of using liquid as a medium rather than air is realized at the system or terminal-unit level, where the heat density is very high. Heat can be moved out of the data center more efficiently through liquid cooling, using pumps and pipes rather than space-consuming CRAC units, large ducts, and high airflows. The pipes conveying the heat are small compared to air ducts, and pumps are typically located outside the data center space, leaving more floor space available for IT and power distribution equipment. If a data center is entirely liquid cooled, there can be advantages at the plant level, too. A dedicated cooling plant can be based on a warm-water design, which can be much simpler and less costly than a conventional chiller plant. In some climates, a warm-water plant may not even require compressorized equipment or open cooling towers.
Cooling with a liquid medium versus air can increase the annual hours spent in economizer mode and reduce compressor run time. Since liquid is so much more effective at carrying away heat, the electronics can be cooled using a warmer range of water temperatures. The high end of this range depends on the type of liquid-cooled equipment, and the low end must always be high enough (above the dewpoint) to eliminate any concern over condensation within the liquid-cooled electronics. ASHRAE(1) recommends a minimum of 63°F in Class A1 environments for liquid cooling applied directly to the electronics. With immersion-cooled server technology, the servers are submerged in a non-conductive liquid coolant, which transfers heat away from the servers to the building cooling water via a heat exchanger. When this type of system is added to an existing data center, the building chilled-water return loop, which typically operates at around 53°F to 55°F, can be used as the cooling source.
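The two constraints above, staying above the room dewpoint and meeting the ASHRAE Class A1 minimum for direct liquid cooling, can be sketched as a simple design check. This is a minimal illustration; the function name and the 2°F safety margin are assumptions, not from any standard:

```python
# Sketch: check a proposed liquid-cooling supply temperature against
# the condensation and Class A1 constraints described in the text.

ASHRAE_A1_MIN_SUPPLY_F = 63.0   # minimum cited for direct liquid cooling

def supply_temp_ok(supply_f: float, dewpoint_f: float,
                   margin_f: float = 2.0) -> bool:
    """Return True if the supply water temperature stays safely above
    the space dewpoint and meets the Class A1 minimum."""
    above_dewpoint = supply_f >= dewpoint_f + margin_f
    meets_minimum = supply_f >= ASHRAE_A1_MIN_SUPPLY_F
    return above_dewpoint and meets_minimum

# A 65°F supply against a 55°F room dewpoint passes both checks:
print(supply_temp_ok(65.0, 55.0))   # True
# A 54°F supply fails the Class A1 minimum (and is too near dewpoint):
print(supply_temp_ok(54.0, 55.0))   # False
```

Note that an immersion-cooling retrofit fed from the 53°F to 55°F chilled-water return loop sits on the building side of the heat exchanger, so this electronics-side check does not apply to that loop directly.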
In new designs, a dedicated cooling plant using elevated temperatures may not require chillers or other compressorized equipment at all. Depending on the climate at the data center’s location, dry coolers may be all that is necessary for year-round heat rejection. If the design must include a chiller plant because of warm-weather peaks, as in climates such as the Northeast, a dedicated system can be based on a warm-temperature cooling design with a hydronic economizer. Because of the warmer design cooling temperature, there will be many more hours each year when the system can operate in economizer mode without running the chiller. But the chiller will be there when hot weather returns and the hydronic economizer can no longer produce the design cooling temperatures. Another design scenario is to use dry coolers as the primary source of heat rejection, with a small chiller downstream to trim the temperature on peak days.
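The mode selection described above can be sketched as a small control decision: run dry coolers alone whenever ambient conditions allow, and bring the trim chiller online only on peak days. The approach temperature and setpoints here are illustrative assumptions, not design values:

```python
# Sketch of heat-rejection mode selection for a warm-water plant.
# The 10°F dry-cooler approach to ambient is an assumed figure.

DRY_COOLER_APPROACH_F = 10.0

def select_mode(outdoor_db_f: float, supply_setpoint_f: float) -> str:
    """Pick the heat-rejection mode based on outdoor dry-bulb temperature."""
    achievable_f = outdoor_db_f + DRY_COOLER_APPROACH_F
    if achievable_f <= supply_setpoint_f:
        return "economizer"          # dry coolers meet the setpoint alone
    return "economizer+chiller"      # trim chiller polishes the temperature

# With a 75°F warm-water setpoint, a 60°F day runs on free cooling:
print(select_mode(60.0, 75.0))   # economizer
# A 90°F peak day needs the downstream trim chiller:
print(select_mode(90.0, 75.0))   # economizer+chiller
```

The warmer the design setpoint, the higher the outdoor temperature at which the chiller must start, which is exactly why warm-water designs accumulate so many more economizer hours per year.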
More data centers are adopting high performance computing applications, such as artificial intelligence, that drive rack power densities even higher. The only way to cool this higher-density technology is with liquid cooling. The silver lining is that data centers of the future can be designed to be very efficient and sustainable when they incorporate a liquid cooling medium. Warm-temperature designs can reduce or even eliminate the need for chiller plants and cooling towers by using closed-loop dry cooling technology and heat exchangers to take advantage of extended economizer hours. Not only does this reduce power demand by reducing or eliminating compressorized equipment, it can also reduce or eliminate the demand for make-up water and chemical treatment in open tower systems, saving the countless gallons of water otherwise needed to replace evaporation in evaporative heat rejection systems.
(1) ASHRAE, Thermal Guidelines for Data Processing Environments, 4th Edition, 2015
Image Source: Green Revolution Cooling, Inc. (“GRC”)