Innovations in Data Center Cooling: Balancing Energy Efficiency with Increased Processing Power

3Q 2023 | IN-7072

Purdue University received a substantial grant from the U.S. Department of Energy's COOLERCHIPS program to advance its work on efficient data center cooling solutions, driven by rising data center power density and energy consumption. In response, innovations like direct-to-chip cooling and immersion cooling are emerging as crucial methods to enhance energy efficiency, impacting data center operators and smart building technology providers. These advancements aim to reduce overall Power Usage Effectiveness (PUE), addressing the critical need for more sustainable and cost-effective data center cooling.



COOLERCHIPS Program Targets Innovative Cooling Solutions


Purdue University received a US$1.8 million grant from the U.S. Department of Energy's COOLERCHIPS program to advance its initiatives in developing more efficient cooling solutions for data centers. Purdue's ALPHA Lab, which focuses on semiconductor packaging, heat transfer, and assembly, has made significant strides in this area. One of the lab's primary areas of study is two-phase impingement cooling, a direct-to-chip cooling process that builds liquid-filled microchannels directly into the microchip packaging itself. Purdue's aim is to enhance cooling efficiency while minimizing energy consumption, with the ultimate goal of revolutionizing chip cooling methods.

As businesses increasingly adopt resource-intensive computing functions like Deep Learning (DL), neural networks, big data analytics, and blockchain, demand for liquid cooling options in data centers is growing. Vertiv reports that data centers currently support rack power requirements of just over 20 kilowatts (kW), but the market is shifting toward 50 kW or more. Traditional air-cooling systems cannot efficiently and sustainably cope with the heat generated by this increase in power consumption.

The Demand for Data Centers Is Growing Rapidly


The surge in demand for new data centers, driven largely by the growing adoption of Artificial Intelligence (AI) technologies, has become a significant trend in the tech industry. In North America, data center construction is projected to grow at a Compound Annual Growth Rate (CAGR) of 7% from 2023 to 2032, according to GMI. This outpaces the broader North American construction industry's CAGR of 4.8% between 2023 and 2028, as reported by Mordor Intelligence.

Data centers, while pivotal for modern computing, come with environmental challenges. Globally, they already consume a remarkable 1% of all electricity produced, rivaling the energy consumption of the aviation industry. Cooling these facilities, a crucial operational requirement, poses its own set of challenges: cooling systems can account for 33% to 40% of a data center's overall energy usage and consume vast quantities of fresh water annually. Presently, data centers employ various cooling methods, including Computer Room Air Conditioning (CRAC) units, rear door heat exchangers, and free cooling, to manage the heat generated by servers and equipment. Finding more efficient and sustainable cooling solutions is a critical concern for the industry's future.

Enhancing Data Center Energy Efficiency with Innovative Cooling Technologies


Power Usage Effectiveness (PUE) is a fundamental metric for assessing energy efficiency in data centers: the ratio of total facility energy consumption to the energy used by Information Technology (IT) equipment, represented by the formula PUE = (Total facility energy usage) / (IT equipment energy usage). An ideal PUE of 1.0 would mean every watt reaches IT equipment. Lowering overall PUE is a primary objective for operators, with particular focus on air conditioning's large contribution to the calculation. To achieve this, operators need to narrow the gap between server inlet temperatures and the processors' setpoint temperatures: cooling processors to their required setpoint with higher inlet temperatures reduces operational costs, lowers PUE, and improves overall energy efficiency. Achieving this entails innovative approaches, including advancements in capturing heat at its source.
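The PUE ratio above can be sketched in a few lines of Python. The function name and the sample energy figures are illustrative, not drawn from any specific facility:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. A value of 1.0 would mean every watt goes
    to IT equipment; cooling and power distribution push it higher."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical day: the facility draws 1,500 kWh while IT gear uses 1,000 kWh
print(round(pue(1_500, 1_000), 2))  # 1.5
```

Every kilowatt-hour saved on cooling shrinks only the numerator, which is why more efficient cooling moves PUE directly toward 1.0.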

Innovations such as direct-to-chip cooling and immersion cooling play a pivotal role in this endeavor. Direct-to-chip cooling, also known as direct-to-plate cooling, integrates liquid cooling directly into a computer's chassis. Liquid coolant circulates through cooling plates situated alongside Central Processing Units (CPUs) and Graphics Processing Units (GPUs), removing approximately 70% to 75% of the heat generated by equipment in the rack, thereby reducing reliance on air-cooling systems for the remaining 25% to 30%, as reported by Vertiv. Immersion cooling, an emerging technology, submerges all server components into a non-conductive dielectric fluid, sealed in a leak-tight container. This method efficiently transfers heat from electronic components to the coolant, resulting in significantly lower energy consumption compared to other cooling methods.

Even as liquid coolant captures more heat at the source, traditional air coolers, chillers, and evaporative cooling systems remain in demand to reject the accumulated heat downstream and recirculate chilled fluids. Recent advancements in chiller technology have enabled contractors to design more sustainable data centers. The introduction of low-friction, air-cooled, magnetic-bearing centrifugal chillers can significantly reduce chillers' full-load power consumption, surpassing industry efficiency standards. Optimizing full-load power consumption is crucial for designing data centers that can scale effectively. Moreover, although liquid cooling dissipates heat from processors far better than air, it does not effectively cool other electrical components integral to the computing process. As a result, demand will continue for contractors to support traditional air-cooling systems in data centers, even those with high rack power requirements.

The opportunity to embrace these innovations affects two key groups: data center operators and smart building technology providers. For data center operators, investing in advanced cooling technologies is crucial for scaling facilities alongside evolving technology trends; initial capital investments are expected to be offset by operational savings in the years ahead. Smart building technology providers, meanwhile, stand to benefit from the expanding data center market: rapid growth in data center construction, broader adoption of AI for business, and rising computing power density present a significant opportunity to provide innovative cooling solutions for processing equipment, addressing the critical need for energy-efficient data centers.
