Verne Global

Data Center

9 October 2017

AI and Edge Could Usher in a New Era of Liquid-Cooled Data Centers

Written by Andrew Donoghue (Guest)

Andrew Donoghue is a technology writer specialising in data centers and critical infrastructure. He has worked for analyst companies such as 451 Research and has also held senior editorial roles at various tech publishing companies. You can follow Andrew at: @andrew_donoghue

A proliferation of new workloads could drive greater adoption of direct liquid cooling technology in High Performance Computing environments and beyond.

The ISC High Performance 2017 event in Frankfurt earlier this year was, as usual, a showcase for all that is new and shiny in the world of High Performance Computing (HPC). However, one technology that was represented in force at the show has its roots in the mainframe era of the 1960s.

Direct-Liquid Cooling (DLC), or on-chip liquid cooling as it is also known, is a method of heat dissipation where the processors (or other components) are in close proximity to – or fully immersed in – a liquid. Proponents of the technology point out that DLC enables higher rack densities in a smaller footprint and also enables higher processing speeds. Additionally, a more uniform temperature across systems compared to air reduces thermal stress and improves reliability and system lifespans.

DLC has been undergoing something of a renaissance over the last five years, with more than 20 specialist suppliers now offering various flavours of the technology. All of the major server OEMs now have a DLC system - either developed internally or licensed from DLC technology suppliers. But while there has been plenty of activity on the supply side, adoption has largely been confined to the HPC sector to date. There is still considerable room for growth within HPC – the worldwide HPC server market was worth more than $11 billion in 2016 according to Hyperion Research – but some suppliers have wider ambitions.

DLC suppliers believe that there will be at least two key drivers for more widespread adoption of the technology into enterprise, cloud and colocation facilities. The first of these is the momentum around AI and in particular deep learning. The computational complexity of deep learning neural networks could lead to clusters of high-density racks in a wider range of data centers, providing new opportunities for DLC suppliers. For example, over the last 18 months Fujitsu – one of the leading suppliers of HPC equipment – has expanded its investment not only in systems to support deep learning but also in DLC. It now has a broader range of DLC technologies than any other large systems maker.

The other key driver for DLC adoption sits at the other end of the compute-intensity spectrum. DLC systems – especially those based around immersing components directly in a dielectric fluid – have attributes that make them well suited to so-called edge computing driven by IoT and other applications. As there are no noisy fans involved (with immersion systems at least), DLC systems can be deployed in traditional office environments, for example. Because they are effectively self-contained from a cooling perspective, they can also be deployed outside of traditional whitespace – perhaps on a factory floor or in a warehouse. Finally, because the units are sealed, they are more rugged and less prone to the dust and other contaminants that affect air-cooled systems. A number of DLC suppliers, including Clustered Systems, RuggedPOD and Iceotope (backed by Schneider Electric), are now actively targeting this edge market.

However, despite developments in DLC technology, it still presents some challenges for end users, from concerns about leakage to the need for specialised IT systems, as well as modifications to facilities-side equipment (plumbing to the rack). Tate Cantrell, Chief Technology Officer at Verne Global, says the colocation provider is happy to accommodate DLC, but ultimately it is down to the customer to choose the system best suited to their workload and business requirements:

“We don’t force the industry to adopt liquid cooling but if liquid cooling is what is required - a number of our customers that have traditional Cray style compute use liquid cooling - then it’s very easy to accommodate in Iceland,” he said. “There is no limit to the kinds of technology we can accommodate. We might have to make some bespoke adjustments to existing infrastructure but it is usually easily done.”

Many data center experts will admit, if pushed, that if the industry could turn back the clock and start again, liquid would be a much more efficient way to cool data centers than air. However, the real issue is not whether liquid is theoretically more efficient than air, but whether a watertight economic case can be made. That case is convincing for HPC and is now looking more persuasive for other types of workload too.

