Verne Global

Data Center | 30 October 2017

Should HPC Workloads Know Their Place?

Written by Andrew Donoghue (Guest)

Andrew Donoghue is a technology writer specialising in data centers and critical infrastructure. He has worked for analyst companies such as 451 Research and has also held senior editorial roles at various tech publishing companies. You can follow Andrew at: @andrew_donoghue

Throwing off the yoke of the British monarchy and class system had a (mostly) positive effect on US society. The concept of the ‘American Dream’ has it that anyone can be successful no matter what his or her economic or social class. However, the reality is much more nuanced and complex; the maxim of egalitarianism doesn’t ring true in every situation, it seems. When it comes to computing, for example, efficiency and performance can perversely be improved by inequality: it could be argued that every workload should ‘know its place’ and be treated differently according to its status and class.

Analyst company 451 Research has a specific term for this notion that there is an ideal niche for every workload to thrive: Best Execution Venue or BEV. At first glance BEV appears to carry some unhealthy connotations regarding capital punishment but is actually concerned with helping to define the ideal environment for every type of workload. 451 Research explains it this way: “Every application workload has a best execution venue… the intention is to help inform IT decision-maker thinking on how to deploy workloads in ways that take full advantage of the evolving service capability and price/performance characteristics of colocation, managed hosting and cloud, balanced against more traditional on-premises infrastructure investments.”

The notion of the BEV is of increasing relevance and importance given the growing proportion of organisations that are eschewing purely enterprise-owned data center strategies for a hybrid approach; on-premises, colocation, hosting and various flavours of cloud are now all in the mix. However, this makes decisions around infrastructure and data center location increasingly challenging.

Focusing on high performance computing (HPC) specifically should take some of the complexity out of BEV discussions. But as anyone who is familiar with HPC will know, the term is actually fairly nebulous. There are a myriad of different workloads and application types that could be categorised as HPC – arguably even more now given the rise of AI and deep learning. However, HPC workloads do share some broad characteristics: they are obviously compute-intensive, but often have less stringent latency requirements than other workloads. Many HPC applications – while of intrinsic importance – are also arguably less ‘mission-critical’ than some enterprise workloads; they can tolerate service interruption with fewer immediate consequences.

Given those parameters, it is not surprising that a growing number of organisations are looking to locate HPC systems in areas that may not have the lowest-latency network access but do have things like cheap energy and secure facilities. For example, computer translation start-up DeepL recently took the decision to locate its new supercomputer at Verne Global’s facility in Iceland for exactly those reasons. DeepL chief technology officer Jaroslaw Kutylowski told me in a recent interview that he estimates it would be up to five times more expensive for DeepL to locate its system at a facility in Germany: “The problem is that those providers that we have been speaking to in Germany – and we have been speaking to quite a lot of them – are not really set up for HPC. They have to prepare special compartments, rework their cooling. It’s been problematic and it all boils down to providing the power and being able to cope with the cooling.”

But energy costs, latency requirements and the cost of data center infrastructure are not the only factors at play when it comes to deciding on the BEV for HPC. Systems used in advanced research and experimentation often require more hands-on monitoring and management. That need to be hands-on means that some HPC operators may prefer to locate systems near their scientific or research institution. DeepL also had some concerns around access to its equipment but ultimately decided the trade-off was worthwhile. “The platform is partially experimental and this means we have to visit the location to do some tests on the hardware. You can obviously do that remotely but that doesn’t always give you the feeling for what you are doing,” said Kutylowski. “Verne’s team have been troubleshooting and racking for us but there are times when we need to get over to Iceland ourselves. This is obviously sometimes less convenient than visiting the next city in Germany but you have to factor that in.”

Deciding on the BEV for HPC is also being further complicated by the emergence of more cloud-based HPC compute. At the recent ISC High Performance 2017 conference in Frankfurt, Germany, Intel predicted that up to 30% of HPC CPUs will be in cloud data centers by 2021, driven primarily by AI and deep learning. Two years ago, Intel suggested that the figure would be just 15% over the same timeframe.

It may be that public cloud will eventually usher in a new age of workload utopia with HPC and all other types of compute safely ensconced in largely homogenous hyperscale facilities. But just as with notions of fairness and parity in society, I suspect the reality will continue to be infinitely more complex.


