Cloud options are challenging dedicated HPC infrastructure

HPC Tech Trends


Speaking at a conference in November last year, Bernd Mohr of the Juelich Supercomputing Centre, who served as the event's general chair, described HPC as "a new gold rush". He said: "We are the modern pioneers, pushing the bounds of science for the betterment of society." That's a pretty grandiose claim, but much of the work being done in HPC does have groundbreaking potential. Its use in the discovery of new medicines, for example, has allowed pharmaceutical firms to massively accelerate their research.


Intel says that manufacturers are using HPC to move towards Industry 4.0, which will streamline the process of product design and manufacturing, with increased personalisation based on massive amounts of data. HPC is a vital component of manufacturing a “run of one” product – either a rapid prototype or a hyper-personalised product.

For example, Intel says that while one of its customers processes 25,000 data points per second to predict and improve the reliability of its manufacturing process, another works with 450 million data points per second. And as Nick Dale wrote on this blog in June, HPC is helping the oil and gas industry draw insights from the petabytes of data from which it has previously struggled to extract value. In one example, Royal Dutch Shell saved almost $10 million on a drilling project in Argentina thanks to 'virtual drilling' technology.

As costs fall, spending on HPC is increasing. More than $35 billion was spent on HPC in 2016, and that figure is forecast to reach almost $44 billion by 2021, according to Intersect360 Research. A significant share of that spending will go to companies attempting to provide virtualised HPC in the cloud.

However, in a recent interview with Computer Weekly, Francis Lam, the director of HPC product management at Huawei, said that the most demanding HPC tasks still require dedicated infrastructure. He told the magazine: “Users with higher computing demands [that] are currently running dedicated HPC infrastructure will see little benefit in decommissioning their systems in favour of dedicated cloud resources as their workloads are vastly different."

Heavier workloads require scale, performance and customisable, finely tuned hardware, all of which are hard to deliver on virtualised resources shared across the public cloud rather than on dedicated HPC infrastructure. Mr Lam acknowledged that cloud capacity is useful for "bursting": short periods when on-site resources run low and the cloud can offer extra capacity.

The other point, of course, is that although HPC costs are coming down, there are still a lot of organisations that cannot afford dedicated HPC infrastructure. For these cases, a cloud-based service that makes dedicated infrastructure (such as bare metal) available on an accessible and flexible basis may be the best of both worlds.

Writing on this blog last month, Spencer Lamb noted that HPC use in the public sector has grown significantly over the last 2-3 years and argued that the G-Cloud 10 framework – a marketplace for cloud firms to sell their services to public sector bodies – will make it even easier for those organisations to access HPC.

The cloud capabilities of HPC are growing all the time, and at pace, which is encouraging because of their promise to spread this groundbreaking technology more widely. For the time being, however, the heaviest workloads that need genuine HPC will require dedicated infrastructure and environments, and the public cloud is not the best home for these kinds of intensive deployments.


Written by Shane Richmond (Guest)


Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond

