Cloud options are challenging dedicated HPC infrastructure

HPC Tech Trends


Speaking at a conference in November last year, Bernd Mohr of the Jülich Supercomputing Centre, the conference's general chair, described HPC as “a new gold rush”. He said: “We are the modern pioneers, pushing the bounds of science for the betterment of society.” That’s a pretty grandiose claim, but much of the work being done in HPC does have groundbreaking potential. Its use in the discovery of new medicines, for example, has allowed pharmaceutical firms to massively accelerate their research.


Intel says that manufacturers are using HPC to move towards Industry 4.0, which will streamline the process of product design and manufacturing, with increased personalisation based on massive amounts of data. HPC is a vital component of manufacturing a “run of one” product – either a rapid prototype or a hyper-personalised product.

For example, Intel says that while one of its customers processes 25,000 data points per second to predict and improve the reliability of its manufacturing process, another works with 450 million data points per second. And as Nick Dale wrote on this blog in June, HPC is helping the oil and gas industry to draw insights from petabytes of data from which it has previously struggled to extract value. In one example, Royal Dutch Shell saved almost $10 million on a drilling project in Argentina thanks to ‘virtual drilling’ technology.

As costs fall, spending on HPC is increasing. More than $35 billion was spent on HPC in 2016, and that figure is forecast to reach almost $44 billion by 2021, according to Intersect360 Research. A significant share of that spending will go to companies attempting to provide virtualised HPC in the cloud.

However, in a recent interview with Computer Weekly, Francis Lam, the director of HPC product management at Huawei, said that the most demanding HPC tasks still require dedicated infrastructure. He told the magazine: “Users with higher computing demands [that] are currently running dedicated HPC infrastructure will see little benefit in decommissioning their systems in favour of dedicated cloud resources as their workloads are vastly different.”

Heavier workloads require scale, performance and customisable, finely tuned hardware – all of which are hard to deliver on a virtualised basis across the public cloud. Mr Lam acknowledged that cloud capacity is useful for “bursting” – short periods when on-premises resources run short and the cloud can provide extra capacity.

The other point, of course, is that although HPC costs are coming down, many organisations still cannot afford dedicated HPC infrastructure. In these cases, a cloud-based service that makes dedicated infrastructure (such as bare metal) available on an accessible and flexible basis may offer the best of both worlds.

Writing on this blog last month, Spencer Lamb noted that HPC use in the public sector has grown significantly over the last 2-3 years and argued that the G-Cloud 10 framework – a marketplace for cloud firms to sell their services to public sector bodies – will make it even easier for those organisations to access HPC.

The cloud capabilities of HPC are growing all the time – and at pace – which is encouraging, given the cloud’s promise to spread this groundbreaking technology more widely. For the time being, however, the heaviest workloads that need genuine HPC will require dedicated infrastructure and environments – the public cloud is not the best home for these kinds of intensive deployments.


Written by Shane Richmond (Guest)

Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond
