Verne Global

HPC | Tech Trends

17 August 2018

Cloud options are challenging dedicated HPC infrastructure

Written by Shane Richmond (Guest)

Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond

Speaking at a conference in November last year, Bernd Mohr, general chair of the Juelich Supercomputing Center, described HPC as “a new gold rush”. He said: “We are the modern pioneers, pushing the bounds of science for the betterment of society.” That’s a pretty grandiose claim, but much of the work being done in HPC does have groundbreaking potential. Its use in the discovery of new medicines, for example, has allowed pharmaceutical firms to massively accelerate their research.

Intel says that manufacturers are using HPC to move towards Industry 4.0, which will streamline the process of product design and manufacturing, with increased personalisation based on massive amounts of data. HPC is a vital component of manufacturing a “run of one” product – either a rapid prototype or a hyper-personalised product.

For example, Intel says that while one of its customers is processing 25,000 data points per second to predict and improve the reliability of its manufacturing process, another is working with 450 million data points per second. And as Nick Dale wrote on this blog in June, HPC is helping the oil and gas industry draw insights from the petabytes of data it has previously struggled to extract value from. In one example, Royal Dutch Shell saved almost $10 million on a drilling project in Argentina thanks to ‘virtual drilling’ technology.

As costs fall, spending on HPC is increasing. More than $35 billion was spent on HPC in 2016, and that is forecast to reach almost $44 billion by 2021, according to Intersect360 Research. A significant amount of this will go towards companies attempting to provide virtualised HPC in the cloud.

However, in a recent interview with Computer Weekly, Francis Lam, the director of HPC product management at Huawei, said that the most demanding HPC tasks still require dedicated infrastructure. He told the magazine: “Users with higher computing demands [that] are currently running dedicated HPC infrastructure will see little benefit in decommissioning their systems in favour of dedicated cloud resources as their workloads are vastly different.”

Heavier workloads require scale, performance and customisable, attuned hardware – all things that are hard to deliver on a virtualised basis across the public cloud and that typically call for dedicated HPC infrastructure. Mr Lam acknowledged that cloud capacity is useful for “bursting” – short periods when local resources run short and the cloud can provide extra capacity.
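To make that bursting idea concrete, here is a minimal sketch in Python, using entirely hypothetical names and capacities rather than any particular scheduler's API: jobs stay on the dedicated cluster while its capacity lasts, and the overflow is sent to cloud capacity for a short period.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    node_hours: float  # rough estimate of the compute the job needs

# Assumed size of the dedicated, on-premises cluster (illustrative only).
ON_PREM_CAPACITY_NODE_HOURS = 1_000.0

def place_jobs(jobs: list[Job]) -> dict[str, str]:
    """Assign each job to the dedicated cluster while capacity remains,
    then 'burst' the rest to the cloud."""
    placements: dict[str, str] = {}
    used = 0.0
    for job in jobs:
        if used + job.node_hours <= ON_PREM_CAPACITY_NODE_HOURS:
            placements[job.name] = "dedicated HPC"
            used += job.node_hours
        else:
            placements[job.name] = "cloud burst"
    return placements

if __name__ == "__main__":
    queue = [
        Job("seismic-imaging", 600),
        Job("drug-screening", 300),
        Job("crash-simulation", 250),
    ]
    for name, target in place_jobs(queue).items():
        print(f"{name} -> {target}")

In practice the decision would weigh cost, data movement and queue wait times rather than a single capacity figure, but the principle is the same: dedicated infrastructure carries the steady load, and the cloud absorbs the peaks.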

The other point, of course, is that although HPC costs are coming down, there are still a lot of organisations that cannot afford dedicated HPC infrastructure. For these cases, a cloud-based service that makes dedicated infrastructure (such as bare metal) available on an accessible and flexible basis may be the best of both worlds.

Writing on this blog last month, Spencer Lamb noted that HPC use in the public sector has grown significantly over the last 2-3 years and argued that the G-Cloud 10 framework – a marketplace for cloud firms to sell their services to public sector bodies – will make it even easier for those organisations to access HPC.

The cloud capabilities of HPC are growing all the time – and doing so at pace – which is encouraging given the technology's potential to reach a far wider audience. For the time being, however, the heaviest workloads that need genuine HPC will require dedicated HPC infrastructure and environments – and the public cloud isn't the best home for these kinds of intensive deployments.

