Cloud options are challenging dedicated HPC infrastructure

HPC Tech Trends


Speaking at a conference in November last year, Bernd Mohr of the Jülich Supercomputing Centre, the event’s general chair, described HPC as “a new gold rush”. He said: “We are the modern pioneers, pushing the bounds of science for the betterment of society.” That’s a pretty grandiose claim, but much of the work being done in HPC does have groundbreaking potential. Its use in the discovery of new medicines, for example, has allowed pharmaceutical firms to massively accelerate their research.


Intel says that manufacturers are using HPC to move towards Industry 4.0, which will streamline the process of product design and manufacturing, with increased personalisation based on massive amounts of data. HPC is a vital component of manufacturing a “run of one” product – either a rapid prototype or a hyper-personalised product.

For example, Intel says while one of its customers is processing 25,000 data points per second to predict and improve the reliability of its manufacturing process, another is working with 450 million data points per second. And as Nick Dale wrote on this blog in June, HPC is helping the oil and gas industry to draw insights from the petabytes of data it has previously struggled to extract value from. In one example, Royal Dutch Shell saved almost $10 million on a drilling project in Argentina thanks to ‘virtual drilling’ technology.

As costs fall, spending on HPC is increasing. More than $35 billion was spent on HPC in 2016 and that is forecast to reach almost $44 billion by 2021, according to Intersect360 Research. A significant amount of this will go towards companies attempting to provide virtualised HPC in the cloud.
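As a rough sanity check on those figures (taking the $35 billion and $44 billion numbers and the five-year window exactly as quoted above), the implied compound annual growth rate works out at a little under 5% a year. A minimal Python sketch of the arithmetic:

    # Compound annual growth implied by the Intersect360 forecast quoted above
    start, end, years = 35.0, 44.0, 5      # $bn spent in 2016, $bn forecast for 2021
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")     # roughly 4.7% per year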

However, in a recent interview with Computer Weekly, Francis Lam, the director of HPC product management at Huawei, said that the most demanding HPC tasks still require dedicated infrastructure. He told the magazine: “Users with higher computing demands [that] are currently running dedicated HPC infrastructure will see little benefit in decommissioning their systems in favour of dedicated cloud resources as their workloads are vastly different.”

Heavier workloads require scale, performance and customisable, attuned hardware – all things that are hard to deliver over virtualised public cloud infrastructure rather than dedicated HPC systems. Mr Lam acknowledged that cloud capacity is useful for “bursting” – short periods when demand outstrips in-house resources and the cloud can supply the extra capacity.
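To make the bursting idea concrete, here is a minimal Python sketch (the capacity figure and function name are purely hypothetical, not drawn from any particular scheduler): jobs stay on the dedicated cluster while it has free nodes, and only the overflow during a demand spike is sent to rented cloud capacity.

    # Hypothetical cloud-bursting decision: prefer the dedicated cluster,
    # overflow to public-cloud capacity only when local nodes are exhausted.
    LOCAL_NODES = 512                       # assumed size of the dedicated cluster

    def place_job(job_nodes, nodes_in_use):
        """Return 'local' if the dedicated cluster can take the job, else 'cloud'."""
        if nodes_in_use + job_nodes <= LOCAL_NODES:
            return "local"
        return "cloud"                      # burst: temporary overflow to the cloud

    # A 64-node job arriving while 480 nodes are already busy gets burst out
    print(place_job(64, 480))               # -> 'cloud'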

The other point, of course, is that although HPC costs are coming down, there are still a lot of organisations that cannot afford dedicated HPC infrastructure. For these cases, a cloud-based service that makes dedicated infrastructure (such as bare metal) available on an accessible and flexible basis may be the best of both worlds.

Writing on this blog last month, Spencer Lamb noted that HPC use in the public sector has grown significantly over the last 2-3 years and argued that the G-Cloud 10 framework – a marketplace for cloud firms to sell their services to public sector bodies – will make it even easier for those organisations to access HPC.

The cloud capabilities of HPC are growing all the time – and doing so at pace – which is encouraging, because they promise to spread this groundbreaking technology more widely. For the time being, however, the heaviest workloads that need genuine HPC will require dedicated infrastructure and environments – and the public cloud isn't the best home for these kinds of intensive deployments.


Written by Shane Richmond (Guest)


Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond

Related blogs

G-Cloud 10 makes accessing high performance computing easier than ever...

As the Director of Research at Verne Global I spend a lot of my time working with our colleagues and partners within the UK’s publicly funded universities and research and science community. I’m privileged to get to see some of the truly innovative and inspiring research that is taking place using high performance computing (HPC), and I’m further encouraged by how Verne Global is helping to make it happen. This is why I was delighted to see Verne Global’s participation in the G-Cloud 10 (G10) framework confirmed last week and indeed strengthened for 2018/19 – enabling more public sector bodies to enjoy the benefits of our on-demand true hpcDIRECT platform.



Explainable AI

SC18 here in Dallas is proving once again to be a fascinating melting pot of HPC insights and observations, and it's intriguing to see the continuing convergence of AI into the supercomputing ecosystem. Along these lines I started to think about the movement towards 'Explainable AI'. Being able to explain and understand how models work when making predictions about the real world is a fundamental tenet of science. Whether solving equations in a dynamic system for precise answers or using statistical analysis to examine a distribution of events, the results sought from these methods are intended to increase our clarity and knowledge of how the world works.



Classic HPC v New HPC – Rumours from the trade show floor

Sometimes the combination of networking at a trade show and catching an insightful presentation provides valuable insight into market dynamics. Earlier this year I attended the HPC and Quantum Computing show in London and, following this, watched Addison Snell’s (CEO of Intersect360 Research) “The New HPC” presentation from the Stanford HPC Conference. Both confirmed the HPC suspicions I have garnered over the last year.

