Industrial HPC Solutions: Modeling & Simulation

HPC Engineering


In the early days of industrial high-performance computing (HPC), modeling and simulation (M&S) was, in many ways, the stalwart -- the domain where companies could most readily leverage advanced computing resources.

Today, traditional HPC M&S is converging with artificial intelligence (AI), changing the solution set significantly. Further, advances in processing power -- now seemingly constant -- are allowing more sophisticated models to be run in less time.

This article briefly explores the history of HPC in M&S, then provides examples of successes both past and recent. Finally, it offers a more technical look at recent benchmarking that touches on AI and newer processing options.

History

For decades, HPC has been at the forefront of improving the sophistication and results of M&S. Many large manufacturing companies, in transportation and agriculture in particular, have long leveraged HPC in their M&S R&D.

Compute power in the mid-1980s, when M&S and HPC began making great strides together, was a small fraction of what it is now, but it was high-performance for its time. Companies like Caterpillar, Rolls-Royce, John Deere, Boeing, Lockheed Martin, FMC, Eli Lilly, Ford, and many others with a global reach utilized HPC compute and application solutions to drive more efficient and effective simulations.

M&S HPC Success

Successful M&S engagements over many years with the aforementioned companies, as well as many others, have improved applied and corporate research results. Two years ago, ExxonMobil, the oil and gas giant, leveraged HPC to model global oil reservoir opportunities. Through optimization and scaling to more than 700,000 threads, a three-month runtime was reduced to ten minutes, yielding a return on investment (ROI) of more than US$1 billion.
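To put those runtimes in perspective, the quoted figures imply a speedup of roughly four orders of magnitude. The back-of-the-envelope arithmetic below is a sketch only; it assumes "three months" means about 90 days, since the exact baseline was not published.

```python
# Back-of-the-envelope speedup from the quoted ExxonMobil figures.
# Assumes "three months" ~ 90 days; the exact baseline was not published.
baseline_minutes = 90 * 24 * 60   # three months expressed in minutes
optimized_minutes = 10            # reported optimized runtime

speedup = baseline_minutes / optimized_minutes
print(f"Approximate speedup: {speedup:,.0f}x")  # ~12,960x
```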

“We now have the ability to use hundreds of thousands of processors simultaneously, which can result in a substantial improvement to reservoir management workflows. We can make robust business decisions in a fraction of time.” - Jayme Meier, ExxonMobil Upstream Research, Vice President of Engineering

ExxonMobil created a 53-second video for social media to promote the result, paired with a more detailed three-minute video describing the challenge and the achievement.

Today’s M&S

On 20 March, HPCwire published New iForge GPUs Enable Faster Models, Simulations, which detailed benchmarking performed on new NVIDIA V100 GPUs in NCSA’s iForge cluster, a system dedicated to industrial workloads. The GPU queue was added recently to tackle larger data challenges in less time. The GPUs complement the cluster’s Intel Skylake nodes, so each workload can be matched to the hardware that suits it best.

As for the performance bump, the article states: “EDEM, a leading commercial Discrete Element Modelling software used for particle and bulk materials research, saw significant speedups on the iForge GPU queue when compared to Skylake CPUs alone, especially when scaling across GPUs. In one series of comparisons, simulating 0.01 seconds of particle flow for 17 million particles in a slowly rotating box took 6.9 hours on 12 CPU cores, while using four GPU cards and just four CPU cores lowered the walltime to only 13.5 minutes. After accounting for ancillary processes, the EDEM solver speedup was shown to reach 60 times! NCSA’s high-memory 32 GB GPUs also demonstrated computational advantages over standard GPUs: EDEM tests that had previously run out of memory with 16 GB GPUs were able to complete on the NCSA queue.”
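The walltime figures alone imply roughly a 30x end-to-end speedup; the gap between that and the quoted 60x comes from the ancillary (non-solver) work included in both runs. A minimal sketch of the arithmetic, using only the figures quoted above:

```python
# Walltime speedup from the quoted EDEM benchmark figures.
cpu_hours = 6.9     # 12 CPU cores, 17M particles, 0.01 s of particle flow
gpu_minutes = 13.5  # 4 GPU cards + 4 CPU cores, same problem

walltime_speedup = (cpu_hours * 60) / gpu_minutes
print(f"End-to-end walltime speedup: {walltime_speedup:.1f}x")  # ~30.7x
# The article's 60x figure refers to the EDEM solver itself,
# after subtracting ancillary processing common to both runs.
```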

The performance improvements continue: “In the field of Finite Element Analysis (FEA), NCSA tested major software Abaqus/Standard using a challenging non-linear problem with high fidelity meshes and almost 17 million degrees of freedom. Abaqus/Standard showed a two-time speedup using two CPU nodes each accelerated with four GPU cards versus two CPU nodes alone. Seid Koric, Technical Assistant Director of NCSA and a Research Professor in the Mechanical Science and Engineering Department at Illinois, stated that to the best of his knowledge, ‘this is the first time that a commercial CAE code has exhibited this level of acceleration on multiple nodes with multiple GPU cards.’”
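A 2x overall gain is consistent with Amdahl’s law when only part of the solver offloads to the GPUs. As an illustration only (the article does not report the GPU-resident fraction of Abaqus/Standard), if roughly 60% of the runtime sits in GPU-acceleratable linear algebra and that portion runs, say, 6x faster on the GPUs, the overall speedup works out to about 2x:

```python
# Amdahl's-law illustration of why multi-GPU Abaqus/Standard shows ~2x overall.
# The 60% fraction and 6x kernel speedup are illustrative assumptions,
# not figures from the article.
def amdahl(accel_fraction: float, kernel_speedup: float) -> float:
    """Overall speedup when only accel_fraction of runtime is accelerated."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / kernel_speedup)

print(f"Overall speedup: {amdahl(0.6, 6.0):.1f}x")  # 2.0x
```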

We have touched on the confluence of AI in M&S earlier, and the article continues: “Machine learning also presents a strong use case for GPUs, as training machine learning models using tools like TensorFlow requires a tremendous amount of data processing that GPUs can help accelerate. NCSA ran CIFAR10 CNN benchmarks with TensorFlow using one or more GPU cards. In testing where TensorFlow employed a single GPU, it enabled processing of 10 times more training images/second than an equivalent CPU node. When scaling across all four GPU cards on one node with multi-tower programming, results showed a 40-time increase in training samples processed per second compared to one CPU node. The superior processing of GPUs enables better-trained, more reliable machine learning models in less time.”
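For readers who want to reproduce the flavor of this benchmark, the sketch below trains a small CNN on CIFAR-10 across all visible GPUs. It uses tf.distribute.MirroredStrategy, the TensorFlow 2 successor to the multi-tower pattern mentioned above; the model size, batch size, and epoch count are arbitrary choices, not NCSA’s benchmark configuration.

```python
# Minimal multi-GPU CIFAR-10 CNN sketch (TensorFlow 2).
# MirroredStrategy replicates the model across all visible GPUs,
# standing in for the TF1-era "multi-tower" approach in the benchmark.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print(f"Replicas in sync: {strategy.num_replicas_in_sync}")

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Scale the global batch size by the number of replicas (GPUs).
batch_size = 128 * strategy.num_replicas_in_sync
model.fit(x_train, y_train, batch_size=batch_size, epochs=2)
```

Throughput comparisons like the 10x and 40x figures quoted above are typically taken from the training images/second each configuration sustains, so timing this script on CPU, one GPU, and four GPUs gives a rough analogue of the benchmark.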

Summary

HPC use in M&S, with its long legacy, remains strong today. With AI solutions added to the equation, larger and more sophisticated models are possible in less time. The benchmarking above demonstrated significant performance gains; for industry, that means more intelligence delivered faster, and substantial ROI.

The next five years will likely bring more of the same, which is a good thing -- more compute, deeper AI confluence, and more M&S tools -- all leading to more sophisticated solutions in less time.

Brendan McGinty is Director of Industry for the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.


