Verne Global


28 December 2017

GPUs here, GPUs there, GPUs everywhere – Rumours from the trade show floor

Written by Bob Fletcher

Bob, a veteran of the telecommunications and technology industries, is Verne Global's VP of Strategy. He has a keen interest in HPC and the continuing evolution of artificial intelligence and Deep Neural Networks. He's based in Boston, Massachusetts.

For 30 years I’ve been looking for a reason to visit Oktoberfest in Munich, and this autumn I was only a couple of weeks late, attending the NVIDIA GPU Technology Conference (GTC) held on 10-12 October. GTC was branded as the place to “Learn how GPU computing is transforming everything from supercomputing and deep learning to HPC and VR”, and I have to admit, looking back now, it was far more fun than a few evenings in a bierkeller.

Graphics processing units (GPUs) are specialist chips originally designed for image processing. They contain thousands of small compute cores which, in their original role, perform image manipulation in parallel. Some years ago, imaginative computer science boffins started exploiting them for large-scale parallel data processing and the training of AI neural networks. That trend has blossomed: judging by the share of the GTC exhibit floor dedicated to AI, it now accounts for over half of what GPUs are used for.
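
To make that concrete, below is a minimal CUDA sketch of the original GPU workload (my own illustration, not anything demonstrated at GTC): one thread per pixel, so thousands of small cores brighten an image in parallel.

#include <cuda_runtime.h>
#include <cstdio>

// One GPU thread per pixel: each thread brightens a single pixel,
// so thousands of cores work on the image simultaneously.
__global__ void brighten(unsigned char *img, int n, int delta) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        int v = img[i] + delta;
        img[i] = v > 255 ? 255 : v;                 // clamp to valid range
    }
}

int main() {
    const int n = 1 << 20;                          // a ~1M-pixel greyscale image
    unsigned char *img;
    cudaMallocManaged(&img, n);                     // memory visible to CPU and GPU
    for (int i = 0; i < n; i++) img[i] = 100;

    // Launch enough 256-thread blocks to cover every pixel.
    brighten<<<(n + 255) / 256, 256>>>(img, n, 40);
    cudaDeviceSynchronize();

    printf("first pixel after brightening: %d\n", img[0]);  // 140
    cudaFree(img);
    return 0;
}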

The cool Mercedes-Benz concept car, designed with (and driven by) GPUs

The Munich event was my second GTC of the year; I also attended the premier event in San Jose last May. Here is some insight into the varied applications of GPUs:

AI Neural Network Training - As detailed in my previous blogs, neural network training is the current monster compute application, often consuming megawatts of power across tens of racks of servers. Off-loading much of the number crunching to GPUs yields many-fold improvements in training time, because thousands of small compute cores now work in parallel versus the tens of cores in a typical Intel CPU. There was a myriad of GPU-augmented implementations to meet all High-Performance Computing (HPC) requirements. The biggest was the Nvidia DGX-1 (shown below), a 170 TFLOPS server with eight P100 GPUs costing $129,000. IBM’s Power8 server with two P100 GPUs connected via the native NVLink interconnect, costing $65,000, was also prominent. Many other server manufacturers showed GPU-augmented HPC servers, usually with the GPUs attached over the PCI Express bus.

Below: Jensen Huang, Nvidia’s CEO, presenting the DGX-1
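
To see why the speed-up is so large, consider that the heart of training is millions of identical parameter updates. The hedged CUDA sketch below (my own, with an invented model size, not Nvidia's code) maps one thread to each parameter, which is exactly the shape of work a GPU's thousands of cores are built for.

#include <cuda_runtime.h>
#include <cstdio>

// One thread per model parameter: a plain stochastic-gradient-descent step,
// the embarrassingly parallel arithmetic that dominates training time.
__global__ void sgd_step(float *w, const float *grad, float lr, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) w[i] -= lr * grad[i];   // update parameter i independently
}

int main() {
    const int n = 1 << 22;             // ~4M parameters (an invented model size)
    float *w, *g;
    cudaMallocManaged(&w, n * sizeof(float));
    cudaMallocManaged(&g, n * sizeof(float));
    for (int i = 0; i < n; i++) { w[i] = 1.0f; g[i] = 0.5f; }

    sgd_step<<<(n + 255) / 256, 256>>>(w, g, 0.01f, n);
    cudaDeviceSynchronize();

    printf("w[0] after one step: %.3f\n", w[0]);  // 1.0 - 0.01 * 0.5 = 0.995
    cudaFree(w); cudaFree(g);
    return 0;
}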

Real-time parallel computing - There were many real-time applications of GPUs at GTC, but the two that caught my attention were financial trading risk analysis and autonomous vehicle steering. The trading risk application monitored all trades for compliance with a company’s guidelines and warned the trading desk management if something strange occurred. The application was complex, and only by exploiting the parallelism of the GPU could it meet its latency requirements. The vehicle steering application solved a problem I had forgotten about until teaching my son to drive: teenagers and robots tend to drive with very abrupt steering inputs, which almost always keep the car on the road but make for an uncomfortable ride. This application monitored all the autopilot instructions, along with many of the source metrics, and determined how quickly the car needed to regain its desired track. When there was no danger it would gently steer back to the desired line, avoiding high lateral accelerations; otherwise it would apply ample steering while avoiding any loss of control. Once again, this task required the parallel compute capabilities of the GPU.
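
As a flavour of the trading-risk idea, a GPU can check every live trade against its limit simultaneously, one thread per trade. The CUDA sketch below is hypothetical: the Trade fields and the limit rule are invented for illustration, not the vendor's actual logic.

#include <cuda_runtime.h>
#include <cstdio>

// A live trade and its risk limit (field names are invented).
struct Trade { float notional; float limit; };

// One thread per trade: flag any trade whose notional exceeds its limit.
__global__ void check_trades(const Trade *trades, int *flags, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) flags[i] = trades[i].notional > trades[i].limit ? 1 : 0;
}

int main() {
    const int n = 1 << 16;                   // 65,536 live trades
    Trade *t; int *f;
    cudaMallocManaged(&t, n * sizeof(Trade));
    cudaMallocManaged(&f, n * sizeof(int));
    for (int i = 0; i < n; i++) {
        t[i].notional = (float)(i % 1000);   // synthetic trade sizes
        t[i].limit = 500.0f;                 // one flat limit for the demo
    }

    check_trades<<<(n + 255) / 256, 256>>>(t, f, n);
    cudaDeviceSynchronize();

    int breaches = 0;
    for (int i = 0; i < n; i++) breaches += f[i];
    printf("trades breaching their limit: %d\n", breaches);
    cudaFree(t); cudaFree(f);
    return 0;
}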

Image processing - I’m an animation luddite who enjoys the “Wallace and Gromit” (see below) clay-model movies. The GPU-based animation at GTC caused me to reflect on this and dream about how striking Wallace and Gromit would look if reworked with the image-processing technology on show. Despite the clear visual improvements now possible, there is something fetchingly artistic and simple about the clay animation. Or perhaps it’s that I was schooled surrounded by Yorkshire accents and plenty of Yorkshire tea...

Virtual Reality - At GTC it was clear that GPUs have transformed VR from a niche technology for synthetic vision and gaming into a real productivity tool. I got the opportunity to experience VR augmentation of CAE automotive design tools. It was amazingly cool to look at my Mercedes dream car and, with a snap of the fingers, disassemble it and walk between the component parts, assessing how they all interact. This visualisation, combined with structural simulation, will make mechanical design attractive to folks like me who need the almost-instant gratification of software development!

The Future - The applications for GPUs will continue to multiply as the low-power variants become pervasive in all types of AI-augmented devices, from fast-food preparation to autonomous vehicles, while the high-power ones push the boundaries of supercomputing. When I was recently at Supercomputing ’17 in Denver, traditionally the territory of Cray and IBM, Nvidia GPUs were omnipresent there too.

Below: DGX-1s delivering 22.4 PFLOPS

