Verne Global


13 September 2017

Spotting Elephant Computer Applications

Written by Bob Fletcher

Bob, a veteran of the telecommunications and technology industries, is Verne Global's VP of Strategy. He's based in Boston, Massachusetts.

Since joining Verne Global I’ve been recalibrating myself to spot hugely compute-intensive applications - the ‘Elephants’ of the application Serengeti - which would benefit from a Nordic data center’s free air-cooling and abundant, low-cost renewable power.

My previous professional life on the internet backbone was easier: spotting really-large bandwidth users was a matter of finding the combination of video and large end-user communities. Finding the customers who would pay hard cash for good-quality connectivity was a different matter. Eventually the largest video sources innovated away from the colossal costs of video distribution by building their own custom content delivery networks (CDNs). Today Google, Amazon, Facebook and Netflix all have custom-designed CDNs to meet their video distribution needs.

So, back to the search for my herd of elephants, which in theory should be easy… but in reality is the opposite. It’s counterintuitive, but alas, 99% of computer applications just don’t use that much power. Many people have told me we must come to Iceland because "we have tens of thousands of people on our email system". Sadly, you can easily run thousands of email accounts from a modern laptop. Masses of users doing tasks irregularly only consume modest compute resources.

So, who are these compute power beasts? Like the internet video giants, they share at least a couple of traits: they need to compute things in huge volumes, and they do it on an ongoing basis.

Vehicle crash test simulation is a major one from the high performance computing (HPC) segment. Using finite element analysis (FEA) techniques, the progressive time-series deformation of the vehicle is calculated; PAM-CRASH and LS-DYNA are the industry-leading applications for this. It often takes multiple days for each time series to be generated using 10+ racks of servers. Other applications in this area are computational fluid dynamics (CFD) for designing new aircraft wings or rocket engines, and structural analysis for mechanical designs.
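That "progressive time-series deformation" boils down to stepping the physics forward in tiny time increments: at each step, forces on every node of the mesh become accelerations, and the mesh is advanced. A toy one-dimensional sketch of the idea - a single spring-mass element rather than a real FEA mesh, with all values chosen purely for illustration:

```python
# Toy explicit time integration of one spring-mass element.
# Real crash codes do this for millions of nodes over millions of
# time steps, which is where the 10+ racks of servers go.
def simulate(mass=1.0, stiffness=100.0, x0=0.1, steps=1000, dt=0.001):
    x, v = x0, 0.0                  # displacement (m), velocity (m/s)
    history = []
    for _ in range(steps):
        a = -stiffness * x / mass   # Hooke's law: F = -kx, a = F/m
        v += a * dt                 # advance velocity
        x += v * dt                 # advance displacement
        history.append(x)
    return history

traj = simulate()
print(len(traj))  # 1000 time-series samples
```

Multiply the node count by millions and the step count by millions, and the multi-day cluster runtimes quoted above follow naturally.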

The tricky part is that these HPC applications are not always compute gluttons. Fluent, a CFD application from Ansys, can design both new aircraft and new shower curtain rails - the latter will hardly heat your laptop. A large spread of design parameters is a major clue to complexity: an aircraft wing must behave from close to walking speed right up to 600 mph or more, whereas a shower curtain rail faces fairly stable environmental conditions unless your shower is at Everest Base Camp! A colleague shared that his recent rocket engine simulation consumed 1,000,000 server core hours. So far, I’ve identified about 70 HPC applications that can drive massively intensive compute requirements, ranging from structural design to weather and hurricane modeling.
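Core hours translate directly into wall-clock time for a given cluster size, which is how a figure like that rocket engine run can be sanity-checked. A quick back-of-the-envelope sketch (the cluster sizes are assumptions for illustration, not details from the source):

```python
# Back-of-the-envelope: how long does a 1,000,000 core-hour job take?
core_hours = 1_000_000

for cores in (1_000, 10_000):   # hypothetical cluster sizes
    hours = core_hours / cores
    print(f"{cores:>6} cores -> {hours:.0f} h ({hours / 24:.1f} days)")
```

Even on a hypothetical 10,000-core cluster, that single simulation occupies the whole machine for about four days - exactly the kind of sustained, bulk computation that marks an Elephant.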

The second, and perhaps much more compute-hungry, area busting data center power budgets around the world is AI/machine learning/deep learning and neural network training. Neural networks use similar logic to human brains and are quite efficient at doing so. Building the neural network is a different story - rather like children learning the 12×12 multiplication table, it involves lots of trial and error. Training a network on images (which today are often 12 megapixels each) often involves millions of images, many of which show the same object from differing perspectives. This training occupies large compute clusters for many days - and often weeks without GPU augmentation - frequently with results unsatisfactory for deployment to a product, leading to tweaks for the next training attempt.
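At its core, that trial-and-error training is just a loop that nudges weights to reduce error over many passes through the data - it is the sheer number of weights, images and passes that makes it compute hungry. A minimal single-weight sketch in plain Python (illustrative toy data, no GPUs involved):

```python
import random

# Toy dataset: learn the rule y = 2x from five clean samples.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = random.uniform(-1, 1)           # one trainable weight
lr = 0.01                           # learning rate

for epoch in range(200):            # many passes = lots of compute
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad              # gradient descent step

print(round(w, 2))  # -> 2.0
```

A production network repeats exactly this update for hundreds of millions of weights across millions of 12-megapixel images, which is why the clusters run for days or weeks.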

Facebook recently tested using 256 GPUs to reduce an image training task from 29 hours to an hour, enabling the design engineer to iterate many times each day. I can guarantee you there is a very happy GPU salesperson out there, as these GPUs retail for about $10,000 each!
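The appeal is simple arithmetic: shrinking the run time multiplies how often an engineer can iterate, at a very visible hardware price. A rough sketch of that trade-off using the figures quoted above (the near-linear scaling behind them is an assumption, not something measured here):

```python
baseline_hours = 29       # quoted single-run training time
scaled_hours = 1          # quoted result on 256 GPUs
gpus = 256
gpu_price = 10_000        # quoted retail price per GPU, USD

speedup = baseline_hours / scaled_hours
iterations_per_day = 24 // scaled_hours
hardware_cost = gpus * gpu_price

print(speedup, iterations_per_day, hardware_cost)
# prints: 29.0 24 2560000
```

A 29x speedup turns one experiment a day into two dozen - and puts $2.56M of GPUs on someone's purchase order.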

The team behind ‘DeepL’ – an AI-driven, neural network translation service – announced recently: “To train our neural translation networks, we have built a supercomputer in Iceland (at Verne Global), capable of performing more than 5,100,000,000,000,000 floating point operations per second. This would rank 23rd in the current list of the world’s top 500 supercomputers.”

DeepL is the perfect type of Elephant application: one that requires specialist, optimised technical infrastructure combined with access to large amounts of power, free cooling and a super-reliable electricity grid. It’s no surprise, then, that DeepL’s waterhole of choice is Verne Global’s campus in Iceland. Now the search is on for more massively intensive applications that can follow in its footsteps.

