Deploying HPC within Financial Services: Which path should your firm follow?



Against a backdrop of accelerated industry change, forward-thinking banks and fintech firms are increasingly turning to high performance computing (HPC) to secure a competitive edge and mitigate risk. So which path should your firm follow when deploying intensive HPC workloads?

Across the board, many firms now view their HPC strategies as a means of addressing both commercial threats and lucrative opportunities. Computationally intensive workloads such as grid compute, big data analytics, artificial intelligence (AI) and associated machine learning applications are gaining massive traction in response to these ambitions, and HPC is providing the raw ‘horsepower’ behind all of them.

In the coming weeks our partners at The Realization Group will publish two Insight Papers focused on optimising HPC within financial services:

  1. The first paper will look at the infrastructure choices and considerations when deploying high-intensity HPC workloads.
  2. The second paper will delve deeper into specific use cases around powering AI in quantitative analytics.

Together, these papers will tackle some of the most pressing strategic questions firms face when deploying HPC and choosing between data center hosting, whether in-house or outsourced, and use of the cloud through either virtualised or bare metal servers.

So why is an optimised HPC strategy so critical for the sector? For a start, pre-trade analytics, quantitative analysis and risk management functions, as well as many other niche forms of computational assessment, require vast amounts of compute power and specialised infrastructure. As markets have become more complex, asset classes more diverse and regulatory requirements more intense, a coherent infrastructure strategy for HPC has become all the more important. Recent advances in, and adoption of, AI and machine-learning-based applications only bolster the case.

AI, in particular, is fast becoming one of the prime areas of focus in financial markets. Quantitative investment firms are always on the hunt for new sources of alpha or ways to manage risk more effectively. To power quant analytics, they are using more data than ever, and increasingly that means alternative data sets. How do firms ensure the infrastructure they are running is fit for purpose, operates efficiently and keeps the total cost of operations (TCO) in check so as to maximise reward? Can firms even spark a leap in performance by adopting more innovative deployments such as physical bare-metal approaches? And are there security implications?

There are also broader questions to consider based on firms’ business models. For instance, ultra-low latency applications are by default co-located in exchange data centers, with latency-sensitive applications hosted in proximity. With the rise of quantitative strategies and AI techniques such as machine and deep learning, HPC-optimised locations carry greater weight in the hosting strategy, ensuring compute resources are managed cost-efficiently and can scale.

The infrastructural choices firms make can have profound impacts on performance, reliability, flexibility and scalability, to name just a few considerations. As firms juggle long lists of commercial and operational priorities – from time to market to customer requirements to balance sheets – the decisions they take early in the process become crucial.

Please be sure to catch The Realization Group’s upcoming Insight Papers and industry round table events, where you’ll hear from a range of senior industry figures and HPC specialists.

Upcoming Insight Paper publications

Thursday 1st March: Optimising Compute in Financial Services

  • Options for deployment: pros and cons of cloud, colo, bare metal and hybrid solutions
  • Speed to market, performance, reliability, flexibility, scalability, access to the cloud, CAPEX & OPEX
  • Performance considerations of Infrastructure-as-a-Service versus bare metal
  • Types of jobs being run and their influence on infrastructure decisions

Wednesday 18th April: Powering AI in Quant Analytics

  • Deploying the right hybrid cloud infrastructure to maximise use of alternative sources of data and AI
  • Types of jobs being run in terms of workload, data sets and compute resources required
  • Factors such as costs versus rewards, timeliness of results and data security

We’d like to invite all our readers to contact us if they’d like to participate in the series and events we have planned. If you would like to know more, please contact me directly at stef.weegels@verneglobal.com


