As I speak to customers, I am struck by a profound misconception about the banking and financial services industry: traditional as these firms may be, technology laggards they are not. Innovative companies are creating and processing data at industrial scale. They are deploying machine learning for everything from risk assessment models to personalised user experiences to fraud detection. The need to analyse these vast and complex data sets is resulting in applications that are both data- and power-hungry.
Machine learning is a branch of artificial intelligence in which systems learn and improve automatically from data, without being explicitly programmed for each task. Many of its most successful techniques, such as deep neural networks, use numerical methods loosely inspired by the architecture of the brain to learn skills and surface patterns you wouldn't normally see. The amount of compute required is orders of magnitude greater than previous generations of high performance computing (HPC).
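To make the idea concrete, here is a minimal sketch of that learning loop: a toy logistic regression trained by gradient descent to flag suspicious transactions. Every number, feature, and label below is invented purely for illustration; a real fraud detection model would use vastly more data and features.

```python
import math

# Toy "transaction" samples: (amount in thousands, hour of day / 24), label 1 = fraud.
# These values are made up for illustration only.
data = [((0.1, 0.5), 0), ((0.2, 0.6), 0), ((5.0, 0.1), 1),
        ((4.5, 0.05), 1), ((0.3, 0.4), 0), ((6.0, 0.2), 1)]

w = [0.0, 0.0]   # weights, one per feature
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    """Sigmoid of a weighted sum: the model's estimated fraud probability."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# The model is never given fraud rules; it "learns" by repeatedly nudging
# its weights to shrink the gap between its predictions and the labels.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(predict((5.5, 0.1))))   # large late-night transaction -> likely fraud
print(round(predict((0.15, 0.5)))) # small daytime transaction -> likely legitimate
```

The point of the sketch is the loop, not the model: production systems replace these few lines with deep networks trained over billions of samples, which is exactly where the HPC-scale compute demand comes from.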
According to analyst firm Intersect360, one of every eight dollars spent on HPC goes towards the financial sector. That is a significant investment in a technology that is just beginning to make its imprint. Each machine learning application deployed creates a ripple effect across the enterprise architecture. According to The Next Platform, these applications start small but quickly affect compute, storage and network performance for the system as a whole: “Big banks mean big legacy and while many are unlikely to scrap their mainframes and large enterprise software systems, the cutting edge inspired by hyperscale, AI, and even some HPC will lead the charge in the next few years.”
Those firms that are further down the HPC path find themselves weighing how to deploy these workloads while balancing their legacy systems and cloud solutions. My colleague, Tate Cantrell, recently discussed one of the ways financial organisations are managing this transition – by focusing on the application. “For example, a customer may want to test the limits of a particular algorithm in an effort to discover the optimal economics surrounding the new application. Once they do the stress test on that algorithm, they will settle on the amount of hardware that is needed for the long term. They’ll buy that dedicated server infrastructure and deploy it in a location, like ours, where they have control over their servers. But new applications and new business requirements require the flexibility to grow and to adapt.”
Another piece of the puzzle to be considered is sustainability. As I mentioned at the start, these machine learning applications are not just data hungry, but power hungry too. Large scale AI clusters can run 24/7 at roughly 15–50 kW per rack – consuming a lot of power and generating a lot of heat. Cooling that hot exhaust air can be a significant operational expense. The best way for a financial services company to mitigate this cost is by partnering with a data centre organisation that offers true sustainability.
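To put those rack densities in perspective, a back-of-envelope calculation is instructive. The figures below – a 30 kW rack, a PUE (power usage effectiveness) of 1.4, and an electricity price of $0.10 per kWh – are illustrative assumptions, not measured or vendor data.

```python
# Back-of-envelope annual energy cost for one AI rack running 24/7.
# All input figures are illustrative assumptions.
rack_power_kw = 30        # within the 15-50 kW per-rack range discussed above
pue = 1.4                 # power usage effectiveness: total facility power / IT power
price_per_kwh = 0.10      # assumed electricity price, USD
hours_per_year = 24 * 365

it_energy_kwh = rack_power_kw * hours_per_year       # energy drawn by the servers alone
total_energy_kwh = it_energy_kwh * pue               # adds cooling and facility overhead
annual_cost = total_energy_kwh * price_per_kwh

print(f"IT load: {it_energy_kwh:,.0f} kWh/year")
print(f"With overhead (PUE {pue}): {total_energy_kwh:,.0f} kWh/year")
print(f"Annual energy cost: ${annual_cost:,.0f}")
```

Under these assumptions a single rack costs on the order of $37,000 a year in energy, of which roughly $10,000 is cooling and facility overhead – which is why both the grid's power price and the site's cooling efficiency matter so much at cluster scale.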
At Verne Global, we are powered by the Icelandic grid. Iceland is the only Nordic country that generates 100% of its power from renewable resources. Added to that, we have the ability to secure power contracts of up to 30 years, which means financial services companies can lock in fixed power costs for the long term.
Finally, there is a real monetary incentive for financial services companies to begin ramping up their AI projects. According to McKinsey, AI technologies could potentially deliver up to $1 trillion of additional value each year to the global banking industry. It is well worth financial institutions' time and investment to start mapping out their long-term AI and machine learning strategies today.
Please feel free to reach out to the team at Verne Global at [email protected] to learn how we can help.