Verne Global


15 December 2017

hpcDIRECT Interview with Dominic Ward

Written by Andrew Donoghue (Guest)

Andrew Donoghue is a technology writer specialising in data centers and critical infrastructure. He has worked for analyst companies such as 451 Research and has also held senior editorial roles at various tech publishing companies. You can follow Andrew at: @andrew_donoghue

Verne Global recently launched its hpcDIRECT service that adds a range of new capabilities to the Iceland-based provider’s core colocation offerings.

Verne Global describes hpcDIRECT as a high performance computing (HPC)-as-a-service (HPCaaS) platform. Essentially, rather than the current colocation model, in which customers house their own HPC equipment in Verne Global’s facilities, hpcDIRECT enables customers to run workloads on dedicated HPC equipment operated by Verne Global. One of the main benefits of the bare-metal cloud service is that it enables customers to shift capital expenditure on costly HPC equipment into operating expenditure.

I was able to grab some time with managing director Dominic Ward to discuss the motivation behind hpcDIRECT and the opportunities and challenges of integrating the new service into the existing business.

Can you talk me through the genesis of hpcDIRECT? Did you see an opportunity in the market to exploit, or was it more customer-led?

This is very much a customer-led initiative. Our existing customers were asking us if we could provide this type of solution to them. Verne Global has absolutely been focused on colocation services since it began operations in 2011. But one of the things we have heard time and time again – particularly over the last 12 months – is, ‘Are you able to provide us HPCaaS as well as the infrastructure?’

How long has the service been in development?

It has been a year-long development cycle. However, we really pushed into the development of the actual environment and the orchestration over the last four months. Where we are today is that we are currently in testing and beta with a number of our customers, including a couple who aren’t even existing colocation customers. That is the great thing about this – it has broadened the customer opportunity for us.

Do you have any specific examples of how hpcDIRECT helps customers?

One of our automotive customers came to us at the beginning of 2017 and said ‘We love what you guys do and we want to grow with you but it would be great if you could also do something a bit more up the technology stack’.

That particular automotive customer has been running crash-test simulations as one of their major application types. The direct question was: could we provide them an environment that would let them increment the compute available for that particular application, and the workflows attached to it, throughout the year? They want to tackle the massive procurement cycle that a lot of these customers have, where there are huge amounts of resource, time, effort, energy and distraction involved.

What are some of the other benefits?

hpcDIRECT also starts to move some of the responsibility at the server level onto us. We are happy to take that responsibility, as we know we can provide that level of service, so customers don’t have to think about nodes going wrong. It also moves capex to opex, which is a driver for some customers, especially those within the financial services industry, and specifically our hedge fund customers.

One of the other major benefits is that, because we have existing colocation customers of substance with existing footprints, we can cross-connect directly into their environments in a secure manner. It looks just like an extension to their colocation racks in our data center.

How is the new service being priced?

Because a lot of the community has adopted the standard metric of per-core-per-hour pricing – a metric really set in place by the public cloud vendors – we are pricing to our customers on that metric. We know that we are highly competitive on price, and that is because we have been able to optimise at the infrastructure level.
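To make the per-core-per-hour metric concrete, here is a minimal sketch of how a job would be costed under that model. The rate and job size below are purely hypothetical figures chosen for illustration, not Verne Global prices.

```python
# Hypothetical illustration of per-core-per-hour billing.
# The rate and job parameters are invented for the example.

def job_cost(cores: int, hours: float, rate_per_core_hour: float) -> float:
    """Cost of a single HPC job billed per core per hour."""
    return cores * hours * rate_per_core_hour

# e.g. a 512-core crash-test simulation running for 36 hours
# at an assumed rate of $0.05 per core-hour:
print(f"${job_cost(512, 36, 0.05):,.2f}")  # -> $921.60
```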

Iceland is a relatively remote location. Is latency over the wide area network an issue at all when it comes to providing HPCaaS?

There are no latency issues, as once the orchestration layer does its job and hands over the resources to that particular user, it is exactly the same as if it was in their own rack – it makes absolutely no difference. If anything, the thing that we have heard a few times is that this is actually running faster than some customers’ colocation footprints. Some of that is down to the fact that we currently have a very fast connection to it, and also it is newer equipment – the latest generation of Intel Xeon Skylake processors – as we only acquired that equipment in recent months, whereas the customer’s equipment might be a year or two old.

Does launching this service bring you into competition with the likes of AWS, Microsoft Azure, Cray and Markley and others offering cloud-based HPC services?

The hand-off here is bare-metal for now. There are other things in the pipeline but the vast majority of our customers consume their infrastructure and compute at bare metal level from a remote location. So there is no change in operations for them.

The relationships that have been developed between Cray and Azure and Markley are very interesting. Cray is one architecture type and is very much positioned at the supercomputing end. One of the reasons that deal has been struck is that it is very hard to break down the lumpy cost of Cray hardware and enable users to access that. That is not what we are focused on. We are much more focused on infrastructure that is high-end specification, Intel Xeon Skylake processors or Intel Xeon Phi processors combined with high levels of onboard memory and storage.

We can of course add in the kind of architecture that Cray provides or the GPU-type architecture for AI and machine learning applications. We are planning to move into that in the course of 2018, which we will bring into hpcDIRECT as a solution.

Intel has predicted that up to 30% of HPC workloads could be in the cloud by 2021. How do you see your mix of colocation and hpcDIRECT customers shifting in the future?

I think the balance over time will shift towards more customers wanting to consume more HPCaaS. For now, though, I think the balance will remain that customers will want the majority – anything over 50% – in a colocation environment while starting to test our HPCaaS. But I do think there will be a gradual migration; in the same way we have seen that shift for enterprise cloud environments and enterprise applications, I think it is coming for HPC as well.


