I recently had the pleasure of presenting to a team from Dell Technologies about high performance computing (HPC) and artificial intelligence (AI) workloads within enterprise organisations. The scale and complexity of HPC is growing tremendously. Even with a predicted slowdown for 2020, analyst firm Intersect360 Research still anticipates the market growing to $55B by 2024.
With this growth comes the discussion of how enterprise organisations will deploy their HPC workloads. While enterprises may use a cloud model – private, public, or hybrid – to deploy their HPC application suites, cloud alone does not ensure success at delivering enterprise-class HPC-as-a-Service. So what infrastructure requirements need to be taken into consideration? Let’s take a look.
Cost is always important. Manufacturers, for example, really understand their variable costs. We’ve had conversations where the lead engineer told us exactly what our cost needed to be on a per-teraflop basis. This presents challenges in terms of technology, as these savvy enterprise customers won’t adopt technology for technology’s sake – it needs to demonstrate its purpose and value.
One way we are seeing enterprise companies lower their costs is by focusing on the application. As Moore’s Law comes to an end, there is an increased focus on tuning applications – not just in terms of software, but hardware as well. For example, a customer may want to test the limits of a particular algorithm in an effort to discover the optimal economics surrounding the new application. Once they have stress-tested that algorithm, they will settle on the amount of hardware needed for the long term. They’ll buy that dedicated server infrastructure and deploy it in a location, like ours, where they have control over their servers. But new applications and new business requirements require the flexibility to grow and to adapt.
Enterprise organisations need industrial scale, particularly when they see their HPC needs growing exponentially. At Verne Global we offer HPC-as-a-Service to enable customers to easily manage their HPC workloads. The customer specifies the equipment – down to the servers, storage, nodes, networking equipment, etc. – and then we manage the logistics to their exact design. This allows them to efficiently ramp their workloads and scale as needed. If the time comes when even more capacity is required, our colocation solutions offer additional flexibility to scale their infrastructure, all within the same campus.
Enterprise organisations invest a significant amount of time and money not only into their HPC infrastructure, but also into the teams and engineers that use the company’s applications to deliver value. Workflow and workflow management become embedded into the process and remain a high priority even when new technologies are better and cheaper. When selling to enterprise companies, it’s imperative not to simply come in with the latest and greatest technology and expect people to go for it. You need to learn and understand how their workflow processes operate and how best to fit into them in order to win the business.
This is not to say technology does not play a role, because it certainly does. HPC innovation relies on the ability to invoke both parallel computing and modern accelerators. Optimal efficiency is achieved when applications are carefully tuned with the infrastructure in mind. NVIDIA is the perfect example of a company shrink-wrapping the hardware around the application. NVIDIA realised from the start that it could not just offer a superior chip. It needed to provide an ecosystem of support to ensure developers could incorporate that superior hardware into new and existing software suites. NVIDIA invests hugely in its software development kits, and these SDKs are the primary reason NVIDIA has achieved such success in the adoption of its technology.
We are also in the midst of a technology arms race between NVIDIA, Intel, AMD, Graphcore and others, all competing for a share of the HPC market. These companies are looking five years or more into the future to visualise how software will interact with hardware. The most successful will be those that get that prediction right and make their hardware easy for software developers to work with. People and processes remain central to achieving success with emerging technology.
While there is a lot of talk around data center sustainability, it’s becoming more and more apparent that enterprise organisations are looking for truly sustainable solutions. It’s no longer enough to show a voucher or certificate of origin claiming green power; organisations are actively seeking a direct connection to renewable energy.
Iceland is the only Nordic country that generates 100% of its power from renewable resources. Add to that the ability to secure power contracts of up to 30 years, and companies can fix their power costs on a long-term basis. That will have a positive impact on a company’s bottom line for years to come.
Today’s companies are under increased pressure to differentiate against their competitors through faster implementation, smarter insights, and raw innovation. We have learned through our enterprise customers that an HPC-as-a-Service business model offers the convenience of cloud, but with the advantages of bespoke hardware, dedicated security, and support from a staff of HPC experts. We focus on the infrastructure so they can focus on driving their business forward.