Verne Global

Data Center | HPC | Tech Trends

20 November 2017

When Will We Achieve Exascale?

Written by Tate Cantrell

Tate is Verne Global's CTO and is responsible for setting the technical direction for the company. You can follow Tate on Twitter: @tate8tech

Let’s talk about the future of computing, specifically the future of exascale supercomputing. Andrew Donoghue did a very nice write-up on the recent DCD Zettastructure session that focused on this very topic. So, instead of repeating his good work, this blog will focus on a specific point I made during the panel about the feasibility of powering an exascale machine.

And for those who weren't aware, an exascale supercomputer will be a machine that achieves 1,000,000,000,000,000,000 (a billion billion) floating point operations per second. That's a lot, by the way. Keep reading to learn more...

Exascale computing is feasible and can be achieved within 5 years without requiring an entire power grid to power it. Science desires the answers that exascale will bring and so should our society. For example, exascale computing may advance our position in the war on fossil fuels with innovations in fusion technology as shown by the advancements of Tri Alpha Energy. While achieving exascale isn’t a foregone conclusion, with a coordinated approach to hardware design, software implementation, networking, facility integration and power sourcing, the data shows that science shouldn’t have long to wait.

Whenever someone in the industry thinks of the most powerful compute platforms on the planet, that person will almost always think of the Top500, which for the last 25 years has tracked the world's fastest supercomputers. And, since its inception in 1993, the Top500 has driven competition, and thereby innovation, across the supercomputing community. It is even fair to say that the names and faces of the chip manufacturers themselves have changed a lot over those 25 years, driven primarily by the competition the Top500 created. Just look at the breakdown (shown below) of manufacturers over the past 25 years, as pointed out by HPCwire in their analysis of the newest release of the Top500 results. Sitting where we are today, it is hard to imagine that 15 years ago Intel had barely any share of the most powerful supercomputers on the planet.


Chip technology breakdown (Source: Top500; "Flipping the Flops and Reading the Top500 Tea Leaves" by Tiffany Trader, HPCwire)

One of the reasons there has been such a dramatic shift in the makeup of the chip technology leading the charge is that the focus at the top end of supercomputing has moved decisively in the direction of massively parallel computing. And, considering that as much as 70% of the power used by supercomputers is spent simply moving bits of data around, architectures that promote on-chip parallel compute while remaining flexible for software innovation are winning the day. This push toward massively parallel computing was on display again when the latest Top500 list was announced at SC17 in Denver last week. Topping the list for the fourth time in a row is Sunway TaihuLight of the National Supercomputing Center in Wuxi, which boasts 93 petaflops of compute across a whopping 10 million cores! But that’s still an order of magnitude away from exascale. So, just how realistic is exascale computing?

If we look purely at the advancement at the top of the Top500 list (shown below), the exponential upward trend in computing is easy to see. For the last 25 years, both the average performance and the maximum performance of the Top500 machines have followed exponential trends with amazing consistency.

And if we blindly fit a regression to the data on a logarithmic scale, we can trend out a prediction that exascale by 2020 is within our grasp. But is it that easy?
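To see where that naive trend points, here is a minimal sketch in Python. It uses a handful of approximate, well-known number-one milestones rather than the full Top500 dataset, so the data values and the NumPy fit are illustrative assumptions, not the analysis behind the chart:

```python
import numpy as np

# Approximate Rmax of the #1 Top500 system at a few well-known milestones
# (CM-5, ASCI Red, Earth Simulator, Roadrunner, K computer, Tianhe-2,
# Sunway TaihuLight) -- illustrative values only, not the full Top500 data.
years = np.array([1993, 1997, 2002, 2008, 2011, 2013, 2016], dtype=float)
rmax_flops = np.array([6.0e10, 1.1e12, 3.6e13, 1.0e15, 1.1e16, 3.4e16, 9.3e16])

# "Regression on a logarithmic scale" just means a straight-line fit to
# log10(performance) versus year, i.e. assuming exponential growth.
slope, intercept = np.polyfit(years - years[0], np.log10(rmax_flops), 1)

growth_per_year = 10 ** slope
exascale_year = years[0] + (18.0 - intercept) / slope  # 1 exaflop = 1e18 flops
print(f"Roughly {growth_per_year:.1f}x growth per year")
print(f"Naive extrapolation hits exascale around {exascale_year:.0f}")
```

With these rough inputs the extrapolation lands right around the end of the decade, which is exactly what the trend lines in the chart suggest.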



The problem comes with the power required to produce the 93 petaflops that Sunway TaihuLight currently boasts. Number 1 and Number 2 on the Top500 charts currently consume 15 megawatts and 17 megawatts respectively. That's as much power as an entire data center would typically consume, or about 1,000 Tesla chargers. So the question is less about the exponential growth of the top performers on the Top500 list and more about performance per watt. Luckily, the folks at Top500 have been tracking performance per watt for a while now and have been publishing the Green500 to showcase the most energy-efficient supercomputers.
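For a sense of scale, performance per watt falls straight out of the figures above; a quick sketch using the ~93 petaflops and ~15 megawatts quoted for the number-one machine:

```python
# Rough performance-per-watt of the current Top500 leader, using the
# figures quoted above: ~93 petaflops sustained on ~15 megawatts.
rmax_flops = 93e15
power_watts = 15e6
flops_per_watt = rmax_flops / power_watts
print(f"~{flops_per_watt / 1e9:.1f} gigaflops per watt")  # ~6.2 GFlops/W
```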

The Green500 data (shown below) clearly shows that not only are supercomputers increasing in performance at an exponential rate, but innovations in hardware and networking are driving exponential improvements in performance per watt. Based on the Green500 data, the average of the top 10 performers has grown from 419 megaflops per watt in 2009 to over 13,700 megaflops per watt in the most recent release of the Green500.



To put that in perspective, if someone had built an exascale machine at 419 megaflops per watt, it would consume nearly 2.4 gigawatts of power - almost the entire Icelandic 100% renewable power grid, which incidentally is the highest per-capita power generation grid on the planet. But, despite the exponential gains in efficiency, a theoretical exascale machine at the efficiency of the Top 10 of the Green500 would still require over 70 megawatts of power. And the reality is that current technology could not be scaled up to exascale. It just isn't that simple.
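The arithmetic behind both of those power figures is straightforward; a small sketch (treating one exaflop as 10^18 floating point operations per second) reproduces them:

```python
EXAFLOP = 1e18  # floating point operations per second

def exascale_power_megawatts(megaflops_per_watt):
    """Power needed to sustain one exaflop at a given Green500-style efficiency."""
    watts = EXAFLOP / (megaflops_per_watt * 1e6)
    return watts / 1e6

print(f"{exascale_power_megawatts(419):,.0f} MW")     # ~2,387 MW, i.e. ~2.4 GW
print(f"{exascale_power_megawatts(13_700):,.0f} MW")  # ~73 MW
```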



So what does the future look like? If we track a 50% year-on-year growth in efficiency across the Green500 Top 10, we could have a theoretical exascale machine below 20 megawatts before 2022. Plus, if history can be trusted as a guide, the top performer in the Top500 has not been far off the Green500 Top 10 in years past.
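A minimal sketch of that extrapolation, assuming the ~13,700 megaflops per watt of the latest Green500 Top 10 as the 2017 starting point and a constant 50% year-on-year improvement (both assumptions, not measured data):

```python
EXAFLOP = 1e18
BASE_YEAR = 2017
BASE_EFFICIENCY = 13.7e9  # flops per watt, ~13,700 MFlops/W Green500 Top 10 average

def exascale_power_mw(year, growth=1.5):
    """Power for a one-exaflop machine if efficiency keeps growing 50% per year."""
    efficiency = BASE_EFFICIENCY * growth ** (year - BASE_YEAR)
    return EXAFLOP / efficiency / 1e6

for year in (2017, 2020, 2021, 2022, 2035):
    print(f"{year}: ~{exascale_power_mw(year):,.2f} MW")
# On this trend line 2021 already comes in under 20 MW, and by 2035 the
# projected power draw is down in the tens of kilowatts.
```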



So what are the final predictions? The chip industry is doubling down on multi-core technology, largely driven by the use of chips such as GPGPUs for applications like machine learning. The investments being made in multi-core chips will continue to drive innovation that improves supercomputing efficiency and keeps the leaders of the Top500 close to the highest-efficiency performers in the Green500. While there are rumours that the Chinese will release an exascale system in the next few years, any machine that arrives on the scene before 2020 will require significant power - 50 megawatts or more. A more realistic target is a 20-megawatt exascale system by 2022.

However, if my fellow panelist at DCD Zettastructure, Dr. Tim Cutts of the Wellcome Trust Sanger Institute, has anything to say about it, we won't stop there. Tim called for the benefits of exascale to move all the way to the bedside, making the next generation of personalised medicine a reality. Looking at the charts, he might get his way. An exascale machine in 2035 might just be a 100-kilowatt device. But we can only guess at the innovations that will happen along the way.



