Verne Global

Industry | HPC | Tech Trends

19 June 2017

Insights into AI and Deep Learning

Written by Nick Dale

Nick is Senior Director at Verne Global and leads our work across HPC and specifically its implementation within advanced manufacturing.

Despite many years of slow progress in the study of artificial intelligence (AI), a new generation of AI applications is coming to prominence with promising real-world business benefits. According to a survey of 3,000 global technology firms conducted last year by market research firm Forrester, 41% are currently investing in AI, while a further 20% plan to invest in the next year.

Driving this new generation of AI is deep learning, the study and design of artificial neural networks that simulate neuron activity in the human brain. The goal of deep neural networks is to enable machines to analyse and solve complex problems much as humans do. Although artificial neural networks have existed for decades, only since the advent of readily available high performance computing (HPC) power, and GPU compute power in particular, have data scientists been able to build neural networks with enough complexity and scale to have real-world applications.
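To make the idea concrete, the sketch below shows roughly what a small "deep" network looks like in code. It is purely illustrative: the framework (PyTorch), the layer sizes, and the placeholder data are our own assumptions, not drawn from any specific system mentioned in this post.

```python
# Minimal sketch of a small feed-forward neural network in PyTorch.
# Illustrative only: layer sizes and data are hypothetical placeholders.
import torch
import torch.nn as nn

# A "deep" network is several layers of artificial neurons stacked together,
# each applying a learned linear transform followed by a non-linearity.
model = nn.Sequential(
    nn.Linear(784, 256),  # input layer: e.g. a 28x28 image flattened to 784 values
    nn.ReLU(),
    nn.Linear(256, 64),   # hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: e.g. 10 class scores
)

# GPU acceleration is what makes training networks like this practical at scale.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# A forward pass on a batch of random placeholder inputs.
x = torch.randn(32, 784, device=device)
scores = model(x)
print(scores.shape)  # torch.Size([32, 10])
```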

This new breed of deep learning system has garnered some very high-profile media coverage of late. Recent examples include Apple's personal assistant, Siri, which introduced the world to the voice assistant in 2011 and has steadily gained capabilities since, and Amazon's Alexa, which is poised to provide the personality for a wide range of household devices in the approaching IoT era.

The power of these deep learning applications extends far beyond flashy demonstrations, however, and is starting to have a significant, positive effect on the bottom line of businesses in a wide variety of fields. Business intelligence, which uses technology to analyse data and provide actionable information, is one area where AI and deep learning are having such an impact. Historically, business intelligence tools have been built around collecting, analysing, and presenting data in order to explain why or how a certain result occurred. As business intelligence systems adopt deep learning technology, they can provide not just better analysis of past actions but also use their accumulated "knowledge" of past events to predict future customer behaviour. This shift from descriptive to predictive business intelligence is allowing companies to identify opportunities for growth and make rapid adjustments to optimise their current performance.
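As a loose illustration of that descriptive-to-predictive shift, the sketch below contrasts summarising historical customer records with fitting a simple model to score future churn risk. The column names, figures, and choice of a scikit-learn classifier are hypothetical, chosen for illustration only.

```python
# Hedged sketch of descriptive vs predictive business intelligence.
# All data and feature names below are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical customer records: [monthly_spend, support_tickets, months_active]
past_customers = np.array([
    [120.0, 0, 24],
    [35.0,  4,  3],
    [80.0,  1, 12],
    [20.0,  6,  2],
])
churned = np.array([0, 1, 0, 1])  # what actually happened

# Descriptive BI: summarise what already occurred.
print("historical churn rate:", churned.mean())

# Predictive BI: learn from the past to score likely future behaviour.
model = LogisticRegression().fit(past_customers, churned)
new_customer = np.array([[45.0, 3, 5]])
print("predicted churn probability:", model.predict_proba(new_customer)[0, 1])
```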

The predictive capability made possible by deep learning will have a disruptive effect on a variety of traditional industries, agriculture among them. According to experts, the agriculture industry must increase food production by 60% to keep up with the demand created by global population growth, with the world's population expected to reach 9 billion by 2050. To meet this need, agriculture technology companies are leveraging deep learning techniques to increase efficiency throughout the growth and harvest cycle. AI applications in the industry include automated irrigation systems that reduce the production cost of vegetables while also minimising environmental impact; intelligent crop health monitoring that can provide high-resolution plant data across thousands of acres; and a host of other improvements. These advances, coupled with equally profound deep learning advances in meteorology and other disciplines complementary to agriculture, are key to ensuring the future stability of our food supply.


Although deep learning applications present the opportunity for vast improvements in many areas, training deep neural networks is a time-consuming, highly compute-intensive endeavour. That compute intensity demands a steady and abundant power supply, a requirement made all the more important now that the once-steady improvements in transistor power efficiency have slowed. Most artificial neural networks are trained using "supervised learning", which means feeding them massive amounts of well-labelled data. A well-known example is Google Translate, which analyses large corpora of bilingual text to develop more accurate translation algorithms. The compute resources needed to process the datasets behind these applications can reach into the exaflop range. HPC systems that are highly scalable, use fast interconnect technology, and provide massively parallel compute power are a natural solution to this type of challenge. With the advent of widely available HPC and GPU processing power, deep learning algorithms that used to take months to train can now be trained in a single day, or less.
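The sketch below shows what supervised learning looks like at toy scale, assuming a PyTorch-style training loop with randomly generated labelled data standing in for a real corpus. Real workloads run the same loop over vastly larger models and datasets, which is where the HPC and GPU resources discussed here come in.

```python
# Minimal supervised-learning sketch in PyTorch. The model and the randomly
# generated "labelled data" are placeholders, not a real production dataset.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Labelled training data: inputs paired with the answers the network should learn.
inputs = torch.randn(1024, 100, device=device)          # 1,024 example feature vectors
labels = torch.randint(0, 10, (1024,), device=device)   # one of 10 classes per example

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Supervised training: repeatedly compare predictions against the labels and
# adjust the weights to reduce the error. At real-world scale, this loop is
# what consumes the GPU hours (and megawatts) discussed above.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```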

The future is looking very bright for AI applications. From the increased prominence of applications like Apple's Siri and Amazon's Echo to the many AI and deep learning startups forging new avenues for business growth, it appears the golden age of AI is finally coming to fruition. Verne Global, as a provider of carbon-neutral, cost-effective data center capacity, looks forward to helping companies in the field of deep learning scale the compute power necessary to develop better AI algorithms. Our location in Iceland also offers an abundant and highly scalable supply of power, making it an ideal place to run the growing AI, deep learning, and machine learning workloads these companies depend on. By providing our partners in these industries with the power and support to grow, we hope to bring greater intelligence and productivity to both business and society.

Note: If you're attending the ISC17 High Performance Conference in Frankfurt, please come and visit Verne Global's stand in Hall 3 - No: K530.


