Integration of AI systems at scale

AI Tech Trends


Today, data scientists and machine learning engineers can build systems that tackle discrete tasks with a very high degree of success. Results in fields such as image processing to support cancer diagnosis can be extremely accurate, with the best-performing algorithms even exceeding experienced clinicians for certain categories of diagnosis.


These successes come down to the ability of deep learning systems to work with the features and attributes of large but finite training data sets, identifying the parameters that are key to producing an accurate categorisation and an associated probability.
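To make that last point concrete, here is a minimal, hypothetical sketch (not from the original article) of how a trained classifier's raw scores are typically turned into a category and an associated probability via a softmax; the class names and scores below are invented purely for illustration:

```python
import numpy as np

def softmax(scores):
    """Convert raw class scores into a probability distribution."""
    shifted = scores - scores.max()   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

# Invented class names and raw network scores, purely for illustration
class_names = ["benign", "malignant", "inconclusive"]
scores = np.array([1.2, 3.4, 0.3])

probs = softmax(scores)
best = int(np.argmax(probs))
print(class_names[best], f"{probs[best]:.1%}")  # most likely category and its probability
```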

The repetitive training processes needed to achieve this level of accuracy lend themselves well to tasks that are highly focused and can operate almost exclusively within the confines of a computer, where repetition carries almost zero penalty. The opposite can be true for systems that must, by necessity, interface with actual physical processes, for example, training a robot to walk or creating a more advanced vehicle braking system that combines AI and traction control into a 'system of systems'. In such situations we often find that mechanical failures arise before training is complete, a result of the frequent repetition needed to develop the desired capability.

As well as the difficulties with certain mechanical systems, another dynamic will emerge as organisations attempt more advanced AI requiring cooperation between AI agents. This complexity is already evident inside autonomous vehicles, where it is necessary to create this 'system of systems' and have the various elements, including Deep Learning modules, working together effectively. Take this up a level of abstraction and consider how autonomous vehicles might interact with one another in the context of a smart city, with AI-assisted traffic lights helping to optimise traffic flow.

The challenge for many organisations is that Deep Learning algorithms can, in fact, be quite fragile, in the sense that they don't respond well to unforeseen inputs. Researchers from Kyushu University recently demonstrated that well-placed pixel changes (usually just one, three or five brightly lit pixels) could fool an AI system into not recognising an object. This effect might have consequences in the future in areas such as military camouflage (perhaps beneficial) or for cyclists with illuminated LEDs (possibly causing accidents). As a consequence, there is more emphasis on the need to test AI systems not just within the confines of limited scenarios, but in the context of how they operate in the real world, connected into more complex overall systems or across an organisation where they are expected to collaborate.
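The Kyushu work used differential evolution to find those pixels; the sketch below is a much simpler, hypothetical illustration of the same idea, using a random search for a single pixel change that flips an arbitrary classifier's prediction (the `predict_proba` callable stands in for any trained model and is not from the original study):

```python
import numpy as np

def one_pixel_attack(image, true_label, predict_proba, n_trials=1000, seed=0):
    """Randomly search for a single-pixel change that stops the classifier
    predicting `true_label`. `image` is an (H, W, C) float array in [0, 1] and
    `predict_proba(image)` is assumed to return one probability per class.
    Returns the perturbed image, or None if no successful change was found."""
    rng = np.random.default_rng(seed)
    height, width, channels = image.shape
    for _ in range(n_trials):
        candidate = image.copy()
        y = rng.integers(height)
        x = rng.integers(width)
        candidate[y, x] = rng.uniform(0.0, 1.0, size=channels)  # overwrite one pixel
        if int(np.argmax(predict_proba(candidate))) != true_label:
            return candidate  # prediction flipped by a single pixel
    return None
```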

This has implications for organisations starting to deploy large-scale, coordinated AI systems, as it places increased emphasis on validation and testing within the context of the environment in which the AI operates. Not only will individual systems need to be updated and trained continuously, but ensembles of connected systems will need to be trained and tested together.
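As a hypothetical sketch of what 'testing together' might look like in practice, the example below chains two stubbed components (stand-ins for a perception model and an AI-assisted braking controller, both invented for illustration) and places the assertion on the behaviour of the combined pipeline rather than on either module in isolation:

```python
class StubPerception:
    """Stand-in for a trained perception module (hypothetical)."""
    def detect(self, frame):
        return ["pedestrian"] if frame.get("pedestrian") else []

class StubBrakingController:
    """Stand-in for an AI-assisted braking controller (hypothetical)."""
    def decide(self, detections):
        return "brake" if "pedestrian" in detections else "cruise"

def run_scenario(perception, controller, frames):
    """Drive a recorded scenario through the chained pipeline so the
    combined behaviour can be checked, not each module in isolation."""
    return [controller.decide(perception.detect(frame)) for frame in frames]

# The requirement is placed on the ensemble: somewhere in the scenario a
# brake command must be issued, whichever module 'owns' that behaviour.
frames = [{"pedestrian": False}, {"pedestrian": True}, {"pedestrian": False}]
assert "brake" in run_scenario(StubPerception(), StubBrakingController(), frames)
```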

This need for continuous testing and validation will place significant emphasis on the running costs of such processes. This is where organisations like Verne Global can help. By delivering Deep Learning platforms at scale, powered entirely by low-cost, 100% renewable energy, Verne Global lets organisations be confident that they are deploying fully tested and validated systems of systems without breaking the bank.


Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He brings a wealth of experience from the global technology sector, with detailed knowledge of Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

