Integration of AI systems at scale

AI Tech Trends

Today, data scientists and machine learning engineers can build systems that tackle discrete tasks with a very high degree of success. In fields such as image analysis to support cancer diagnosis, results can be extremely accurate, with the best-performing algorithms even exceeding experienced clinicians for certain categories of cancer diagnosis.

These successes stem from the ability of deep learning systems to learn from the features and attributes of large but finite training data sets, identifying the parameters that are key to producing an accurate categorisation and an associated probability.

The repetitive training processes needed to achieve this level of accuracy lend themselves well to highly focused tasks that can run almost entirely within the confines of a computer, where repetition carries almost zero penalty. The opposite can be true for systems that must, by necessity, interface with actual physical processes, for example, training a robot to walk, or creating a more advanced vehicle braking system that combines AI and traction control into a 'system of systems'. In such situations we often find that mechanical failures arise before training is complete, the result of the frequent repetition needed to develop the desired capability.

As well as the difficulties with certain mechanical systems, another challenge will emerge as organisations attempt more advanced AI requiring cooperation between AI agents. This complexity already exists inside autonomous vehicles, where it is necessary to create such a 'system of systems' and have various elements, including Deep Learning modules, working together effectively. Then take this up a level of abstraction and consider how autonomous vehicles might interact with one another in the context of a smart city, with AI-assisted traffic lights to better optimise traffic flow.

The challenge for many organisations is that Deep Learning algorithms can, in fact, be quite fragile, in the sense that they don't respond well to unforeseen inputs. This was recently demonstrated by researchers from Kyushu University, who showed that well-placed pixel changes (usually numbering just 1, 3 or 5 brightly lit pixels) could fool an AI system into not recognising an object. This effect might have consequences in the future in areas such as military camouflage (perhaps beneficial) or for cyclists with illuminating LEDs (possibly causing accidents). As a consequence, there is more emphasis on the need to test AI systems not just within the confines of limited scenarios, but in the context of how they operate in the real world, connected into more complex overall systems, or across an organisation where they are expected to collaborate.
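To illustrate the fragility point, here is a minimal sketch of a one-pixel perturbation search against a toy linear classifier. This is not the Kyushu researchers' actual method (the published one-pixel attack uses differential evolution against deep networks); the tiny image, the linear model and the brute-force search below are all simplifying assumptions chosen purely to show how a single pixel can flip a prediction.

```python
import numpy as np

# Toy "classifier": a fixed linear model over a flattened 4x4 grayscale image.
# Purely illustrative -- a stand-in for a deep network, not a real one.
rng = np.random.default_rng(0)
weights = rng.normal(size=16)

def predict(image):
    """Return 1 ('object recognised') if the linear score is positive, else 0."""
    return int(image.flatten() @ weights > 0)

def one_pixel_flip(image):
    """Brute-force search for a single pixel whose change to full
    brightness flips the classifier's prediction."""
    original = predict(image)
    for idx in range(image.size):
        perturbed = image.flatten()
        perturbed[idx] = 1.0  # light up exactly one pixel
        if predict(perturbed.reshape(image.shape)) != original:
            return idx  # found a fooling pixel
    return None

image = np.zeros((4, 4))
pixel = one_pixel_flip(image)
if pixel is not None:
    print(f"Changing pixel {pixel} alone flips the prediction")
```

Even this crude search finds a fooling pixel for the toy model; against real networks the attack is harder but, as the research shows, still feasible, which is precisely why validation against adversarial inputs matters.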

This has implications for organisations starting to deploy large-scale, coordinated AI systems, as it places increased emphasis on validation and testing within the context of the environment in which the AI operates. Not only will individual systems need to be trained and updated continuously, but ensembles of connected systems will need to be trained and tested together.
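The principle can be sketched as a simple test suite: validate each component in isolation, then validate the connected ensemble end to end. The perception and braking modules, thresholds and values below are hypothetical toys invented for illustration, not any real vehicle stack or testing framework.

```python
def perception(sensor_reading: float) -> str:
    """Toy perception module: classifies an obstacle-distance reading (metres)."""
    return "obstacle" if sensor_reading < 5.0 else "clear"

def braking(label: str, speed: float) -> float:
    """Toy braking module: returns a brake force in [0, 1] given the
    perception output and the current speed."""
    return min(1.0, speed / 10.0) if label == "obstacle" else 0.0

def test_perception():
    # Component-level validation in isolation.
    assert perception(2.0) == "obstacle"
    assert perception(10.0) == "clear"

def test_braking():
    assert braking("obstacle", 5.0) == 0.5
    assert braking("clear", 5.0) == 0.0

def test_end_to_end():
    # Ensemble validation: perception output feeding the braking module.
    # Passing the unit tests alone is not enough; the connected behaviour
    # must be exercised too.
    assert braking(perception(2.0), 8.0) == 0.8
    assert braking(perception(10.0), 8.0) == 0.0

test_perception()
test_braking()
test_end_to_end()
```

The point of the end-to-end test is that each module can be individually correct while the integrated system still misbehaves; continuous validation has to cover both levels.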

This need for continuous testing and validation will place significant emphasis on the running costs of such processes. This is where organisations like Verne Global can help. By delivering Deep Learning platforms at scale, powered entirely by low-cost, 100% renewable energy, organisations can be confident that they are deploying fully tested and validated systems of systems without breaking the bank.

Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He brings a wealth of experience from the global technology sector, with detailed knowledge of Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

