How AI can keep your executives out of jail


Only artificial intelligence (AI) can prevent social media firms from having to shut their doors. Rising costs, fines and even executive jail terms all loom as governments tackle online harm.

Stricter regulation of social media companies is now high on the agenda of many national governments. In the UK, plans are in place to create a statutory duty of care towards social media users, and a new independent regulator with powerful sanctions is to be established.

Firms will be held to account if they fail to tackle a comprehensive list of perceived online harms and abuses. Initial details of the proposed legislation are set out in the UK Government's Online Harms White Paper, which is under consultation until July.

The new obligations cover a range of activities, from those that are illegal to those falling under the further duty of care, which extends breaches to include publishing content relating to harmful behaviours even if the activity itself is legal.

Companies failing to comply could see individual executives facing criminal prosecution, with fines, disqualification from directorships and even jail. Offending firms would also be subject to substantial financial penalties calibrated to the size of the business, while more extreme corporate sanctions for the worst offenders include blocking sites from search engines and UK ISPs, effectively putting them out of business.

Australia also recently passed legislation that could imprison executives if their platforms stream real violence, as occurred with the recent mosque shootings in neighbouring New Zealand.

With huge volumes of online content created each day, it is difficult to see social media firms and companies offering online community services continuing to do so unless they automate content control. Automation, through AI and machine learning, is therefore essential. Firms cannot afford a laissez-faire attitude, expecting communities to self-police; nor can they afford the legions of human workers that would be needed to review all current and past online content. Investment in technology to review posts, images and videos numbering in the hundreds of billions will be essential. Only AI and machine learning can handle such volumes in a user-friendly way; social media users, online shopping reviewers, bloggers and vloggers will not tolerate posts being reviewed by a committee before publication. Firms that fail to make these investments are unlikely to be able to sustain their social media model.
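To make the automation point concrete, here is a minimal sketch of how a moderation pipeline can keep posting instant for users while still keeping humans in the loop: score each post with a model, auto-publish the clearly safe, auto-block the clearly harmful, and escalate only the borderline band to human reviewers. The `score_harm` keyword heuristic below is a hypothetical stand-in for a trained NLP classifier, and the thresholds are illustrative assumptions, not anything specified in the White Paper.

```python
# Illustrative triage for automated content moderation.
# score_harm is a toy keyword heuristic standing in for a real
# trained classifier; in production this would be an NLP model.

BLOCK_TERMS = {"attack", "abuse"}  # hypothetical harmful terms


def score_harm(text: str) -> float:
    """Return a harm score in [0, 1] (toy heuristic, not a real model)."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCK_TERMS)
    return min(1.0, hits / len(words) * 5)


def triage(text: str, block_at: float = 0.8, review_at: float = 0.3) -> str:
    """Route content: auto-publish, human review, or auto-block."""
    score = score_harm(text)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "publish"
```

Because only the narrow "review" band ever reaches human moderators, the vast majority of posts are published without delay, which is what makes this model workable at social media scale.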

The good news for such firms is that the technology to tackle these problems is developing fast. There has been significant progress in Natural Language Processing (NLP) in recent years, with many good open source and commercial tools available. Image processing, and in particular video processing, remains harder due to the volume of data and the greater complexity and range of content to detect. But with technologies already in place to identify copyright infringement, it should be easier to prevent the proliferation of previously recognised content. An excellent opportunity therefore exists for innovative firms that can develop these technologies at scale, since to date even the tech giants have largely failed to do so.

To operate at the right scale, companies need efficient, optimised back-end NLP and machine learning environments, such as those provided by Verne Global. These facilitate the training of social media analytics models and AI at industrial scale, allowing companies to scan and review content correctly and to incorporate feedback on any exceptions quickly.

Written by Vasilis Kapsalis


Vas is Verne Global's Director of Deep Learning and HPC Solutions. He brings a wealth of experience from the global technology sector, with detailed knowledge of Deep Learning, Big Data and HPC, as well as consultancy skills in IoT and digital transformation.

