Verne Global

AI / ML / DL | Tech Trends

21 October 2018

AI needs to be ‘explainable’ but is that possible?

Written by Shane Richmond (Guest)

Shane Richmond is a freelance technology writer and former Technology Editor of The Daily Telegraph. You can follow him at @shanerichmond

Amazon's AI made the news early in October after it was revealed that the company had scrapped a recruitment engine because it was 'sexist'. Private Eye, the UK's satirical news magazine, described it as "a reminder to take an extra big pinch of salt whenever you hear that AI will improve the world". However, the reality is more complicated...

Since 2014, Amazon has been trying to automate the process of sifting through the vast number of CVs it receives for every vacancy. In 2015, according to a Reuters report, the company realised that the AI it had been using was not "gender neutral". That's because it had been trained using CVs submitted to Amazon over a period of 10 years and, since those were dominated by men, its decisions were skewed.

It's a reminder that AI is only as good as the data on which it has been trained. If that data is biased, then the AI will reflect it. This isn't a new argument and it isn't specific to machine learning - any kind of big-data automation runs the risk of entrenching unfairness, something explored by data scientist Dr Cathy O'Neil in her 2016 book Weapons of Math Destruction and, more recently, by political scientist Virginia Eubanks in Automating Inequality, published earlier this year.

Where AI adds a new dimension is that it is meant to learn and improve as it goes. If the data contains some kind of bias, the system can magnify that bias over time - and the effect might not be visible, even to the people running the system.
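
To see how that magnification can happen, consider a toy feedback loop - a sketch in Python with entirely hypothetical numbers, not a model of Amazon's actual system. A screening model gives the group that dominates its training data a score bonus, is retrained on whoever it selects, and so each round's skew feeds the next:

```python
import random

random.seed(42)

p_male = 0.60  # share of men in the historical training data (hypothetical)

for round_no in range(1, 6):
    # The "model" has learned a score bonus proportional to that skew.
    bonus = 2 * p_male - 1
    # A perfectly balanced applicant pool with random underlying quality.
    applicants = [(random.random() + (bonus if i % 2 == 0 else 0.0),
                   "M" if i % 2 == 0 else "F")
                  for i in range(1000)]
    # Screen: keep the top 30% by score, then "retrain" on the survivors.
    selected = sorted(applicants, reverse=True)[:300]
    p_male = sum(1 for _, gender in selected if gender == "M") / len(selected)
    print(f"round {round_no}: selected pool is {p_male:.0%} male")
```

Even though every applicant pool here is perfectly balanced, a modest 60/40 starting skew grows towards near-total exclusion within a few rounds - the loop, not the applicants, drives the result.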

It's fairly easy to spot that an AI is favouring men over women. It should be possible to spot discrimination based on ethnicity. But it's much harder to tell that someone was rejected for a job interview because of their first name, or the postcode where they grew up, or a particular verb they used on their application. When biases are automated and hard to spot, they can do a lot of damage.
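The "easy" check is a straightforward audit: compare selection rates across the groups you know about. Here is a minimal sketch with hypothetical decisions from a hypothetical screener, applying the "four-fifths rule" used in US hiring audits, which flags any group whose selection rate falls below 80 per cent of the highest:

```python
from collections import defaultdict

# Hypothetical (group, invited_to_interview) decisions from a screening model.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

totals, invites = defaultdict(int), defaultdict(int)
for group, invited in decisions:
    totals[group] += 1
    invites[group] += invited  # True counts as 1

rates = {group: invites[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best  # four-fifths rule: flag ratios below 0.8
    print(f"{group}: selected {rate:.0%}, impact ratio {ratio:.2f}"
          + ("  <- flag" if ratio < 0.8 else ""))
```

The catch is that this only works for columns you think to group by. A bias routed through a first name, a postcode or a verb never shows up unless you audit that proxy explicitly.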

That's starting to worry people. A study by IBM's Institute for Business Value, published in September, asked 5,000 executives whether they were concerned about how AI uses data and makes decisions. About 60 per cent said they were - up from 29 per cent in the 2016 version of the study. Their chief worry is falling foul of regulatory and compliance standards.

Meanwhile, in PwC's 2017 Global CEO Survey, two-thirds of business leaders said they believe AI and automation will have a negative impact on stakeholder trust in their industry over the next five years.

Suppose a patient sues a hospital over their cancer treatment plan, and the hospital says it was following a plan proposed by an AI. Perhaps the AI selected the best possible plan in every case except this one. Is it possible to untangle its web of decisions to determine how it gave bad advice on this occasion? And what would publicity about the story do to trust in automation at other hospitals and in other areas of medicine?

Such a case isn't likely today - even hospitals using AI for cancer treatment use it to aid doctors' decisions, not to replace them - but it isn't unimaginable that 'blame the AI' will become a common defensive manoeuvre. Vendors such as IBM and Google are responding by adding "explainability tools" to their offerings.
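
What tools like these typically do is probe the model from the outside rather than open the black box. One common, model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A minimal sketch - the model here stands in for any black-box scoring function, and the names are illustrative:

```python
import random

def permutation_importance(predict, rows, labels):
    """Accuracy drop when each feature column is shuffled in turn.

    `predict` maps one feature tuple to a label; `rows` is a list of
    feature tuples. A bigger drop means the model leans harder on
    that feature.
    """
    def accuracy(data):
        return sum(predict(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    drops = []
    for col in range(len(rows[0])):
        shuffled = [row[col] for row in rows]
        random.shuffle(shuffled)  # break the link between this column and the rest
        permuted = [row[:col] + (value,) + row[col + 1:]
                    for row, value in zip(rows, shuffled)]
        drops.append(baseline - accuracy(permuted))
    return drops
```

Note the limit: this says which inputs the model leans on, not why it leans on them - which is exactly the gap described below.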

There are good reasons for business leaders to demand explainability, but is it really possible for vendors to provide it? Venture capitalist Rudina Seseri wrote on TechCrunch earlier this year:

"Part of the advantage of some of the current approaches (most notably deep learning), is that the model identifies (some) relevant variables that are better than the ones we can define, so part of the reason why their performance is better relates to that very complexity that is hard to explain because the system identifies variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software."

In other words, true machine learning is almost inexplicable by definition. She adds that compelling firms to reveal how a proprietary AI system works would effectively make it possible for rivals to copy them: "That’s why, generally, a push for those requirements favour incumbents that have big budgets and dominance in the market and would stifle innovation in the start-up ecosystem."

AI is estimated to represent a $15trn economic opportunity. It's coming. But it won't be perfect. We have to hope there are incentives in place for people throughout the system - vendors, business customers and consumers - to question AI processes and scrutinise decision making.
