Driven by the ever-increasing availability of medical data – including X-rays and MRI scans, blood tests, biopsy results, and epidemiological data – machine learning is beginning to take on a new role in the doctor’s arsenal, helping to diagnose diseases and disorders with greater reliability and accuracy.
High-profile projects in the field so far include IBM Watson Health, which has promised to generate accurate tumour-treatment recommendations, and Google’s DeepMind Health, which recently signed a deal with Britain’s National Health Service to increase the speed and accuracy of the care the system provides. But the rapid pace of innovation in machine learning for medical diagnosis has produced a number of other equally exciting, and potentially revolutionary, breakthroughs.
Take, for example, the rapid improvement in skin cancer detection achieved with machine learning at Stanford University, where a machine learning application can now rival human dermatologists in accuracy. To achieve this, the Stanford team created a proprietary dataset of skin lesions, assembled from 130,000 images depicting 2,000 different diseases. They then trained one of Google’s machine learning algorithms, initially developed for the ImageNet Large Scale Visual Recognition Challenge, on their skin cancer data. The result? In all three tasks given to the algorithm (keratinocyte carcinoma classification, melanoma classification, and melanoma classification from dermoscopy images), the accuracy of the machine learning application matched the performance of a dermatologist. Because early detection of skin cancer can radically reduce its mortality rate, the developers are confident that their application, which could be ported to run on a standard mobile phone, has the potential to have an enormous positive effect on the lives of the more than 15 million people diagnosed with skin cancer every year.
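The Stanford approach (take a network pre-trained on ImageNet and re-train it on labelled skin-lesion images) is an instance of what machine learning practitioners call transfer learning: keep the learned feature extractor, train a new classification head for the new task. The sketch below is a deliberately minimal illustration of that idea, assuming only NumPy. A fixed random projection stands in for the frozen pre-trained backbone, the "lesion" data is synthetic, and the class count and hyper-parameters are illustrative, not taken from the Stanford paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a frozen random ReLU projection.
# In the real workflow this would be an ImageNet-trained CNN with its
# final layer removed; the key point is that it is never updated here.
D_IN, D_FEAT, N_CLASSES = 64, 128, 3   # illustrative sizes and class count
W_frozen = rng.normal(0, 0.1, (D_IN, D_FEAT))

def extract_features(x):
    """Frozen 'backbone': produces features but is not trained."""
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic labelled "lesion" data (placeholder for real images).
n = 600
X = rng.normal(size=(n, D_IN))
y = np.argmax(X @ rng.normal(size=(D_IN, N_CLASSES)), axis=1)

# Trainable classification head: softmax regression on frozen features.
feats = extract_features(X)
W_head = np.zeros((D_FEAT, N_CLASSES))
b_head = np.zeros(N_CLASSES)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(N_CLASSES)[y]
lr = 0.5
for _ in range(300):                        # plain gradient descent
    p = softmax(feats @ W_head + b_head)
    W_head -= lr * feats.T @ (p - onehot) / n
    b_head -= lr * (p - onehot).mean(axis=0)

acc = (np.argmax(feats @ W_head + b_head, axis=1) == y).mean()
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Only the small head is trained, which is why transfer learning works well with modest datasets: the expensive general-purpose feature learning was already paid for on ImageNet, and the 130,000 lesion images only have to teach the task-specific boundary.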
Similar breakthroughs have been made in breast cancer diagnosis. In a research paper released earlier this year, Google announced that its machine learning system, optimised to locate breast cancer that had metastasised to lymph nodes adjacent to the breasts, had made a significant breakthrough in diagnostic efficiency. The field of breast cancer diagnosis is in urgent need of disruption. Diagnosing breast cancer with current technology requires reviewing immense amounts of collected patient data, often in a short amount of time, and produces a high number of false positives. Fully half of the 12.1 million mammograms performed in the United States each year produce false positives, causing stress for patients and necessitating invasive diagnostic procedures that drive up healthcare costs. Other organisations have also made vigorous progress in improving upon traditional mammography with machine learning, including similar research at the Houston Methodist Research Institute and an open initiative at IBM to improve the results of its predictive models for breast cancer screening.
Machine learning is also starting to have an impact on our ability to diagnose a wide spectrum of genetic disorders, which even experienced physicians can find difficult to detect. Many of these genetic conditions produce a constellation of facial characteristics that could provide doctors with hints for a diagnosis. But dysmorphology, the practice of recognising these facial patterns, is a rarefied skill that a doctor learns through decades of practice. Face2Gene is a mobile app that uses machine learning to help close this skill gap. It analyses pictures of a patient’s face and helps less-experienced doctors determine the presence of these genetic disorders by recognising their characteristic feature configurations. The makers of Face2Gene claim the software can currently help identify about 2,000 separate genetic disorders, and because it is designed to improve as more patient photos are scanned and analysed, its capability should only grow in the future.
Researchers are also achieving breakthroughs in Alzheimer’s research with the help of machine learning. Not only can machine learning algorithms be developed to analyse MRI scans of a patient’s brain to detect Alzheimer’s disease, but the models can also be trained to detect the mild cognitive impairment that is often a precursor to full-blown Alzheimer’s. Some machine learning applications claim to be able to diagnose Alzheimer’s ten years before symptoms appear. Just as with the diagnosis of many types of cancer (including skin and breast cancer), early detection of Alzheimer’s disease can improve treatment of the disorder and lead to better clinical outcomes.
Clearly, it’s an exciting time to be developing machine learning applications. Some in the medical community, including some doctors, are even envisioning a world where machine learning has so completely surpassed our ability to diagnose illness that human pathologists are no longer needed. Maybe it really is time for AI developers to start signing the Hippocratic Oath?
Note: This is the first in a series of blog posts investigating the effects of machine learning on the healthcare and pharmaceutical industries. Part two of this series will focus on machine learning in drug discovery and development.
Come and Join Us! Spencer and the Verne Global team will be exhibiting at the BioData World Congress 2017 on 2-3 November in Cambridge, UK. Please come and join us to find out how Iceland is well placed to support machine learning compute for the research, pharma and life science industries in the UK.