The personalised medicine market - a broad term for a range of targeted therapies that take advantage of new big data technologies - is expected to be worth $149 billion by 2020. This rapid growth reflects an urgent need for more accurate, more targeted healthcare, and is also evidence of the growing influence of machine learning within healthcare and drug production.
According to the medical journal BMJ, medical error is the third leading cause of death in the United States, accounting for over 255,000 deaths a year. Countries in Europe face a similar challenge: a United Kingdom Department of Health study entitled “An Organisation with a Memory” estimates 850,000 adverse healthcare events a year, equivalent to about 10% of all hospital admissions. There are growing signs that machine learning could help reduce these numbers.
One way machine learning can help is by providing doctors with a more accurate therapeutic plan based on the specific physical needs and situation of the patient. A good example is Google’s recent initiative to improve radiotherapy using its DeepMind platform. Radiotherapy, the use of X-rays and other high-energy rays to destroy cancer cells, requires targeted application to spare as much of the healthy tissue surrounding the tumour as possible. Mapping out the tumour and the surrounding anatomy to plan this targeting, a process known as segmentation, is particularly important in head and neck cancers, where delicate anatomical structures closely surround the tumour.
Google, which is working in cooperation with University College London Hospitals NHS Foundation Trust on the project, believes that its DeepMind machine learning system can rapidly differentiate cancerous tissue from healthy tissue in CT and MRI scans, not only speeding up the segmentation process and improving its accuracy, but also reducing radiotherapy side effects by destroying less of the surrounding healthy tissue.
But the potential of machine learning reaches far beyond cancer treatment. According to a report by the Personalised Medicine Coalition, “The Case for Personalised Medicine,” a large number of patients derive no clinical benefit from the first drug they are offered by their doctors. For example, nearly 50% of arthritis patients and 43% of diabetes patients will not respond to their initial round of treatment. Sometimes the therapeutic effect of a drug may even be dangerous, a fear that grows more relevant as drugs utilise ever more complex pharmacological mechanisms. Warfarin, the leading cause of drug-related hospitalisations in the United States, is a powerful anticoagulant known to elicit radically different, sometimes dangerous reactions from patients. This complexity has made it an interesting test case for machine learning, and numerous studies have been launched to make prescribing warfarin safer.
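The idea behind such studies is to learn a mapping from patient features to a safe dose. As a minimal sketch of the principle, not any clinical algorithm, the toy one-variable model below fits invented data relating a patient's age to a weekly dose; the figures and coefficients are fabricated for illustration only.

```python
# Illustrative sketch only: fitting a toy one-variable dose model
# (weekly dose vs. patient age) to invented data, in the spirit of
# studies that predict dosing from patient features.
# All numbers are made up; this is not clinical guidance.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Fabricated training data: older patients tend to need lower weekly doses.
ages  = [30, 40, 50, 60, 70, 80]
doses = [42, 39, 35, 31, 28, 24]  # mg/week, fabricated

a, b = fit_line(ages, doses)
predicted = a + b * 55  # predicted weekly dose for a hypothetical 55-year-old
print(round(predicted, 1))
```

Real dosing models use many more features (weight, genotype, interacting drugs) and clinically validated coefficients, but the fit-then-predict structure is the same.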
Similar machine learning systems are being developed to prescribe opiate painkillers more safely, to better predict patient response to antidepressant drugs, and generally to reduce adverse drug reactions.
In other cases, ineffective therapies may be prescribed for lack of information. Patient data is often spread over heterogeneous EHR platforms at different hospitals, making it difficult for doctors to access and synthesise into a course of treatment. This is especially true as EHR interoperability has been frustratingly slow to implement, and only about 40% of hospitals exchange data externally. Data scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a machine learning solution to this problem.
In a paper called “EHR Model Transfer,” the MIT researchers outlined a natural-language processing system that extracts important information based on key clinical concepts, such as “skin cancer” or “elevated pulse.” The system is designed to pull this information from patient records — regardless of the format of the underlying EHR data — circumventing the interoperability problem and helping doctors find and access the information they need to make better diagnoses.
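To make the format-independence idea concrete, here is a deliberately simplified sketch, not the MIT system itself, of mapping free-text notes to canonical clinical concepts. The concept dictionary and record layouts are invented for illustration.

```python
# Illustrative sketch only: a toy concept extractor in the spirit of
# normalising free-text notes to key clinical concepts, regardless of
# the EHR format they came from. Concepts and records are invented.
import re

# Hypothetical concept dictionary: canonical concept -> surface forms.
CONCEPTS = {
    "skin cancer": ["skin cancer", "melanoma"],
    "elevated pulse": ["elevated pulse", "tachycardia", "rapid heart rate"],
}

def extract_concepts(note):
    """Return the canonical clinical concepts mentioned in a free-text note."""
    text = note.lower()
    found = set()
    for concept, synonyms in CONCEPTS.items():
        if any(re.search(r"\b" + re.escape(s) + r"\b", text) for s in synonyms):
            found.add(concept)
    return found

# Two records with very different layouts yield the same structured view.
record_a = "Assessment: pt presents with tachycardia; hx of melanoma."
record_b = "HISTORY||skin cancer||NOTE||rapid heart rate observed"
print(extract_concepts(record_a))  # {'skin cancer', 'elevated pulse'}
print(extract_concepts(record_b))  # {'skin cancer', 'elevated pulse'}
```

The point is that once notes from any EHR are reduced to a shared concept vocabulary, downstream models can be trained and applied without caring which hospital's system produced the record.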
But the potential for machine learning in personalised medicine could be even more profound than the above cases indicate, especially when considered alongside complementary technologies like wearables. An estimated 90% of the world’s data, healthcare data included, has been generated in the last two years. While the healthcare industry has been slow to exploit this data, as exemplified by the nagging lack of interoperability between EHR systems and healthcare providers, consumer technology companies not only have reliable access to high-quality user data, but have been diligent in learning to extract value from it.
Consumers don’t seem to mind healthcare data being collected through wearables: according to Accenture, 78% of survey participants would be willing to wear a device for health tracking. One example of how machine learning and wearables can work together is SleepSight, a system for detecting psychotic episodes in schizophrenic patients. Designed by clinicians at King’s College London, it uses a Fitbit wrist monitor connected to a mobile phone to track a sleeper’s movement, heart rate, ambient light, and other information. Changes in sleep patterns are a primary indicator of an impending psychotic episode, and the researchers believe that with this real-time patient data they can help patients suffering from psychosis anticipate and treat episodes before they occur.
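One simple way a system could surface such changes, a minimal sketch and not the actual SleepSight method, is to flag nights whose sleep duration deviates sharply from a rolling baseline. The data and thresholds below are invented for illustration.

```python
# Illustrative sketch only: flagging disrupted nights in wearable sleep
# data by comparing each night to a rolling baseline. Thresholds and
# data are invented; this is not the actual SleepSight algorithm.
from statistics import mean, stdev

def flag_disrupted_nights(hours_slept, window=7, z_threshold=2.0):
    """Flag nights whose sleep duration deviates sharply from the
    rolling baseline of the previous `window` nights."""
    flags = []
    for i in range(window, len(hours_slept)):
        baseline = hours_slept[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(hours_slept[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return flags

# A week of stable sleep followed by one sharply disrupted night.
nightly_hours = [7.2, 7.0, 7.4, 6.9, 7.1, 7.3, 7.0, 3.5]
print(flag_disrupted_nights(nightly_hours))  # [7]
```

A production system would combine many signals (movement, heart rate, ambient light) and a learned model rather than a fixed threshold, but the pattern of comparing new readings against a personal baseline is the same.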
It’s clearly an exciting time to be involved in the fields of machine learning and healthcare. The team at Verne Global will continue to enthusiastically support researchers and companies in healthcare, life sciences, pharmaceuticals, and machine learning, and we stand ready to support this technological development from our HPC-optimised, industrial-scale data center here in Iceland.