In Japan’s first reported case of artificial intelligence saving someone’s life, an AI has succeeded where a team of skilled human doctors did not. A woman with a rare type of leukaemia was correctly diagnosed by the AI. Even more remarkably, it took just ten minutes to compare the woman’s genetic information with 20 million clinical oncology studies and arrive at the life-saving diagnosis.
Does this mean robots are going to replace our doctors? Not quite, but increasing volumes of medical data, more powerful computers and smarter algorithms could see a future medical science in which human doctors are helped by AI.
Data-driven medicine taps into expanding databases of genomic, clinical, imaging (scans and X-rays) and molecular data. Advanced algorithms that learn from repeated cycles of enquiry are put to work, and all of this runs on affordable computer hardware. We can now sift through billions of records in minutes, finding answers that might take humans years.
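As a toy illustration of what “sifting through records” can mean, the sketch below matches a patient’s genetic markers against a few invented study records. The gene names and data are made up for illustration, and real systems use far more sophisticated matching than simple set overlap:

```python
# Toy illustration: rank study records by how many genetic markers they
# share with a patient. All identifiers and data here are invented.

studies = [
    {"id": "study-001", "markers": {"BRCA1", "TP53"}},
    {"id": "study-002", "markers": {"FLT3", "NPM1"}},
    {"id": "study-003", "markers": {"EGFR"}},
]

patient_markers = {"FLT3", "NPM1", "TP53"}

# Score each study by the size of the overlap with the patient's markers,
# then sort so the best match comes first.
matches = sorted(
    ((len(s["markers"] & patient_markers), s["id"]) for s in studies),
    reverse=True,
)

print(matches[0])  # the study sharing the most markers with the patient
```

Scaled from three records to millions, and from set overlap to statistical models, this is the kind of search that took the AI minutes rather than years.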
Human Genome Project
Data-driven medicine started with the Human Genome Project, which aimed to map and understand all of the genes in the human body by collecting DNA from countries all over the world. It spawned a multitude of spin-off projects, with a growing number of research institutes around the world specialising in DNA sequencing and a research agenda to understand the genetic basis of disease.
In the 13 years since the Human Genome Project, the computing power available and the quantity of data generated have increased dramatically, creating the foundation for data-driven medicine. For example, the Wellcome Sanger Institute produces more DNA sequence in one hour today than it did in its first ten years.
This allows it to work on five or six sequencing projects concurrently. The Institute makes its results available to the international research community, and its website is reported to get 20 million hits each week.
At the other end of the medical data continuum, we now have an abundance of personal-level health data. Devices synced to smartphones can monitor your heart rate, distance covered, calories burned and so on. It’s like having your own physician on hand to give you helpful advice and warnings when you need them: your blood glucose is dangerously high, for example, so it’s time for some insulin.
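The glucose warning above can be pictured as a simple threshold rule. The sketch below is a minimal, hypothetical example of how a health app might classify a reading; the thresholds are illustrative assumptions, not clinical guidance:

```python
# Minimal sketch of a rule-based health alert from wearable data.
# The thresholds below are illustrative assumptions, not medical advice.

HIGH_MMOL_PER_L = 10.0  # assumed "dangerously high" blood-glucose level
LOW_MMOL_PER_L = 4.0    # assumed "too low" blood-glucose level

def glucose_alert(reading: float) -> str:
    """Classify a single blood-glucose reading against simple thresholds."""
    if reading > HIGH_MMOL_PER_L:
        return "high: time for some insulin, and tell your doctor"
    if reading < LOW_MMOL_PER_L:
        return "low: eat or drink something sugary"
    return "normal"

print(glucose_alert(11.2))  # classified as high
print(glucose_alert(5.5))   # classified as normal
```

Real devices combine many such signals over time, but the principle of turning a stream of readings into timely warnings is the same.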
All of this information can be analysed and consolidated into your medical history, which can live securely in “The Cloud”. It’s early days, but this is being used in some circumstances.
In the future, if you find yourself in need of hospital treatment, your detailed online medical records, which might include genome sequencing and other useful information, could be accessed by the attending doctor who would likely be using an AI helper on their tablet computer to support their diagnosis. As they do their rounds, consulting with their patients, their diagnostic acumen could increase by several orders of magnitude through the discreet use of their AI helper.
Great potential exists for AI and data-driven medicine to save lives, improve standards of patient care and save money for providers, particularly hospitals and research institutions.
Is it risky?
The risks of AI come down to three broad issues: programming errors, cyber attacks and taking instructions too literally. With due diligence, none of these need be show-stoppers.
Programming errors, often called “bugs”, creep in when development and testing have not been performed properly; with a rigorous process they are largely avoidable. Malfunctions can range from minor to serious, but software has been used for decades in safety-critical settings such as hospitals and aviation, and we should expect no less of medical AI applications.
Cybersecurity is a well-funded area of research that is doing a generally good job of staying ahead of the bad guys. While we must not be complacent, there is no good reason why medical AI, or any AI, could not be safely protected from attack.
Taking instructions too literally can likewise be managed by building in safeguards, as is standard practice for any safety critical system. It’s highly unlikely a hospital would leave an AI in charge of life or death decisions, such as whether to turn off life support.
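One way to picture such a safeguard is a “human in the loop” check: the AI may only recommend, and any safety-critical action requires explicit clinician approval. The action names and approval flow below are invented for illustration:

```python
# Sketch of a "human in the loop" safeguard: an AI assistant may suggest
# actions, but anything safety-critical is blocked unless a clinician
# explicitly approves it. Action names here are hypothetical.

CRITICAL_ACTIONS = {"adjust_life_support", "administer_high_dose"}

def execute(action: str, clinician_approved: bool = False) -> str:
    """Run an action only if it is routine or a clinician has approved it."""
    if action in CRITICAL_ACTIONS and not clinician_approved:
        return f"blocked: '{action}' requires clinician approval"
    return f"executed: {action}"

print(execute("log_vital_signs"))                              # routine, runs
print(execute("adjust_life_support"))                          # blocked
print(execute("adjust_life_support", clinician_approved=True)) # runs
```

The design choice is that the default is always refusal: the system cannot take a life-or-death action simply because an algorithm recommended it.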
While risks exist, they can be managed, as we have been doing in other domains of computing for decades. So why do we need doctors at all? Medicine is the most people-centric of professions: unless a robot could be as empathetic as a human (which is a long way off), patients simply would not stand for being treated by a robot doctor.
Opinion in the medical establishment is sure to be divided on whether AI helpers such as IBM’s Watson, the medical application of its Jeopardy!-winning supercomputer, are a good thing or just a flashy toy.
Given the very real social and economic benefits of medical AI, we should not be afraid to explore the possibilities of AI helpers in a supportive role, with humans always in the driver’s seat.
David Tuffley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Author: David Tuffley, Senior Lecturer in Applied Ethics and Socio-Technical Studies, Griffith University