Artificial intelligence is increasingly becoming embedded within our lives. Whether we use a voice assistant to order our shopping, check the weather or control our heating, set predictive replies on our email accounts to speed up communications, or use an online chatbot to answer our banking queries, AI is helping to create quicker and more seamless experiences for all of us.
Healthcare is one sector where practitioners are beginning to recognise its previously untapped potential, particularly when it comes to automating mundane tasks and analysing large collections of data. Many companies, universities and research institutes are now using AI to help speed up diagnosis, inform clinical decisions and even create entirely new digital tools that can improve patient care.
Machines could help, not hinder, healthcare
While many fear that advancements in AI could lead to fewer jobs for humans, medical practitioners who are spearheading the healthcare and tech movement believe it can be a tool that helps rather than hinders. Machine learning can speed up long, boring processes and help to analyse complex data accurately, so that doctors and nurses can spend more time seeing patients and forming treatment plans.
“We are seeing really good advancements in key areas, from medical imaging [such as x-rays and ultrasounds] to NHS transport for substances like blood, to tackling health problems in the population, like sepsis,” says Dr Indra Joshi, clinical lead at NHS England’s Empower the Person Portfolio, which focuses on digital initiatives in the NHS. “It’s thinking about what AI can do; it’s difficult for the human brain to compute large sets of data and see patterns in it. If a computer can do this automatically, the human is left to interpret those findings.”
“This isn’t about machines replacing doctors”
Those developing AI-powered healthcare tools agree that the purpose of any new digital product should be to assist medical professionals, acting in a complementary way rather than as a replacement.
“The potential for AI in healthcare is pretty big, but this isn’t about machines replacing doctors, which is vastly more difficult than is sometimes suggested,” says Rich Savage, chief scientific officer at Pinpoint Data Science, a company developing an AI tool for cancer diagnosis.
“I think it’ll be much more about building smarter tools for doctors to use, and intelligently automating tasks in ways that free up healthcare professionals, so they can focus on improving the experiences and outcomes for patients,” he adds. “The best way to use machine learning is to design tools that support human experts.”
Nearly £2 billion towards AI for health
Individual medical experts and NHS trusts are investing in AI, but it is also being integrated into a wider Government agenda for healthcare.
As part of its latest industrial strategy, and in keeping with the NHS’s long-term plan to focus on disease prevention, the Government is hoping AI will have transformed the ways doctors prevent, diagnose and treat chronic disease by 2030. It is also promising £50 million towards five new physical centres dedicated to AI in medicine, as well as another £1.3 billion towards using AI to detect and diagnose disease.
There has already been a lot of progress in the area from private tech companies — Mindscape, for example, is a voice tool that talks to people with stress and anxiety, responding to them with tips on managing their mental health, breathing exercises and bespoke music, while private healthcare provider Babylon uses an AI chatbot to assess patients’ symptoms as a first port of call.
But to what extent is AI being used in the NHS and public healthcare, to help both clinicians and patients? Machine learning is still in its infancy but projects are currently being undertaken that aim to streamline processes, diagnose people more quickly and monitor patients’ health more closely.
From blood to beds: streamlining day-to-day tasks
Faster diagnosis of diseases such as cancer, diabetes and dementia is most certainly at the top of scientists’ and medical practitioners’ priority lists — but AI can also play a big part in the administrative side of healthcare, which might seem less exciting, but is also critical in getting patients treated quickly and saving lives.
Some NHS trusts have started collecting anonymised data to model a hospital’s needs, from the volumes of substances required through to the beds available and patient capacity.
One such area is blood transportation, undertaken by NHS Blood and Transplant — the transport of blood and its products, such as white blood cells, red blood cells and plasma, for use in transfusions. Blood and its components have a range of crucial uses, from treating medical conditions such as anaemia and certain cancers, to being used in surgery and treating blood loss after childbirth.
Dr Joshi says that getting the quantities right is crucial in meeting a hospital’s needs and reducing waste of “expensive” blood bags.
“AI techniques are being used to transport blood in better ways and determine where it flows,” she says. “We need to get away from thinking of it as ‘AI’, and instead see it simply as ‘data collection’.
“By collecting data, we can then put together models for different hospitals, determining whether it needs seven bags of blood, eight bags, or maybe five that day, based on the situation. AI can be used to analyse data and come up with thousands of models for blood transport, to make sure it’s being done in the best way possible.”
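Joshi’s description amounts to demand forecasting from historical usage data. As a loose illustration only (not NHS Blood and Transplant’s actual system, and with invented numbers), a hospital’s next-day blood order could be sketched like this:

```python
# Illustrative only: forecast tomorrow's blood-bag demand from a
# hospital's own recent usage (all figures invented).

def forecast_bags(daily_usage, window=7):
    """Mean of the last `window` days, rounded up so the ward
    is not left short."""
    recent = daily_usage[-window:]
    return -(-sum(recent) // len(recent))  # integer ceiling of the mean

usage = [6, 7, 5, 8, 7, 6, 7, 9, 8]  # bags used per day, hypothetical
print(forecast_bags(usage))  # → 8
```

A real system would fold in day-of-week effects, scheduled operations and shelf life, but the shape is the same: a model per hospital, fed by consistent data collection.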
Operation lists and bed availability
This method is also starting to be applied to complicated operational tasks that can prove difficult for humans, says Joshi. These include writing up theatre lists and surgery timings for thousands of hospital patients, covering both booked-in operations and emergencies; determining how many beds are available in different wards at any one time; and tracking patient flow (how many patients are entering and leaving a hospital).
“It’s not just population health where data can be used,” says Joshi. “We can use virtual assistants to create the most efficient systems possible. It can be hard for a human brain to figure out these things, so we can optimise that working-out process by using AI to look for patterns in data.”
While the process of AI data analysis sounds complex, it is in fact just a more advanced version of what the human brain already does — the “complex models” that are formed as a result of the analysis are essentially just decision flow-charts, which include multiple routes and pathways.
“Think of it as, you need to buy a new phone, and you’re trying to decide which phone to buy,” says Joshi. “You might form a ‘yes/ no’ decision tree of two phones and their different attributes. That is essentially a model.”
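Joshi’s phone-buying analogy can be written down directly. The sketch below is purely illustrative, with made-up questions and phone names, but it shows the same yes/no structure that a learned model formalises:

```python
# A decision tree in miniature: internal nodes ask yes/no questions,
# leaves are decisions. Questions and phones are made up.
tree = {
    "question": "Is battery life the priority?",
    "yes": "Phone A",
    "no": {
        "question": "Is the budget under £500?",
        "yes": "Phone B",
        "no": "Phone A",
    },
}

def decide(node, answers):
    """Walk the tree using a dict mapping each question to True/False."""
    while isinstance(node, dict):
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node

print(decide(tree, {"Is battery life the priority?": False,
                    "Is the budget under £500?": True}))  # → Phone B
```

Machine learning tools build much larger trees (or thousands of them) automatically from data, rather than by hand.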
Diagnosing disease more quickly
In the same way that data can be analysed to speed up day-to-day processes, it can also be used to automate certain tasks and enhance disease detection and diagnosis.
Medical imaging encompasses different techniques and equipment used to create visual images of the inside of the body for analysis — for instance, radiography (x-rays) used to detect breaks and fractures, ultrasounds to detect pregnancy, and magnetic resonance imaging (MRI) to detect disease and tumours in organs, such as the prostate, brain, breast, pancreas and liver.
Using a virtual assistant to look for patterns in some of these images can free radiologists and doctors from doing this job, allowing them to focus purely on interpreting the results.
“There are certain things in medical imaging that are boring for a human, but easy and quick for a computer,” says Joshi. “For instance, measuring the size [and therefore] growth of a mass on someone’s liver over time, who has a scan every year. Currently, a radiologist will physically measure the distance between ‘x’ and ‘y’ points on the mass. This could easily be done automatically by a computer, and AI can then be used to find patterns.”
Conclusions can then be formed from looking at these patterns, and doctors can focus on forming “care pathways”, says Joshi — unique treatment plans and solutions for different patients. “You can use the data to decide whether a scan should be repeated in two weeks’, or three weeks’ time, for example,” she says. “It helps you answer questions.”
Alongside being used more generally in imaging, this technique is also being applied to diagnosing specific diseases and medical conditions. Sepsis is a serious and potentially fatal complication of infection in the body, which, if not treated quickly, can result in multiple organ failure and eventually death. Roughly 44,000 people die from sepsis every year in the UK, and there has been a rise in deaths in recent years, which some put down to staff shortages and ward overcrowding. Delayed treatment has also resulted in avoidable consequences like limb amputation — if treated within an hour, most patients can make a full recovery.
A group of computer scientists and clinical researchers at Imperial College London is currently using data capture to predict the best treatment strategies for sepsis patients, including prescription plans and drug dosages.
The team has used AI to analyse the records of 96,000 former hospital patients in intensive care units in the US, looking at details such as their symptoms, “vital signs” data like blood pressure and heart rate readings, age, pre-existing conditions, and the decisions that doctors made for them. Using a process called “reinforcement learning”, the system can then be trained to make decisions and come to conclusions about treatment plans, based on all the variables it has analysed for each patient.
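The AI Clinician’s actual model isn’t published here, but the “reinforcement learning” it uses can be illustrated with a toy example. The sketch below runs tabular Q-learning on an invented two-state treatment problem; the states, actions, dynamics and rewards are all hypothetical:

```python
# Hedged sketch: tabular Q-learning on a toy treatment problem
# (entirely invented; not the AI Clinician's model or data).
import random

random.seed(0)
states = ["stable", "deteriorating"]
actions = ["low_dose", "high_dose"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(state, action):
    # Hypothetical dynamics: a high dose helps deteriorating patients,
    # a low dose suffices for stable ones. Returns (next_state, reward).
    if state == "deteriorating":
        return ("stable", 1.0) if action == "high_dose" else ("deteriorating", -1.0)
    return ("stable", 1.0) if action == "low_dose" else ("stable", 0.5)

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration
state = "deteriorating"
for _ in range(5000):
    action = (random.choice(actions) if random.random() < epsilon
              else max(actions, key=lambda a: Q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The learned policy: the best-scoring action in each state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)
```

The real system learns from the recorded outcomes of tens of thousands of patients rather than a simulator, but the principle is the same: score state–action pairs by the outcomes that followed them, then recommend the highest-scoring action.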
The team has called the system AI Clinician, and a trial is planned for intensive care units in the UK, where it will be used to suggest individual patient plans.
Dr Aldo Faisal, reader in neurotechnology within Imperial’s departments of bioengineering and computing, who leads the project, says AI Clinician is intended to help doctors come to decisions around treatment and dosage, such as how much fluid to give or when to start a patient on vasopressors to maintain their blood flow.
Testing has shown the tool to be accurate: the clinical study found that, in 98% of cases, the AI Clinician’s treatment plan either matched the doctor’s or was better. It also found that the death rate was lowest when the doctor’s decisions matched the AI Clinician’s suggestions.
The idea is to help catch and treat sepsis earlier. “An intensive care doctor will see roughly 15,000 patients by the time they retire,” says Faisal. “Yet this system has seen nearly 100,000 patients — it has the lifetime experience of eight doctors and has learned from each of those cases what the best decisions are for each situation.
“Doctors do more than just diagnose, they treat people,” he adds. “The AI Clinician system focuses on capturing this cognitive capacity of doctors.”
Some new AI tools are focusing on early detection of long-term disease, rather than emergency care. Pinpoint Data Science, a Leeds-based company founded by clinicians and scientists at the University of Leeds and various NHS trusts, is developing a new system designed to catch cancer.
The process involves collecting tens of thousands of patient blood tests, anonymising the data, then analysing it to decipher the early signs of cancer. The idea is to help GPs make decisions, by providing them with more information about whether a patient’s symptoms indicate cancer or not.
“Cancer diagnosis in the NHS often starts with a patient visiting their GP with some vague symptoms, which could be due to cancer, or could be due to something else,” says Rich Savage, chief scientific officer at Pinpoint Data Science. “The key is to give the GP the best possible information with which to make their referral decisions.
“Our test is designed to take in a range of useful pieces of data and distill them into information that makes the decision-making process easier and higher quality,” he adds. “In general, larger anonymised datasets mean the machine learning algorithm can do a better job of learning.”
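Pinpoint’s model itself isn’t public, but the idea of distilling blood-test markers into a single risk score for a GP can be illustrated with a simple logistic regression on synthetic data. Everything below, from the markers to the numbers, is invented:

```python
# Illustrative sketch on synthetic data (not Pinpoint's model): a logistic
# regression that turns two hypothetical blood-test markers into a single
# cancer-risk score between 0 and 1.
import math
import random

random.seed(1)

def make_patient(has_cancer):
    # Invented markers: cancer cases skew higher on both, with overlap.
    base = 1.0 if has_cancer else 0.0
    return ([random.gauss(base, 0.5), random.gauss(base, 0.5)], has_cancer)

data = [make_patient(i % 2 == 0) for i in range(400)]  # balanced classes

# Train by stochastic gradient descent on the cross-entropy loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - (1.0 if y else 0.0)
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def risk(x):
    """Risk score for a new patient's two marker values."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

print(risk([1.2, 1.1]), risk([-0.2, 0.1]))  # high-marker vs low-marker patient
```

A real test would use far more markers, a carefully validated model and calibrated thresholds; the point is only that the output is a piece of information for the GP, not a diagnosis.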
While the AI Clinician system for sepsis and Pinpoint Data Science’s cancer detection tool focus on data capture and analysis, other AI medical tools have been developed which use intelligent questioning and analyse humans’ responses instead, mimicking existing products like home voice assistants such as Amazon Alexa, and online chatbots.
One such tool is COCOA (Computerised Cognitive Assessment) — a tool designed to converse with older people in a way that analyses their cognitive and conversational ability and checks for early signs of dementia.
The project is run by the University of Sheffield’s department of computer science, together with the university’s Centre for Assistive Technology and Connected Healthcare (CATCH), and has received £50,000 in Government funding.
Through a virtual doctor — an animated, talking head on a computer screen — COCOA asks a person specially designed, cognitively demanding questions that span different brain functions, such as memory, language and attention.
The person’s speech is then analysed by the bot, and patterns are extracted. The aim is to help diagnose dementia early, perhaps before a family member or friend would pick up on symptoms, using research evidence rather than anecdote. It can also be difficult for humans to distinguish between dementia and natural memory loss, so the bot looks to offer a more informed way of checking. It is also non-invasive compared with existing dementia tests, which use methods such as radiation and lumbar puncture.
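The exact features COCOA extracts aren’t described here, but a simplified, hypothetical version of the idea, counting filler words and vocabulary variety in a transcribed answer, might look like this:

```python
# Simplified, hypothetical sketch of speech-analysis features
# (not COCOA's actual feature set).
def speech_features(transcript):
    """Extract a few crude conversational markers from a transcribed answer."""
    words = transcript.lower().split()
    fillers = {"um", "uh", "er", "erm"}
    return {
        "word_count": len(words),
        "filler_rate": sum(w in fillers for w in words) / max(len(words), 1),
        # Type-token ratio: unique words / total words, a rough
        # measure of vocabulary variety.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

print(speech_features("um I went to the er the the shop um yesterday"))
```

A real system works on audio as well as text, uses many more features, and compares them against clinically validated baselines; these three numbers are only a flavour of the kind of signal being measured.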
The team that has developed the tool is made up of AI experts, computer scientists, neurologists, neuro-psychologists and clinical linguistic experts, and is headed up by Heidi Christensen, senior lecturer at the department of computer science.
They say the aim is to create a tool that will help GPs make the right referral decisions, which are currently often made purely through pen-and-paper memory tests.
“These tests are not accurate and the GP is likely to lack the expertise to tell who is developing dementia,” says the team at CATCH. “As a result, GPs refer too many patients who are not developing dementia for specialist assessment, or they fail to identify people who are developing the disease. This is costly, causes unnecessary worry and delays diagnosis.
“Our tool is based on analysis of speech and language of patients with memory problems,” they add. “We have shown that we can accurately predict whether a patient has got [another condition like] functional memory disorder (FMD), or neurodegenerative dementia (ND) by extracting information from conversations, which are [similar to] a typical history-taking part of a [medical appointment].”
The project is currently in the development and testing phase and is intended to be rolled out to NHS GP surgeries in the future.
Helping to keep patients safe
As well as analysing data to diagnose and treat patients, AI is being used to monitor people’s safety and wellbeing — like a smart home security device.
Monitoring high-risk patients
The Oxehealth Digital Care Assistant (DCA), created by Professor Lionel Tarassenko and his team at the University of Oxford, is a connected system that monitors patients in the absence of a nurse, doctor or carer.
Intended for those most in need of observation, such as mental health patients and older people in care homes, the DCA is made up of a smart, optical sensor placed in a patient’s room, digitally connected to a computer in the nurses’ office, which processes the data. Alerts, reports and vital sign checks also appear on staff’s portable devices, such as tablets.
It has three main functions — to watch patients and assess their risks, such as the likelihood of an older person falling over and hurting themselves; monitor their vital signs, such as their pulse and breathing rate, particularly overnight, reducing the need for staff to enter rooms and disturb them; and analyse all of their data, including their sleep, activity and vital signs, turning these factors into a report so that medical staff can make bespoke patient care plans. The DCA is currently being used in selected care homes and NHS mental health trusts.
Its purpose is not only to monitor patients more closely, but also to give nurses and care assistants more time back, says Daniel Bayley, product manager at Oxehealth.
“It’s like placing an assistant in every room, paying attention when staff can’t be there,” he says. “Staff [using the DCA] refer to it as the ‘sixth member of a team’ – it gives them more time for hands-on care.”
Reports can be used for treatment plans
Beyond ward staff, the data captured from each patient can also be scrutinised by senior doctors such as consultants, who can use the report function to support their clinical decisions.
The DCA has proven successful in specific settings, such as for dementia patients. At Coventry and Warwickshire Partnership NHS Trust, it has improved patient safety in the dementia wards significantly, resulting in a third fewer falls at night, a 71% reduction in enhanced observation by staff, and a halving of the need to attend Accident and Emergency (A&E) departments.
What are the challenges of AI in healthcare?
It is still early days, but there is a lot of potential for machine learning to help inform clinical decisions and offer better patient care. However, there are considerations to be made by both the creators of new tech and the medical professionals using it.
Patient data and ethics
One obvious challenge is the ethics around capturing patient data — currently, many of these systems mass-collect then anonymise data to spot patterns and inform decisions, but others, such as the DCA system, record data specifically for individuals to help monitor their condition.
Secure IT systems need to be in place to protect patients’ data, says Savage, and anonymising whenever possible is also crucial. “It’s very important to protect patient data, so at Pinpoint Data Science, we never actually receive or hold the NHS data ourselves,” he says. “Instead, we conduct our analysis of the anonymous data remotely through a highly secure IT system at the Leeds Institute of Data Analytics.”
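The article doesn’t detail how such anonymisation is done. One common, generic technique is pseudonymisation: replacing identifiers with a salted hash before analysts ever see the data. A minimal sketch, with an invented salt and record:

```python
# Generic pseudonymisation sketch (not Pinpoint's actual pipeline).
# The salt would be held by the data controller, never by the analysts.
import hashlib

SALT = b"keep-this-secret"  # hypothetical value, for illustration only

def pseudonymise(record):
    """Replace the NHS number with an irreversible pseudonymous ID."""
    nhs_number = record.pop("nhs_number")
    digest = hashlib.sha256(SALT + nhs_number.encode()).hexdigest()
    record["pseudo_id"] = digest[:12]
    return record

print(pseudonymise({"nhs_number": "9434765919", "haemoglobin": 13.2}))
```

The same patient always maps to the same `pseudo_id`, so records can be linked across tests, but the identifier itself cannot be recovered without the salt.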
If individual data is being captured, then both the designers of the tool and the NHS trust in question need to have a “robust regime” for handling patient data and keeping it safe, adds Bayley.
Consistency of methods
Alongside security, Joshi says there needs to be consistency in data capture — there should be frameworks in place to ensure nurses and doctors are taking measurements in a standardised way, and that analysis is the same across the board, to make sure that results are accurate and coherent.
“If I capture blood pressure, it needs to be captured in exactly the same way by everybody else,” she says. “This is important so that if other people want to interrogate the results, it makes sense.”
Always look for evidence
Medical practitioners also need to be wary of jumping on new tech that isn’t substantiated, says Bayley, and should always look for “evidence of patient benefit and proven clinical outcomes” before opting to use it.
Intuitive and easy to use
And ultimately, the tech needs to be easy for medical staff to use, not only for tech and AI experts; otherwise it fails at saving their time and effort. User experience (UX) needs to be thoroughly considered when creating a new tool.
“[Designers and developers] need to look at how medical staff feel about a product,” Bayley says. “Do they trust it? Do they rush to use it, or do they use it reluctantly? How much time does it save them?”
This is also where user interface (UI) comes in — software developers and digital designers need to be sure they are building front-end interfaces that are “clean and intuitive” for non-technical, medical staff to use, and which clearly display results, says Savage.
While the use of AI in healthcare is still young, projects such as these demonstrate what machine learning and smart tech can do to help clinicians make informed decisions that are backed up by evidence.
In doing so, this could make a drastic difference to their workloads, which are already heavily burdened due to staff shortages, and give them back time to spend with patients. But ultimately, success relies on product developers, scientists and clinicians alike thinking about the relationship between AI and healthcare professionals as symbiotic, rather than interchangeable.
“Good healthcare AI enables humans to do more of what only they can do — it should not seek to replace them,” says Bayley. “Fundamentally, we don’t have enough doctors and nurses and they are all but overwhelmed by the demands of an ageing population — they deserve all the help we can give them.”