Digital mental health tools have the potential to be the “next OxyContin scandal”, GPs attending the recent conference of the World Organisation of Family Doctors (WONCA) heard.
The conference, held at the Convention Centre Dublin last week, brought together more than 2,400 attendees from around the globe for discussions on a range of topics, including digital health, telemedicine and artificial intelligence.
Louise Stone is a practising GP and an associate professor in the social foundations of medicine group at the Australian National University. In a session on why GPs appear reluctant to prescribe digital mental health services, she expressed concern about the lack of critical research on these tools and called for more rigorous and comprehensive clinical studies on the positive and negative aspects.
“I worry that digital entrepreneurs are the new version of the OxyContin drug company,” Associate Professor Stone said. “I really worry that we are at risk of bringing in algorithms that we GPs cannot interpret because they are obscure, or they are behind a whole lot of paywalls.
“I need to know the benefits, harms, contraindications and indications, the number of patients who dropped out, and I don’t get that from any of the data. I can’t see why we wouldn’t expect the researchers to report that.
“GPs are critical thinkers and we deserve to think critically. That’s what we do. We really need to be thinking more critically about this.”
The paucity of clinical evidence to support digital health tools was a common topic of conversation throughout the conference. WONCA hosted a special session to begin work on setting a primary care research agenda for the role of artificial intelligence (AI) in practice.
During the packed session there was consensus that GPs needed much more rigorous evidence before they would be comfortable implementing AI-assisted tools.
Speaking at the session, US GP Matthew Thompson said: “We have seen a flurry of editorials and commentaries but what is missing is the evidence. Where is the research?”
Dr Thompson pointed to a scoping review published in the Lancet Digital Health earlier this year that found just 86 randomised controlled trials evaluating AI in clinical practice. Most of the studies focused on radiology, cardiology or gastroenterology. The vast majority were single-centre trials and reported only positive primary endpoints, such as performance yield.
He also highlighted a systematic review of AI-powered chatbot interventions for managing chronic conditions, published in the Annals of Medicine this year. It found eight RCTs, three of them relating to breast cancer, which, as Dr Thompson pointed out, is not among the most common chronic conditions seen in the primary care setting.
He said GPs must be involved in the development of these tools and, in a timely warning for Ireland, remarked on the experience of electronic health record (EHR) implementations in other countries, where systems were imposed on GPs without proper engagement.
“We hated them then. We still hate them now,” he said. “That, to me, is what we need to avoid; that top-down approach where something appears in our clinic that we didn’t really ask for, that we didn’t get to help build and design, and suddenly our lives are miserable.
“The challenge is that AI is moving so quickly that I fear our research efforts in primary care will be too slow to keep pace and things will be implemented before we have had a chance to evaluate them.”
He admitted that randomised controlled trials may not be practical or indeed necessary in all situations. “I would strongly argue that it’s not a one-size-fits-all,” he said.
One example was the introduction of AI-assisted documentation at Kaiser Permanente, the integrated managed care consortium in California. The technology, now in use across all the network’s hospitals and primary care clinics, completes medical documentation in real time during the consultation. It was rolled out without RCT evidence.
“We might argue that we don’t need randomised controlled trials for this kind of tool,” he said. “When do we need randomised controlled trials and when can we implement things without them?”
Kaiser Permanente has developed seven principles for the responsible use of AI in healthcare, which are designed to ensure that the AI tools it uses are “safe and reliable”.
“We believe our clinicians and care teams can use AI to improve health outcomes for our members and the communities we serve,” Kaiser’s vice president for artificial intelligence and emerging technologies, Daniel Yang, said. “But we also know that nothing slows down the adoption of new technologies more than lack of trust – or worse, technologies that can lead to patient harm.
“That’s why we use a responsible AI approach. This means we adopt AI tools and solutions only after we thoroughly assess them for excellence in quality, safety, reliability, and equity. With a focus on building trust, we use AI only when it advances our core mission of delivering high-quality, affordable health care services.”
WONCA hopes that last week’s conference will drive forward research that provides GPs with the essential information they need to feel confident about adopting artificial intelligence tools in clinical practice.