Decades into the future, when we finally have autonomous flying cars and intelligent nanobots vanquishing cancer cells in our bloodstream, one date is likely to be mentioned by cyborg historians taking kids on augmented-reality museum tours.
On 30 November 2022, OpenAI launched ChatGPT and, in doing so, brought artificial intelligence (AI) out of the realm of science fiction and into mainstream, ubiquitous reality.
Of course, the concept of AI pre-dates ChatGPT by over seventy years. In medicine, research into intelligent systems began in the 1960s and the first application of AI was recorded in 1971 with the creation of INTERNIST-1, an algorithmic model for disease diagnosis.
Over the years, the application of AI proliferated into various clinical specialities and assisted with diagnosing and predicting risks of diseases. Even before ChatGPT, AI chatbots responded to patients’ and clinicians’ enquiries about diseases, diagnosis, treatment options and prognosis.
However, trained on data drawn from across the internet up to 2021, ChatGPT is not only able to respond to specific enquiries in a more personable and conversational manner; it can also compose stories and songs, provide commentary, and enhance a person’s existing work.
Perhaps due to its ease of use and the general quality of its responses, more than 100 million people had used ChatGPT within two months of its launch.
However, this article is not about ChatGPT, but rather about the challenges of realising the full potential of AI in healthcare. Not long after ChatGPT’s launch, naysayers reached for their pitchforks and predicted the loss of human jobs to AI, experts warned of the need for tight regulation of AI technology, and some forewarned the fall of mankind due to AI. Regulatory bodies released position statements calling for a moratorium on the use of AI technology in healthcare and for stringent regulation of its use.
Like others, the authors of this article agree that the application of AI in healthcare requires regulation to ensure patient safety is not compromised, and that any AI technology introduced to consumers is safe to operate and minimises the risk of harm. However, we believe the approach to applying AI in healthcare needs to mature from its current starting position of absolute risk aversion to a more sanguine outlook, one that discerns the potential benefits of AI technology.
The following recommendations aim to convince academics, clinicians and regulatory bodies to approach AI technology in a different light.
AI as a scalpel, not a weapon
AI technology by itself is a set of tools or instruments that can assist humans with some aspects of their daily routine. A useful analogy is a knife. In the hands of a criminal, a knife can harm others. In the hands of a butcher, it carves meat. In the hands of a chef, it carves intricate flowers out of chocolate. And in the hands of a surgeon, the scalpel helps save lives by removing tumours. Until the day AI achieves sentience, the impact of AI technology lies in its application and its users.
Fundamentals first: remembering the principles of medical ethics
Institutions regulating the application of AI technology should reflect upon the four fundamental principles of ethics in healthcare: beneficence, non-maleficence, autonomy and justice. Any AI technology should ultimately be used for the benefit of the patient.
This can come in various forms, including clinical decision-making platforms, assistive technology for patients and algorithms for predicting the risk of certain diseases. Whether by reducing the cognitive workload for clinicians or by empowering patients directly with access to trustworthy, validated and patient-friendly information, AI technology has begun to, and will continue to, improve the way healthcare is delivered.
The Hippocratic oath “to help and do no harm” does not apply to a stethoscope or to an electronic medical record system. It applies to clinicians who provide care to patients. The introduction of AI technology does not change this and clinicians will still need to act in their patients’ best interests, while ensuring the risk of harm is minimised.
Almost every clinical procedure carries potential clinical risks. As with other medical technology already introduced, a clinician has to collaborate with the patient to weigh the benefits and risks of AI technology as it is applied. In these settings, the AI technology is used as a tool and an enabler for the clinician and patient to achieve a certain outcome.
Providing a patient with as much information as possible, the time to reflect upon this information, and the ability to ask questions pertaining to the AI technology being proposed, will allow the patient to make an informed and autonomous decision. For patients who do not have the capacity to make their own clinical decisions, it is vital that their medical power of attorney is informed and equipped to make these decisions.
With regard to justice, it can be argued that limiting access to some AI technologies is unjust to certain communities. For example, Australian Bureau of Statistics and Australian Institute of Health and Welfare data show that patients in rural and remote communities are unable to access the same level and variety of healthcare services as their metropolitan counterparts.
Despite concerted efforts to increase the number of clinicians working in these communities, the clinical workforce there is in fact shrinking. Telehealth has helped bridge some of the healthcare gaps in these communities, and AI technology has the potential to assist even further.
For example, a telehealth consultation augmented with AI technology that automatically analyses a patient’s gait, or a technology that extracts the photoplethysmography (PPG) signal from light reflected off the patient’s skin to measure their blood pressure and oxygen saturation levels, provides vital clinical information to the clinician at the other end of the virtual connection.
If a patient’s gait is assessed to predispose them to a higher risk of falls, the clinician can arrange for an onsite physiotherapist to assess the patient’s suitability for mobility aids. An onsite occupational therapist can be arranged to assess the patient’s home environment and eliminate hazards which can increase the risk of falls.
Similarly, the AI-enabled PPG assessment provides the clinician with the patient’s vital signs in real time. If there are any concerns about a patient’s deteriorating clinical parameters, the clinician can arrange for the patient to be transferred to a facility that can cater to their acute clinical needs.
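To make the idea of camera-based PPG concrete, the following is a minimal, illustrative sketch (not the specific technology described above): the mean green-channel brightness of each video frame yields a raw pulse waveform, and the dominant spectral frequency within a plausible physiological band gives an estimated heart rate. Estimating blood pressure or oxygen saturation from PPG requires calibration and validation well beyond this toy example; all function names here are our own, and the demo uses synthetic frames with a known simulated pulse.

```python
import numpy as np

def extract_ppg(frames):
    """Mean green-channel intensity per video frame -> raw PPG waveform."""
    return np.array([f[..., 1].mean() for f in frames])

def heart_rate_bpm(ppg, fps):
    """Estimate heart rate as the dominant frequency in 0.7-4 Hz (42-240 bpm)."""
    signal = ppg - ppg.mean()                      # remove the DC (baseline) component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible heart-rate band
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic demo: 10 s of 30 fps frames with a simulated 72 bpm pulse
fps, bpm = 30, 72
t = np.arange(0, 10, 1.0 / fps)
frames = [np.full((4, 4, 3), 128.0) + np.sin(2 * np.pi * (bpm / 60) * ti) for ti in t]
print(heart_rate_bpm(extract_ppg(frames), fps))    # → 72.0
```

Real-world pipelines also need face/skin region tracking, motion-artefact rejection and band-pass filtering, which is why clinical validation of such tools matters.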
In these settings, where there has been a long-standing and worsening disparity in equity of access to healthcare, the introduction of regulated AI technology could go some way towards levelling the field and democratising access to healthcare for millions around the world.
Education
The field of AI is rapidly expanding and unless one is working in the artificial intelligence industry, it can be challenging to keep abreast of the latest innovations and advances in AI technology. Health authorities should constantly be on the lookout for AI innovations which can make an impact on long-standing and contemporary issues in healthcare.
At the same time, refresher education regarding utilisation of AI tools in clinical settings, with an emphasis on adherence to the fundamental principles of medical ethics, should be considered part of a clinician’s mandatory annual professional development requirements.
Clinicians need to be able to make informed decisions about which AI technologies, if any, can be beneficial to their patients. The best way for clinicians to do so is to educate themselves through formal and informal industry-certified professional development pathways.
As gatekeepers of our health system, general practitioners play a vital role in educating their patients. While these interactions may be sporadic, GPs can provide patients with trustworthy and evidence-based references for safe and clinically validated AI technology.
With an increasingly ageing population, rates of chronic disease are predicted to rise over the coming decades. By focusing on preventive medicine and chronic disease management, GPs are well placed to educate their patients, and in turn our communities, about where clinically validated and trustworthy information on AI technology for healthcare can be found.
Regulatory bodies
It is true that progress should not come at the cost of safety. However, a broad-based, over-cautious approach to introducing new technology is arguably not the right one. Each AI technology conformity application should be assessed individually, with consideration of the proposals outlined above.
The fundamental principles of medical ethics should be one of the key frameworks through which each AI technology is assessed. The costs of assessing and approving conformity applications should also not be prohibitive. If a conformity application is unsuccessful, the regulatory body should provide constructive feedback and guidance to the applicant.
Unless applications are potentially dangerous and portend a high risk of harm to patients, AI technology conformity applications should be encouraged, as the potential benefits to patients may be high. The risk of an over-zealous approach to regulating AI technology is the stifling of innovation at the expense of finding novel ways to further enhance healthcare delivery.
Conclusion
The advent of AI may augur a new golden age in healthcare. From democratising access to healthcare for those living in rural and remote communities, to complementing and augmenting the life-saving work undertaken by clinicians, to empowering patients to be active participants in clinical decision-making processes, AI has the potential to unlock practical universal healthcare for every community with an internet connection.
As with anything in healthcare, AI technology does require regulation to ensure patient safety is never compromised. However, instead of approaching AI technology with a broadly risk-based mindset, it may be worthwhile to step back and ponder, as a child would, the almost endless possibilities of how AI technology can transform our healthcare system for the better. Perhaps then, we will get to see intelligent nanobots vanquishing cancer cells one day.
Associate Professor Didir Imran FAID is Chief Medical Informatics Officer and Director Medical Services at South West Healthcare.
Great article Prof Didir !
Wow isn’t medicine going fast !
What a wonderful tool to have to help life and health !
The possibilities are endless … we just have to imagine … safely … and ethically
The challenge is to integrate AI into clinical care in a safe and effective way. Evidence of at least equivalence, if not improvement, in the real world, which requires proper research, is essential as with any healthcare innovation.
Anecdotes of value need to be corroborated with real evidence from structured research. We need to develop a research framework that uses real-world evidence to do this.