In this effervescent moment of AI renaissance triggered by ChatGPT, health is one of the most active arenas. It has been one of the hottest topics at this week’s MedInfo Congress in Sydney, where 2400 digital health leaders from around the world have converged.
Among the delegates are academics, consumers, clinicians and leaders from some of the world’s largest technology companies, all of whom are thinking about how to wrangle these generative AI models appropriately in healthcare. We have already seen clinicians using them to write their notes, predict patient outcomes, summarise research literature and even, according to some, improve their bedside manner. There are tools that help patients translate medical jargon and access interactive health content via chatbots. Every day the list expands.
The potential for impact is great, but the risks are steep. Professional bodies and regulators are scrambling to understand and address issues like bias and privacy. The WHO has urged caution, and the AMA has called for national regulations around the use of AI in health, following a ban on doctors using ChatGPT in one health network in Western Australia.
The market stakes are equally high. These models offer an opportunity to democratise access to care; however, there may also be a winner-takes-all dynamic due to emergence, where step changes in performance appear as models grow larger. In other words, a task that was impossible suddenly becomes possible with a larger model. Organisations, health systems and nations that establish beachheads early will be at a distinct advantage. There is a risk of missing the boat.
Australia has historically punched above its weight in both the information technology and healthcare domains, with world-first innovations from Wi-Fi to the cochlear implant. Health AI could, and should, be another technological vertical that Australia becomes recognised for internationally. To do this, we need to leverage the assets we have: data, workforce, and regulatory infrastructure.
High quality data is the fuel for effective AI systems. Despite a relatively small population, Australia has some of the largest unified health datasets in the world thanks to longstanding research registries, shared health records, and a highly consolidated private healthcare system. However, these datasets are not being harnessed to their full extent. We have seen Silicon Valley companies come to Australia for these data assets, often taking the IP offshore, just as raw minerals are dug up and processed elsewhere. We should be growing AI technologies in-house by supporting privacy-preserving mechanisms for researchers and industry to leverage local data assets.
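One example of such a mechanism is differential privacy, in which researchers query aggregate statistics and the system adds calibrated noise so that no individual patient can be re-identified. A minimal sketch, assuming a simple registry query (the dataset and function names here are hypothetical):

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count of records matching a predicate.

    Laplace noise is calibrated to sensitivity 1: adding or removing
    one patient record changes the true count by at most 1.
    """
    true_count = sum(1 for record in records if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: how many registry patients have diabetes?
registry = [{"id": 1, "diabetes": True}, {"id": 2, "diabetes": False}]
print(dp_count(registry, lambda r: r["diabetes"], epsilon=0.5))
```

The smaller the privacy budget epsilon, the stronger the guarantee and the noisier the answer; mechanisms like this let local data assets power AI research without raw records ever leaving Australian custodianship.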
Second, we need to nurture a specialised health AI workforce by building bridges between the technical and clinical sides of this hybrid field. The technical talent is here, thanks to research institutions like the CSIRO and the Australian Institute for Machine Learning, alongside the reported pandemic-era ‘brain gain’ of talented Australians moving home. Meanwhile, Australia is blessed with a world-class clinical workforce, among whom there is a growing appetite for hybrid careers involving technology. We now need to transform health AI from a niche sub-sector into a much more established community of practice, through targeted education programs, cross-sector collaboration forums, and adoption of the digital health workforce strategy under development.
Finally, we must set the temperature right on the regulatory landscape to encourage safe innovation. Australia has become a go-to destination for clinical trials in the pharmaceutical industry because of efficient regulatory approvals and rigorous clinical governance. We should be leveraging that same infrastructure to safely sandbox health AI. Australia is also an early leader in the fast-changing space of health AI safety: we were among the first countries to adopt an AI Ethics Framework and have even seen community-led recommendations for AI in healthcare. The federal government has been proactive, recently launching a cross-sector consultation on safe and responsible AI. We now need to translate that into practical infrastructure for trialling and regulating health AI tools. Few countries are better equipped to do this.
The MedInfo Congress this year was the result of a bid made years ago by the Australasian Institute of Digital Health, when the sector looked very different. Today there is a window of opportunity for Australia to rise and lead in this new era of health AI. We must leverage our homegrown data assets, build an interdisciplinary workforce, and create the regulatory landscape to safely translate these technologies to the bedside. Now is the generational moment to lay these foundations.
Dr Martin Seneviratne and Prof Farah Magrabi are Co-Chairs of the 19th World Congress on Medical and Health Informatics, Sydney Convention Centre, 8–12 July.
This is a well-balanced article that documents hope but, more importantly, the need for caution with AI. Well done.
You wrote:
‘The potential for impact is great, but the risks are steep. Professional bodies and regulators are scrambling to understand and address issues like bias and privacy. The WHO has urged caution, and the AMA has called for national regulations around the use of AI in health, following a ban on doctors using ChatGPT in one health network in Western Australia.’
In my comments as a session chair and in my presentation at MEDINFO23, I made a crucial point about AI. Staunch advocates claim that “given enough training examples you can learn everything for your context”. THIS IS NOT TRUE. In some settings there are intrinsic characteristics that are not represented in an explicit way by training examples.
Furthermore, LLMs are so large that it is questionable whether they can be corrected for small but crucial errors, as any new training example might well be swamped by the sheer size of the model.
I advocate that all AI-generated outputs be required to carry an IMMUTABLE WATERMARK so we always know how they were created.
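For concreteness, a minimal sketch of one way such a provenance stamp could work: the generating system signs each output with a secret key, so any downstream edit invalidates the stamp. All names here are hypothetical, and note that this is a detachable metadata tag rather than a watermark embedded in the text itself; a truly immutable watermark would need to survive copy-paste, for example via statistical token-sampling schemes.

```python
import hmac
import hashlib
import json

# Hypothetical provider-held secret; in practice the model provider would
# manage this key and expose verification as a public service.
PROVIDER_KEY = b"example-secret-key"

def stamp_output(text: str, model_id: str) -> dict:
    """Attach a provenance stamp to AI-generated text.

    The tag is an HMAC over the text and model identifier, so any
    edit to the text invalidates the stamp.
    """
    payload = json.dumps({"text": text, "model": model_id}, sort_keys=True)
    tag = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "model": model_id, "tag": tag}

def verify_stamp(stamped: dict) -> bool:
    """Check that the text and model claim match the attached tag."""
    payload = json.dumps(
        {"text": stamped["text"], "model": stamped["model"]}, sort_keys=True
    )
    expected = hmac.new(PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamped["tag"])

stamped = stamp_output("Patient summary ...", "example-llm-v1")
assert verify_stamp(stamped)        # intact output verifies
stamped["text"] += " (edited)"
assert not verify_stamp(stamped)    # any tampering breaks the stamp
```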