On March 21, the United Nations adopted the world’s first global AI resolution, soon after the European Parliament approved the Artificial Intelligence Act.
These regulatory developments promote a risk-based approach to ensuring safety and compliance with fundamental rights, while aiming to boost innovation.
In Australia, the Department of Industry, Science and Resources has similarly proposed a risk-based approach to regulating AI. The department’s related consultation attracted over 500 submissions, and its published response proposes to develop mandatory guardrails for high-risk AI.
The Australian Alliance for Artificial Intelligence in Healthcare has also proposed an AI Policy Roadmap, prior to the development of a national digital health strategy and delivery roadmap.
While welcoming these developments, we encourage stakeholders to get involved to ensure that regulation is responsive, and to avoid the imposition of overly restrictive guardrails.
Guardrails refer to the use of regulation, policies and practices to ensure that AI development and use aligns with ethical standards and societal expectations. The metaphor of a guardrail evokes the idea of keeping cars from veering off the road. It derives from an earlier period of technological innovation and regulatory intervention in which mistakes were made.
Building on past contributions concerning the regulation of AI in healthcare and establishing parameters for the safe and ethical use of AI, we argue that time is still needed to test and validate AI within the context of supportive clinical systems.
Toward this aim, the Digital Health CRC (DHCRC) is supporting a partnership between digital health AI platform company Propel Health AI, Peter MacCallum Cancer Centre and Swinburne University of Technology, with a project funded under the Australian Commonwealth’s Cooperative Research Centres (CRC) Program.
Propel Health AI has developed an end-to-end data and AI platform designed to overcome some of the persistent challenges in real-world evidence accessibility, data governance and infrastructure for accessing and deploying machine learning systems in healthcare.
This collaboration has now established a secure, cloud-based platform that has ingested a subset of multimodal medical records spanning the patient care journey (e.g. health records, pathology, radiological imaging and genomics) from previously disparate, siloed data sets. These data are then de-identified, harmonised and prepared for testing within an AI data analytics environment, while ensuring data protection.
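To make this pipeline more concrete, the sketch below shows, in highly simplified form, what de-identifying and harmonising records from siloed sources can look like. It is a minimal, hypothetical illustration only: the field names, source systems and salted-hash pseudonymisation are our own assumptions for the example, not a description of the platform’s actual implementation.

```python
import hashlib
from typing import Any

# Hypothetical salt; in practice this would be held in a secure key service.
SALT = "project-specific-secret"

# Illustrative mapping from source-specific field names to a common schema.
FIELD_MAP = {
    "pathology": {"pt_id": "patient_id", "result_txt": "observation", "collected": "date"},
    "imaging":   {"patientID": "patient_id", "report": "observation", "study_date": "date"},
}

# Direct identifiers to drop before analysis.
DIRECT_IDENTIFIERS = {"name", "address", "dob", "medicare_no"}


def pseudonymise(patient_id: str) -> str:
    """Replace a raw patient identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]


def harmonise(record: dict[str, Any], source: str) -> dict[str, Any]:
    """Map a source-specific record onto the common schema and de-identify it."""
    mapping = FIELD_MAP[source]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    out["patient_id"] = pseudonymise(str(out["patient_id"]))
    out["source"] = source
    # Drop any direct identifiers that slipped through the mapping.
    return {k: v for k, v in out.items() if k not in DIRECT_IDENTIFIERS}


if __name__ == "__main__":
    pathology_row = {"pt_id": "12345", "result_txt": "HER2 positive",
                     "collected": "2024-02-01", "name": "J. Doe"}
    imaging_row = {"patientID": "12345", "report": "No metastases detected",
                   "study_date": "2024-02-10"}

    for row, src in [(pathology_row, "pathology"), (imaging_row, "imaging")]:
        print(harmonise(row, src))
```

One design point the sketch illustrates: a consistent, salted hash lets records belonging to the same patient be linked across previously siloed sources without exposing the raw identifier to the analytics environment.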
Together, we are working towards a shared goal of accelerating R&D, reducing operational costs, and improving patient outcomes.
We are being guided by the ethical AI commitments advanced by the Australian Government and the World Health Organisation, and by the recommendations of the Australian Medical Association. We are holding challenging, interdisciplinary conversations to collaboratively work out the implications of applying these ethical principles in the context of clinical practice.
Alongside our technical investments in AI models, we are undertaking consultations and qualitative research that go beyond user experience, towards “use within systems”. We aim to learn how design, governance and regulation can respond, how social arrangements, norms and workflows can adapt, and how new feedback mechanisms and standards can be applied.
This is a work in progress. Based on our efforts within a regulatory sandbox, we observe that it is only after the highways have been built that we will begin to know where the guardrails are needed. As we build AI-augmented clinical systems, we hope to contribute to these regulatory conversations about how healthcare is changing, and how we can reshape it for the better.
Distinguishing between AI and AI systems
AI is a family of technologies, methods and models that learn patterns from data to make decisions or predictions. AI models may be contrasted with broader AI systems: real-world, situated applications of a technology or method that are designed, supported and maintained to achieve a range of pre-defined aims.
AI derives from the field of computer science, which is focused on algorithms, computation and information-processing systems. In contrast, the development of broader AI systems is a more diverse, interdisciplinary and cybernetic endeavour, concerned with system organisation, planning and control. It involves people, technologies, materials, regulation and feedback mechanisms that enable the monitoring and adaptation of system performance.
Healthcare systems have been incorporating AI and automation within lower-risk operations for decades. Efficiencies have been achieved in the management of medical records, billing, scheduling, drug contraindications, alerts, reminders, incoming phone calls and basic health inquiries. AI is already everywhere, including in our healthcare, and most of us wouldn’t even know it.
Today, the capacity of AI to solve higher-risk, complex problems is growing rapidly, as evidenced in the screening of diseases including diabetic retinopathy, melanoma, breast cancer, heart disease and more. US Food and Drug Administration approvals of AI software as a medical device (SaMD) have been steadily increasing. Moreover, patients are no longer just Dr Googling but Dr Chatting, shifting clinical dynamics and creating new challenges for physicians.
In combination, these developments offer the prospect of transforming healthcare systems. Better-supported clinicians and better-informed patients might soon be interacting within a more integrated, patient-centric, bespoke system. This can only be achieved, however, if we co-develop supportive AI systems that advance these aims.
Dr Luke Bearup, Professor Christopher Fluke and Dr Sara Webb are researchers at the Centre for Astrophysics & Supercomputing at Swinburne University of Technology.