
An imaging company gave its patients’ X-rays and CT scans to an AI company.

How did this happen?

Australia’s biggest radiology provider, I-MED, has handed over patient data to an artificial intelligence company without explicit patient consent, Crikey reported recently. The data were images such as X-rays and CT scans, which were used to train AI.

This prompted an investigation by the national privacy regulator, the Office of the Australian Information Commissioner. It follows reports of an I-MED data breach involving patient records dating back to 2006.

Angry patients are reportedly demanding their data be deleted.

I-MED’s privacy policy does mention data sharing with “research bodies as authorised by Australian law”. But few people read and understand privacy policies, so it’s understandable these revelations shocked some patients.

So how did I-MED share patient data with another company? And how can we ensure patients can choose how their medical data is used in future?

Who are the key players?

Many of us will have had scans with I-MED: it’s a private company with more than 200 radiology clinics in Australia. These clinics provide medical imaging, such as X-rays and CT scans, to help diagnose disease and guide treatment.

I-MED partnered with the AI startup Harrison.ai in 2019. Annalise.ai is their joint venture to develop AI for radiology. I-MED clinics were among the first users of Annalise.ai systems.

I-MED has been owned by private equity, and is reportedly listed for sale.

Big commercial interests are at stake, and many patients potentially affected.

Why would an AI company want your medical images?

AI companies want your X-rays and CT scans because they need to “train” their models on lots of data.

In the context of radiology, “training” an AI system means exposing it to many images, so it can “learn” to identify patterns and suggest what might be wrong.

This means data are extremely valuable to AI start-ups and big tech companies alike, because AI is, to some extent, made of data.
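To make “training” concrete, here is a minimal sketch of a supervised training loop in PyTorch. It is purely illustrative: the tiny model, the random stand-in images and the binary “finding / no finding” labels are assumptions for this example, not anything used by I-MED or Annalise.ai, and real systems train far larger models on millions of real scans.

```python
# Minimal, illustrative training loop: a toy model "learns" to map
# images to labels by repeatedly adjusting its weights to reduce error.
import torch
import torch.nn as nn

# Hypothetical stand-ins: 8 single-channel 64x64 "scans" with binary
# labels (0 = no finding, 1 = finding). Real training data would be
# large sets of genuine, labelled medical images.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,)).float()

model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),  # detect local image patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(4, 1),                            # score: "finding" or not
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                 # repeated exposure to the data
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)      # how wrong the model currently is
    loss.backward()                     # compute how to adjust each weight
    optimizer.step()                    # nudge weights to reduce the error
```

The point of the sketch is simply that the images themselves drive the learning: without large volumes of real scans, there is nothing for the model’s weights to adapt to.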

You might be thinking it’s a wild west out there, but it’s not. There are multiple mechanisms controlling use of your health-related data in Australia. One layer is Australian privacy legislation.

What does the privacy legislation say?

It’s likely the I-MED images were “personal information” under the Privacy Act 1988 (Cth). This is because they can identify an individual.

The law limits the situations in which organisations can disclose this information beyond its original purpose (in this case, providing you with a health service).

One is if the person has given consent, which does not seem to be the case here.

Another is if the person would “reasonably expect” the disclosure, and the purpose of the disclosure is directly related to the purpose of collection. On the available facts, this also seems to be a stretch.

This leaves the possibility that I-MED was relying on disclosure that is “necessary for research, or the compilation or analysis of statistics, relevant to public health or public safety”, where getting people’s consent is impracticable.

The companies involved have repeatedly said that the scans were de-identified.

De-identified information is mostly outside the scope of the Privacy Act. If the chance of re-identification is very low, de-identified information can be used with little legal risk.

But de-identification is difficult to do well, and context matters. At least one expert has suggested these scans were not sufficiently de-identified to take them outside the protection of the law.
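To illustrate why this is hard, here is a naive metadata-stripping sketch using the pydicom library. The file names and the list of header fields are assumptions for this example, not I-MED’s actual process. Note that it only clears header fields: the pixel data itself can still re-identify a person, for example through text burned into the image or a face reconstructable from a head CT.

```python
# Naive DICOM de-identification sketch: blanks obvious identifiers in
# the file header, but does nothing about identifying content in the
# image pixels themselves.
import pydicom

ds = pydicom.dcmread("scan.dcm")  # hypothetical input file

# Blank a few direct identifiers in the DICOM header.
for keyword in ["PatientName", "PatientID", "PatientBirthDate",
                "PatientAddress", "ReferringPhysicianName"]:
    if keyword in ds:
        ds.data_element(keyword).value = ""

ds.remove_private_tags()  # drop vendor-specific tags that may leak identity
ds.save_as("scan_deidentified.dcm")
```

Real de-identification pipelines go much further than this, but even then, whether the result falls outside the Privacy Act depends on how easily it could be re-identified in context.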

Recent changes to the Privacy Act have increased the penalties for interfering with people’s privacy, although the Office of the Australian Information Commissioner is under-resourced.

How else is our data protected?

There are lots more layers governing health-related data in Australia. We’ll consider just two.

Organisations should have data governance frameworks that specify who is responsible for data and how it should be managed.

Some large public institutions have very mature frameworks, but this isn’t the case everywhere. In 2023, experts argued Australia urgently needed a national data governance system to make this more consistent.

Australia also has hundreds of human research ethics committees (HRECs). All research should be approved by such a committee before it starts. These committees apply the National Statement on Ethical Conduct in Human Research to assess applications for research quality, potential benefits and harms, fairness, and respect towards participants.

But the National Health and Medical Research Council has recognised that human research ethics committees need more support – especially to assess whether AI research is good quality with low risks and likely benefits.

How do ethics committees operate?

Human research ethics committees determine, among other things, what kind of consent is required in a study.

Published Annalise.ai research has had ethics committee approval, including approval for a “waiver of consent”. What does this mean?

Traditionally, research involves “opt in” consent: individual participants give or refuse consent to participate before the study happens.

But in AI research, researchers generally want permission to use part of a massive data lake that regular health care has already created.

Researchers doing this kind of study usually ask for a “waiver of consent”: approval to use data without explicit consent. In Australia this can only be approved by a human research ethics committee, and only if specific conditions are met, including that the risks are low, benefits outweigh harms, privacy and confidentiality are protected, it is “impracticable to obtain consent”, and “there is no known or likely reason for thinking that participants would not have consented”. These matters aren’t always easy to determine.

Waiving consent might sound disrespectful, but it recognises a difficult trade-off. If researchers ask 200,000 people for permission to use old medical records for research, most won’t respond. The resulting dataset will be small and biased, and the research will be poorer quality and potentially useless.

Because of this, people are working on alternative models. One example establishes governance structures in partnership with communities, then asks individuals to consent to future use of their data for any purpose approved under those structures.

Listen to consumers

We are at a crossroads in AI research ethics. There is broad agreement that we need to use high-quality Australian data to build sovereign health AI capability, and health AI systems that work for all Australians.

But the I-MED case demonstrates two things. It’s vital to engage with Australian communities about when and how health data should be used to build AI. And Australia must rapidly strengthen and support our existing infrastructure to better govern AI research in ways that Australians can trust.

Professor and Director, Australian Centre for Health Engagement, Evidence and Values, and Senior Lecturer in Law, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.


UOW academics exercise academic freedom by providing expert commentary, opinion and analysis on a range of ongoing social issues and current affairs. This expert commentary reflects the views of those individual academics and does not necessarily reflect the views or policy positions of the University of Wollongong.