Equity in the use of AI in healthcare

14 February 2023

Many people may have barely noticed a major milestone in AI announced by DeepMind last year: its AlphaFold algorithm had at last cracked the challenge of accurately predicting how protein structures fold. Understanding protein folding is central to the development of new drugs for illnesses that are currently untreatable.

A new type of healthcare

Before AlphaFold, unpicking how genes give rise to proteins, the building blocks of life, was a painstaking process. The new algorithm can achieve in minutes what might previously have taken a PhD student three years of research just to catalogue a single protein. This algorithm may mean little to you today, but it heralds a more pervasive role for AI in healthcare and may even end up saving your life. There are, however, reasons to be cautious about introducing more machine solutions into a field where error can lead to health disparities.

The application of AI solutions in healthcare is, in general, a welcome step forward. A new study, however, has found that the outcomes of AI health interventions can reflect common social inequities of gender and ethnicity. A team from Imperial College has found that, although the application of AI in clinical settings is in its infancy, health disparities on the basis of gender and ethnicity are already present.

AI is used in healthcare in two main ways: as clinical decision-support tools, such as those used in radiology and medical image analysis, and as back-office support, such as scheduling appointments efficiently.

More research needed

The team from Imperial are keen to emphasise the paucity of information on the performance of, say, melanoma-detecting algorithms on darker skin when those algorithms were predominantly trained on white patients. If the private sector is left to self-regulate, there is concern that disparities in health outcomes may be overlooked in the pursuit of profit.

The issue of health disparities was made clear during the Covid-19 pandemic, with disproportionate death rates in minority ethnic groups. It made a strong case for funding research to understand the causes of health inequities.

Even without hard evidence, the potential for machines to exacerbate problems, rather than mitigate them, can be anticipated from what we know about how machines work. The first risk is the generalisation of treatments developed from unrepresentative data samples to a wider population. The second is the use of socio-historically biased data: health records have a long history of deprioritising female health relative to male health, and contain limited data on ethnic sensitivities. A third is the use of data that does not account for differences between subsections of the population. Analysts know that accuracy rates are not equal for all members of a population, but non-technically trained users may not see this issue.
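The last point can be made concrete with a small sketch. Using entirely made-up toy labels and predictions for two hypothetical patient groups "A" and "B", it shows how a respectable overall accuracy can hide a much worse result for one subgroup:

```python
# Toy illustration (hypothetical data): overall accuracy can mask
# large gaps between subgroups of a population.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # actual outcomes
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # model predictions
groups = ["A"] * 5 + ["B"] * 5            # subgroup membership

def accuracy(pairs):
    """Fraction of (true, predicted) pairs that agree."""
    pairs = list(pairs)
    return sum(t == p for t, p in pairs) / len(pairs)

# Headline figure a non-technical user might see.
overall = accuracy(zip(y_true, y_pred))

# Disaggregated figures an analyst would need to check.
by_group = {
    g: accuracy((t, p) for t, p, m in zip(y_true, y_pred, groups) if m == g)
    for g in sorted(set(groups))
}

print(f"overall accuracy: {overall:.1f}")        # 0.6
print(f"per-group accuracy: {by_group}")         # A: 1.0, B: 0.2
```

Here the model is right for every patient in group A but for only one in five in group B, yet the single headline number of 60% gives no hint of that disparity.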

When assessing the safety of AI-recommended interventions, it is always essential to ask what the algorithm is trying to predict and how its predictions will be used.