
It's not data bias, it's machine bias

Why we won’t just be ‘checking for bias and removing it’ any time soon.

For me, there is one major red flag that an AI ethics proposal has been written by someone who hasn’t invested enough time in understanding the technologies behind business applications of tools like large language models or machine learning. One phrase creeps in very frequently: ‘We will check for bias and remove it’. Bias is not something that we can simply design out of the complex systems we are building.

Discussions around bias in training data have featured heavily in responsible AI for several years now. In one sense, we have come a long way in a short time. Even the team that developed the COMPAS algorithm have tried to distance themselves from their own work, which predicted criminal recidivism with shockingly racist results. The days when an online image search for ‘kitchen’ returned only photos of women in kitchens are, thankfully, gone.

 

Algorithms will optimise for what they want

Search terms have a financial value to advertisers. I can still remember sitting with someone who was demonstrating what happened when she searched for ‘sexist representations of women’. The Google search algorithm, programmed to prioritise the highest-paid adverts, helpfully suggested that we might like to look at some sexy photos of women instead. The worst days of data bias may now be behind us, thanks to seminal work by pioneers in the field such as Joy Buolamwini.

But given the success of some of these interventions in reducing data bias, the notion that we can just ‘identify the bias and fix the data’ has come to reassure designers that the problem has a solution. It perpetuates the idea that the technology itself is neutral and that the society around it has created the problems. This is true to an extent, but bias does not only creep into a model at the data ingestion stage; there are opportunities at many stages to manipulate output and introduce problems further down the line.
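
As a deliberately simplified illustration of that point, here is a short Python sketch. Everything in it is invented: the group labels, the scores and the ‘business rule’ are hypothetical, not drawn from any real system. It simply shows how a post-processing step applied after training can create a disparity even when the underlying data is balanced.

```python
# Toy example (all numbers and rules invented): bias introduced downstream of
# the data, by a post-processing step, rather than at data ingestion.
import random

random.seed(0)

# Two groups with identical score distributions -- the "data" is balanced.
applicants = [{"group": g, "score": random.gauss(0.5, 0.1)}
              for g in ("A", "B") for _ in range(1000)]

# A later, seemingly innocuous business rule nudges scores for records carrying
# a hypothetical proxy feature correlated with group A.
for person in applicants:
    if person["group"] == "A":
        person["score"] += 0.05

# The final selection threshold now produces different outcomes per group.
threshold = 0.55
selected = {g: sum(p["score"] >= threshold for p in applicants if p["group"] == g)
            for g in ("A", "B")}
print(selected)  # group A clears the threshold noticeably more often than group B
```

The data in that sketch was perfectly balanced; the bias arrived at a later stage entirely.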

 

Political bias already in generative AI

A team of researchers has reviewed the output of generative AI chatbots and concluded that there is evidence of political bias in the output of different machines*. They found that OpenAI’s ChatGPT was more left-leaning, whereas LLaMA was more right-leaning. This pattern is likely to increase in the future rather than drift back to the mean, and the idea that we can simply remove this bias demonstrates a lack of understanding of how these technologies work.

Think about how the YouTube or Instagram algorithms work. The purpose of each algorithm is to keep you watching or scrolling, so the machine uses a form of reinforcement learning driven by human input (the feedback you give it through the things you click on) to learn what will make you stay. OpenAI is rumoured to be developing customisable chatbots that will let you personalise your own assistant, possibly using a similar approach.
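
As a rough sketch of that feedback loop, here is a small Python example. It is a deliberate simplification: the item names and click probabilities are invented, and a simple epsilon-greedy bandit stands in for whatever the platforms actually run. It shows how a system that only optimises for engagement converges on serving whichever content keeps you clicking.

```python
# Hypothetical engagement-driven learner: an epsilon-greedy bandit that keeps
# serving whichever item earns the most clicks. All values are invented.
import random

random.seed(1)

items = {"calm_explainer": 0.02, "outrage_clip": 0.12}  # true click probabilities
estimates = {name: 0.0 for name in items}               # learned click-rate estimates
counts = {name: 0 for name in items}

for step in range(5000):
    # Mostly exploit the current best estimate, occasionally explore at random.
    if random.random() < 0.1:
        choice = random.choice(list(items))
    else:
        choice = max(estimates, key=estimates.get)
    clicked = random.random() < items[choice]            # simulated user feedback
    counts[choice] += 1
    estimates[choice] += (clicked - estimates[choice]) / counts[choice]

print(counts)  # the learner ends up showing the high-engagement item far more often
```

Nothing in that loop asks whether the content is fair, balanced or good for you; it only asks what you will click.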

 

Black boxes mean we can’t explain these machines

Bias in language models, in particular, will be impossible to fix because we don’t know how these models are generating content. For many machines based on transformers or neural networks, we can’t explain how they have reached their conclusions; they are ‘black boxed’. And we certainly can’t imply that minority-interest data was taken into account and given fair representation in the analysis just because we fed some data from those groups into the process.

Rather than comforting ourselves that bias can be removed from training data, it is essential that we remain aware of the risks and have human-led mitigation approaches in place to deal with machine bias.

*This paper tells you about their findings: aclanthology.org