
UK White Paper on AI Released

Dr Clare Walsh of The Institute of Analytics was brought on board by the Government early last year to provide expert comment on the draft stages of the UK White Paper on AI, released today. Here she looks at how the final version has turned out.

Pre-existing protections

The UK White Paper takes a relatively hands-off approach to AI regulation for the time being, while the government gathers further evidence. This has prompted concerns among some industry experts that AI will go unregulated in the UK, putting individuals at risk. In a year when many other regions are introducing limits on algorithmic development, the UK appears to be holding back in certain areas.

Today, like many countries, the UK is not completely unprotected in terms of what AI developers can and cannot do. Infringement on individuals through data bias is already covered under equality laws and the Human Rights Act. There is a case that the role machines play in disadvantaging minority groups is not as clearly understood as more transparent violations, but within the industry the problems of biased data sets are now well documented. GDPR Article 22 also imposes restrictions on what machines can and cannot do, especially where there is a real-world impact, such as whether a person is offered a favourable or unfavourable mortgage rate.
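
To make the issue of biased outcomes concrete, here is a minimal sketch of the kind of audit analysts already run on automated decisions: comparing approval rates across groups. The figures and group names below are invented purely for illustration.

    # Minimal sketch (Python): a disparate-impact check on approval outcomes.
    # All figures and group names are hypothetical.
    approvals = {                  # group -> (approved, total applicants)
        "group_a": (820, 1000),
        "group_b": (590, 1000),
    }
    rates = {group: approved / total
             for group, (approved, total) in approvals.items()}
    ratio = min(rates.values()) / max(rates.values())
    print(f"approval rates: {rates}")              # 0.82 vs 0.59
    print(f"disparate-impact ratio: {ratio:.2f}")  # 0.72
    # A ratio below roughly 0.8 (the long-standing 'four-fifths' heuristic)
    # is a common red flag that outcomes disadvantage one group.

Checks like this are cheap to run, which is one reason the industry treats biased data sets as a well-documented, auditable problem rather than an unknowable one.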

Evidence-based Policy

The reality is that it is a little early to start introducing further legislation against the vast majority of use cases for data analytics. We have several well-established AI practices, such as machine learning approaches that have been tested ‘in the wild’, outside of research laboratories, for many years now. Imposing additional, unnecessary restrictions would probably act as a deterrent to the widespread adoption of data solutions. The best uses of data have the potential to support happier, healthier lives and to lift many countries out of the productivity slump they have been stuck in for decades. Additional legislation on top of existing laws could be disproportionately harmful to small and medium-sized enterprises (SMEs). Smaller companies in particular have reported being put off using data analytics since GDPR was introduced, because they lack the expertise to establish whether their practices are fully compliant with the law. Hampering growth unnecessarily is not a positive outcome.

IoA concerns – no easy solutions

At the IoA, we came to the discussion with a number of priorities. The first was the challenge of regulating, and establishing responsibility in, a sector defined by complex chains of third-party dependencies. Our field relies on long and complex supply chains: often the person using an algorithm or piece of software has no access to the code behind it or, in the case of machine learning, to the data set the machine was trained on. If there were a breach of the law, who would be responsible? How far back do we go in apportioning responsibility, when much of our field is built on open-source environments where the code is written by anyone willing to contribute? The technological innovations of today rely on legacy infrastructure, and identifying the exact point in that chain where responsibility sits may be challenging. This is something we need to discuss more openly; it remains hard to define in any new regulation and needs clarification before it can be written into law.

We were also keen not to over-complicate the field of AI regulation. Finance regulation in the USA, for example, has become so Balkanised, with multiple overlapping jurisdictions between federal and state laws, that it is impossible for any one individual to read and retain all of the rules. Laws have sprawled so much that RegTech, essentially AI built to summarise relevant regulations, has become a growing field. Having to use an AI to monitor compliance with laws designed to regulate AI would, in our view, be a failure of policy.

What are the new regulations?

The resulting compromise the UK has settled on for now is a ‘best practice’ approach, alongside the existing regulatory protections that people have enjoyed since GDPR came into force in 2018. Those best practices emphasise the values that we, at the IoA, have been promoting:

  • Safety, security and robustness
  • Transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Policy needs to be evidence-based, just like any other major decision, and in particular we welcome the sandbox trial environment where businesses can test how regulation might impact their workflows. The resulting data can then inform the next stage of policy-making and show whether greater restrictions on business practice are needed. Where there is insufficient evidence of how increased regulation affects innovation and growth, as well as human rights, it makes sense to gather more data, and this 12-month sandbox project will provide exactly that test bed.

Additional work needed

There is, however, still work to be done. Many companies recognise the value of trust and reputation and will want to evidence best practice, but communicating existing rights and responsibilities across the broad spectrum of companies that will be using AI over the coming months remains a challenge. Few companies allocate time to investigate these important issues.

We don’t really have a plan for removing biased legacy infrastructure. Machines built on biased legacy data are already embedded in our physical environment: there are still ‘racist’ soap dispensers in public bathrooms across the country, machines whose sensors fail to recognise darker-skinned hands. How do we get those out of use without falling back on individuals to litigate? Should, for example, the owners of buildings be forced to remove these biased machines immediately? If so, who should foot the bill: the shopping centre, for not showing due diligence and testing the machines on a representative sample, or the manufacturer, for not calibrating them on a representative sample in the first place?
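
As a toy illustration of how an unrepresentative sample hardens into a physical machine that excludes people (the numbers below are invented and describe no real device): an infrared sensor calibrated only on lighter, more reflective skin ends up with a detection threshold that darker skin, which reflects less infrared light, never clears.

    # Toy sketch (Python): a presence threshold calibrated on an
    # unrepresentative sample. All reflectance values are invented.
    calibration_sample = [0.72, 0.68, 0.75, 0.70]  # lighter skin tones only
    threshold = min(calibration_sample) * 0.8      # margin below weakest reading

    def hand_detected(ir_reflectance):
        """Dispense soap only if reflected IR clears the threshold."""
        return ir_reflectance >= threshold

    print(hand_detected(0.70))   # True:  inside the calibration population
    print(hand_detected(0.35))   # False: darker skin reflects less IR

The fix is a one-line change in software, but the biased threshold is already baked into thousands of installed devices, which is exactly why removal, and who pays for it, is a policy question rather than an engineering one.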

“Some large companies need to be slowed down”

There is also a need to distinguish between regulation of the more common uses of established AI in businesses around the world and regulation of the small number of research companies producing large, poorly understood models that are being rushed out and promoted for widespread use before they have been fully tested and understood. Some large companies need to be slowed down. Technologies like generative AI are, for now, still in a test-bed phase.

The Digital Markets Act in Europe sets out definitions of some of the large players in online platform hosting and e-retail, and establishes additional controls on their actions to prevent widespread misuse of power. We need a global agreement that certain kinds of AI development should be limited in a similar way while we work to understand the affordances and limitations of these emerging technologies. If we have learnt anything from the last decade of deploying data solutions, it’s that things that work in the lab rarely transition seamlessly into ‘the wild’.

At the IoA we welcome the work being done to create globally relevant, industry-specific best-practice case studies. For more, see the White Paper.

If you have any concerns about data and AI-based solutions and how to use them ethically and within the bounds of the law, the IoA has resources to support you in your decision-making process. Please email hello@ioaglobal.org to find out more.