New UK guidance on AI governance

We at the IoA are very excited by the release of new guidance on AI applications this month. We took part in early discussions during the drafting stage and could see the role this guidance might play in steering organisations towards AI best practice, and we think you will see it too.

Less prescriptive than the draft EU AI Act published earlier in the year, the UK’s current approach is a set of guidelines and principles intended to inform decisions across all sectors. There is also a recognition that how those principles are interpreted may differ by industry, and the concept of a proportionate response is key. This is a welcome move, as fields such as autonomous vehicles and insurance carry very different risks.

The UK has a much smaller regulatory jurisdiction than the EU, but the proposals here are to develop sector-specific knowledge and best-practice exemplars, which should be useful for ethical AI design around the world.

Called the AI Action Plan, the guidance covers a number of tricky issues to reconcile, and it takes a different approach from the EU’s (for more details on the EU regulations, see our blog at https://ioaglobal.org/blog/the-eu-ai-act-is-coming-are-you-ready/).

The main aims and overarching nature of the framework

The framework aims to be:

1 Context Specific

This is an acknowledgement that it can be challenging to apply the same rule across all sectors of the economy. Some of the new proposals therefore allow regulatory responses to differ depending on the needs of individual sectors, such as insurance or medicine.

2 Pro-Innovation And Risk-Based

Legislation controlling the use of technology is often favoured by big tech companies, because increased regulation can act as a barrier to market entry for small start-ups. The new UK guidelines aim to regulate genuinely high-risk practices while creating an environment in which small companies with clearly low-risk AI plans can flourish. They insist that regulation focuses on areas of real risk, leaving relatively safe applications of AI, or areas where the risk is largely hypothetical, with more freedom.

3 Coherent

While some regulations will be sector specific, there will also be a set of cross-sector principles that every sector must interpret and implement in a way relevant to its own area. These principles are discussed below.

4 Proportionate And Adaptable

The regulations are currently proposed as light-touch guidelines, but a clause in the documentation leaves open the possibility of tightening the approach in the face of widespread misuse or abuse of AI.

The 6 principles of ethical AI design

There are 6 principles highlighted in the cross-sector proposals, in other words, principles that all sectors of industry are expected to abide by:

  1. Ensure that AI is used safely. Risks may be more apparent in fields such as healthcare or critical infrastructure, but this principle asks regulators to ensure that the requirements for safe applications are commensurate with the actual risks the addition of AI brings.
  2. Ensure that AI is technically secure and functions as designed. This requires AI systems to be technically secure and, just as importantly, that the data set the system is trained on is representative enough of the population the AI will be used on.
  3. Make sure that AI is appropriately transparent and explainable. Developers will be required to provide information about the nature and purpose of the AI and the specific outcomes it aims to achieve, information about the training data and training process, information about the logic process used, and finally, the accountability process (a sketch of such a disclosure record follows this list).
  4. Embed considerations of fairness into AI. Where AI applications have a significant effect on people’s lives, such as paying higher insurance premiums or filtering job applications, the provider will need to define fairness and document the steps taken to ensure it (see the fairness sketch after this list).
  5. Define legal persons’ responsibility for AI governance. AI systems may be fairly autonomous, but accountability needs to rest with a named individual, not with the system itself.
  6. Clarify routes to redress or contestability. Where an AI system has a material impact on people’s lives, there must be a way to contest its decisions, subject to context and proportionality.
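
To make principle 3 more concrete, here is a minimal sketch of the kind of structured disclosure it points towards, in the style of a model card. The schema, field names, and the system described are our own illustrative assumptions; the guidance itself does not prescribe any particular format.

```python
# A sketch of a model-card-style disclosure record covering the items
# principle 3 asks for: purpose, outcomes, training data, logic, and
# accountability. All names and values below are hypothetical.
import json

model_card = {
    "name": "quote-triage-model",  # hypothetical system name
    "purpose": "Route incoming insurance enquiries to a pricing tier",
    "intended_outcomes": ["consistent triage", "faster quotes"],
    "training_data": {
        "source": "historical enquiries, 2018-2022",
        "known_gaps": ["under-represents customers aged 70+"],
    },
    "logic": "gradient-boosted trees over tabular enquiry features",
    "accountable_owner": "Head of Underwriting",  # a named individual
    "redress_route": "customers may request human review of any tier",
}

print(json.dumps(model_card, indent=2))
```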
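
And for principle 4, here is one minimal way an organisation might “define fairness”: measuring the demographic parity gap on a binary decision. The sample data, column names, and the 0.05 tolerance are hypothetical; the guidance leaves the choice of fairness definition and remediation steps to the provider.

```python
# A sketch of one possible fairness definition: the largest difference
# in positive-decision rates between groups (demographic parity gap).
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           decision_col: str,
                           group_col: str) -> float:
    """Return the largest difference in positive-decision rates
    between any two groups in group_col."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions from an insurance-pricing model.
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 1],
    "age_band": ["18-30", "18-30", "18-30", "31-50",
                 "31-50", "31-50", "51+", "51+"],
})

gap = demographic_parity_gap(decisions, "approved", "age_band")
print(f"Demographic parity gap: {gap:.2f}")

# An organisation might document a tolerance (say 0.05) and the
# remediation steps triggered whenever the gap exceeds it.
if gap > 0.05:
    print("Gap exceeds documented tolerance: review required.")
```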

IoA support for the new guidelines

Much of the approach suggested here is not completely new. Following the ethical AI guidelines we recommend in our Governance and Professionalism CPD programs should be sufficient to comply with the broad principles described here.

See https://ioaglobal.org/ioa-cpd-programs/ for our Governance training.

You can read the document itself at the link below.

https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement

Challenges

One issue that might prove challenging under this new approach is the prevalence of third-party services. Many AI systems are contracted from third-party suppliers, and it is not yet clear how responsibility will be allocated between supplier and customer. If, for example, your organisation uses a third-party chatbot and it develops offensive habits, or starts to differentiate in the advice it gives to different people making enquiries, is your organisation responsible, or the developer? If the training data comes from both you and the AI developer, can you each inspect that data? Are you both responsible? Is it fair to hold developers responsible for what their customers do with their systems, or to hold customers responsible for what developers did during training?

If you are interested in these issues, we’ll be holding a webinar on the new framework shortly.