The Allianz Risk Barometer is a highly regarded annual global survey, published by Allianz Commercial, that identifies the top corporate risks for the year ahead. Insurance companies, risk management experts and businesses use it to plan accordingly. This year, one risk jumped a massive 8 places to take the #2 position - AI Risk.
AI poses a complex mix of operational, legal and reputational risks for businesses, ranking it above business interruption (#3), natural catastrophe (#5) and rising political risks and violence. It comes second only to cyber risk, an area closely entwined with AI exposure. And it's not just large companies that are at risk.
Embedding AI has increased the risks
We have long argued that the main risks from AI stem from a lack of the skills and training needed to work with these tools. It's easier to plug an AI in and get a decent response than it is to explain what data each AI is accessing, what it does with input and output, and what other AIs it is sharing that information with.
The increased risk this year has largely been the result of embedding AI into systems. Once a business function depends on a tool to run, there will be challenges with system reliability, data quality and a shortage of skilled talent. With liability exposure added to that list, playing fast and loose with AI systems is no longer an option.
Why does the opinion of insurance companies matter?
Many of the developments in workplace health and safety over recent decades have not come from government regulation or protections for the public. They have come from pressure exerted by insurance companies. By refusing to insure companies that operate under certain conditions, insurers have shaped many internal rules and procedures: health and safety must be overseen by a 'competent' person, there are requirements around training and stringent protocols, and safety guards are mandatory.
Imagine bringing this level of oversight to the AI industry, which has been largely untouched by novel regulation. We have seen the impact on data controls already. Insurers mandate multi-factor authentication, regular data backups and cybersecurity training for employees. There will be a response from the insurance industry to the new ranking for AI.
What does this mean for me?
The days of the amateur data scientist may be over. Without appropriate training, qualifications and active learning to stay up-to-date, it won’t be hirers making the decision not to recruit you. The insurance company may require them to reject you. For those with the appropriate qualifications, it is essential that you keep a record of any training and CPD, as well as more formal qualifications. Everything may need to be evidenced for a highly bureaucratic insurance agent in the near future.
For business leaders, it is time to take the upskilling of all staff more seriously. When we reviewed AI risk at the Institute of Analytics, we found over 1,700 individually named risks in the MIT AI risk taxonomy. That should be enough to keep your management team awake at night. More shockingly, only 2% of those risks could be detected before deployment. The people at the vanguard of your AI risk management are not your technical leads; it's everyone else, at the user end of the equation. It is easy to 'find the right buttons' with AI, but much harder to understand what happens after you've clicked.
And you can never switch off your vigilance. Traditional software is deterministic: if you click A, B happens every time. AI is probabilistic. That means a system might work perfectly 1,000 times and then, on the 1,001st run, produce a hallucination or a biased output that results in a lawsuit. You can never sit back and assume the system is 'safe' because it has been tried and tested.
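To make the contrast concrete, here is a toy Python sketch. The "model" below is not any real AI system - it is an invented stand-in with a small, fixed failure rate - but it shows why a long run of correct outputs tells you little about the next one:

```python
import random

def deterministic_tax(amount):
    # Traditional software: the same input produces the same output, every time.
    return round(amount * 0.2, 2)

def probabilistic_model(amount, failure_rate=0.001, rng=random):
    # Toy stand-in for an AI system: a small chance of a bad answer on any
    # single call, no matter how many times it has worked before.
    if rng.random() < failure_rate:
        return -1.0  # a "hallucinated" output
    return round(amount * 0.2, 2)

rng = random.Random(42)  # seeded so the simulation is repeatable
results = [probabilistic_model(100.0, rng=rng) for _ in range(5000)]
failures = [i for i, r in enumerate(results) if r == -1.0]
```

Run the deterministic function a million times and it behaves identically on every call; the probabilistic one can fail anywhere in the sequence, which is why monitoring has to be continuous rather than a one-off acceptance test.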
Even if you don’t use AI, you have an AI problem
The risks don’t just come from the AI tools that you have procured and carefully managed with the help of your IT team. Your staff are likely bringing AI in through the back door, too. Some applications of AI genuinely help with efficiency, and unless you have had a frank discussion with your team, it’s best to assume that at least some of them have browser extensions recording confidential team meetings and scraping information from their personal accounts. Your official, authorised technology is only half the problem. Very soon you may find that if you can’t prove you know which tools your team is using, or prove that they are not using them, you effectively become uninsurable.
We are not arguing that AI should never be used. Its widespread shadow adoption is a reflection of the usefulness of these tools. But you should have clear guidelines on appropriate uses.
3-step AI Risk Health Check
Step 1 - Know what AI is in use, and categorise the risks
Catalogue every AI system in use, including shadow AI, then classify each tool against the risk tiers of the EU AI Act (minimal, limited, high and unacceptable risk). A marketing chatbot and a credit scoring algorithm need very different approaches.
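A risk register does not need heavyweight tooling to start. The Python sketch below is one minimal way to capture Step 1 - the tool names, fields and example entries are invented for illustration, and the tier labels loosely follow the EU AI Act's risk-based approach rather than any official schema:

```python
from dataclasses import dataclass

# Illustrative tiers, loosely modelled on the EU AI Act's risk-based approach.
TIERS = ("minimal", "limited", "high", "unacceptable")

@dataclass
class AITool:
    name: str
    owner: str          # who is accountable for this tool
    data_accessed: str  # what data it can see
    sanctioned: bool    # procured via IT, or shadow adoption?
    risk_tier: str

    def __post_init__(self):
        if self.risk_tier not in TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

# Hypothetical inventory entries.
inventory = [
    AITool("marketing chatbot", "Marketing", "public FAQs", True, "limited"),
    AITool("credit scoring model", "Finance", "customer records", True, "high"),
    AITool("meeting note-taker", "unknown", "confidential calls", False, "high"),
]

# Surface what needs attention first: high-risk tiers and shadow tools.
review_first = [t.name for t in inventory
                if t.risk_tier in ("high", "unacceptable") or not t.sanctioned]
```

Even a register this simple forces the two questions that matter: who owns each tool, and what data can it see.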
Step 2 - Decide who is going to be accountable and how
Appoint a cross-functional AI Risk Committee (ideally drawing on your legal, security and data teams, if you have the staff). Establish what human-in-the-loop requirements you need. Be absolutely clear about the no-go areas: banning free meeting note-takers that sell your data to anyone would be top of our priority list. Identify any interim restrictions on AI use until you have had time to establish safety protocols.
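One way to make those no-go areas and human-in-the-loop rules enforceable is to express the policy as data rather than prose, so scripts and onboarding checks can query it. A hypothetical sketch - the categories and use-case names below are invented for illustration:

```python
# A minimal, hypothetical AI usage policy expressed as data, so it can be
# checked automatically rather than living only in a PDF nobody reads.
POLICY = {
    "banned": [
        "free meeting note-takers with data-resale terms",
    ],
    "human_in_the_loop_required": [
        "credit scoring",
        "hiring decisions",
    ],
    "allowed_without_review": [
        "grammar checking on non-confidential text",
    ],
}

def is_banned(use_case: str) -> bool:
    return use_case in POLICY["banned"]

def requires_human_review(use_case: str) -> bool:
    # Default to caution: anything not explicitly allowed needs a human.
    return use_case not in POLICY["allowed_without_review"]
```

The design choice worth copying is the default: an unlisted use case requires human review, rather than being silently permitted.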
Step 3 - Plan for the long run
Unlike traditional software, AI drifts over time as the data changes. You need a process to monitor drift, bias and hallucinations on an ongoing basis. You will also need an incident reporting process and an incident response plan, ready for minor issues and major ones alike. Prompt injection attacks and a large-scale data leak from a third-party supplier may be on your horizon.
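Drift monitoring can start simply. The sketch below computes the Population Stability Index (PSI), a widely used drift signal, between the outputs a model produced at validation time and what it produces in production. The thresholds in the docstring are a common rule of thumb, not a standard, and the sample data is invented:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Compares the distribution a model was validated on with what it
    produces in production. Common rule of thumb (not a standard):
    < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant samples

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]       # validation-time outputs
drifted = [0.1 * i + 4.0 for i in range(100)]  # production outputs, shifted
```

A scheduled job that recomputes this against a rolling window of production outputs, and opens an incident when the score crosses your threshold, is a workable first version of the monitoring process described above.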
Are you ready for the #2 Global Business Risk in 2026?
Knowing the steps is one thing; operationalising them is another. Join us next month for our deep-dive Virtual Governance Workshop webinar on 23rd April at 11:00am BST. We will talk you through the steps to keep your insurance premiums down.
The Institute of Analytics is the global Professional Membership Body for the Analytics Professions. Learn more about how we can help you at ioaglobal.org.
