Liability risk considerations in insuring AI

Source: Middle East Insurance Review | Jul 2023

In cases where an AI-related error or failure occurs, liability insurance may need to step in to address the resulting losses, according to the National Alliance for Insurance Education & Research.
 
The insurance education alliance posed a question relating to a loss caused, in part, by AI: ‘Who is liable – the AI system’s creators, operators or users?’ It has put forth five important considerations to bring some clarity to this question.

  1. What parties are potentially liable?

The alliance said that, much like other losses involving multiple parties, it will be important to identify all of the parties that may be at fault for the damage caused in an AI-related loss, including the creator(s), designer(s), installers and maintenance providers.

Liability may rest with the developer or manufacturer of the software, the business or individual that operates it, or the end-user who interacts with it.

  2. Evaluate how the potential damage occurred

On evaluating potential damage, the alliance said AI systems can cause harm in various ways, such as property damage, personal injury, defamation and invasion of privacy. “It is crucial to consider all possible scenarios when assessing the potential risk and determining coverage.” 

The alliance said liability assessment will be based on how the AI system interacts with other technologies and third-party vendors. 

  3. Staying up to date on regulations is crucial

The alliance said it is important to stay current on regulations governing AI. It said, “The European Union’s General Data Protection Regulation and California’s Consumer Privacy Act both have provisions that require organisations to be transparent about how they collect and use consumer data – including data collected through AI.”

  4. Understanding black box issues

The complex neural networks that underpin AI systems can be difficult to interpret, which makes it important for insurance risk professionals to work with AI developers to enhance transparency.

  5. Recognise that traditional policies may not be enough

A one-size-fits-all policy will not work for this highly changeable risk, according to the alliance. Specialised policies, tailored to AI technology providers and users, will be developed to address “issues such as cyber security breaches, intellectual property disputes and product liability claims related to AI system failures.”
 