Recall consultant says AI has a ‘sophisticated ability to actually detect defects that humans might not be able to identify’

Artificial intelligence’s (AI) “increasingly sophisticated ability” could help reduce product liability insurance claims.

That was according to Chris Occleshaw, recall consultant at Sedgwick, who said that AI can detect defects that humans “might not be able to identify” before a product is launched.

In January 2020, home insurers were left in limbo following a mass recall of 519,000 Whirlpool washing machines.

This was because the machines’ door locking system could overheat, which posed a fire risk.

Occleshaw highlighted that AI’s rise had made it an increasingly popular assistance tool in the product safety market.

“The global market for AI in product safety is expected to reach $1.4bn in the next 18 months, which is a very significant growth opportunity for businesses that are developing in AI,” he said.

“This is due to AI’s increasingly sophisticated ability to actually detect defects that humans might not be able to identify.”

Report

Occleshaw made the comments during a webinar entitled AI and machine learning: The evolution of product safety last week (19 July 2023).

While he said that AI in product liability insurance presented an opportunity, he also highlighted a report from the Office of Product Safety and Standards (OPSS) that noted some of the negative implications of machine learning.

Published in December 2021, it highlighted that AI systems had been shown to produce discriminatory or inaccurate results, often due to biases or imbalances in the data used to train, validate and test such systems.

“But it wasn’t all negative,” Occleshaw said.

“[The OPSS] also noted that it could prevent product recalls by using data that it collected during the industry assembly.”

Regulation

Meanwhile, Katie Chandler, partner at law firm Taylor Wessing, said that neither the General Product Safety Regulation nor the proposed EU Product Liability Directive and AI Liability Directive, which were being reviewed by the European Parliament, would be implemented in the UK.

The AI Liability Directive would allow national courts to compel providers of high-risk AI systems, as defined under the AI Act, to disclose relevant evidence to potential claimants about a specific system alleged to have caused damage.

However, Chandler said the OPSS was “looking at potential changes” to the UK Product Safety Regime, which provides the legal basis to ensure that consumer products are safe.

While suppliers and manufacturers in the UK are not legally required to hold product liability insurance, they do have a duty of care towards customers.

Chandler said that with businesses operating under different regimes across countries, there was “obviously complexity in terms of compliance, risk and indeed costs”.

She added: “We do think that the outcome will probably focus on some similar issues, particularly around the definition of product and safety assessment, to try and take account of the challenges of AI and machine learning products.

“But whether it will go as far as adopting some of the changes in the EU approach remains to be seen.”

Meanwhile, a government white paper published in March 2023 suggested that regulators “take a principles-based approach” to AI.

These principles are safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

Chandler said she expected a consultation from the OPSS and potentially the Information Commissioner’s Office as “data and GDPR is absolutely key to this”. 

“Time will tell whether this lighter touch that the UK is trying to adopt will work,” Chandler added.