’We expect to see discrimination, bias and privacy concerns amplified as key risks and probable litigation targets,’ says partner 

Insurance will become a “critical component” for firms to move projects that use artificial intelligence (AI) forward due to risks of increased litigation.

That was according to law firm Clyde and Co, which said that it seemed “only a matter of time before an AI system is sued, along with the companies relying on it”.

In April 2023, prime minister Rishi Sunak launched a £100m taskforce to help the UK build and adopt the next generation of AI.

A statement announcing the funding said the taskforce would develop the “safe and reliable” use of AI and “ensure the UK is globally competitive in this strategic technology”.

However, Rosehana Amin, a Clyde and Co partner based in London, said that with the increased reliance on generative AI, “we expect to see discrimination, bias and privacy concerns amplified as key risks and probable litigation targets”.

“As companies race to develop or adopt AI technology, insurance will be a critical component of moving projects forward as well as protecting against risks,” she added.

“The big question is: will insurers want to address the risks and, if so, how?”

Policies

Insurers have also begun using AI to help with claims and underwriting processes.

For example, Zurich said in March 2023 that it was experimenting with ChatGPT as it explored how it could use the technology for tasks such as extracting data for claims and modelling.

Amin warned that as insurers increasingly adopt AI tools within their organisations to streamline and improve operations, “they will need to carefully consider what they are willing to cover when it comes to AI”.

“Will we see policies written to protect AI systems?” she asked.

“Likewise, as AI-related litigation increases, and insurance becomes harder to obtain, we expect companies to more carefully consider contracts entered into with third parties over AI and adopt strict policies.

“It seems only a matter of time before an AI system is sued, along with the companies relying on it.”

Regulation

Meanwhile, Clyde and Co also highlighted that the increased use of AI had intensified the need for regulation globally.

It explained that calls for action from federal and local governments, as well as human rights organisations, will continue as countries around the world are in the “nascent stage of regulation, grappling first and foremost with how to clearly define AI itself and the risks it presents”.

“Companies relying on the use of generative AI open themselves up to liability over bias and privacy issues, as do the developers of the AI systems, which will result in an intensifying push for safeguards as the lines become increasingly blurred in determining whether humans or machines are at fault,” Amin said.

Amin issued the statement alongside fellow partners Cyntia Aoki, Noémie Bégin and Meghan Dalton, as well as senior counsel Janice Holmes.