‘When the stakes are high, people need to understand the impact of using AI,’ says chief executive

The UK is taking a “very pragmatic approach” towards the use of artificial intelligence (AI) in insurance as the EU draws up new regulations governing the technology, according to Mind Foundry chief executive Brian Mullins.

The EU is currently working on a set of rules to govern AI, with a draft version suggesting that companies using generative AI will have to disclose copyrighted material used to develop their systems.

Mullins explained that in insurance, “industry regulators already oversee models for statistical analysis of risk and should therefore be able to quickly and relatively easily appreciate the difference AI does or does not make”.

He told Insurance Times: “We have been pleasantly surprised to see the UK taking a very pragmatic approach towards AI regulation, which makes sense given how unlikely it is for a government regulator to have the depth of knowledge required to effectively regulate the technology.

“When the stakes are high, people need to understand the impact of using AI and the only way to do that is to understand the industry the decisions are being made in.”

UK innovation

This came after Prime Minister Rishi Sunak launched a £100m taskforce in April to help the UK build and adopt the next generation of AI.

A statement announcing the funding said the taskforce would develop the “safe and reliable” use of AI and “ensure the UK is globally competitive in this strategic technology”.

Roi Amir, chief executive of Insurtech 50 firm Sprout.ai, said the government’s approach was “a good place to start”.

He added: “When we discuss the regulation of AI, we have to break it down into individual component parts.

“AI is broad, so the risk of applying it must be robustly evaluated, especially in areas where full automation happens.

“Indeed, being outside of the EU may mean that the UK can develop regulation faster, but this still requires alignment between the UK and Europe, to enable UK business to export technology and sell to the European market.”

With the EU’s AI rules still being drawn up, Amir felt “regulators will always lag behind technology” and that they should therefore focus on a framework of “fairness, data security, competition and consumer protection – otherwise they run the risk of stifling game changing innovation”.

“A key sticking point for AI is that conclusions and results aren’t consistent and predictable,” he said.

He gave the example of large language models and deep learning, where it can be tough to explain why models arrive at a certain conclusion.

“These characteristics of AI are difficult to regulate, so regulators must work directly with AI companies to ensure that the balance of tech advancement and risk management is correct,” Amir added.