‘You wouldn’t share a client report with a friend for a view, so why do it with AI?’ says corporate director

The insurance industry should not risk sharing important data with artificial intelligence (AI) systems, despite the “productivity boost” such tools can provide.

That was according to Verlingue corporate director Ian McKinney, who felt that it was “difficult to envisage a point in the near future where AI systems could be considered safe enough to use on a day-to-day basis”.

He told Insurance Times that AI was able to do a lot of “data sifting spade work”, which in turn could save firms time when carrying out tasks.

“But that productivity boost shouldn’t be at the cost of putting client data at risk,” he said.

“You wouldn’t share a client report with a friend for a view, so why do it with AI?”

‘Inherently dangerous’

This came after McKinney said on LinkedIn that he would not put sensitive data at risk “regardless of whether [AI] makes life a little bit easier – until there is a cast-iron reassurance”.

McKinney told Insurance Times that his main concern was the use of large language models (LLMs).

“The issue here is that these types of AI have some very superficially attractive capabilities that make them appealing to use – but using them in these ways is inherently dangerous from a data point of view,” McKinney said.

“For example, ChatGPT is excellent at summarising data and could easily be used to reduce an internal sensitive report into a paragraph summary for an underwriter.

“The issue is that in order to do that, that report will have to be sent to ChatGPT, putting that data at risk.”

The tool has already been trialled in the insurance industry, with insurtech Artificial Labs announcing earlier this year (31 May 2023) that it was using ChatGPT as part of a pilot to assist underwriters.

And in March, insurer Zurich said it was experimenting with ChatGPT as it explored how it could use AI technology for tasks such as extracting data for claims and modelling.

Trust

McKinney explained that “[brokers] handle sensitive data of various kinds”, such as information gathered from a prospective client to produce a presentation to an insurer and obtain a quote.

He said that should an issue arise because of AI, it could further erode trust in the insurance industry.

“We know the insurance sector has an issue with trust and we can’t afford to feed that perception,” McKinney added.

“A major loss of data such as the leak of a confidential report, or claim information, or a manufacturing process, could have a catastrophic effect on the insurance sector as a whole.

“Client confidence, already lowered post-Covid, would be further damaged.

“In turn it would be harder to get clients to part with sensitive information in future, which would cause difficulties for both brokers and underwriters.

“From the perspective of the individual client, a data loss is always potentially business-threatening.

“[This is] something that we, as brokers, are used to explaining when discussing cyber and data cover with our clients.”

As a result, McKinney questioned whether “businesses might eventually invest in their own proprietorial AI systems”, claiming that the easiest way to ensure the security of data was to maintain control of it.

However, he said Verlingue would not use AI for “any business-related purpose” for the time being.
