With LLMs becoming increasingly popular in the insurance industry, does the technology’s value outstrip its risks?

The use of large language models (LLMs) has been increasing in the insurance industry over the past few years, with the opportunities for their deployment far outweighing the risks of implementation.

LLMs are a form of artificial intelligence (AI) algorithm that processes natural language input and generates language outputs based on previously analysed data.

Back in March 2023, insurer Zurich announced it was experimenting with using ChatGPT, perhaps the most recognisable LLM, to overhaul its claims processing.

In May 2023, insurtech Artificial Labs followed suit with the launch of a ChatGPT pilot designed to end the “messy” data barrier for carriers and brokers.

Ignite Insurance Systems, meanwhile, launched its own LLM chatbot service in June 2023 for brokers to save time when managing customer queries.

And in the same month, Cytora revealed that it was using LLMs to enhance insurers’ risk assessment and underwriting capabilities, as well as make underwriting more accurate and efficient.

Clearly, the insurance sector has recognised the value that LLM-based AI technology can provide. However, identifying targeted uses for the technology has proved more difficult.

For Eric Sibony, AI solution provider Shift Technology’s cofounder and chief scientific officer, LLMs can be deployed for “multiple uses”.

He told Insurance Times that the first major use was “data extraction from unstructured data, [such as] free text and documents”.

This presents a key use case for insurers, for which collecting information is vital to support claims, for example. Data extraction could also be used to detect fraud or automatically recommend suitable products to vulnerable customers, Sibony added.
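As a rough sketch of how this kind of extraction can work in practice, an insurer might prompt an LLM to return structured JSON from free-text claim notes, then validate the reply before it enters downstream systems. The field names and prompt below are illustrative assumptions, not a description of Shift Technology’s actual system:

```python
import json

# Illustrative fields an insurer might want from a free-text claim note
FIELDS = ["claim_date", "incident_type", "estimated_loss"]

def build_extraction_prompt(claim_text: str) -> str:
    """Assemble an instruction asking an LLM to reply with structured JSON."""
    return (
        "Extract the following fields from the claim note below and reply "
        f"with JSON only, using the keys {FIELDS}. "
        "Use null for anything not stated.\n\n"
        f"Claim note: {claim_text}"
    )

def parse_extraction(reply: str) -> dict:
    """Validate the model's reply before it feeds downstream systems."""
    data = json.loads(reply)
    missing = [f for f in FIELDS if f not in data]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data

# Example with a stubbed model reply (a real system would call an LLM API here)
reply = '{"claim_date": "2023-05-01", "incident_type": "water damage", "estimated_loss": 4200}'
print(parse_extraction(reply)["incident_type"])  # water damage
```

Validating the model’s output in code, rather than trusting it blindly, is one way insurers can keep an LLM’s occasional mistakes out of claims systems.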

“LLMs are having a breakthrough, [the technology is] already adding a lot of value,” he said.

Publicly available chatbots like ChatGPT are “just a layer”, noted Sibony, with the underlying models representing the “iceberg under the water”.

He explained: “When [Shift Technology] use LLMs, we don’t use them as chatbots – instead, we use them as application programming interfaces (APIs).

“The core part of an LLM is that it is fundamentally an AI model that can understand language, so they can answer a question or generate a summary of something.”

Better customer service

Outside data extraction, another potential use case for LLMs is in the form of chatbots that can bolster customer service offerings.

Sibony explained that an AI-powered chatbot can be used to handle a claim, determine when cover is triggered under a specific policy and thus streamline claims handling.

He added: “In the same way chatbots can answer the customer, it can answer the broker also [and] even the claims handler or underwriter – the people in insurance become more like assistants. There are lots of applications where users can ask about the content of the policy and what it covers.

“It gets quite useful when you have very complex policies, [as] it helps identify key parts.”
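A minimal sketch of how such a policy assistant can identify key parts of a complex document is to retrieve only the relevant clauses before passing them to the model with the user’s question. The clause texts and the naive keyword retrieval below are hypothetical; production systems would typically use more sophisticated search over real policy wordings:

```python
# Hypothetical policy clauses; a real system would parse these from documents
CLAUSES = {
    "flood": "Section 4.2: Flood damage is covered up to £50,000 per incident.",
    "theft": "Section 7.1: Theft is covered where forced entry is evidenced.",
    "subsidence": "Section 9.3: Subsidence is excluded unless specifically endorsed.",
}

def relevant_clauses(question: str) -> list[str]:
    """Naive keyword retrieval: keep clauses whose topic appears in the question."""
    q = question.lower()
    return [text for topic, text in CLAUSES.items() if topic in q]

def build_prompt(question: str) -> str:
    """Combine the user's question with only the relevant policy extracts."""
    context = "\n".join(relevant_clauses(question)) or "No matching clause found."
    return (
        f"Policy extracts:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the extracts above."
    )

print(build_prompt("Am I covered for flood damage to my basement?"))
```

Grounding the model’s answer in retrieved policy text, rather than letting it answer from memory, is what makes this pattern useful for the complex policies Sibony describes.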

LLM-based chatbots can even be used to generate emails or PDFs, which is particularly useful where handling volume becomes important.

More pervasive

For some insurtechs, AI has been placed at the heart of the product offering.

Richard Hartley, Cytora chief executive, said: “Since LLMs have become more pervasive and available, we started tuning and virtualising our lens to the risk domain, as [LLMs] can understand risk information.”

Cytora utilises LLMs to streamline risk interpretation, which previously required risk professionals to read through various documents to identify declarations.

Hartley continued: “It’s really soul destroying, it’s such a manual process that is repeated over and over again.

“So, one way we use LLMs is to streamline that whole process – new declarations are auto approved in an easy workflow. Even if there are small differences from the insurer’s own questions, they can approve that new declaration in a couple of seconds and the platform continues to learn about that.”

However, LLMs are also useful for improving a process that Cytora refers to as “chain of thought”. This is where the LLM is “educated” to think like a risk professional, such as by learning how to extract claims information from a broker submission.

When done manually, information in a submission might be presented in various formats, usually arriving over email via attachments.
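Chain-of-thought prompting, in general terms, means instructing the model to work through intermediate reasoning steps before giving its answer. The steps and wording below are an illustrative sketch of the idea, not Cytora’s actual implementation:

```python
# Illustrative review steps a risk professional might follow
STEPS = [
    "Identify the insured entity and their trade.",
    "List any prior claims mentioned, with dates and amounts.",
    "Flag information that is missing or inconsistent.",
]

def chain_of_thought_prompt(submission_text: str) -> str:
    """Ask the model to reason step by step, like a risk professional,
    before producing a final summary."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(STEPS, 1))
    return (
        "You are reviewing a broker submission. Work through these steps "
        "one at a time, explaining your reasoning for each, then give a "
        f"final summary.\n{numbered}\n\n"
        f"Submission:\n{submission_text}"
    )

print(chain_of_thought_prompt("ACME Roofing Ltd; two liability claims in 2021."))
```

Making the reasoning explicit in this way tends to produce more reliable answers on multi-step tasks than asking for the conclusion directly.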

Data privacy

LLMs, like any technology, present their own challenges that must be overcome.

Hartley noted that the issue of having to sort through documents manually was “most prevalent” in commercial lines underwriting and claims management, because commercial risk information is more complex, while most personal lines risk information is simpler and more open to standardisation.

“The value of AI in the risk domain will be disproportionately high because it will enable analogue data [to be turned] into digital information accurately,” Hartley continued. 

Sibony noted that the increasing adoption of LLMs raises some issues around “security and data privacy”, especially around the handling of sensitive data.

He said: “But [problems are] completely solvable – it just requires insurers to have the right levels of security and infrastructure in place.”