‘It is very important to be cautious about the data they provide to these models,’ says chief technology officer

If artificial intelligence (AI) is to be truly embraced, a “more secure and responsible approach” must be adopted.

That was according to Vinod Singh, chief technology officer at insurtech Concirrus, who said that the insurance industry “must not lose sight of the need to have human oversight and control”.

He felt that a responsible approach to AI should centre on accountability and governance, explainability and transparency, data privacy and security, and ethical considerations.

Singh also felt there were several factors that insurers needed to consider when using large language models (LLMs), such as ChatGPT and Google Bard.

He told Insurance Times: “Firstly, there is the lack of accountability. ChatGPT and similar language models generate responses based on patterns and examples in their training data.

“They don’t have a real understanding of the context or a true ability to reason.

“This lack of accountability can lead to inaccurate or biased information being provided, which can have significant consequences for businesses if not used carefully.

“There are also legal and compliance issues to consider. Insurers operate within strict legal and regulatory frameworks, such as data protection and privacy laws.

“Using AI-powered chatbots or LLMs raises concerns around the proper handling of sensitive customer data and compliance with these regulations.”

Safeguarding

AI has been experimented with in the insurance industry – for example, insurtech Artificial Labs announced earlier this year (31 May 2023) that it was using ChatGPT as part of a pilot to assist underwriters.

And in March, insurer Zurich said it was experimenting with ChatGPT as it explored how it could use AI technology for tasks such as extracting data for claims and modelling.

However, Verlingue corporate director Ian McKinney told Insurance Times earlier this month (14 June 2023) that it was “difficult to envisage a point in the near future where AI systems could be considered safe enough to use on a day-to-day basis”.

Singh added: “Language models like ChatGPT or Google Bard require large amounts of data to train effectively.

“It is very important to be cautious about the data they provide to these models and ensure it doesn’t contain sensitive customer information that could be compromised.

“Safeguarding data security and privacy is crucial to maintaining customer trust and complying with the appropriate regulations.”
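Concirrus has not published how it vets data before it reaches a model, but the principle Singh describes can be illustrated with a minimal sketch: strip obviously sensitive fields from a submission before any text is sent to an externally hosted LLM. The patterns and the “POL-” policy number format below are hypothetical placeholders, not Concirrus’ actual rules.

```python
import re

# Hypothetical patterns for common sensitive fields. A production system
# would pair regexes like these with a dedicated PII-detection service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_PHONE": re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,}\b"),  # assumed in-house format
}

def redact(text: str) -> str:
    """Replace sensitive values with placeholder tokens so the text can
    be passed to an external language model without leaking customer data."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

submission = (
    "Quote request from jane.doe@example.com, renewal of POL-123456, "
    "contact on 020 7946 0958."
)
print(redact(submission))
# Quote request from [EMAIL], renewal of [POLICY_NO], contact on [UK_PHONE].
```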

Privacy layer

This follows Concirrus launching its new advanced submissions offering, an end-to-end solution for customers with an added privacy layer that allows it to leverage LLMs.

Andrew Yeoman, co-founder and chief executive of Concirrus, noted the volume of email submissions insurers receive each year was “staggering”, with only some resulting in quotes.

He continued: “This leaves underwriters with limited capacity to assess risks effectively.

“While automation solutions have emerged, they require extensive training and labelling efforts, diminishing their benefits for insurers.”

Concirrus claims its solution increases productivity by 400% by streamlining this process through automation, while also addressing privacy issues.

Through a simplified workflow, it can integrate with a customer’s pricing engine for faster decision-making, allowing underwriters to focus on writing submissions.

Singh continued: “It’s for this reason that we’ve focused heavily on adding an extra layer of privacy into our platform to guarantee the security of sensitive data and prevention of any data leaks.

“In simple terms, it means that instead of relying on big LLMs such as ChatGPT and Google Bard that are available to everyone online, we are using language models that are specifically built for our local needs.

“This allows us to have more control and tailor the models to better suit our specific requirements.”
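Concirrus has not detailed which models it runs, but the approach Singh outlines, hosting a language model inside the insurer’s own infrastructure rather than calling a public API, can be sketched with an open-weight model and the Hugging Face transformers library. The model name and prompt below are illustrative placeholders.

```python
# pip install transformers torch
from transformers import pipeline

# Any open-weight model that can be downloaded and run locally works here;
# distilgpt2 is a small placeholder, not the model Concirrus uses.
generator = pipeline("text-generation", model="distilgpt2")

# Once the weights are cached locally, inference happens entirely on the
# insurer's own hardware: no submission text is sent to a third-party API.
prompt = "Summarise the key risk details in this submission:"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```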