‘While it’s possible to set general principles, how do you translate that into something that’s going to work for every company or use case?’ says head of analytics

On 31 January 2024, a group of 127 industry specialists launched a new voluntary code of conduct around the use of artificial intelligence (AI) in the insurance claims sector.

Led by Jel Consulting director Eddie Longworth, collaborators involved in the initiative included Hugh Hessing, former UK chief operating officer of Aviva, Prathiba Krishna, head of AI and ethics at Sas UK and Ireland, and Simon Murray, partner and head of insurance business services at DWF.

The collective behind the new code has urged insurance firms to sign up to the initiative, which establishes an ethical standard of responsibility when using AI in the claims settlement process.

The code is based on three principles – fairness, accountability and transparency. It aims to make the claims sector’s use of AI explainable, comprehensible and fair.

Launched at a time when many insurance firms were deploying AI to assist with personalisation, claims processing and underwriting, the code strives to bridge the gap between the supply chain industry and AI claims applications.

To date, there has been no official regulation centred on the use of AI in the insurance industry – although the FCA hinted in July 2023 that a “bespoke type” of the senior managers and certification regime (SMCR) could be appropriate for this purpose.

Nutan Rajguru, head of analytics, UK and Europe at data and analytics firm Verisk – who was also a member of the code’s working group – explained that the new code of conduct was designed to work in alignment with any upcoming regulation the FCA may wish to formulate.

She told Insurance Times: “When we think of AI, ChatGPT and large language models (LLMs) come to mind – but AI covers such a broad range of applications, so it’s very difficult to be specific [because of the] huge variety of use cases.

“While it’s possible to set general principles [around the use of AI], how do you translate that into something that’s going to work for every company or use case?

“Principles-based guidance is the place to start and that’s why the code of conduct is so valuable.”

AI opportunities

Firms across the industry have moved swiftly to show support for the code of conduct – specialist insurer Ecclesiastical and law firm Minster Law were among the first to adopt it, for example.

Insurer Zurich – which started experimenting with ChatGPT to extract data for claims handling and modelling in March 2023 – also sees the benefit of the new code.

James Nicholson, its chief claims officer, said: “This code of conduct creates an opportunity for insurers to come together to ensure we deliver for our customers in the most efficient and ethical way.”

Similarly, Laurence Besemer, chief executive of the Forum of Insurance Lawyers (Foil), deemed the code’s launch a “positive development” and a “fantastic piece of cross-industry collaboration”.

He continued: “It provides insurers with helpful guidance surrounding the use of AI.”

Mark McDonald, general insurance practice director at business management consultancy Altus, believes AI can bring “significant opportunities for insurers”, with the new code of conduct providing a “consistent framework for the development of AI strategies and a level of positive intent”.

He said: “Ultimately, the code of conduct will act as an enabler for positive change for the adoption of AI technology in claims processes, which – given the inevitable increase in the use of this technology in insurance – will have a positive impact on end customers.”

Further structured regulation

Despite the warmth with which the code was embraced at launch, both Besemer and McDonald flagged that the speed of AI’s development and evolution could make formal regulation difficult to structure – which is why the code’s principles-based model may prove more impactful.

McDonald said: “It is vital that a governance structure is in place, underpinned by a clear directive to avoid negative customer outcomes.”

Amit Tiwari, president for Europe, Middle East and Africa (Emea) and Asia Pacific (Apac) at insurtech Xceedance, agreed with McDonald that although “this code of conduct marks a significant beginning, it’s crucial to establish a comprehensive governance structure”.

He explained: “This structure must be marked by fairness, clarity and accountability, focusing on mitigating biases, protecting the integrity of data and algorithms and upholding the privacy of all stakeholders involved.”

Tiwari added that insurers need to prioritise AI propositions that are both transparent and can “be subject to stringent evaluations”.

This message is emphasised by Saby Roy, AI and cloud engineering leader at professional services firm EY, who believes the code’s accountability metric is particularly important.

He said: “While the regulatory landscape [around AI] is constantly evolving, regulators within both the UK and Europe are focused on reinforcing the message that use of AI does not relieve [companies] of their accountability [for] any decisions or outcomes.

“Firms need to establish robust and carefully controlled operating frameworks for AI.”

Working group member Rory Yates, chief strategy officer at digital platform provider EIS, added: “It’s in everyone’s interest that, when asked why a decision has been taken, the answer isn’t ‘because the machine says so’. It’s because accountability sits with humans.”
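
Neither Yates nor the code prescribes a mechanism for this, but one common way to keep a human-readable answer on hand is to surface per-decision reason codes. The sketch below is illustrative only – the features, data and model are hypothetical – and shows a linear claims-triage model reporting which factors drove each score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical claims-triage features; a real model would use many more.
FEATURES = ["claim_amount", "days_to_report", "prior_claims", "policy_tenure_years"]

# Toy data standing in for a historical claims dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(x, top_n=2):
    """Rank the features that pushed this claim's score hardest, so a handler
    can explain the decision rather than cite 'the machine'."""
    contributions = model.coef_[0] * x  # per-feature contribution to the logit
    ranked = np.argsort(-np.abs(contributions))
    return [(FEATURES[i], round(float(contributions[i]), 3)) for i in ranked[:top_n]]

claim = X[0]
print("flagged for review:", bool(model.predict(claim.reshape(1, -1))[0]))
print("top reasons:", reason_codes(claim))
```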

Quality data

Accountability around AI usage includes making sure that the data used to inform these models is up to scratch.

For example, Rajguru flagged that unconscious bias is a “big concern for the industry”.

She continued: “We don’t want bias against groups of people because, without the proper checks and balances in place, things can inadvertently be biased.”

When AI models are trained, it is vital to “ensure that the dataset is representative”, she said. In addition, regular, mandatory data checks can prevent data becoming more biased over time. Verisk uses soft AI releases to combat this risk.
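
Rajguru did not detail Verisk’s checks, but a representativeness check can be as simple as comparing a dataset’s group mix against a reference population and flagging any gap that breaches a tolerance. A minimal sketch, with illustrative group labels and an assumed threshold:

```python
from collections import Counter

def group_shares(records, key="group"):
    """Proportion of each group in a list of record dicts."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def representativeness_gaps(sample_shares, reference_shares):
    """Absolute gap between a dataset's group mix and a reference population."""
    return {g: abs(sample_shares.get(g, 0.0) - ref)
            for g, ref in reference_shares.items()}

# Illustrative reference mix and training sample.
reference = {"A": 0.50, "B": 0.30, "C": 0.20}
training = [{"group": "A"}] * 60 + [{"group": "B"}] * 30 + [{"group": "C"}] * 10

TOLERANCE = 0.05  # illustrative; a real threshold would be set by governance
for group, gap in representativeness_gaps(group_shares(training), reference).items():
    if gap > TOLERANCE:
        print(f"group {group} is misrepresented by {gap:.0%} - review the data")
```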

If firms find unconscious bias within the data used for AI tools, Rajguru suggested pulling the AI model, retraining it using new data and then retesting.
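
A hedged sketch of that pull-retrain-retest loop, using the gap in approval rates between groups (the demographic parity difference) as the bias test – the holdout data and threshold below are hypothetical:

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest gap in approval rate between any two groups on a holdout set."""
    rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative holdout predictions (1 = approved) for two groups.
preds = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])
grps = np.array(["A"] * 5 + ["B"] * 5)

THRESHOLD = 0.2  # hypothetical tolerance agreed by the firm's governance process
gap, rates = demographic_parity_gap(preds, grps)
if gap > THRESHOLD:
    # Rajguru's suggested remedy: withdraw the model, retrain it on corrected
    # data, then rerun this same test before redeploying.
    print(f"gap {gap:.2f} exceeds {THRESHOLD}: pull model, retrain, retest", rates)
```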

Rajguru additionally noted that the data entered into LLMs, for example, may be shared with third parties. 

Where sensitive data – such as ethnic origin – is used for AI, firms must comply with General Data Protection Regulation (GDPR) rules and be clear about data sharing intentions, she said. 
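
One common safeguard before claim text reaches a third-party LLM is to redact identifying fields first. The sketch below is purely illustrative – the patterns and the policy-number format are assumptions, and pattern-matching alone would not amount to GDPR compliance:

```python
import re

# Illustrative patterns only; a production redactor would be far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,}\b"),  # hypothetical policy-number format
}

def redact(text):
    """Replace sensitive spans with placeholder tags before text leaves the firm."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

claim_note = "Claimant jane@example.com, policy POL-123456, postcode SW1A 1AA."
print(redact(claim_note))
# -> Claimant [EMAIL], policy [POLICY_NO], postcode [UK_POSTCODE].
```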

Roy added that it was “essential to build trust among customers, partners and regulators” when sensitive customer data is being used.

Human judgment vs AI

According to Selim Cavanagh, director of insurance at Mind Foundry, the top two factors insurers are concerned about regarding AI usage are governance and assurance.

For Roy, “insurers need to define how they will approach, develop and integrate new tech capabilities across their operations”.

He continued: “They will need to consider how to balance human and AI collaboration, know when to apply human judgment versus AI-generated recommendations and define how they protect against the introduction of bias and unfair outcomes in coverage, pricing and claims decisions.”
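
Roy did not prescribe a mechanism, but a common pattern for deciding when human judgment applies is confidence-threshold routing, where only high-confidence AI recommendations are actioned automatically and everything else is escalated to a handler. A minimal sketch, with illustrative thresholds and actions:

```python
from dataclasses import dataclass

AUTO_THRESHOLD = 0.90  # illustrative; a real bar would be set by governance

@dataclass
class Recommendation:
    claim_id: str
    action: str        # e.g. "approve", "refer", "decline"
    confidence: float  # model's probability for its recommended action

def route(rec: Recommendation) -> str:
    """Action high-confidence recommendations; escalate the rest to a human."""
    if rec.action == "decline":
        return "human_review"  # e.g. never auto-decline, whatever the confidence
    return "auto" if rec.confidence >= AUTO_THRESHOLD else "human_review"

for rec in [
    Recommendation("C-001", "approve", 0.97),
    Recommendation("C-002", "approve", 0.71),
    Recommendation("C-003", "decline", 0.99),
]:
    print(rec.claim_id, "->", route(rec))
```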

Meanwhile, Kevin Crawford, global head of insurance at Endava, said that “only time will tell if this new code of conduct will drive meaningful change given its voluntary nature and current absence of penalties for those who do not abide by the guidelines”.

Crawford admitted, however, that penalties might not be needed at this point in time.

He added: “I believe insurers will look to embrace the code and integrate it into their businesses wherever possible. But this isn’t without its challenges. Documenting AI change within a business is complex. The time and resource required to implement this, and how frequently it will need to be reviewed, should be considered. It is unclear at this stage whether the organisers have created a template to support insurers, but this would certainly aid clarity in what is needed and create a standard for all insurers to meet.”
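
No such template has yet been published, but Crawford’s documentation point can be made concrete: a firm might keep one structured record per model change, naming an accountable owner and the checks performed. The fields below are hypothetical, not the working group’s:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelChangeRecord:
    """One entry in a hypothetical AI change log; the fields are illustrative,
    not a template published by the code's working group."""
    model_name: str
    version: str
    change_date: date
    change_summary: str
    accountable_owner: str  # a named role, per the code's accountability principle
    fairness_checks_passed: bool
    data_sources: list[str] = field(default_factory=list)
    next_review: date | None = None

entry = ModelChangeRecord(
    model_name="claims-triage",
    version="2.1.0",
    change_date=date(2024, 2, 1),
    change_summary="Retrained on rebalanced 2023 claims data after a bias review.",
    accountable_owner="Head of Claims Analytics",
    fairness_checks_passed=True,
    data_sources=["claims_2023_rebalanced"],
    next_review=date(2024, 8, 1),
)
print(entry.model_name, entry.version, "owner:", entry.accountable_owner)
```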

Crawford recommended the activity be undertaken alongside the FCA’s mandatory Consumer Duty requirements.

Timeline of artificial intelligence (AI) in insurance

1956: The field of artificial intelligence was born.

January 2019: Axa deployed three AI bots to save 18,000 man hours a year. 

April 2021: The European Commission proposed a European Union (EU) regulatory framework for AI, the EU AI Act.

November 2022: ChatGPT launched by San Francisco firm OpenAI. 

February 2023: AI firm Mind Foundry launched a research and development lab in Oxford dedicated to solving high-stakes AI problems.

March 2023: Zurich experimented with ChatGPT. 

April 2023: UK government launched AI task force.

May 2023: Artificial Labs launched ChatGPT pilot. 

June 2023: Ignite launched a chatbot service using large language models (LLMs). Cytora also used this technology to enhance risk assessment for insurers, while Lemonade used generative AI to complete a claim in two seconds.

July 2023: FCA pondered using a bespoke SMCR regulatory regime for AI’s use in insurance. 

September 2023: Government disbanded AI task force.

October 2023: Insurtech Armilla launched warranty product for AI models. 

January 2024: AI code of conduct launched by an industry working group.

February 2024: UK government shelved its AI code of conduct for copyright material, which was proposed in 2022. Hiscox tapped into AI to enhance customer service. The EU enacted AI regulation referred to as the AI Act, a stringent legal framework for AI.
