‘It can do something as simple as algorithmic [underwriting], but it doesn’t reduce that complexity of risks,’ says data analytics lead

Generative artificial intelligence (GAI) is not a “black box” solution, but the risk for insurers lies in treating it as such.

That was according to Alana Robertson, data analytics lead at broker First Central, who told Insurance Times that firms looking to implement artificial intelligence (AI) should do so with their eyes “wide open”.

In the parlance of engineering and computing, a black box is a device, system or object that produces useful information without revealing anything of its internal workings.

In recent years, AI has gained significant popularity, especially with the rise of large language models like ChatGPT. However, the technology has been criticised for its perceived lack of transparency.

A notable subset of AI, generative AI has captured widespread interest due to its ability to create various forms of media, including text and images.

However, Robertson explained that people often envision AI as a sophisticated, futuristic technology with impressive capabilities.

Whether AI is effectively addressing and mitigating the specific risks associated with insurance is the question the sector must ask itself, she said.

She continued: “It can do something as simple as algorithmic [underwriting], but it doesn’t reduce that complexity of risks.

“Firms looking to implement generative AI solutions into their way of working must first understand what the model is doing, what data it is trained on and what answers it is giving you.

“Large language models and other generative AI technologies are hugely complex and their functionality draws on a myriad of variables.

“An inadequate understanding of the technology can open firms up to risk, if actions and results later turn out to be different to what was expected, as would be the case with so-called black-box solutions.”
Robertson’s comments came after Nicholas Robert, emerging risks modeller at Lloyd’s, told Insurance Times that, when it came to AI, “optimism should be tempered with awareness of the risks it poses for transparency, ethics, security and safety”.

Iryna Chekanava, senior manager at Lloyd’s Lab, echoed Robert’s comments as she explained that the opportunities with AI “stem largely from imagining an ideal and fair scenario”.

“The challenges broadly come from implementing the technology before we’ve fully understood its potential, or without having a clear plan for its use in place,” Chekanava said.