Global governments and technology leaders have met to discuss the threat AI can pose 

By Jon Guy

The insurance industry has long seen the benefits of the use of artificial intelligence (AI).

Underwriters and brokers have viewed AI as a way to reduce the hours staff spend on repetitive tasks, which machines could undertake more efficiently and accurately.

As a result, we have seen the creation of quotation systems that use a set range of responses to generate a premium price, and the range of questions that can be asked is becoming ever broader.

The dream is for AI to access the internet of things and harvest information on even the most complex of risks, reducing to a minimum the number of questions brokers and their clients need to be asked and speeding up the placement process.

In claims, the journey is much the same, with AI seen as a way to manage high-volume, low-complexity claims via systems loaded with specific red flags to identify potential fraud.

The aim is to free up staff to carry out more market-facing tasks and win more business.

Access to the volume of data held by insurers has always been seen as a huge advantage, if only it could be properly analysed and put to use.

However, the ability to analyse and utilise that data has always been tantalisingly out of the industry’s grasp.

Risk too far?

While that may well be set to change, the past week has seen global governments and technology leaders meeting at the Bletchley AI Summit (1 November 2023) to discuss the threats that AI poses not only to businesses and the economy, but to humankind itself.

The US and UK have been public about their efforts to identify the risks we face, and rules are already being drawn up for the ethical and “safe” use of AI.

Insurers will say they have a good grasp of the use of AI in their business. However, like any company, insurers and brokers need to look at their supply chain and the way in which their clients are applying AI in their own operations.

We have seen the challenges the industry has faced with cyber cover, many of which have been down to the dynamic nature of the risks and the speed at which the technology has evolved.

If those who attended the summit are to be believed, the threats posed by AI are many multiples of those currently envisaged by the standalone cyber market.

While state-backed cyber attacks are already the subject of debate over coverage, will AI be another risk too far for the market in the coming years?