’There’s a bit of an arms race going on between fraudsters and counter fraud professionals in the use of AI,’ says head of fraud

The ongoing development of artificial intelligence (AI) powered fraud techniques such as deepfakes could slow down the progress of digitalisation and straight through processing in the insurance industry. 

That was according to counter fraud experts at last week’s Fraud Charter roundtable (14 May 2024) – hosted by Insurance Times and sponsored by law firm Carpenters Group. 

A deepfake is video, audio or an image that has been convincingly altered or manipulated to misrepresent someone as doing or saying something that was not actually done or said – with dangerous potential to be used by fraudsters.

In one recent high profile case in Hong Kong, fraudsters used generative AI to create a realistic likeness of a company’s chief financial officer on a video call and instructed an employee to transfer them over £20m. 

Commenting on the dangers of these techniques for the counter fraud sector, Shift Technology customer success manager Laura Horrocks explained: “The frightening thing is that you need less than 30 seconds’ worth of content to be able to create totally new vocal content through deepfake strategies.

“We may think ‘why would anyone do that?’, but [fraudsters] can then incept a policy or create a claim. This goes directly against the whole move towards straight through processing because [a fraudster] using these techniques is going to get through a lot of the barriers that straight through processing has in place.” 

Kellie Lacey, Crawford Legal Services’ head of intelligence and claim insight, added: “There needs to be a balance between straight through processing and protecting the customer, where we minimise the impact on genuine customers trying to make claims.” 

Horrocks added that while detection methods for this sort of technique were expensive, the solution involved “layering detection methods”. 

James Burge, head of counter fraud at Allianz, added: “The human element still plays a part here to be able to detect this, because detection software is expensive. People are still critical in the [counter fraud] journey and we should all question our own organisations to ask whether AI is already being used against us.”

AI arms race

When discussing the fraud risks that the development of more refined generative AI could create, counter fraud experts said that the industry must be switched on to its own capabilities. 

Lacey noted: “The whole story of advancement in AI and digital is a double-edged sword. 

“Generative AI and machine learning have come on leaps and bounds in the last few years and the industry certainly takes advantage of it to try and stop fraud, but there is the reverse side where fraudsters have also taken advantage [of these tools].”

Mark Allen, the ABI’s head of fraud and financial crime, echoed this sentiment, saying: “There’s a bit of an arms race going on between fraudsters and counter fraud professionals in the use of AI.”

And on the essential prerequisites the counter fraud sector had to secure, he added: “We need a conducive regulatory environment to allow us to use these tools to stay one step ahead of the fraudsters.”

Meanwhile, Burge noted that the counter fraud sector had to act quickly to ensure it wasn’t outgunned in this arms race. 

He said: “As an industry, we need to look at our speed and pace – how quickly can this evolve?

“We have been talking about it for years, but we need to build upon it a lot faster if we’re going to be able to stop it.”