‘The AI can recommend a course of action, but the handler has to take the final decision,’ says chief executive

Despite the clear advantages of implementing modern forms of generative artificial intelligence (AI) in insurance processes – especially in areas such as claims management – the sector must eschew hype-based decision making and avoid overusing the technology in areas where it is not suitable.

This was the message agreed upon by attendees at the latest Insurance Times TechTalk Live event – a roundtable held on 6 November 2025 in London, in association with insurance technology supplier RDT.
Speaking at the event, Lucy Feng, head of pricing models at Axa UK, said: “The thing with AI is that you can’t expect it to give you the perfect answer to every problem. There’s lots of work that goes into making sure it provides efficiency gains in lots of areas, but it’s not going to be the be-all and end-all solution some people are looking for.”
Ruth Cameron-Errington, head of claims operations at Lloyd’s, added: “We need to be using AI in a targeted way to increase efficiency. If we, as an industry, start trying to use it for complex claims, then I don’t think that’s really the right use – people can struggle when handling those, so I’m not sure I want to see what an AI model would do.”
AI clearly has its uses, whether that be in long-established forms such as machine learning for liability ratings or in newer, generative forms that allow for advanced data summarisation and document interpretation.
For example, in the Insurance Times AI Claims Report 2025/26, published in October, 38% of surveyed insurance organisations reported that they were already using AI, with another 38% saying that they were piloting or testing AI tools.
Nearly half (47%) of respondents already using AI had deployed it in a counter-fraud capacity, with just over a third (35%) utilising it for image recognition and damage assessment.
However, Jon Mitchell, chief operating officer at RDT, explained: “We’ve worked with some of our customers to develop general solutions around document processing and ingestion, which lead on to claims creation or decision making through a task-based platform. Crucially, that allows companies to keep a human in the loop so that AI is not rejecting claims.”
Explainability and governance
Roundtable attendees also agreed that AI should not be used in ways that would create new problems for the sector, with special attention paid to areas such as governance and ethics.
Human operatives should still be part of the claims process, for example, so that customers do not find themselves in situations where an AI mistake leads to a claim being denied – which could also land an insurer in regulatory trouble.
Chloe Smith, Zurich’s customer and third party insights lead, explained: “Utilisation of AI depends on what the risk appetite is for what you are looking at. In some areas of your business you’ll need really high levels of accuracy, whereas you can accept a bit of deviance in others.
“For all the benefits AI can provide, there are negatives. We should all be a bit worried about decisions that it makes that are a lot harder to track, or situations where it has autonomy and we’re not in control.
“Most of the time, humans are aware of what they don’t know, which means they know when to slow down, but AI hasn’t quite cracked that yet.”
Mark Bates, chief executive at RDT, added: “The use of AI is giving you the right answer, but you have to be quite nuanced in asking the right question – you can’t just ask it ‘should I pay this claim?’”
Internal governance of how AI systems should be used is also vital, as is being able to explain decisions that the technology has contributed to.
Alex Price, a senior programme manager at Axa UK, said: “The key word for AI decision-making is recommendation. We have no automated decision making as an output of any AI decision. It’s key that you have explainability.
“AI isn’t going to solve everything and that shouldn’t be what we’re striving for – sometimes it’s not needed and therefore we shouldn’t be using it. That attitude should be at the forefront of technical decision making.”
Kate Bandhu, head of casualty claims at MS Amlin, explained: “From a technical claims handling perspective, across any line of business, AI is very much a tool to support you, not something to replace you. You still need to validate what it’s telling you and you still need to justify and rationalise your decision making.”
Those adopting AI must ensure that decisions made with the technology are fair – not just as a regulatory requirement, but as an ethical one.
Bates finished: “As a tech supplier, we’ve changed our approach significantly, so now it’s much more about exception handling. At the end of the day, all of these things are processes, so as long as that process is drawn out with the right decision points, then a system can effectively just guide the handler through them.
“The AI can recommend a course of action, but the handler has to take the final decision.”
