During the latest Insurance Times TechTalk Live event, the discussion turned to customers' increasing use of AI to write policy inception or claims submission materials, and the non-deliberate misrepresentation that AI-generated inaccuracies can cause. How should the industry face this emerging threat while safeguarding the customer experience?
WE ASKED: “How should the insurance industry and regulators adapt their approach to AI-written claim or policy inception submissions to prevent accidental application ‘fraud’, or misrepresentation, while preserving the customer journey?”
Thomas Housden, claims product owner, RDT

AI-written claims submissions are becoming more common and they bring unique risks. Customers usually use these tools in good faith, but the output can still include errors or details that don’t quite line up.
Traditional claims processes weren’t built for this, so insurers need systems that can validate what’s been submitted, spot anomalies and flag issues before processing begins.
Without these safeguards, small mistakes can escalate, slowing resolution and increasing costs – and in some cases even influencing the settlement offer a customer receives.
Regulators are adapting too. Many of the existing standards were created for a world where every claim was written directly by the customer, not generated by an AI tool.
What we need now is guidance that reflects how claims are actually being submitted. That could mean clearer expectations around basic verification, simple audit trails and when insurers should take a closer look at a claim that doesn’t quite add up.
It doesn’t require heavy-handed rules – just practical guidance that supports fair, outcome-focused checks, whether the claim was written by a person or by AI.
For insurers, this isn't only about speed. AI-generated claims call for sensible verification, anomaly detection and clear guidance for customers.
With balanced controls and the right oversight, we can protect trust in the process and deliver fair outcomes.
Ben Fletcher, director of fraud, Allianz UK

Customers need to understand that, even if they use technology to help submit their claim, they still need to check the details are valid.
Equally, insurers need to understand not just the detail, but the intent behind the detail at every stage of the process.
The good news is that this should be an evolution for insurers and not a total revolution.
We are already well versed in fair processing notices and in helping customers understand the importance of submitting correct and valid information, as well as how that data will be used. However, continuing to educate consumers on the risks around AI will be important.
Most insurers will have a multi-layered control environment and, if they notice anomalies, should not make automated decisions on any one element alone, but give customers the opportunity to explain those discrepancies.
As with any discrepancy, establishing both intent and materiality will continue to be essential to determining the right course of action and outcome.
Teams of professionals already make this assessment every day, and AI will simply be the next phase.
The far bigger risk for the market is organised fraudsters hiding behind various forms of AI-enabled content, rather than genuine customers being adversely impacted for honest mistakes.
Paul Holmes, partner, DWF Law LLP

We are already seeing policyholders draft submissions and complaints using AI when claims are under investigation or questions are asked, and there is potential for errors to creep in when these submissions are said to be from the policyholder's own hand.
We have yet to see any cases – that we know of – where AI has been used to incept a policy and cause inception mistakes.
However, that could happen – and be particularly problematic – if the AI sits at insurer system level in the inception or quote system.
It is important that insurers have confidence in their vulnerable customer protocols to ensure that, for example, the customer who may struggle to read and write does not fall back on AI and inadvertently make mistakes.
Having said all of that, I think insurers must be wary of policyholders alleging that they have used AI and that it is 'not their fault' they have misrepresented something, either when incepting a policy or when submitting a claim.
AI does not negate the legislative requirement, under both the Insurance Act 2015 and the Consumer Insurance (Disclosure and Representations) Act 2012, to make correct representations to insurers at inception and claims stage.
Unfortunately, we do already see fraudsters falsely claiming vulnerability when they are ‘caught out’ because they realise it is advantageous.
We must remember that even genuine vulnerability is not carte blanche to mislead insurers, or to be careless or reckless in disclosures.
