Multiple experts spoke at Insurance Times’ TechTalk Live roundtable, which focused on affirmative cover for artificial intelligence and the myriad impacts this technology is having on policies and practices
Brokers are the “logical latecomers to the party” when it comes to using artificial intelligence (AI), with an “old guard that still wholeheartedly believes” that certain insurance processes must be done a certain way.
That was according to Jason Cohen, executive director at Specialist Risk Group owned broker Hamilton Leigh.
Cohen was speaking at Insurance Times’ latest TechTalk Live event – a roundtable held on 16 July 2025 at London’s 14 Hills restaurant, in association with insurer Hiscox. The afternoon event was chaired by Insurance Times editor Katie Scott and well attended by broker and technology guests.
During the discussion, Cohen highlighted that – in his opinion – insurers are using AI far more than brokers are.
He explained: “There [are] brokers that are using it for documentation, production and maybe some marketing stuff. I don’t see loads of brokers adopting it at [a] large scale.”
Cohen added that brokers’ usage of AI will not go from “zero to 100” anytime soon, with there still being a “very strong reliance on contextualisation of cover, more so than it’s just binary”.
He continued: “If you were to look at it before [AI gained traction in the insurance market], brokers would have been the logical latecomers to the party in the sense that there is an old guard that still very much wholeheartedly believes in ‘no, this must be done in this way’ because it is the way it’s always been done.”
Barriers to entry
These comments come after Peter Hunter, head of research and development at software company Open GI, said at the 2025 Biba Conference in May that smaller brokers struggle to find a starting point when it comes to using AI.
Read: Major AI barrier for smaller brokers is ‘not knowing where to start’
Read: Gulf in AI adoption opening between national and regional brokers
Hunter stated that around 60% of brokers are not using AI and that a major barrier to entry for smaller, single office firms is simply “not knowing where to start”.
That lack of expertise is partly responsible for a growing technological divide between large and small brokers, with Hunter reporting that 46% of national brokers have implemented some form of AI initiative, while only 10% of single office brokers have done the same.
Other common barriers to entry cited by Hunter include “concerns around data security, difficulty measuring return on investment (ROI), lack of skills, concerns about integration with existing systems and prohibitive costs”.
Responding to Cohen at the TechTalk Live roundtable this month, Dan Henry, account chief technology officer for insurance and investments at Microsoft, said: “People are starting to get on board with [AI, but] you’re right, I’m definitely seeing brokers less so.
“I don’t know if that’s just because the insurers hold more data – so [AI is seen as being] much more valuable there and actually brokers are more about the initial conversation.”
However, while barriers to AI usage evidently exist, Aviva revealed in its April 2025 Broker Barometer survey that 85% of broker respondents would be interested or very interested in enhancing their operations with digital or automated processes – an increase of 15% since 2022.
Improving customer service and gaining a competitive advantage were the most popular reasons for wanting to update processes (52%), according to the survey results, followed by increasing new business (40%).
This research was conducted by Censuswide across 205 general insurance brokers.
Henry added: “[Microsoft has] recognised how we need to evolve as an organisation and actually [be] less reliant on certain types of roles that existed in the past.
“But [we] definitely see more people actually able to do more with what they have now, as opposed to having to go ‘cool, we’re going to cut this many staff’.
“The one thing I always get asked by brokers is ‘can I not just have it on my desk here? I have in-person conversations, I’m not always on a Teams call or having an email conversation’.”
AI governance
Participants at the roundtable also discussed governance around AI and the importance of validating what AI is saying to ensure that final decisions are as accurate as possible.
Read: Claims handling the number one target for human led AI integration
Read: Agentic AI presents a sink or swim moment for insurance sector
In a submission to the Treasury Select Committee in March 2025, the Chartered Insurance Institute (CII) – which represents more than 120,000 members – stressed that accountability around AI usage in financial services must be underpinned by rigorous validation and testing, to identify and mitigate discriminatory outcomes.
Henry said: “It’s really important that if you do start having that kind of multiagent setup where you put some information in and you get a result at the end – if the only answer you’re getting back is yes or no, that’s quite dangerous.
“If you look at ChatGPT, when you use it, if you say ‘is it going to be sunny [on] Wednesday’, [it does not just say] yes.
“It will say yes because of X, Y and Z and all of these different aspects. You need to make sure that you’re validating why [AI has] come to that decision.”
Responding to this point, Sam Haslam, director and risk and resilience advisory practice leader at WTW, said: “If challenged, you can confidently explain why you are happy to put that work forward as if it was your own.
“But, before you deploy a system like that, you want to validate it against human led processes and then check as it goes along.”
Insuring AI
During the discussion, Russ Shaw, founder of technology ecosystem focused firms Tech London Advocates and Global Tech Advocates, raised the question of how the insurance industry covers businesses that use AI themselves.
He asked: “You’ve got the actual models themselves, but then [there are the] companies that are using them. So, how do you determine if something’s gone wrong?
“Is it the actual user who’s using the AI tool? Could there have been a flaw that came out of ChatGPT that maybe led to some particular issue or problem?
“How do you delineate who has the responsibility?”
Thomas Lowin, London regional manager at Hiscox, responded: “For me, the critical part is the contract that the insured would have with their end client.
“So, if they’re responsible for delivering a system with an AI element, then you’re looking at what those deliverables are in the contract – and if it is just sourcing [an AI system rather than being] responsible for the delivery of the actual AI system itself, then there may be a different response [from insurers].
“It’s all about what the contract says. It doesn’t really matter [if an AI flaw] came from a third party. It’s about what the insured has signed up to.”
Henry added that, like any other tool, “you are ultimately responsible” for individual AI usage – but he referenced a car analogy to explain why cases may not always be clear cut.
“What if you’re driving a car [and] there’s a crash, but it’s actually because [there is] a fault with the brakes? Is there something broken in the technology itself that should be responsible?” he questioned.
This led Cohen to raise a point about coverage when it comes to autonomous vehicles.
In San Francisco and other parts of the US, driverless taxis are already a relatively common sight. But before they become a reality on the UK’s roads – potentially in 2026, if the government has its way – there are still a number of issues that need to be resolved.
Cohen said: “Where would liability sit if something happened with the vehicle that was outside the driver’s control? And then where does the claim actually go?
“[This is] because it’s to do with the technology in the vehicle rather than a driver and road risk is complicated enough at times in terms of getting a claim settled.
“I suppose it would be falling to AI really, those autonomous vehicles in that sense.”
Lowin added that when determining whether a policy may need to pay out on an AI related claim, there has to be a level of understanding between insurers’ “intent versus the wording” of any applicable policy too.
A similar example would be the intent of business interruption clauses to cover Covid-19 pandemic linked losses versus the detail of the policy small print, for instance.
Haslam, meanwhile, noted that these myriad complications mean “there is a good driver for why good AI governance coupled with progress management is important”.
He continued: “For any of these use cases, what you need to understand is what could go wrong based on the reliance that you are placing in these tools and what are the reasonable steps you can take to get that risk under control.”
In terms of the size and type of businesses exploring AI governance, Haslam said: “I’ve seen small businesses I’d start a conversation with and they would say ‘we’ve got an extensive AI policy in place, we’ve rolled that out and trained everyone’.
“On the flip side, I’ve got organisations [that] are clearly using some AI tools – I’ll ask for the basics and they don’t exist yet.
“Existing standards and quality of governance and control [in organisations] at the moment [do not] seem to be a good mapping to where they are in terms of AI [implementation] because it’s just moving so quickly.”
