As automation, AI and robotics pick up speed in insurance, the industry responds to Lloyd’s reports on the risks and opportunities

The deployment of AI and robotics to overhaul the workings of insurance is nothing new.

From basic administrative duties to claims management, these advances are changing the face of the industry. But with the publication of two reports by Lloyd’s last week, the risks and opportunities of these technologies were laid out for all to see.

The reports were ostensibly intended to provide underwriters with guidance on “best practice” for the use of AI and robotics in the short and longer term.

But the reports have left figures across the insurance industry asking: should robots retire?

Robots retiring

The subject of what risks and opportunities AI and robotics pose to brokers and firms across the industry will be discussed at the upcoming Insurance 2025 event in July, but Simon Clayden, chief operating officer at AXA UK, told Insurance Times that the firm has been “living and breathing” these risks for the past 12-18 months.

AXA has been a leader in the adoption of robotics, deploying three robots last January to carry out repetitive administrative tasks and saving 18,000 man-hours per year.

And Clayden agreed with what the first Lloyd’s report said around trust, ethics, security and safety, with trust being his biggest challenge, and likened AI and robotics to a pre-school child.

“You have got to spend a great deal of time teaching them and setting them boundaries, around what you will have them do. We spent a great deal of time at the outset considering what those ‘golden rules’ would be – that was one of the big learnings for us right at the start,” he added.

Clayden said that this is an ongoing process, “as the technology evolves and becomes more sophisticated, you will need to ensure that those rules remain valid and the support that you have around the robotics and policies” will need to be reviewed.

But he said that the firm is in a good place at the moment, citing the privacy policy AXA deploys regarding AI and robotics. He stated that the robot is by no means replacing the human in the decision-making process, reiterating that it replaces tasks, not people, and frees staff to spend time with customers and be empathetic.

He said: “The staff had direct involvement in giving the robots human names. It engages employees and it helps with guarding against cyber crime as real names are harder for criminals to spot than, for example, robot1.”

Clayden said that AI is not good at communicating uncertainty in results; its strength is identifying patterns and trends. Because it deals in black-and-white terms, caution around this type of automated decision-making must be maintained.

Overall, he said AXA has learned that, like people, robots can and should retire, with a robot’s lifespan dictated largely by the process it automates.

Pace

The ABI believes that AI has the power to transform insurance, including claims handling, underwriting and product offerings.

But a spokesperson from the ABI warned: “Insurers have a long track record of using new technology to improve their operations and customer service. But, as with any rapidly evolving technology, AI brings challenges as well as opportunities, and its application in the industry needs to be carefully considered and monitored to ensure it delivers real benefits for firms and customers alike.”

Martyn Beauchamp, chief customer officer at Slater and Gordon, thinks the industry is only beginning to see the potential of such technology.

He told Insurance Times: “We are already deploying AI to support those who contact us with certain enquiries to deliver a better customer experience. We’re also experimenting with advanced machine learning to speed up process intensive tasks, freeing up colleagues to focus on what they do best – helping customers.

“Like any new technology, it’s important to be alert to the risks and ensure safeguards are put in place to mitigate them, but we’re very positive about this innovation. We believe it will deliver huge benefits to customers by making services cheaper, quicker and more accessible.”

Re-training staff

AI and intelligent automation present a huge opportunity for staff to re-train, said Terry Walby, chief executive and founder at Thoughtonomy: “Re-training people whose existing work is being automated to give them the skills to oversee AI and automation programmes is a great way of avoiding the challenge of trying to recruit new talent, when access to high quality digital skills is so difficult.

“However, insurance companies must recognise the need to prepare the wider workforce for the introduction of AI and automation, to ensure they have the skills, mindset and understanding to support and complement the contribution of AI and digital labour. Much has been written about the need for more digital skills within workforces, but human workers will also need to display greater levels of creativity, objectivity and agility to maximise the benefits of AI and to drive their organisations forward.”

The World Economic Forum predicts that the average worker will need an extra 101 days of learning by 2022 to prepare for this.

“Insurers should look to create ‘automation champions’ across each function of the business to help their peers to become more comfortable working alongside AI, to reassure staff and help them to understand the benefits both to the business and to individual workers. The ultimate goal for insurance companies should be to instil a positive ‘culture of automation’, where people are proactively looking to automate some of their work to free up their capacity to focus on more interesting, high-value work they enjoy.

“We talk about AI and the virtual workforce empowering people to maximise their full potential. It may seem counter-intuitive, but the best AI and intelligent automation programmes are essentially about people; about how they can be best deployed to add value to the business and how organisations can achieve more with their existing workforce,” Walby added.

Victimless crime

In its two reports, Lloyd’s warned that, as AI increases in complexity, cyber breaches are likely to have a bigger impact.

And given the legal uncertainty, it is unclear who would be liable in the event of a claim.

Cate Wright, global insurance product manager for BAE Systems, which provides security and anti-fraud solutions to the insurance sector, said that insurers need to be aware that these technologies can create new types of risk.

Wright continued: “People are more likely to feel OK about defrauding or lying to robots than human beings – for example, taking the extra chocolate bar that a vending machine mistakenly drops into the tray than to shoplift a second bar of chocolate in a newsagent. There’s a perception that stealing from a machine is a victimless crime, even though it very rarely is.”

Wright added that the majority of people will manipulate seemingly trivial details on a quote to game their premium, such as where the vehicle is left overnight, as seen with online personal motor applications.

Broker opportunity

But these new technologies also hold an opportunity for brokers, according to Stuart Walters, chief information officer at BGL Group. He said: “We already use complex machine learning algorithms in many areas across our business, including pricing, the digitisation of voice services and fraud prevention, among others.”

BGL is currently trialling new software, integrated technology and the art of customer conversation design. The group also intends to continue its investment in voice recognition, natural language processing (NLP) and intent modelling.

“Used effectively, artificial intelligence opens up a world of possibilities,” he said. “What’s crucial is that companies pick the right use cases and remain cognisant of the challenges, including, as the reports’ authors identify, trust and acceptance issues.

“Appropriate human oversight of machine learning and AI systems is vital. At BGL, for example, we’ve created a data ethics committee to make sure we are holding ourselves to account in our use of data, with full governance controls, challenge and oversight.

“We’re also focused on using the skills and experience of those people in our operation who have been delivering high quality service to our customer base for years to guide and shape our future AI capabilities,” he added.

To find out more about the risks and opportunities AI and robotics hold for the insurance industry, put your questions to the experts at Insurance 2025.