AI-powered chatbots have been known to hallucinate and spit out ‘unpredictable’ results, but could faulty accounting software from the late nineties be a sign of what’s to come with the technology?

There is a lesson to be learned for fans of the TV drama Mr Bates vs The Post Office, which recently renewed public interest in the Post Office scandal – and it’s to do with the risks of emerging technology in insurance.

Clare Ruel

The four-part drama, which first aired on 1 January 2024, dramatised the real-life story that unfolded between 1999 and 2015, in which more than 900 subpostmasters and subpostmistresses were prosecuted for theft after faulty accounting software wrongly reported money as missing from Post Office branches.

The Horizon accounting software that caused the issue was developed by information technology firm Fujitsu and first deployed by the Post Office in 1999.

Many subpostmasters and subpostmistresses flagged problems with the software at the time, but few were believed because it was widely assumed that the technology could not make mistakes.

As a result, some victims were forced to fork out their life savings to cover the technology’s blunders.

To me, the whole scandal is a warning about the dangers of artificial intelligence (AI) – and what could happen if the right regulation isn’t applied.

I recently caught up with James Teare, commercial and technology partner at law firm Bexley Beaumont. He told me that, when it comes to AI being used responsibly, the guardrails around the quality of data are paramount.

Computer says no

Teare stressed that the idea of people not being involved in decision-making processes and information being “automatic” was problematic, as it feeds into the false narrative that a computer cannot be wrong.

AI is an emerging technology and, as such, specific regulations have yet to be applied in the UK. One of the major problems with the technology is that AI can “hallucinate”, producing plausible-sounding but inaccurate information in a minority of cases.

Regulation of the technology is on the agenda for various organisations and featured in broker trade body Biba’s 2024 manifesto.

Teare explained: “AI can’t empathise, it won’t pick up that it’s being biased and there will always be a certain number of errors as the data is never going to be 100% accurate.”

He noted that chatbots “learn by making mistakes” and stressed the importance of keeping a “human in the loop” as sense checks are necessary.

Teare continued: “For the Post Office scandal, you can see the human in the loop oversight of what was happening with the software was dysfunctional. With that, the danger is everybody thinks AI is automatic and doesn’t require a human.”

If AI were to be used in similar decisions in the insurance sector, the danger would be at a “much greater scale”, he noted.
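To make that “human in the loop” point concrete, here is a minimal sketch in Python of the sort of safeguard Teare describes: the model’s output never triggers action on its own, and anything uncertain is escalated to a person for a sense check. All names, fields and the confidence threshold here are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical record of an automated decision. The field names and
# threshold below are illustrative, not taken from any real system.
@dataclass
class ModelDecision:
    case_id: str
    outcome: str        # e.g. "shortfall_detected"
    confidence: float   # model's own confidence estimate, 0.0 to 1.0

REVIEW_THRESHOLD = 0.95  # below this, a person must sign off

def route_decision(decision: ModelDecision) -> str:
    """Decide where an automated output goes next.

    The key point of the pattern: the model alone never triggers an
    irreversible action, such as a prosecution or a denied claim.
    Uncertain cases are escalated to a human reviewer.
    """
    if decision.confidence < REVIEW_THRESHOLD:
        return "human_review"  # a person sense-checks before any action
    return "auto_queue"        # logged and auditable, still not final on its own

# Example: a low-confidence shortfall flag is escalated, not acted on.
flag = ModelDecision("branch-042", "shortfall_detected", 0.71)
print(route_decision(flag))  # prints "human_review"
```

In practice, firms might also sample a fraction of high-confidence cases for human review, precisely because, as Teare notes, the data is never going to be 100% accurate.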

Who is to blame?

Were AI technology to provide the wrong information to a client, who would be to blame for any ensuing issues? 

Neil Garrett, UK and Ireland sales director at data and analytics firm Verisk, said: “That [question] is why regulation is needed.”

He explained that, were an AI model to be trained on “incorrect data”, it could be argued that the party who provided the data was to blame. However, if data was provided by a third party, for example, things could become rather complicated.

This is why it’s important for firms to complete their due diligence when implementing any technology – and especially AI.

In September 2023, the UK government quietly disbanded its AI taskforce, which it had launched only a few months earlier, in April 2023.

Speaking about the impact that the Post Office scandal may have on technology providers in the insurance sector, Garrett explained: “There will be even more hyperfocus on transparency and sustainability.”

In being transparent, however, there is also the challenge of ensuring any AI model’s decisions are explainable “without giving away the intellectual property from the supplier side”.

Garrett also believes that insurance policies covering AI mishaps will become increasingly popular.

“The challenge for the insurer is how to underwrite a policy like that. It’s not impossible because [insurers] are doing it with cyber security, which is a similar ilk,” Garrett added.

I agree with Teare that the Post Office scandal should “shine a light on the need to have very robust guardrails and governance procedures in place” going forward.

However, I also agree with Garrett that the scandal could put further pressure on tech providers regarding transparency and explainability.

As insurance is an industry with humanness at its core, I believe that humans will never be obsolete when it comes to comprehending risk. Technology certainly enhances certain processes, and long may this continue.
