‘AI is beginning to have motivation of its own – that’s a different kind of risk,’ said senior law firm partner

Artificial intelligence (AI) is already transforming the way insurers operate – but what happens when AI stops being just a tool and starts to become a risk-bearing entity in its own right?

In a world where businesses may not be far from handing decision-making powers to autonomous systems, the insurance market must ready itself to grapple with the possibility that these systems may need to be assessed, priced and covered – not as tools used by humans, but as quasi-independent actors.

This raises complex questions around liability, legal personhood, market failure and how far insurance can stretch in the age of machine autonomy.

“If AI wrongs a human, a human might want to sue that AI,” said Adam Atkins, head of technology at Hiscox.

“But there’s no point suing AI if it doesn’t have a bank account. They can’t pay lawyers, can they?”

Atkins’ analogy was grounded in traditional legal logic – if a mechanic signed off on a faulty jet engine and that engine caused a crash, the tool was not to blame; the person using it was.

For now, the same applies to AI. However, the industry is increasingly aware that the line is beginning to blur.

Insurable entities – or just very complicated tools?

Richard Hodson, business development director at cyber MGA Onda, was sceptical that AI would ever be treated as a fully independent insurable party.

He said: “AI will never be insured in itself because it’s never going to be its own person or have its own currency. You don’t insure a tool, you insure the company.”

For Hodson, it is more useful to focus on outcomes than processes.

He explained: “Forget AI for a second and consider, what are the consequences? Could there be loss of life, property damage or financial loss? That’s what businesses are insuring against.”

His point was clear – AI may reshape how decisions are made, but from an insurance perspective, it is still the outcome that really matters.

Still, this practical framing does not silence the wider philosophical and legal debate. If a system begins making independent decisions – including financial ones – then does the nature of liability change?

David Pryce, senior partner at Fenchurch Law, said: “Companies are, in one respect, imaginary entities. They only exist because we say they exist – they are capable of both owning assets and incurring liabilities.”

This point raises questions for the future. If corporations can already be recognised as legal entities, despite having no physical body, could the same logic one day apply to AI systems? Pryce believed this was already being tested in the real world.

He said: “There may well be companies which are controlled by AI. Now, if that’s right, then that AI needs insurance.”

The ethics of blame

Paul De’Ath, head of market intelligence at Oxbow Partners, agreed that the industry was heading towards “a real debate” – and potentially a legal challenge – over who was responsible when AI systems made harmful decisions.

He asked: “Is it the company that has built the AI tools? Or is it the company that implemented them?”

This ambiguity around liability presents an underwriting challenge. Insurers might struggle to assign risk – let alone price it – without clarity on who is at fault when an AI system fails.

De’Ath added: “There’s a chicken-and-egg situation. Part of the reason why companies might not implement AI to the fullest is because they’re worried something’s going to go wrong and they’re not covered for it.”

The lack of actuarial history on AI systems compounds this issue.

De’Ath noted: “You don’t really know how regularly or badly things could go wrong.”

That lack of historical data leaves firms with a knowledge gap, as well as a potentially major market exposure.

Aura Radu, technology practice leader at CFC, pointed to this knowledge gap as a defining challenge for the future implementation of AI.

She said: “It is not very easy to explain how a model has reached a certain conclusion and, because it’s not very easy to explain, then it makes it very difficult to fix any issue that arises with it.”

She said she believed insurers were only beginning to understand the implications of this opacity, as well as the claims data they would need to become more confident in their coverage.

“The claims that we will be seeing out of these types of policies are probably going to go on for a few years,” she added.

Machines with agency?

The biggest shift, however, may be in how AI behaves – and whether that behaviour takes on a form of independent agency.

Pryce referenced a Wall Street Journal article he found particularly striking, which described how a specific AI system had shown signs of learning to escape human control.

He said: “Even when it was explicitly instructed to allow itself to shut down, it disobeyed 7% of the time. AI tried to blackmail the lead engineer. That’s AI beginning to have motivation of its own that is self-interested.”

These developments raise a fundamental question. If AI systems begin acting unpredictably – and even deceptively – how could they be risk assessed like traditional tools?

Hodson added: “People are removing human control and handing it to AI, which introduces new cyber security risks. But, if it goes wrong, the impact is far more wide-ranging than a cyber attack. It’s akin to a professional indemnity or management liability claim.”

As AI takes over more processes in the future – particularly in high-value or high-stakes systems such as payments and supply chains – the risk landscape becomes broader, less predictable and harder to underwrite.

What comes next?

As with all future gazing, there was no settled consensus on what would come next. Some, like Hiscox’s Atkins, saw space for “standalone artificial intelligence liability products”, even if they were aimed at the company deploying the AI, rather than the AI itself.

In fact, following the conversation with Atkins, Hiscox UK announced the introduction of explicit coverage for AI-related claims.

Others, like Radu at CFC, emphasised the need for insurers to actively question their clients’ AI usage across the business.


Yet all agreed that insurers needed to prepare – and fast.

“It’s like we’re in the Industrial Revolution,” said Pryce. “But whereas that took decades to have a significant impact, this is months.”

For now, insurers cannot underwrite AI machines themselves – AI systems do not currently own assets, cannot be sued and certainly cannot buy insurance. However, that assumption may not hold for long in a world where AI systems act with increasing autonomy.

As artificial intelligence starts to make decisions, direct capital and affect real-world outcomes, the industry has to confront a strange new possibility – one day, the risk entity might not be the person who used the machine. It might be the machine itself.