Capital Law partner Nick Pester looks at the implications of AI and machine learning for insurance

“People are too busy thinking whether they can; they never think whether they should…”

AI and its subsets, machine learning and deep learning, are busy changing the way we do insurance. But are people too focused on what they can do, rather than on whether they should?

In the coming weeks, months or years (the timescale is as yet unknown), technological developments will overhaul the insurance sector. So, what can we expect?

With the advent of deep learning, insurers will be able to mine and interpret data never before available. Underwriting will become quick and automated, patterns of fraud will become easier to detect, and risk management will be perfected.

The crucial point in all of this is that it’s beyond the reach of humans. The data sets are so big, and so complex, that the patterns elude the human eye. Without AI and deep learning, these ideas cannot become reality.

Is this all good?

In a word, no.

First, let’s look at the basics. What we’re talking about here is creating an algorithm that looks at all past examples to predict future events. The problem is that people assume algorithms are objective. In her book Weapons of Math Destruction, Cathy O’Neil explains that algorithms which learn from past experience aren’t objective; they’re quite the opposite. They’re opinions embedded in code; they codify the status quo and trick us into thinking it’s fact. As a result, discrimination becomes rampant, as we take old truisms and cement them into future practices.
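To make that concrete, here is a minimal, purely illustrative sketch (the data, the ‘postcode area’ feature and the decision rule are all invented) of how a model trained on past underwriting decisions simply learns to repeat them:

```python
# A purely illustrative "model" that learns approval rules from past
# underwriting decisions. Every value here is fabricated.
from collections import defaultdict

# Hypothetical historical decisions: (postcode_area, approved?)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

# "Training": learn the historical approval rate for each postcode area.
by_area = defaultdict(list)
for area, approved in history:
    by_area[area].append(approved)
approval_rates = {area: sum(d) / len(d) for area, d in by_area.items()}

def predict(area: str, threshold: float = 0.5) -> bool:
    """Approve a new applicant only if past applicants from the same
    area were usually approved -- i.e. repeat the status quo."""
    return approval_rates.get(area, 0.0) >= threshold

# Two otherwise identical applicants, distinguished only by where they live:
print(predict("A"))  # True  -- area A was historically favoured
print(predict("B"))  # False -- area B inherits its historical rejection rate
```

The ‘model’ never asks whether the historical decisions were fair; it just reproduces the pattern it was shown, which is precisely O’Neil’s point.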

Lest we forget GDPR…

Big companies are looking at how to harness our data. The concept of a Digital Fingerprint allows insurance companies to access any data related to you from across the internet – your Facebook, Tinder, Twitter, anything – and use it to make their evaluations. If you’re seen drinking in numerous Facebook photos, or smoking, or on the phone in the car, your insurance premiums will go up, or your claim will be denied. Deep learning allows machines to scan and, for all intents and purposes, understand photos. Couple this with algorithmic bias, and even we’re a bit scared.

In fact, issues with data are already surfacing. An American multinational was recently sued, unsuccessfully, for asking employees to provide their biometric data in exchange for discounts on health policies. But what happens when companies don’t have to ask? When they can tell all they need to know by analysing your internet profiles?

The irony…

Morality aside, there’s a troubling irony in all of this.

Insurers need claims; insurance is, in essence, an investment in compensation for possible future damages. Further, they need ‘bad claims’ – it validates their existence when something goes wrong and they have to pay out.

Let’s assume that we get to the point where risk management has been perfected. Insurance companies can look at you, your digital fingerprint, and all the previous data they hold on everyone else, and predict exactly where the risks in your life will be. As a result, you mitigate those risks, making the likelihood of them materialising minute. And when you know all the risks, and how to minimise them, the likelihood of you taking out insurance against them plummets.

So ‘perfect risk management’ could threaten the traditional insurance model.

An added complication

Alongside this we have the GDPR coming in May (in case you didn’t know), and before it has even come into force the regulations are arguably outdated. Innovation is moving at a million miles an hour, and regulation is struggling to keep up. With deep learning, for example, how can you hold something accountable when, by its very design, it’s supposed to break the rules?

There will come a time, and it may be sooner than we think, when the most valuable employees at any insurance company will be data scientists. And, in terms of service providers, expect to see ‘algorithmic auditing’ firms become commonplace.
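What might that auditing look like in practice? As a purely hypothetical sketch (the groups, the decisions and the four-fifths threshold are illustrative assumptions, not a prescribed standard), an auditor might start by comparing a model’s approval rates across groups of applicants:

```python
# A hypothetical sketch of one check an algorithmic auditor might run:
# compare a model's approval rates across groups and flag large gaps.
# Group labels, decisions, and the 0.8 threshold are illustrative only.

def approval_rate(decisions: list) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower approval rate to the higher one; values well
    below 1.0 suggest one group is faring noticeably worse."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# The model's decisions for two (fabricated) groups of applicants:
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the commonly cited 'four-fifths' rule of thumb
    print("Flag for review: outcomes differ sharply between groups.")
```

A real audit would go much further than this, but even a check this simple makes the point: someone has to look.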

So how do insurers prepare and adapt?

The clear shift towards the customer in terms of control over personal data leaves insurers in a tricky position. It is uncontroversial to say that insurance doesn’t always have the friendliest face, and the possibility of the traditional insurance model effectively becoming redundant might not upset the consumer too much. So, in our opinion, the time is approaching for insurers to start reimagining themselves as risk managers, focusing proactively on mitigating risk rather than on a compensatory culture.

Ultimately, the longer-term path for insurers must be to become that friendly face: one that is there to help customers minimise risks, but also to compensate when the unlikely happens, which, as we all know, it does.