Speaking at a recent Forum of Insurance Lawyers (FOIL) webinar, Melissa Collett, professional standards director at the Chartered Insurance Institute (CII), explains why insurers must be “explicitly careful” with data

The surveillance of an individual’s activity using mobile phone data, for example, could be a threat to privacy and insurers must be “explicitly careful”.

This was the view of Melissa Collett, professional standards director at the Chartered Insurance Institute (CII), who was speaking at the recent Forum of Insurance Lawyers (FOIL) webinar, titled ‘The Ethics in AI (Artificial Intelligence) in Insurance part two’.

Artificial intelligence (AI) is being used increasingly in everyday life, for example to detect dementia from how someone uses their phone. Some insurers are already using AI phone data to monitor activity levels, such as John Hancock in the US and Vitality in the UK.

Collett said: “More concrete steps need to be taken, addressing these real risks with certain sets of data and insurers being explicitly careful about how it manages this data. But this is only really the first step.”

Ethical standards

She explained that the CII decided it was going to take a lead on ethical standards for the insurance industry, convening a data and ethics forum with representatives including practitioners, trade bodies and lawyers.

“We developed the digital ethics companion [guide], which makes key recommendations about how to ensure the right consumer outcomes with the rise of this technology. One key recommendation is transparency,” she added.

Referencing telematics black boxes, she questioned why more people are not taking up this kind of cover, stating that this may point to the trust gap in insurance.

She named other concerns too, which included algorithmic bias around ethnicity and gender.

On the flip side, she also highlighted the benefits of AI, including disease prevention, driverless cars and increased productivity.