‘Any regulation must be proportionate enough to foster beneficial innovation,’ says chief executive

The FCA has outlined how it plans to regulate artificial intelligence (AI) and big tech following the government’s call for the UK to be the global hub of AI regulation.

Earlier this week (12 July 2023), the regulator published a report aimed at stimulating conversation on areas where big tech entry is likely to create the biggest competition benefits for consumers.

It also looks at where there is the greatest risk of significant harm if competition does not develop effectively.

Big tech in insurance refers to tech giants such as Amazon and Google entering the insurance space.

As part of its regulatory approach, FCA chief executive Nikhil Rathi said big tech’s role as the gatekeepers of data in financial services will be under increased scrutiny.

In a speech at the Economist Impact Finance Transformed event on the same day the framework was published, he said: “We have announced a call for further input on the role of big tech firms as gatekeepers of data and the implications of the ensuing data-sharing asymmetry between big tech firms and financial services firms.

We are also considering the risks that big tech may pose to operational resilience in payments, retail services and financial infrastructure.

“We are mindful of the risk that big tech could pose in manipulating consumer behavioural biases.

“Partnerships with big tech can offer opportunities – particularly by increasing competition for customers and stimulating innovation – but we need to test further whether the entrenched power of big tech could also introduce significant risks to market functioning.”

Accountability 

This came as the EU was working on a set of rules to govern AI, with a draft version suggesting that companies using generative AI will have to disclose copyrighted material used to develop their systems.

Rathi admitted that the regulator still has “questions to answer about where accountability should sit – with users, with the firms or with the AI developer”.

This includes how much compensation or redress customers should receive if AI goes wrong and they lose out.

“Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss in trust and confidence, which can be deleterious for financial services and very hard to win back,” he said.

“One way to strike the balance and make sure we maximise innovation but minimise risk is to work with us, through our upcoming AI Sandbox.

“While the FCA does not regulate technology, we do regulate the effect on – and use of – tech in financial services – [including insurance].”

He added that the senior managers and certification regime gave the FCA a clear framework to respond to innovations in AI.

Rolled out in 2018, it is designed to reduce harm to consumers and strengthen market integrity by making individuals more accountable for their conduct and competence.

Rathi said: “There have recently been suggestions in Parliament that there should be a bespoke SMCR-type regime for the most senior individuals managing AI systems, individuals who may not typically have performed roles subject to regulatory scrutiny but who will now be increasingly central to firms’ decision-making and the safety of markets.

“This will be an important part of the future regulatory debate.”

Global hub

Rathi also welcomed the government’s call for the UK to be the global hub of AI regulation.

Earlier this year (28 April 2023), Prime Minister Rishi Sunak launched a £100m taskforce to help the UK build and adopt the next generation of AI.

Rathi said that with AI, insurance firms have the ability to hyper-personalise products and services, better meeting customers’ needs.

Rathi said: “We are training our staff to make sure they can maximise the benefits from AI.

“We have invested in our tech horizon scanning and synthetic data capabilities and this summer have established our digital sandbox to be the first of its kind used by any global regulator, using real transaction, social media and other synthetic data to support fintech and other innovations to develop safely.

“Internally, the FCA has developed its supervision technology. We are using AI methods for firm segmentation, the monitoring of portfolios and to identify risky behaviours.”