FCA: Outcomes-based approach will help protect consumers in AI-driven world

15 July 2023

The FCA has said that its outcomes-based approach will not only protect consumers but also encourage ‘beneficial innovation’. It also hopes that AI will help close the advice gap.

In a speech given to attendees at the ‘Economist Impact, Finance Transformed: exploring the intersection of finance 2.0 and web3’ event in London, FCA chief executive Nikhil Rathi said the financial sector is seeing AI-based business models coming through its Authorisations gateway, both from new entrants and within the 50,000 firms it already regulates.

The FCA said the introduction of the Consumer Duty later this month, together with the Senior Managers & Certification Regime, provides a strong framework to address many of the issues that come with AI.

“And with these developments, it is critical we do not lose sight of our duty to protect the most vulnerable and to safeguard financial inclusion and access,” Rathi said.

“We will remain super vigilant on how firms mitigate cyber-risks and fraud given the likelihood that these will rise. We are open to innovation and testing the boundaries before deciding whether and what new regulations are needed.

“Any regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss in trust and confidence, which when it happens can be deleterious for financial services and very hard to win back.”

Rathi acknowledged that there are many opportunities driven by AI, including a boost to productivity and the creation of new jobs. He also cited the ability to use Generative AI and synthetic data to help improve financial models and cut crime, as well as delivering “better, more accurate information to everyday investors, not just the wealthiest customers who can afford bespoke advice”. In addition, Rathi pointed to the use of AI in the personalisation of products and services to people and the ability to tackle fraud and money laundering more quickly and at scale.

However, he warned that AI also carries risks that can affect the integrity, price, transparency and fairness of markets.

Rathi said: “We have observed how intraday volatility has doubled and amplified compared to during the 2008 financial crisis. This surge in intraday short-term trading across markets and asset classes suggests investors are increasingly turning to highly automated strategies.

“There are other risks too, involving cyber fraud, cyber attacks and identity fraud increasing in scale and sophistication and effectiveness. This means that as AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate at the same time.

“There are also potential problems around data bias. AI model outcomes depend heavily on accuracy of data inputs. So what happens when the input data is wrong or is skewed and generates a bias?”

Rathi said that poor-quality or historically biased data sets can have exponentially worse effects when coupled with AI, which augments the bias.

As a data-led regulator, the FCA said it is training staff to ensure they maximise the benefits of AI.

He said: “We have invested in our tech horizon scanning and synthetic data capabilities, and this summer have established our Digital Sandbox to be the first of its kind used by any global regulator, using real transaction, social media, and other synthetic data to support Fintech and other innovations to develop safely.

“Internally, the FCA has developed its supervision technology. We are using AI methods for firm segmentation, the monitoring of portfolios and to identify risky behaviours.”

However, Rathi said that the control of AI will require a globally co-ordinated approach.

He added: “The FCA plays an influential role internationally both bilaterally and within global standard setting bodies and will be seeking to use those relationships to manage the risks and opportunities of innovations and AI.”

Professional Paraplanner