

FCA tech head claims AI enabling ‘Minority Reportʼ-like detection of financial harm


July 3, 2025

The Financial Conduct Authorityʼs (FCA) deployment of artificial intelligence (AI) has given the regulator the ability to detect financial harm two and a half years sooner than it previously could, said chief data, information and intelligence officer Jessica Rusu this week.

“Iʼm really pleased to see that our intelligence, network analytics capabilities, combined with the power of [large language models], has been able to — almost in a ‘Minority Reportʼ kind of way — identify emerging harm two and a half years sooner than it would otherwise take to materialise,” Rusu said at City Weekʼs AI and Digital Innovation Summit in London.

“Two and a half years of harm is a significant amount of time to capture bad actors before they have time to take economic value out of the economy,” she added.

In the 2002 science-fiction film Minority Report, Tom Cruise stars as a police officer who arrests people for crimes they have yet to commit, based on technological predictions.

The FCA press office could not give specific details behind the two-and-a-half-year claim, which does not appear in the version of the Rusu speech published on the regulatorʼs website.

The FCA said it operated an innovative platform designed to enhance risk management by combining rule-based and model-based risk signals from multiple data sources. This multi-layered approach created a “unified, interpretable firm-level risk score, making it easier to identify and manage potential risks”.

The system’s early detection capabilities allow it to flag high-risk firms significantly earlier than traditional systems, the FCA said, adding that by “fusing expert-driven rules with AI and machine learning, the system captures both known and emerging risk patterns, ensuring a comprehensive and forward-looking risk management strategy”.
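The FCA has not published how its platform works internally, but the multi-layered approach it describes — fusing expert-driven rules with model-based signals into a single interpretable firm-level score — can be illustrated with a minimal sketch. Everything below (the rule names, thresholds, weights, and the blending scheme) is a hypothetical example for illustration, not the regulator's actual methodology:

```python
def rule_score(firm):
    """Expert-driven layer: each triggered rule contributes its weight.

    Rule names and thresholds here are invented for illustration.
    """
    rules = [
        ("late_filings", firm.get("late_filings", 0) > 2, 0.3),
        ("complaint_spike", firm.get("complaints_mom_change", 0.0) > 0.5, 0.4),
        ("new_high_risk_product", firm.get("high_risk_products", 0) > 0, 0.3),
    ]
    return sum(weight for _, triggered, weight in rules if triggered)


def combined_risk_score(firm, model_prob, rule_weight=0.5):
    """Blend the rule-based signal with a model-based probability
    (e.g. from a trained classifier) into one 0-1 firm-level score.

    Keeping the two layers separate before blending is what makes the
    final score interpretable: each component can be inspected on its own.
    """
    rule_component = min(rule_score(firm), 1.0)  # cap the rule layer at 1.0
    return rule_weight * rule_component + (1 - rule_weight) * model_prob


# Example: a firm that trips two rules and scores 0.8 from the model
firm = {"late_filings": 3, "complaints_mom_change": 0.6}
print(combined_risk_score(firm, model_prob=0.8))  # 0.5*0.7 + 0.5*0.8 = 0.75
```

In a real system the model probability would come from machine learning trained on historical outcomes, letting the combined score capture emerging patterns the hand-written rules miss — the "known and emerging risk patterns" the FCA refers to.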

Leeds AI sprint

The FCA will host a sprint for fintech firms to experiment with artificial intelligence (AI) in Leeds this September, Rusu said during her keynote. “Itʼs really important to understand that we have opportunities for firms everywhere in the UK,” she said.

The regulator has been building its presence outside London for the past few years and its Leeds office now houses around 300 staff. According to data from industry body TheCityUK published today, two thirds of the nearly 2.5 million people who work in financial and professional services in the UK are based outside London.

Nvidia

In her speech at the event, Rusu also gave an update on the FCA’s deal with Nvidia to provide a sandbox powered by the latterʼs AI. She repeated her claim that it would “supercharge” the regulator’s ability to help “innovative firms to build their proofs of concept”, adding that the FCA had been flooded with applications for the sandbox.

“Weʼve seen different use cases, such as financial inclusion, financial wellbeing, fincrime, and fraud,” Rusu said. The motivation behind the Nvidia deal was to “level the playing field for start-ups” by enabling them to build their proofs of concept in the UK.

Rusu added that the regulator had a deep understanding of technology, AI and data science, and that it was evolving its approach to ensure firms knew it was “open for AI business”.

The Nvidia AI sandbox is distinct from the regulatorʼs AI Live Testing lab. The former allows firms to experiment at the beginning of their AI journey, while the latter is for firms that wish to test an AI solution that is “further along in development”.

According to research by the Bank of England, in 2024, 75% of UK financial institutions were already using AI in their businesses. The percentage is even higher for insurance firms, at 94%.

Separately, the European Insurance and Occupational Pensions Authority (EIOPA) is conducting a survey to establish how much generative AI is used by the insurance firms it regulates.