Regulatory Reform
FCA announces live AI testing service: 'It's not about new regulation'
May 1, 2025

The UK Financial Conduct Authority (FCA) has announced a live artificial intelligence (AI) testing service and is seeking industry feedback in an engagement paper published on April 29.
FCA head of innovation Colin Payne told the Innovate Finance Global Summit (IFGS) 2025 that a standalone regulation for AI was not required, as the UK watchdog is confident its current regulations are sufficient to mitigate AI risks.
It was less a matter of new regulations and more about “clarity and regulatory confidence”, said Payne, adding that the FCA is focusing on “simplifying and streamlining” regulations to support innovation in financial markets.
Jessica Rusu, chief data, information and intelligence officer at the FCA, reiterated that existing regulations were sufficient. "We believe our existing frameworks like the Senior Managers and Certification Regime and Consumer Duty give us enough regulatory bite that we don't need to write new rules for AI," she said in her keynote speech.
The testing service will assess whether firms' AI tools are ready for deployment, helping to accelerate the adoption of AI among financial services providers. It will launch in September and is expected to run for 12 to 18 months.
Live testing “enables generative AI model testing in partnership between firms and supervisors, to develop shared understanding and explore evaluation methods that will facilitate the responsible deployment of AI in UK financial markets”, said Rusu.
The service will be open to all firms that meet the criteria outlined in chapter 3.8 of the engagement paper, which include having post-deployment monitoring plans. Depending on industry feedback, the FCA could make the service a permanent feature of its innovation offering; responses from stakeholders are due by June 10.
Responsible use
The FCA’s testing service is part of its wider AI Lab, a hub to promote “safe and responsible” use of AI in financial services. As part of the initiative, the FCA hosted an “AI sprint” in January, to discuss opportunities and challenges with 115 academics, regulators, technologists and consumer representatives.
The live AI testing service builds on the AI sprint summary published on April 23, which concluded there was a need for a “safe testing environment to encourage responsible innovation”.
The FCA plans to run a "supercharged sandbox" with AI-focused tech sprints and to enhance its Digital Sandbox infrastructure through "greater computing power, datasets and AI testing capabilities". "We look forward to inviting firms to collaborate and experiment in new ways," it added.
It plans to publish an evaluation report on the first wave of firms using the AI testing service by spring 2026.
Risk warning
Meanwhile, concerns were expressed in several quarters about the possible unaddressed risks of AI.
Earlier this week, Patrick Opet, chief information security officer at JPMorganChase, warned that AI was amplifying security risks at financial services firms.
"The explosive growth of new value-bearing services in data management, automation, AI and AI agents amplifies and rapidly distributes these [security] risks, bringing them directly to the forefront of every organisation," Opet said in an open letter to the bank's third-party suppliers.
Civil society groups have also been expressing concern about the march of AI into financial services. European non-governmental organisation (NGO) Finance Watch warned of the “fundamental” conflict between AI and principles of causality and accountability in financial regulations. In a report published in March, the non-profit called on policymakers and supervisors to reassess market instability and other risks associated with the use of AI in finance.
“AI systems generate outputs without clear explanations of their reasoning. This black-box nature places their decisions beyond human analytical ability and renders oversight impractical, if not impossible,” it said, announcing the report.
Finance Watch is particularly concerned about AI-driven credit assessments undermining public confidence in the fairness of financial markets. It warned that AI used in investment products cannot be adequately explained and could therefore introduce unrecognised errors, biases and risks into global financial systems.
"A free market economy without accountability is a licence to rip off customers," said Thierry Philipponnat, chief economist at Finance Watch. "By abandoning the AI liability regime, the European Commission is telling US tech giants that they can benefit from the EU market without any responsibility for what they are selling to EU citizens."