Risk Management

Clearer governance, more training needed as financial firms adopt AI

September 3, 2025

Women working in risk and compliance want clearer governance around ownership of the artificial intelligence (AI) being deployed by financial institutions, according to the findings of a ‘pulse’ survey shared exclusively with Compliance Corylated.

Surveys by both UK and European financial regulators last year found that upwards of 75% of financial services firms were already using some form of AI in their operations.

The pulse survey was conducted in August by T3 — a female-led management consultancy specialising in AI risk and regulatory change — among members of networking group Risky Women. The full findings will be presented at an event in London on September 9.

Around a third of respondents said their employer had no clear owner of AI governance in place, or that where an owner existed, this had not been communicated throughout the firm.

T3 partner Gwendoline Grollier said it was important for risk practitioners to understand who was responsible for AI within their organisation, as this “is one of the most important things you need in any governance arrangement”. She added that the lack of a clear structure was acting as a “significant blocker” to firms’ successful adoption of AI.

The survey also found that no single model for AI governance has yet emerged among those firms that have established an ownership framework.

Respondents also want more training, and believe companies are missing out on the full potential of AI by not offering employees more opportunities to experiment with the technology in a controlled environment.

Training

Respondents said the lack of clear ownership had a knock-on effect on training. Some 55% felt they lacked the skills to use AI effectively, and only 33% said they had received training on AI.

In the European Union, the AI Act has required firms to ensure AI literacy among their workforce since February 2025. Companies could ultimately face civil litigation if the use of AI systems by inadequately trained employees harms another business or consumers.

In July, a study from the Massachusetts Institute of Technology (MIT) found that 90% of workers surveyed were regularly using GenAI to carry out tasks at work. However, only 40% of their employers had a GenAI enterprise subscription, meaning many of these workers were using AI tools that had not been officially signed off by their employers. MIT dubbed this practice “shadow AI”.

The T3/Risky Women pulse survey reflected this finding. Jen Gennai, T3 partner and head of responsible AI, said much AI usage was for “basic reasons”, such as personal day-to-day efficiency including summarising documents or preparing content, rather than more advanced applications such as predictive modelling or real-time monitoring for decision-making.

Positive on AI

Overall, the majority (75%) of respondents viewed AI positively, with none reporting a “strongly negative” view. This finding stood out to T3, given recent media coverage suggesting that AI will make human workers redundant.

Gennai said the high level of positivity towards AI among respondents might reflect the fact that risk and compliance professionals see themselves as providing “essential human oversight”.

“It makes sense, as AI can’t regulate itself. It can be a helpful tool to identify where risk is occurring and to analyse, quantify and contextualise risks in a much better way but there always would have to be human oversight for risk management, for that last line of defence,” she said.

Gennai worked at Google for 17 years, latterly setting up and leading its global responsible AI team.