Compliance
AI-assisted emotion detection too risky for bank use
May 22, 2025

Artificial intelligence (AI)-assisted emotion detection and behavioural surveillance tools are too risky and unproven for use by banks, British members of parliament heard.
The House of Commons Treasury Committee’s inquiry on AI in financial services received written evidence that both lauded emotional and behavioural surveillance and warned about its harmful effects.
Conservative MP John Glen asked witnesses about AI’s ability to detect harassment, bullying and rogue trading, and to monitor performance. Some of the written evidence suggested AI surveillance tools could also detect traders’ stress levels and other “wellbeing issues”.
Jana Mackintosh, managing director, payments and innovation at UK Finance, said: “When it comes to recruitment, emotional monitoring [and] emotional manipulation, those are very high-risk applications and use cases when it comes to AI. It is not well understood enough or developed enough for us to feel certain that there are not any risks associated with deploying that confidently and comfortably.
“We do not see — certainly in the use cases that we have explored across banking and payments — that [emotional and wellbeing surveillance] are being used as use cases,” she added.
Reliable tools?
Glen asked whether that meant these were not reliable tools for monitoring and evaluating people.
“The technology and the adoption have been focused predominantly on either well-understood areas such as risk management and fraud detection, or emerging areas where there are low risk use cases with human monitoring over those use cases. I think some of the use cases that you are talking about we certainly have not seen being adopted in that same sense,” Mackintosh said.
Amandeep Luther, AI lead at the Association for Financial Markets in Europe (AFME), told the committee he “canvassed several of our member banks a couple of days ago to ask them about this point. None of the wholesale banks that I spoke to reported using technology of that nature”.
Witnesses noted that some surveillance is required by regulations such as the Markets in Financial Instruments Directive (MiFID II) and the Market Abuse Regulation (MAR) to detect market abuse and misconduct and to monitor customer communications. However, there are clear guidelines on how monitoring and surveillance can be used, and firms should follow privacy controls and consult privacy committees before implementing any surveillance technology, Luther added.
“This technology generally is ringfenced around business platforms, business hardware and business activities. At this point in time, none of the firms that we work with is using any kind of emotional detection using AI or has plans to implement surveillance of that nature,” Luther said.
Trade union highlights harms
Unite the Union, which represents workers in the UK financial services industry, including banks, insurance companies and outsourcers, submitted evidence showing how AI is being used in the sector. Alongside the advantages of AI managing day-to-day functions such as scheduling, Unite reported risks. For example, scheduling algorithms attempt to remove all downtime and “enforce great intensity”, which is “simply not viable for a human being in the workplace”, Unite said.
“The extreme pressure when micro rests are removed between calls in contact centres can leave people in a high state of constant stress and anxiety. Hyper efficiency driven by AI dehumanises the workplace, leading to huge mental health challenges,” it added.
Over-reliance on AI for decision making has a dehumanising effect on the workplace, removing “all compassion and discretion”, Unite observed: “An algorithm cannot recognise the importance of emergency family leave or a mental health crisis for a dependent.”