
Conduct & Culture

Can AI determine if your CEO is lying?


July 31, 2025

Artificial intelligence (AI), robotics and large language models are often touted for their capacity to relieve human effort by extracting meaning from vast and varied datasets, or doing repetitive tasks more efficiently.

Now the technology is increasingly being employed to analyse human speech and determine whether the speaker is being truthful — and, therefore, trustworthy.

At last month’s IMpower FundForum conference in Monte Carlo, senior leaders in the investment community commented that we are entering a world where AI can determine “whether your chief executive officer (CEO) is lying during earnings calls”. 

According to Thomas McHugh, CEO of investment and data management solutions provider Finbourne, AI makes it easy to highlight any anomalies within earnings calls, or to identify anything that is different or stands out “from the previous 15 earnings calls”.

CEOs of publicly traded companies must present quarterly results to explain performance variances and highlight strategic accomplishments. Besides prepared statements, they frequently have to respond to spontaneous questions from investors, journalists, and analysts. 

Confidence, or otherwise

Their on-the-spot responses can affect market perception and stock valuation — as well as offering clues on whether the CEO is being truthful about the company’s performance. 

By using techniques such as natural language processing (NLP) and sentiment analysis to scrutinise human language and behaviour, AI can uncover subliminal and non-verbal indications of confidence (or otherwise) in a company’s future performance.
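To make the idea concrete, here is a minimal sketch of sentence-level sentiment scoring on earnings-call answers, using the open-source Hugging Face transformers library and its default sentiment model. It illustrates the general technique only; the sample sentences are invented and this is not any vendor’s actual system.

```python
# Minimal sketch: scoring the sentiment of earnings-call sentences with an
# off-the-shelf model. Illustrative only -- not any vendor's pipeline.
# Requires the `transformers` package (plus a backend such as PyTorch).
from transformers import pipeline

# General-purpose sentiment model; a finance-tuned model (e.g. FinBERT)
# would be a more realistic choice in practice.
classifier = pipeline("sentiment-analysis")

answers = [
    "We remain confident in our full-year guidance.",
    "There are, uh, some factors we are still evaluating.",
]

for sentence in answers:
    result = classifier(sentence)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:<8} {result['score']:.2f}  {sentence}")
```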

Researchers from the University of Minnesota and Goethe University Frankfurt have been examining how AI can be used to identify these deceptions and their potential impact on investments and the stock market. 

Both Doron Reichmann of Goethe University Frankfurt’s department of finance and John Heater of the University of Minnesota Twin Cities’ department of accounting are currently working on a follow-up to their 2024 paper, Slow Tone: Detecting white lie disclosures using response latency.

Gaining an edge

According to Reichmann, investors would be very interested in this type of analysis because “everyone wants to gain an edge”. 

“If you get this piece of information, you can make money off it,” he added. 

Meanwhile, “for any investor, getting just a small hint on whether the CEO or chief financial officer (CFO) is not really telling the truth can result in not trading on this piece of information while everyone else does”.

The research paper examined 300,000 calls over a number of years to determine whether there is any connection between senior executives’ response times when making public statements about their businesses and disclosures that were subsequently found to have been deceptive.

The study suggests that longer response latency, particularly when combined with a so-called ‘slow tone’ implying hesitation or cognitive effort, may be an indicator of a white lie or deceptive statement. 

The idea of response latency being an indicator of a lie “has been around in the psychology literature for over 100 years,” said Reichmann. 

For their paper, Reichmann and Heater analysed decades of public statements and earnings calls, more than 300,000 interactions in all, using AI speech recognition models to measure the milliseconds it took for managers to respond to analysts’ questions. What emerged from the research was a correlation between slow responses from C-suite leaders and the simultaneous occurrence of operational or business problems within the company.
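The latency measurement itself is conceptually simple once a recording has been diarised with word-level timestamps. A minimal sketch follows; the Turn structure and the sample timings are hypothetical, not the paper’s actual code.

```python
# Minimal sketch: measuring response latency from a diarised transcript with
# word-level timestamps. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "analyst" or "executive"
    start_ms: int  # timestamp of the turn's first word
    end_ms: int    # timestamp of the turn's last word

def response_latencies(turns):
    """Milliseconds between the end of each analyst question and the
    start of the executive's reply."""
    latencies = []
    for prev, nxt in zip(turns, turns[1:]):
        if prev.speaker == "analyst" and nxt.speaker == "executive":
            latencies.append(nxt.start_ms - prev.end_ms)
    return latencies

turns = [
    Turn("analyst", 0, 4_200),         # question ends at 4.2 s
    Turn("executive", 5_350, 20_000),  # reply starts 1,150 ms later
]
print(response_latencies(turns))  # [1150]
```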

Non-verbal clues

From both a regulatory and investor perspective, analysis of company performance has moved from structured documents that “companies put a lot of effort into crafting” to “interactive venues”, such as earnings calls, where CEOs are “kind of put on the spot”, said Reichmann. From a legal or regulatory perspective, this type of “non-verbal” analysis could be useful when investigating financial malfeasance or fraud, he said. 

However, he added: “I’m not saying that response latency as an indicator will hold up in court or anything, but it might be a measure that helps you spot those critical responses”, especially, from a regulator’s perspective, where something seems “off” and merits further investigation.

‘All-star’ influence

Further evidence of the benefits of AI in determining honesty was found in a separate study, The Tangled Webs We Weave: Examining the effects of CEO deception on analyst recommendations, published in the Strategic Management Journal.

Researchers used machine learning (ML) models to operationalise the likelihood of CEO deception, as well as analysts’ suspicions of CEO deception on earnings calls.
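Neither study’s exact model is reproduced here, but the general shape of “operationalising” a deception likelihood can be sketched as a simple classifier over call-level features. Every feature, value, and label below is invented for illustration; the published research used more sophisticated models and real outcomes.

```python
# Minimal sketch of turning call-level features into a "deception
# likelihood" score. Features, data, and labels are invented; this is not
# the published studies' methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per call: [mean response latency (s),
# negative-sentiment share, hedging-word rate]
X_train = np.array([
    [0.8, 0.05, 0.01],
    [2.4, 0.20, 0.06],
    [1.0, 0.08, 0.02],
    [3.1, 0.25, 0.09],
])
# Hypothetical labels: 1 = the call's disclosures were later found deceptive
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

new_call = np.array([[2.7, 0.18, 0.07]])
print(model.predict_proba(new_call)[0, 1])  # deception-likelihood score
```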

After controlling for analysts’ suspicions of deception, the study found that analysts are more likely to assign superior recommendations to deceptive CEOs; the effect was particularly pronounced among analysts ranked as ‘All-Stars’. However, it also found that the deception was less effective with analysts who had been repeatedly exposed to it.

The research also revealed that the benefits of CEO deception are lower for habitual deceivers, pointing to diminishing returns from deception. 

Asked whether it would rely on AI analysis of non-verbal cues from CEOs, the UK Financial Conduct Authority (FCA) said: “Under the Market Abuse Regulation, individuals, including the directors of companies whose shares are admitted to trading, are prohibited from disseminating false and/or misleading statements that may unduly influence the price of those shares. We expect directors to take this into account when communicating publicly.

“We review all allegations that we receive, regardless of whether they are generated by AI or other sources. If an allegation turns out to be true and a CEO has misled investors, we will consider what action to take in that scenario.”

Rubbish in, rubbish out?

With any analysis conducted via sophisticated AI tools, the quality and diversity of the data used is crucial to producing trustworthy results. McHugh warned that while Finbourne has seen clients use these types of analysis to make better investment decisions, machine learning tools are only as good as the data on which the models are trained. 

“Most training data is from white, American CEOs, who all tend to be 40 to 50 years old,” he said. 

However, he said, models that use quality data and achieve accuracy rates “above 50%, maybe as high as 70%” produce results that can genuinely help investors make informed decisions. The key is to look for “anomalies” in a senior leader’s speech: “Tell me if there’s anything different from the previous 15 earnings calls that stands out.”
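That “anything different from the previous 15 calls” framing amounts to anomaly detection against a rolling baseline. A minimal sketch using z-scores is shown below; the features and values are hypothetical, and real systems would use richer representations of each call.

```python
# Minimal sketch: flagging a call whose features deviate from the previous
# 15 calls, using simple z-scores. All feature values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Rows = the previous 15 calls; columns = features such as mean response
# latency (s), hedging-word rate, negative-sentiment share.
history = rng.normal(loc=[1.2, 0.03, 0.10],
                     scale=[0.2, 0.01, 0.03],
                     size=(15, 3))
latest = np.array([2.1, 0.07, 0.22])  # the call being screened

# Standardise the latest call against the baseline distribution.
z = (latest - history.mean(axis=0)) / history.std(axis=0)
anomalous = np.abs(z) > 3.0  # flag features more than 3 sigma from baseline

print(z.round(1))    # per-feature deviation from the previous 15 calls
print(anomalous)     # which features "stand out"
```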

AlphaSense is used by a number of investment firms for AI sentiment analysis. However, a spokesperson said the firm does not market it as a way to judge executives on their “honesty”. 

“At AlphaSense, we use advanced language models to analyse spoken and written financial communications — including earnings calls — to surface patterns and signals that might otherwise be missed, such as positive and negative sentiment. Our approach is rooted in transparency and helping professionals make more informed decisions,” said the spokesperson.

Meanwhile, Damien Barry, head of global investor and distribution solutions, EMEA, at cloud-based financial services technology provider SS&C Technologies, believes it is “pretty clear” that research houses are using AI to look for sentiment in market events and announcements. 

“It’s definitely possible to fact-check an audio stream almost in real time,” he said. “Obviously, you need a human in the loop. From my perspective, it’s obvious you need a human. It’s a very subjective kind of truth.”