Clues to accounting fraud are hiding in plain sight


When companies manipulate or bend earnings, the first step may be hiring someone willing to do it. Surprisingly, this might show up in plain sight, in the language of new accounting job postings. Using artificial intelligence and machine-learning models, Yi Cao, assistant professor of accounting at the Donald G. Costello College of Business at George Mason University, has uncovered a subtle link between how companies describe open jobs and their likelihood of engaging in reporting misconduct.

While past research on the broader financial market has focused on firm characteristics and the role of executives, an area that has largely been overlooked is the human capital element: the people behind the numbers, hired to make it happen. In a working paper, Cao and co-authors Nick Seybert of the University of Maryland and Chi Wan of the University of Massachusetts Boston shift the focus to personnel recruitment and to job postings that seek applicants predisposed to bend the rules. Ultimately, earnings management is often a team effort.

“It has always been a challenge for capital market participants to obtain a good tool to detect earnings management and fraud,” Cao says. “This includes regulators, auditors, investors, and other stakeholders. We are looking at this issue from a more grassroots level because, ultimately, all these numbers are not manipulated by the CEO, who doesn’t touch the work. It’s the people who are working on and changing these numbers.”

Cao and his co-authors analyze more than 40,000 accounting job postings from 2009 to 2022 using Google’s BERT, a large language model (LLM) known for its ability to detect nuance in text. After training the model, they classify the language of each posting as “rule-bender,” “rule-follower,” or “neutral,” revealing patterns in how hiring firms describe their ideal applicants.

A typical “rule-bender” sentence includes language such as “think out of the box,” “explore alternative solutions,” or “strategic data interpretation.” An expansive reading of these phrases could imply a less literal approach to rules and industry norms.

“Rule-follower” language tends more toward the straight and narrow: “ensure conformance,” “enforce compliance,” “provide accurate and timely financial reporting,” etc. 
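The three-way labeling the study performs with a fine-tuned BERT can be illustrated with a deliberately simple stand-in. The sketch below is hypothetical: the phrase lists merely echo the examples quoted above, and the function name and scoring are not the authors' pipeline.

```python
# Toy illustration only -- the study fine-tunes Google's BERT; this stand-in
# uses plain phrase matching just to show the three-way labeling idea.
RULE_BENDER = ["think out of the box", "explore alternative solutions",
               "strategic data interpretation"]
RULE_FOLLOWER = ["ensure conformance", "enforce compliance",
                 "accurate and timely financial reporting"]

def label_posting(text):
    """Label a job-posting sentence as rule-bender / rule-follower / neutral."""
    t = text.lower()
    bend = sum(phrase in t for phrase in RULE_BENDER)
    follow = sum(phrase in t for phrase in RULE_FOLLOWER)
    if bend > follow:
        return "rule-bender"
    if follow > bend:
        return "rule-follower"
    return "neutral"
```

In the actual study, a fine-tuned language model generalizes well beyond exact phrase matches, which is what makes BERT suited to detecting nuance in text.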

The researchers find that firms whose accounting job ads use “rule-bender” language are more likely to manipulate their disclosures to paint a rosier picture of their performance. Among other signals, the researchers use a series of abnormal-accrual and real-activities-based proxies to capture earnings management.
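The article doesn't specify which abnormal-accrual proxy the paper uses; a standard choice in this literature is the modified Jones model, sketched below on toy data. Total accruals (scaled by lagged assets) are regressed on the inverse of lagged assets, the receivables-adjusted change in revenue, and gross PPE; the regression residual is the "abnormal" (discretionary) accrual. All numbers, coefficients, and function names here are illustrative assumptions, not the paper's specification.

```python
def det3(m):
    # Determinant of a 3x3 matrix (cofactor expansion along the first row).
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def ols3(X, y):
    # Least squares with 3 regressors: solve the normal equations
    # (X'X) beta = X'y via Cramer's rule (fine for a tiny illustration).
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
    d = det3(XtX)
    beta = []
    for i in range(3):
        M = [row[:] for row in XtX]
        for r in range(3):
            M[r][i] = Xty[r]     # replace column i with X'y
        beta.append(det3(M) / d)
    return beta

def abnormal_accruals(firms):
    # Modified Jones model: regress TA/A on 1/A, (dREV - dREC)/A, PPE/A
    # (A = lagged assets); the residuals are the abnormal accruals.
    X = [[1.0 / f["assets_lag"],
          (f["d_rev"] - f["d_rec"]) / f["assets_lag"],
          f["ppe"] / f["assets_lag"]] for f in firms]
    y = [f["ta"] / f["assets_lag"] for f in firms]
    beta = ols3(X, y)
    return [yi - sum(b * x for b, x in zip(beta, row))
            for yi, row in zip(y, X)]

# Toy cross-section: seven firms whose accruals follow an assumed "normal"
# process exactly, plus one firm whose accruals are then inflated by 10% of
# lagged assets -- the kind of bloat the residual is meant to flag.
BETA = (0.5, 0.2, -0.05)   # assumed "normal" coefficients, purely illustrative
base = [(100, 10, 2, 40), (150, -5, -1, 60), (200, 20, 4, 90),
        (120, 8, 1, 50), (180, 15, 3, 70), (90, -2, 0, 30), (160, 12, 2, 80)]
firms = []
for assets, d_rev, d_rec, ppe in base + [(140, 10, 2, 55)]:
    normal = (BETA[0] / assets + BETA[1] * (d_rev - d_rec) / assets
              + BETA[2] * ppe / assets)
    firms.append({"ta": normal * assets, "assets_lag": assets,
                  "d_rev": d_rev, "d_rec": d_rec, "ppe": ppe})
firms[-1]["ta"] += 0.10 * firms[-1]["assets_lag"]   # the manipulating firm

resid = abnormal_accruals(firms)
```

The manipulating firm surfaces with the largest positive residual; in the paper's setting such proxies are computed from real financial-statement data, not synthetic numbers.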

Further, the study finds that the connection isn’t as strong for misconduct that would require changing actual business practices and operations. Cao suggests this is because accountants rarely influence operational decision-making; their roles are more closely tied to reporting. “Real earnings management needs managers from the real operational side to coordinate with these accountants to manipulate not only the numbers, but also the real inputs and outputs,” Cao says. But when there is little room left for accrual-based earnings management, meaning the firm’s balance sheet is already bloated, firms with rule-bending job postings are more likely to engage in real-activity manipulation.

Additionally, the researchers tested the reliability of these findings against competing machine learning models, PaLM and LSTM. The results remained robust across all models.

Though the study did not capture who was actually hired as a result of these job ads, Cao stresses that “our focus is not to dig out if the firms fill that position with someone who indeed possesses the psychological trait. As long as the firms put the word out, we take that as a signal of potential earnings management.” 

Job postings can serve as early indicators of a company’s intent, offering insight into its values without the need for insider access. By analyzing the public-facing language used in hiring, the study gives regulators and investors a new way to identify financial manipulation, detecting risk not just through numbers but through words.

“We utilize large language models as a tool that is capable of gauging verbal or textual variation—using this powerful tool to find information that can be indicative of financial misbehaviors,” Cao says. “This is beneficial to everyone in the financial market: policy makers, shareholders, tax agencies, and other stakeholders. So, we hope this study can bring new insight to a hot topic that people care about.”