CPAs: How to Spot AI Powered Deception and Fraud - Pamela Meyer

How You Can Detect AI Fraud:
Deepfakes and Digital Deception

Key Takeaways About AI Fraud and Deception

  1. AI-powered deception is making receipts, voices, and documentation look authentic, raising the stakes for AI fraud detection in accounting.
  2. Deepfake scams exploit trust and urgency, so finance teams must harden verification steps for payment and approval requests.
  3. Human interviewing and behavioral analysis skills remain essential because technology alone can’t reliably detect intent or deception.
  4. Synthetic identity fraud blends real and fake data, allowing criminals to pass standard checks and stay hidden longer.

What Is AI Deception?


AI deception is the use of artificial intelligence to generate convincing but false information intended to manipulate decisions, actions, or beliefs. These systems can produce realistic voice recordings, videos, documents, and written messages that impersonate real individuals or organizations.

Common examples include deepfakes that simulate a person’s face or movements on video, voice cloning that reproduces speech patterns from short audio samples, and AI-generated emails, invoices, or receipts designed to appear authentic. These tools allow deceptive materials to be produced quickly and at large scale.

The underlying mechanism of AI deception is behavioral rather than purely technical. These systems replicate the signals people associate with credibility, including familiar voices, recognizable authority figures, and polished documentation. The result is a digital artifact that appears legitimate even when the surrounding context is false.

AI deception therefore represents an evolution of traditional fraud and social engineering. Long-standing psychological pressures such as urgency, authority, and emotional leverage remain central to these schemes. Artificial intelligence primarily changes the speed, cost, and realism with which those tactics can be deployed.

Because the manipulation targets human judgment, detecting AI-enabled deception often depends less on identifying technical artifacts and more on evaluating behavior, context, and verification procedures. Understanding how deception operates remains one of the most reliable defenses against AI-assisted fraud.

How Can Accountants Detect Deception in the Age of AI Fraud?


Artificial intelligence has fundamentally changed how deception shows up in accounting, auditing, and fraud investigations. Receipts can now be fabricated in seconds. Voices and faces can be convincingly cloned. Entire narratives can be generated that sound plausible, confident, and internally consistent. In many cases, the documentation looks perfect.


Technology is becoming essential in AI fraud detection, but it is also accelerating the scale, speed, and sophistication of deception. According to the ACFE's Anti-Fraud Technology Benchmarking Report, more than 90 percent of organizations now use data analytics as part of their anti-fraud programs, and the use of AI and machine learning is expected to nearly triple in the next two years.


At the same time, global fraud losses continue to rise, driven by cybercrime, social engineering, and AI-powered deception that outpaces traditional controls. This creates a paradox for CPAs, auditors, and fraud professionals. The tools are getting smarter, but the risks are getting harder to see.


That risk is amplified by the growing effectiveness and scale of AI-assisted phishing. AI-automated phishing emails have achieved 54 percent click-through rates compared with 12 percent for standard attempts, a 4.5 times increase. More concerning still, AI automation may increase phishing profitability by as much as 50 times by allowing highly targeted attacks to be scaled across thousands of recipients at minimal cost.


As deception expert Pamela Meyer has long argued, the tools may change, but the underlying behavioral pressures remain familiar. That is why detecting AI-enabled deception now requires more than anomaly detection. It requires a disciplined way to verify identity, evaluate pressure, and separate appearances from behavior.


The Meyer Framework for Detecting AI Deception


AI tools can fabricate voices, images, documents, and narratives that appear legitimate at first glance. Because these artifacts can be highly convincing, detection cannot rely on technical clues alone. Instead, investigators need a structured way to evaluate the request itself: who is making it, how it is delivered, and whether the surrounding circumstances make sense.


The framework below focuses on behavioral verification rather than technological artifacts. Each step helps separate appearance from reality and reduces the chance that urgency or authority will override sound judgment:

  1. Verify identity independently. Do not rely on a voice, face, email address, or document alone. Use a second channel or a known point of contact to confirm that the person is real and authorized.
  2. Analyze urgency. Deception often works by creating time pressure. When a request demands immediate action, especially around money, access, or confidentiality, slow the process down.
  3. Check whether the channel makes sense. Fraud often arrives through an unfamiliar number, a changed account, a rushed video call, or a message that bypasses ordinary approval patterns.
  4. Look for resistance to verification. When someone discourages callbacks, refuses standard documentation, or insists that normal procedures do not apply, that behavior matters.
  5. Confirm the story, not just the artifact. AI can generate a convincing receipt, invoice, email, or video. The real question is whether the surrounding facts, timing, relationships, and incentives hold together.

Why AI Cannot Replace Human Judgment in Fraud Detection


AI excels at pattern recognition. It can scan millions of transactions, flag anomalies, and surface correlations that no human could detect alone. But even the most advanced systems still require human validation to be useful.
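To illustrate the division of labor described above, here is a minimal, hypothetical sketch in Python: a statistical filter flags outlier transactions, but each flag is only a candidate for human review, never a conclusion about intent. The data and the z-score threshold are illustrative assumptions for the example, not part of any real anti-fraud product.

```python
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=2.0):
    """Flag transaction amounts more than z_threshold standard
    deviations from the mean. A flag is a candidate for human
    review, not evidence of fraud; the threshold is illustrative."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        # All amounts identical: nothing stands out statistically.
        return []
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Illustrative data: routine payments plus one unusual transfer.
payments = [120, 135, 118, 140, 125, 130, 9800]
print(flag_outliers(payments))  # the $9,800 transfer is flagged for review
```

Note what the sketch cannot do: it surfaces the unusual payment, but only a person can ask whether the payee, timing, and approval trail make sense.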


Fraud is not only a data problem. It is a behavioral problem. A system may identify an outlier, but it cannot reliably interpret motive, detect manipulation, or judge whether a person is using urgency, authority, or intimidation to force a decision. Those are still human tasks.


This is especially important in cases involving AI-powered impersonation. Deepfakes can imitate appearance. Voice cloning can imitate tone. AI-generated receipts and supporting materials can imitate legitimacy. None of that proves the request is genuine. In practice, the strongest defense is still careful verification combined with sound judgment.


How Is AI Changing Fraud?


One of the clearest examples of this shift in AI usage is expense fraud. Generative AI can now produce receipts that are indistinguishable from real ones, complete with accurate logos, fonts, timestamps, and totals. A recent Accounting Today article detailed how even experienced reviewers and OCR systems were unable to tell the difference between fabricated and authentic receipts.


Deepfake fraud presents an even more alarming risk. In one real case detailed in the Journal of Accountancy, the global engineering firm Arup lost approximately $25 million after an employee participated in what appeared to be a routine internal video conference with senior leadership. During the call, the employee saw and heard individuals who looked and sounded exactly like Arup’s chief financial officer and other executives. Their facial movements, voices, and conversational timing appeared natural and familiar, and the discussion followed established internal protocols.


Citing confidentiality and urgency, the executives instructed the employee to initiate a series of wire transfers. Believing the request to be legitimate and reinforced by what appeared to be real time visual confirmation, the employee complied. Investigators later determined that every participant on the call, except the victim, was a deepfake generated using AI cloned video and voice, and that none of the executives had actually been present.


These schemes succeed not because accountants lack intelligence, but because deception exploits human behavior. Research shows that only about one third of corporate fraud is ever detected. That number may fall further as AI deception becomes faster, cheaper, and easier to deploy.


AI Detection, Deception, and Fraud

Warning Signs of AI-Enabled Deception


AI-generated fraud is designed to look convincing. In many cases the voice sounds authentic, the documentation appears polished, and the message seems to come from a trusted authority. Because of this realism, the most reliable signals of deception often come from behavior rather than technology.


When reviewing unusual requests, accountants and fraud professionals should pay attention to patterns like the following. Individually they may seem minor, but together they often signal manipulation:

  1. Urgency designed to bypass verification. The request demands immediate action before anyone can slow down and confirm details.
  2. Identity that cannot be independently verified. The sender wants to be trusted based on appearance, voice, or apparent authority rather than standard confirmation.
  3. Authority impersonation. The message appears to come from a senior executive, client, vendor, regulator, or someone whose role discourages pushback.
  4. An unusual communication pattern. The request comes through a new number, a different platform, a changed account, or an unfamiliar workflow.
  5. Pressure to bypass normal controls. The person asks you to ignore approval steps, confidentiality rules, or routine payment procedures.
  6. Documentation that looks perfect but lacks context. The invoice, receipt, or supporting record appears polished, yet the surrounding facts do not fully make sense.
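The red flags above lend themselves to a simple triage rule: no single signal decides anything, but several together should trigger escalation. The sketch below is a hypothetical illustration of that idea; the signal names and the two-flag threshold are assumptions for the example, not a published standard.

```python
# Hypothetical red-flag triage: escalate when multiple behavioral
# signals co-occur. Signal names and threshold are illustrative.
RED_FLAGS = {
    "urgent_deadline",       # urgency designed to bypass verification
    "identity_unverified",   # no independent confirmation of sender
    "claims_authority",      # senior executive, regulator, key vendor
    "unusual_channel",       # new number, platform, or account
    "bypasses_controls",     # asks to skip approvals or procedures
    "context_mismatch",      # polished documents, facts don't add up
}

def should_escalate(observed_signals, threshold=2):
    """Return True when two or more recognized red flags co-occur.
    Any one signal alone may be innocent; clusters warrant review."""
    hits = RED_FLAGS & set(observed_signals)
    return len(hits) >= threshold

print(should_escalate({"urgent_deadline"}))                     # one flag alone
print(should_escalate({"urgent_deadline", "unusual_channel"}))  # a cluster
```

The design choice mirrors the behavioral guidance: individual signals are weak evidence, but a cluster of them is the pattern that matters.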

Deepfakes and AI Fraud Tools Are Improving


Recent research shows that generative AI is dramatically lowering the cost, skill, and time required to create convincing fraud. According to Juniper Research, global fraud losses are projected to grow 153 percent by 2030, rising from $23 billion in 2025 to $58.3 billion, driven largely by synthetic identity fraud and AI-enabled impersonation.


These synthetic identities blend real and fabricated data, allowing them to pass traditional verification checks and remain undetected longer.


At the same time, scammers are rapidly adopting generative AI tools to scale deception. Data from TRM Labs shows that reported generative-AI-enabled scams increased 456 percent between May 2024 and April 2025, following an already significant rise the year before. Deepfake scams are now the most commonly reported form of AI fraud, with criminals using cloned voices, faces, and live video deepfakes to impersonate executives, colleagues, and trusted authorities.


As Pamela Meyer has emphasized in her work on deception, realism is not proof. That principle matters even more now that AI can generate convincing voices, faces, and supporting materials at scale.


Academic research confirms that this trend is not slowing. A 2025 study published in Issues in Information Systems shows that advances in generative adversarial networks and diffusion models have made deepfakes increasingly realistic and increasingly difficult to detect, even as detection technologies improve. The study notes, “Detection systems struggle to keep pace with deepfake generation because deepfakes continue to evolve and bypass improvements in detection.”


Protection and Verification Protocol for Finance Teams


When AI makes deception more realistic, the answer is not paranoia. It is process. Strong verification habits reduce the odds that urgency, authority, or polished digital evidence will override judgment.

  1. Verify requests through a second channel. If a payment, approval, or sensitive request arrives by email, text, or video call, confirm it through a separate and known contact path.
  2. Require confirmation for money movement. No urgent transfer, account change, or payment release should move forward on the strength of one message or one meeting alone.
  3. Use known contacts, not reply paths. Contact the executive, client, or vendor through an established number or address rather than replying directly to the incoming request.
  4. Slow down urgency. Build a rule that urgent financial requests trigger more scrutiny, not less. Pressure is a cue to verify, not a reason to skip steps.
  5. Escalate suspicious communications. If something feels off (an unusual channel, odd timing, resistance to confirmation, or a mismatch in context), escalate it before acting.
  6. Train teams on behavioral signals. Staff should know that realism is no longer proof. The relevant question is whether the behavior, process, and story all align.

The Risk of Overconfidence in Deception Detection


Ironically, well-trained professionals are often more vulnerable than they realize. Studies consistently show that people overestimate their ability to detect deception, while performing only slightly better than chance. This is dangerous in an AI-driven environment, where documentation can look flawless and confidence can be manufactured.


How to Spot the Lies


While AI is being used to develop new fraud techniques, it has not changed the underlying psychology of deception.


Lying still creates cognitive load. Research shows that when people lie, deceptive responses typically take longer and require additional mental effort to fabricate information, suppress the truth, and monitor the listener.


Deception still produces emotional leakage. Studies of deception indicate that high-stakes lies are accompanied by brief involuntary emotional expressions, such as fleeting micro-expressions of fear or anxiety, because liars experience heightened emotional arousal that can leak out even when they attempt to control it.


Stories told under pressure still fall apart. Research on deceptive storytelling shows that when individuals are under cognitive or emotional pressure, fabricated narratives are more likely to break down over time, revealing inconsistencies, omissions, or shifts in detail.


Pamela Meyer, the author of Liespotting, TED speaker, and one of the world’s leading experts on deception detection, has spent decades studying exactly how these signals appear and how professionals can reliably surface the truth. Her research shows that deception is best detected not by searching for single tells, but by observing clusters of verbal and nonverbal indicators, establishing behavioral baselines, and strategically increasing cognitive load through expert questioning.


The Human Skills That Technology Can Never Replace


Effective deception detection today requires accountants to strengthen the human layer that sits on top of analytics and controls.


That means learning how to:

  • Establish what normal behavior looks like for a client or colleague before asking hard questions
  • Recognize when answers are overly simplified, evasive, or linguistically distancing
  • Spot inconsistencies between words, tone, and body language
  • Ask questions that disrupt rehearsed narratives
  • Notice post-interview relief and what happens when the conversation resumes
  • Maintain professional skepticism without becoming adversarial

These are research-based techniques that can be taught, practiced, and applied.


Pamela Meyer’s NASBA-approved courses are designed specifically for this purpose. They help CPAs, auditors, and fraud professionals develop the behavioral literacy needed to detect deception when technology alone cannot provide answers.


Learning to Detect Deception While Earning CPE Credit


If you want to strengthen your ability to detect deception in an AI-driven fraud landscape while earning NASBA-approved CPE credits, Pamela Meyer offers several self-paced, online courses tailored for financial professionals.


These courses focus on the human judgment, interviewing, and behavioral analysis skills that technology cannot replace. They are practical, evidence-based, and designed for real-world financial and audit contexts.


To learn more, check out the article, CPA’s: 10 Tips to get the Truth.

Glossary/Key Terms in AI-Enabled Fraud

Artificial intelligence has introduced new tools for deception, but many of the underlying fraud strategies remain familiar. The terms below describe some of the most common technologies and tactics used in AI enabled scams.

Deepfake
A deepfake is a video or audio recording generated using artificial intelligence that convincingly imitates a real person’s face, voice, or movements. Deepfakes are often used in fraud schemes to impersonate executives, public officials, or trusted contacts.

Voice Cloning
Voice cloning uses artificial intelligence to reproduce a person’s speech patterns and tone from a short audio sample. Criminals can use cloned voices to stage convincing phone calls or voice messages that appear to come from someone the victim knows.

Synthetic Identity Fraud
Synthetic identity fraud combines real and fabricated information to create a new identity that appears legitimate. These identities may pass traditional verification checks and can remain active for long periods before the fraud is discovered.

Generative AI Fraud
Generative AI fraud refers to scams that use AI systems to produce realistic text, images, documents, or media designed to deceive victims. These tools allow criminals to generate convincing materials quickly and at large scale.

Impersonation Fraud
Impersonation fraud occurs when a criminal pretends to be a trusted individual or organization in order to obtain money, information, or access. AI tools now allow impersonation to include realistic voices, faces, and written communications.

Social Engineering
Social engineering is the practice of manipulating people into revealing information or performing actions that benefit the attacker. These schemes often rely on urgency, authority, fear, or trust rather than technical hacking.

Frequently Asked Questions About AI Fraud and Deception

How can you tell if something is a deepfake?

You can tell if something is a deepfake by combining technical verification with behavioral analysis, rather than relying on how realistic it looks. Deepfakes are designed to appear authentic, often featuring cloned voices, familiar faces, and convincing conversational timing. In many cases, the visuals and documentation look flawless, which makes visual inspection alone unreliable.

Because detection systems struggle to keep pace with generative AI advances, human judgment remains essential. Deepfake fraud often exploits urgency, authority, and established internal protocols to lower skepticism. The real warning signs may emerge in how the request is framed, how quickly action is demanded, or whether confidentiality is emphasized in unusual ways.

How can you avoid AI scams?

You can avoid AI scams by strengthening your interviewing process, preparing strategically, and refusing to rely on intuition alone. Well-trained financial professionals are often overconfident in their ability to detect fraud, yet research shows detection rates are frequently no better than chance. Preparation and structured questioning are your first line of defense.

Do not assume that the absence of obvious signs of deceit means a request is legitimate. When you are rushed, tired, or underprepared, you are more vulnerable to clever fabrications and exaggerations. Instead, plan your questioning strategy carefully and ask unexpected questions that raise cognitive load and surface material facts.

It is equally important to stay objective. Focus on facts rather than pursuing a person, and be willing to open an investigation even when it feels uncomfortable or time consuming.

You detect fraud that uses AI-generated documents or receipts by combining data analytics with structured human judgment and behavioral analysis. Generative AI can now produce receipts with accurate logos, fonts, timestamps, and totals that appear indistinguishable from authentic documentation, which means visual review alone is no longer sufficient.

AI fraud detection tools can flag anomalies and narrow the field of risk, but they cannot assess causation, context, or intent. Human validation is essential. When documentation looks flawless, shift attention to the person behind the submission. Establish behavioral baselines, ask follow-up questions that increase cognitive load, and listen for answers that are overly simplified or linguistically distancing.

Can criminals really clone someone's voice with AI?

Yes. Modern voice cloning tools can reproduce a person’s speech patterns, tone, and cadence from only a short audio sample. In fraud schemes, criminals may use cloned voices to impersonate executives, coworkers, or family members in phone calls or voice messages that request money, sensitive information, or urgent action.

Because these recordings can sound convincing, organizations should not treat a familiar voice as proof of identity. Financial requests or unusual instructions should always be confirmed through a second communication channel or a known contact.