A Hong Kong finance worker sat through a 40-minute multi-person video call with deepfaked versions of the CFO and colleagues. They wired $25 million. The faces looked real. The voices sounded real. The expressions, the movements, the conversation — all AI-generated in real time. Detecting deepfakes is getting harder, but not impossible. Understanding the tells, the verification techniques that work regardless of AI quality, and the tools available in 2026 gives you a practical advantage. Here is the complete guide.
What You’ll Learn
Visual tells for deepfake video — what to look for frame by frame
Audio tells for voice clones — the subtle signs that still exist in 2026
Free detection tools and how reliable they actually are
Verification techniques that defeat deepfakes regardless of quality
How to spot AI-generated profile photos and images
⏱️ 12 min read
How to Spot AI Deepfakes – Complete Detection Guide 2026
Deepfakes are one of the most financially damaging AI fraud methods of 2026, and one of the six AI scam types covered in the AI Scams 2026 guide. The technical layer — how AI systems are used in the attacks described here — is covered in the AI Vulnerabilities overview. For real-time suspicious link checking, the Phishing URL Scanner helps identify fake sites linked from deepfake-accompanied fraud attempts.
Video Deepfake Tells — What to Look For
The visual tells for deepfake video are real but shrinking as the technology improves. Understanding them is useful for casual screening — but I want to be direct about the limits: a determined attacker using current best-in-class tools can produce video that passes most of these checks. My honest assessment of where we are in 2026: a 10-second look at a still image is usually insufficient for reliable detection. Subtle motion over multiple seconds is more revealing. The tells I describe below are what current deepfake tools struggle with — they may not apply to future improvements, which is why the verification section is the most important part of this guide.
VIDEO DEEPFAKE VISUAL TELLS
# Face and skin
Blurring or softness around the hairline and face boundary
Skin texture that looks too smooth or inconsistently rendered
Inconsistent skin tone between face and neck/hands
Face appears slightly “floating” — doesn’t move completely naturally with head
# Eyes and mouth
Unnatural blinking pattern — too regular, too infrequent, or missed blinks
Eyes don’t track naturally — gaze direction slightly wrong
Lip sync slightly mismatched — especially on complex mouth movements
Teeth rendering: deepfakes often struggle with realistic teeth
# Movement and lighting
Lighting inconsistency: face lit differently from background or clothing
Head rotation past 45 degrees: quality degrades at extreme angles
Hair movement: individual strands often rendered poorly in current tools
Earrings, glasses, and jewellery: often misrendered, flicker, or disappear
# How to test on a live call
Ask the person to turn their head slowly to one side
Ask them to put their hand in front of their face briefly
Ask for an action that requires specific, unusual physical coordination
Note: these tests are temporary — future tools will handle them better
Deepfake Detection — Checklist for Live Video Calls
⬜ Hairline and face boundary — blurry or soft edges? (High signal)
⬜ Blinking pattern — unnatural frequency? (High signal)
⬜ Lighting — face matches background? (Medium signal)
⬜ Head rotation test — quality holds past 45°? (High signal)
⬜ Teeth rendering — realistic or blurred? (Medium signal)
⬜ Accessories (glasses, earrings) — stable? (Medium signal)
⬜ End call and call back on known number (Always works)
📸 Deepfake detection checklist for video calls. High-signal indicators are the most reliable for current deepfake tools. However, the only fully reliable verification — regardless of detection score — is the last item: ending the call and calling back on a number you already have. Detection tools and visual tells are a temporary advantage that may not apply to next-generation deepfake technology.
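To make the checklist concrete, here is a minimal sketch of how the high/medium signal ratings could be tallied into a screening verdict. The weights, thresholds, and verdict strings are illustrative assumptions for demonstration, not calibrated detection values.

```python
# Illustrative weights: "high" signals count double. These numbers are
# assumptions for demonstration, not calibrated detection thresholds.
WEIGHTS = {"high": 2, "medium": 1}

def screen_call(observations):
    """Tally checklist hits into a screening verdict.

    observations: list of (signal_strength, suspicious) tuples,
    e.g. ("high", True) for an unnatural blinking pattern.
    """
    score = sum(WEIGHTS[s] for s, suspicious in observations if suspicious)
    if score >= 3:
        return "end call and verify via callback"
    if score >= 1:
        return "proceed with caution; verify before any action"
    return "no visual tells observed (NOT proof of authenticity)"

# Example: blurry hairline (high) + flickering earrings (medium)
print(screen_call([("high", True), ("medium", True), ("high", False)]))
# → end call and verify via callback
```

Note that the zero-score branch deliberately does not say "genuine": as the caption above stresses, a clean visual screen never confirms authenticity.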
Voice Clone Tells — Audio Detection
Voice clones are often more convincing than deepfake video because voice only needs to fool one sense. I find that people who know a person’s voice well can still sometimes detect clones — but this is not a reliable defence. The tells are subtle, and they are shrinking as the technology improves.
VOICE CLONE AUDIO TELLS
# Subtle audio artifacts (best heard in quiet conditions)
Slightly mechanical quality at the edges of words and sentence endings
Unnatural breathing patterns — pauses in unusual places
Emotional tone that doesn’t quite match the words being said
Background noise that’s too clean — real calls have ambient noise variation
Consistent audio quality — real calls have natural fluctuations
# What voice clones struggle with
Specific local dialect features and idiosyncratic speech patterns of the target
Answering genuinely unexpected questions that require personal knowledge
# Testing voice authenticity on a call
Ask a question only the real person would know (specific recent shared experience)
Say something unexpected that requires genuine emotional or personal response
Ask them to say an unusual specific phrase you request: “repeat after me: purple elephant 47”
Note: sophisticated real-time voice conversion agents can sometimes handle this via operator
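One of the tells listed above — audio that is "too clean", without natural loudness fluctuation — can be approximated numerically. This is a rough sketch that measures how much frame-level loudness varies across raw audio samples; the frame size and the synthetic signals are assumptions for illustration, and a low value is at most a weak hint, never proof of a clone.

```python
import math

def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def loudness_variation(samples, frame_size=400):
    """Coefficient of variation of frame-level RMS loudness.

    Real calls tend to fluctuate (ambient noise, mic distance shifts);
    suspiciously uniform loudness is one weak clone indicator.
    """
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size, frame_size)]
    levels = [rms(f) for f in frames]
    mean = sum(levels) / len(levels)
    sd = math.sqrt(sum((l - mean) ** 2 for l in levels) / len(levels))
    return sd / mean if mean else 0.0

# Synthetic demo: a perfectly steady tone vs. one whose level drifts,
# the way a real call's ambient level does.
steady = [math.sin(i / 10) for i in range(8000)]
varying = [math.sin(i / 10) * (1 + 0.5 * math.sin(i / 2000)) for i in range(8000)]
assert loudness_variation(steady) < loudness_variation(varying)
```

Real forensic tools analyse spectral artifacts, not just loudness, but the principle is the same: genuine recordings carry messy variation that synthesis tends to smooth away.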
AI-Generated Image Detection
AI-generated profile photos and images are used in romance scams, fake customer service, and disinformation campaigns. In my security work I encounter these regularly — the quality of AI-generated faces from tools like Midjourney and DALL-E 3 has made visual detection meaningfully harder than it was two years ago. The tells are different from video deepfakes because static images can be examined more carefully.
AI IMAGE DETECTION — COMMON TELLS
# Face and body tells
Hands: incorrect finger counts, merged fingers, or anatomically wrong joints
Ears: asymmetric, malformed, or one ear significantly different from the other
Text in the image: often distorted, misspelled, or illegible
Background objects that look slightly wrong: chairs with odd proportions, walls at wrong angles
Lighting: main subject lit differently from background
Pattern repetition: fabric patterns that don’t repeat correctly
# Quick verification steps
Reverse image search: Google Images, TinEye — checks if photo is reused from elsewhere
Note: AI-generated images won’t match — but you’ll confirm the photo is unique to this account
Request: ask for a photo holding a specific object or in front of a landmark
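Reverse image search engines work by comparing compact perceptual fingerprints rather than exact bytes, which is why re-uploaded, recompressed copies still match. As a rough illustration of the principle only (real engines use far more robust features), here is a pure-Python "average hash" on tiny grayscale grids; the sample thumbnails are made-up data.

```python
def average_hash(pixels):
    """Average hash of a small grayscale image (list of rows of 0-255 values).

    Each pixel becomes one bit: 1 if brighter than the image mean, else 0.
    Reverse image search engines use far more robust fingerprints; this
    sketch just illustrates the near-duplicate principle.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes; small = near-duplicate."""
    return sum(x != y for x, y in zip(a, b))

# Two hypothetical 4x4 thumbnails: the second is the first with mild
# brightness noise, as re-uploaded copies of a photo often are.
original = [[10, 200, 30, 220], [15, 190, 25, 210],
            [12, 205, 28, 215], [11, 198, 32, 208]]
reupload = [[14, 196, 35, 216], [19, 186, 29, 206],
            [16, 201, 33, 211], [15, 194, 36, 204]]
assert hamming(average_hash(original), average_hash(reupload)) <= 2
```

This also explains the note above: a freshly AI-generated face has no fingerprint match anywhere, so "no results" tells you the image has no other web presence, not that it is real.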
Detection Tools — What Works and What Doesn’t
Several deepfake detection tools exist, and my honest assessment is that they’re useful for screening but not reliable enough to treat as definitive. Detection tools consistently lag the newest generative tools, and the asymmetry is fundamental: producing a convincing deepfake is a one-time cost, while detecting it requires correctly identifying a new, unknown generation method.
DEEPFAKE DETECTION TOOLS — HONEST ASSESSMENT
# Free tools worth trying
Hive Moderation (hivemoderation.com/deepfake-detection): free web tool, decent on photos
Deepware Scanner (deepware.ai): video deepfake scanner, free tier available
Microsoft Video Authenticator: enterprise tool for frame-by-frame video analysis
FotoForensics (fotoforensics.com): error level analysis for image manipulation
# Their limitations
Trained on known deepfake methods — new generation tools can evade detection
High false positive rate on legitimate compressed/processed video
Inconsistent results across different tools for the same content
Real-time video calls can’t be easily submitted to async detection tools
# The honest conclusion
Detection tools are useful for flagging suspicious content for further scrutiny
A “clean” result from a detection tool does not confirm content is genuine
Verification techniques (below) are more reliable than detection tools for high-stakes situations
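Given how inconsistent tools are with each other, one defensible way to combine their outputs is to act on the most suspicious score and never treat agreement as clearance. A minimal sketch, with an assumed 0.0-1.0 "likely fake" score scale, an illustrative threshold, and hypothetical tool names:

```python
def triage(scores, flag_threshold=0.5):
    """Conservatively combine per-tool 'likely fake' scores (0.0-1.0).

    Takes the MOST suspicious score, reflecting the asymmetry described
    in the text: any single tool flagging content is worth scrutiny, but
    tools agreeing that content is 'clean' still never proves authenticity.
    The threshold and verdict strings are illustrative assumptions.
    """
    if not scores:
        return "no data - fall back to callback verification"
    if max(scores.values()) >= flag_threshold:
        return "flag for manual scrutiny"
    return "no tool flagged it - still verify before high-stakes action"

# Hypothetical scores from three detectors on the same clip
print(triage({"tool_a": 0.12, "tool_b": 0.81, "tool_c": 0.05}))
# → flag for manual scrutiny
```

The design choice is deliberate: averaging would let two fooled detectors outvote one that caught the fake, which is the wrong failure mode for fraud screening.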
Verification Techniques That Always Work
The most important section of this guide. Detection tools and visual tells give you probabilistic signals — they reduce your confidence that something is real, but they can’t guarantee a correct answer. My experience running deepfake awareness training: people significantly overestimate their ability to detect deepfakes through visual inspection alone. Verification techniques work by requiring something the deepfake cannot provide: proof of identity through an independent, pre-established channel.
VERIFICATION TECHNIQUES — HIGH RELIABILITY
# For video calls involving financial requests or sensitive decisions
End the call. Initiate a new call yourself on a number you already have.
The attacker cannot intercept a call you initiate to a number you independently hold.
Legitimate callers will understand this request for a significant financial decision.
# For voice calls requesting urgent action
Same principle: hang up, call back on a known number from your contacts or a trusted directory.
Pre-agree a family or team “safe word” — ask for it if you’re uncertain.
Never take financial action based on a voice call you didn’t initiate.
# For profile photos and online identities
Reverse image search all profile photos before trusting the identity.
Request a live video call with a specific unusual action they’d need to perform.
Check account creation date vs claimed relationship timeline.
# Organisational policy recommendation
Any financial transaction above threshold requires independent verbal confirmation
on a pre-registered number — not a number provided in the suspicious communication.
This policy defeats both voice clones and deepfake video fraud.
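The organisational policy above reduces to a rule simple enough to encode directly in a payment workflow. A sketch, with an assumed threshold and hypothetical field names; each organisation would set its own values:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    callback_done: bool               # independent verbal confirmation completed?
    callback_number_registered: bool  # number came from pre-registered directory?

# Illustrative threshold; each organisation sets its own.
CALLBACK_THRESHOLD = 10_000

def may_execute(req: PaymentRequest) -> bool:
    """A payment above threshold needs callback confirmation on a
    pre-registered number - never a number supplied in the request itself."""
    if req.amount < CALLBACK_THRESHOLD:
        return True
    return req.callback_done and req.callback_number_registered

# A deepfaked CFO call that supplies its own 'verification' number fails,
# no matter how convincing the video was:
assert not may_execute(PaymentRequest(25_000_000, True, False))
assert may_execute(PaymentRequest(25_000_000, True, True))
```

Note what the rule never inspects: how realistic the request looked or sounded. That is exactly why it holds up as generation quality improves.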
Building Deepfake Resilience at Work
Individual detection skills matter, but my recommendation to organisations is that policy and procedure provide more reliable protection than asking employees to spot sophisticated deepfakes under pressure. A finance team that knows to call back on a stored number before any transfer is more deepfake-resilient than one that’s been told to watch for blurry hairlines.
Images: check hands (finger count), ears (asymmetry), background text (distortion)
Detection tools: useful for flagging, not reliable as definitive verdict
Verification always wins: end call → call back on known number → defeats all deepfakes
Deepfake Detection — Your Action Plan
Two things to do now: (1) establish a callback verification habit for any video or voice call requesting financial action, and (2) set a family safe word. Both take 5 minutes and defeat the most financially damaging deepfake fraud scenarios. The AI Scams guide covers all six AI fraud types with the full financial loss data.
Quick Check
You submit a video to a deepfake detection tool and it returns a “99% authentic” result. How confident should you be that the video is genuine?
Answer: not confident at all. Detection tools are trained on known generation methods, so a clean score cannot rule out a newer technique. Treat it as one weak signal and still verify through an independent channel before any consequential action.
Frequently Asked Questions
Can you always spot a deepfake?
No — not reliably. High-quality deepfakes, especially real-time video deepfakes used in live calls, can be indistinguishable from genuine video to an untrained human observer and can fool current detection tools. The $25 million Hong Kong fraud demonstrates this clearly. The practical defence is not detection but verification: regardless of how convincing something looks, use independent verification channels (call back on a known number) for any consequential action.
What are the best free deepfake detection tools?
Hive Moderation (hivemoderation.com) offers free deepfake detection for images and video. Deepware Scanner (deepware.ai) provides video analysis. Both are useful for flagging suspicious content but should not be treated as definitive — a clean result doesn’t confirm authenticity. For image verification, FotoForensics provides error level analysis useful for detecting image manipulation.
How do I spot an AI-generated profile photo?
Check the hands — AI image generators frequently produce incorrect finger counts or merged fingers. Check the ears — they’re often asymmetric or oddly shaped. Check background text — AI generators typically produce distorted or misspelled text. Run a reverse image search via Google Images or TinEye to check if the photo was taken from elsewhere online. AI-generated images won’t match existing photos, but the reverse search confirms whether the image has any other web presence.
Are voice clones detectable?
Not reliably by ear, especially for people who don’t know the target’s voice well. Subtle tells include mechanical quality at word edges, unnatural breathing patterns, and too-clean background audio. However, these tells are decreasing as the technology improves. The reliable defence is procedural: for any call requesting financial action or sensitive information, end the call and call back on a number you already have stored independently. This defeats voice clones completely.
AI Scams 2026 — All six AI scam types including the $25M Hong Kong deepfake fraud case, voice clone fraud, AI phishing, romance scams, and fake investment endorsements — with the specific warning signs and financial loss data for each.
AI Voice Cloning Authentication Bypass 2026 — The technical detail on voice cloning attacks against authentication systems. How real-time voice conversion works, the security implications, and the authentication approaches that defeat it.
How Hackers Use Social Engineering 2026 — The complete social engineering framework. Deepfake video and voice cloning are Method 7 (AI-enhanced attacks) in this guide — see how they fit into the broader social engineering methodology.
MIT Media Lab — Detect Fakes — MIT’s research project on deepfake detection, including a test where you can try to identify AI-generated faces. Useful for calibrating your own detection ability and understanding current detection limits.
Mr Elite
Owner, SecurityElites.com
The deepfake detection arms race is one I expect defenders to lose on the technical side — detection tools will always lag behind the latest generation technology. My advice to clients is to invest in verification procedures, not detection technology. A company-wide policy that requires a callback verification for any payment above £10,000, regardless of how the request arrives or how convincing it looks, defeats every deepfake fraud attempt at zero incremental cost. The procedure doesn’t need to keep pace with AI development. The AI can get better at faking; it can’t get better at intercepting a call you initiate to a number you independently hold.
Founder of Securityelites and creator of the SE-ARTCP credential. Working penetration tester focused on AI red team, prompt injection research, and LLM security education.