A finance worker in Hong Kong wired $25 million after a video call with people who turned out to be entirely AI-generated deepfakes. A British energy company wired €220,000 to a fraudster after a phone call from what sounded exactly like their CEO — a voice cloned from public recordings. A grandmother in California lost $18,000 to someone she believed was her grandson in trouble; the voice was an AI clone reading from a script. These aren’t future warnings. They happened. AI has made scams faster, more convincing, and harder to detect. Here’s exactly how each type works, the real financial losses they’ve caused, the specific warning signs for each, and the one verification step that defeats most of them regardless of how convincing the technology gets.
What You’ll Learn
The 6 main AI scam types with real documented cases and losses
How each scam works technically — in plain English
The specific warning signs for each type
How to verify when you’re not sure something is real
1. Voice Clone Scams — Fake Relatives and Fake CEOs
Voice cloning technology can recreate a person’s voice from as little as three seconds of audio. Criminals use this to impersonate family members, bosses, and colleagues in phone calls. The voice sounds authentic because it is — built from real recordings of the actual person. My concern about this category is how rapidly the technology has become accessible: tools that required expensive equipment two years ago now run on a laptop and produce convincing results in minutes.
VOICE CLONE SCAMS — REAL CASES AND HOW THEY WORK
# The Grandparent Scam (AI upgrade)
Traditional: scammer calls pretending to be grandchild in trouble
AI version: uses cloned voice of actual grandchild (from social media videos)
Real case: Jennifer DeStefano (Arizona, 2023) — heard her daughter’s voice
asking for $1M ransom. The daughter was home safe. Voice was cloned.
# CEO Voice Fraud (Business Email Compromise upgrade)
Real case: UK energy firm (2019, early voice synthesis, pre-LLM): €220,000 wired
after a call from the “CEO” requesting an urgent supplier payment
2024 update: same attack now uses real-time voice conversion — live, adaptive calls
# Warning signs
Unusual urgency — “I need you to act NOW, don’t tell anyone”
Request for money, gift cards, or wire transfer
Caller won’t let you call them back on their usual number
Slight unnatural cadence or robotic quality at the edges of sentences
# The only reliable defence
Hang up and call back on a number you already have for the person
Pre-agree a family “safe word” — ask for it whenever you’re unsure whether a call is real
Never transfer money, buy gift cards, or give personal information based on a call alone
2. Deepfake Video Fraud — Fake Video Calls
The $25 million Hong Kong fraud is the clearest demonstration of how dangerous deepfake video has become. Employees sat on a multi-person video call with what appeared to be their CFO and several colleagues — all of whom were AI-generated in real time. Every face, every voice, every expression was fake. They wired $25 million based on what they saw. This is no longer a projection of what the technology might do; it is a documented case.
DEEPFAKE VIDEO SCAMS — WHAT’S POSSIBLE
# Documented incident
Hong Kong, February 2024: finance worker wired HK$200M ($25M USD)
Method: multi-person Teams/Zoom call with deepfaked CFO and colleagues
Source: Hong Kong police confirmed; widely reported globally
# How real-time deepfakes work
Attacker uses webcam software that replaces their face with target’s face in real time
Voice conversion applies target’s voice to attacker’s speech simultaneously
Result: video call participant appears to be the impersonated person
# Current limitations (how to detect)
Artifacts visible on rapid head movement or unusual lighting
Blurring around hairline edges, especially with complex backgrounds
Eye contact sometimes slightly off — eyes don’t track naturally
Background may look too clean or static compared to normal video calls
# Verification technique for video calls involving money
Ask the person on-screen to wave with both hands and turn sideways
Current deepfake tools struggle with rapid profile-view requests
Or: end the call and initiate a fresh call yourself on a known number
3. AI-Powered Phishing — Personalised at Scale
Traditional phishing emails were easy to spot — generic greetings, odd grammar, implausible pretexts. AI-generated phishing removes every one of those tells. The email references your real name, your actual job title, your current projects, and recent public statements — all pulled from LinkedIn, company websites, and social media in seconds. My experience testing phishing awareness across organisations: click rates on AI-generated personalised emails are consistently three to five times higher than on generic templates, regardless of how much training employees have received.
AI PHISHING — HOW TO RECOGNISE IT
# What makes AI phishing convincing
Uses your real name, job title, company, colleagues, and current projects
References recent events (LinkedIn post, company announcement, news story about you)
Arrives shortly after a triggering event (post, press release, job change)
Perfect grammar and natural tone — nothing to suggest a machine wrote it
# Warning signs to look for instead
Unexpected urgency combined with a financial or credential request
Sender domain that’s close but not exactly right (company-corp.com vs company.com)
Link destination that doesn’t match the claimed source (hover over link to check)
Request to click a link and enter credentials for something you’d normally do in an app
# The golden rule
If an email asks you to click a link and enter credentials: don’t.
Navigate directly to the site yourself by typing the URL in your browser.
This defeats every phishing attack regardless of how convincing the email is.
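The hover-and-check habit from the warning signs above can also be automated. As a minimal sketch (the allowlist and sample addresses here are hypothetical, not from any real deployment), a short script can flag sender domains that are close to, but not exactly, a trusted domain — the company-corp.com vs company.com pattern:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of known-good sender domains
TRUSTED_DOMAINS = {"company.com"}

def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, lower-cased."""
    return address.rsplit("@", 1)[-1].lower()

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity between 0 and 1; high-but-not-exact suggests typosquatting."""
    return SequenceMatcher(None, domain, trusted).ratio()

def check_sender(address: str, threshold: float = 0.75) -> str:
    domain = sender_domain(address)
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # Not an exact match, but very similar to a trusted domain: red flag
        if lookalike_score(domain, trusted) >= threshold:
            return f"SUSPICIOUS lookalike of {trusted}"
    return "unknown"

print(check_sender("ceo@company.com"))       # exact match → trusted
print(check_sender("ceo@company-corp.com"))  # near miss → SUSPICIOUS lookalike
```

The threshold is a judgment call: too low and every unrelated domain is flagged, too high and single-character swaps slip through. Real mail filters layer this with checks the sketch omits, such as SPF/DKIM results and domain registration age.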
4. AI Romance Scams — Fake Relationships
Romance scams cost victims billions annually — the FBI reports over $1.3 billion in losses in 2022 alone, and that figure is widely considered an undercount because most victims don’t report out of embarrassment. In my security briefings, I describe AI romance scams as the most emotionally damaging category. AI has supercharged this category by enabling scammers to run hundreds of simultaneous fake relationships with automated AI agents that maintain coherent personas, remember previous conversations, and respond to messages at any hour in any language. Victims often invest months of emotional energy before any money is requested.
AI ROMANCE SCAMS — HOW THEY WORK
# How AI changes romance scams
Profile photos: AI-generated faces that don’t exist — no reverse image search match
Conversations: AI chatbot maintains consistent persona across weeks of messaging
Video calls: deepfake video used to “prove” identity to suspicious victims
Scale: one operator manages hundreds of simultaneous fake relationships with AI help
# Warning signs
Met online only — no mutual friends, no public social media history before recently
Always has an excuse not to meet in person or video call spontaneously
Profile photos look too perfect — reverse search them anyway
Relationship moves very fast — emotional intensity before you’ve met
Eventually asks for money: “investment opportunity”, medical emergency, travel to meet you
# Verification steps
Reverse image search every photo (Google Images, TinEye)
Ask for a live video call with a specific unusual action (hold up three fingers, write your name)
Never send money to someone you’ve only met online, regardless of relationship length
5. AI Investment Scams — Fake Endorsements
AI-generated deepfakes of celebrities and public figures endorsing fake investment platforms have caused significant losses globally. Videos appear to show Elon Musk, Martin Lewis, and other known investors promoting cryptocurrency platforms — the face and voice are convincing, but the person never said any of it. The UK’s Financial Conduct Authority recorded a significant rise in reports of AI-assisted investment fraud in 2024 and 2025.
AI INVESTMENT SCAMS — RECOGNITION GUIDE
# How they work
Deepfake video of celebrity endorsing a specific crypto/investment platform
Distributed via social media ads targeting people who follow the real celebrity
Platform looks professional, shows fake profits, allows small withdrawals to build trust
Victim deposits large sum → “technical issue” prevents withdrawal → money gone
# Red flags
Celebrity endorsement of specific investment platform (almost always fake)
Guaranteed returns or unusually high interest rates
Urgency: “limited time offer”, “only 50 spots left”
Platform not registered with FCA (UK) or SEC (US) — check the regulator’s register
# How to verify
FCA register: register.fca.org.uk — check if the firm is authorised
SEC: sec.gov/check-adviser — verify US investment advisers
Search: “[firm name] scam” or “[firm name] review” — check for victim reports
6. Fake AI Customer Service — Impersonating Brands
Criminals are deploying AI chatbots that impersonate major brands’ customer service — appearing in search results and social media when people look for support contact details. The chatbot handles the entire conversation convincingly, builds trust, and eventually requests payment details, login credentials, or personal information “to process the refund” or “to verify the account.”
FAKE AI CUSTOMER SERVICE — HOW TO AVOID IT
# How this scam works
Victim searches for customer service contact for a bank, airline, or retailer
Fake result appears (paid ad or SEO-ranked page) with AI chatbot or fake phone number
AI chatbot convincingly impersonates the brand’s support style and tone
Eventually requests: login credentials, payment details, OTP codes, remote access
# Warning signs
Contact details found via Google search rather than the official website
Chatbot or agent asks for your full password (legitimate support never does)
Asked to install remote access software (TeamViewer, AnyDesk)
Asked to read out a one-time code that was just sent to your phone
# The safe approach
Always go directly to the company’s official website — type the URL yourself
Find contact details on the back of your card or on an official statement
Never give OTP codes to anyone who calls or messages you
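The “type the URL yourself” rule can be backed by a quick programmatic check. This sketch (the official domain and sample URLs are hypothetical) tests whether a link’s hostname is the official domain or a genuine subdomain of it, which catches tricks like examplebank.com.evil.net, where the real brand name appears but the registered domain is the attacker’s:

```python
from urllib.parse import urlparse

# Hypothetical official domain for the brand being impersonated
OFFICIAL_DOMAIN = "examplebank.com"

def is_official(url: str) -> bool:
    """True only if the URL's host is the official domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    # A genuine subdomain ends with ".examplebank.com"; a lookalike such as
    # "examplebank.com.evil.net" or "examplebank-support.com" does not.
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

print(is_official("https://support.examplebank.com/contact"))  # True
print(is_official("https://examplebank.com.evil.net/refund"))  # False
print(is_official("https://examplebank-support.com/chat"))     # False
```

The key design choice is matching on the end of the hostname with a leading dot, not on whether the brand name appears anywhere in the URL — substring checks are exactly what these lookalike domains are built to defeat.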
AI Scams 2026 — Quick Reference
Voice clone: any urgent call requesting money — call back on a known number first
Deepfake video: $25M Hong Kong fraud — verify unusual video call requests independently
AI phishing: personalised emails are now indistinguishable — navigate directly, never click
Investment: deepfake celebrity endorsements — check FCA/SEC register before investing
Fake support: find contact details on official website only, never give OTP codes
AI Scams — Stay Protected
The common thread I see across all six: urgency, unexpected financial requests, and pressure to act before you can verify. Any time you face those three together — especially involving AI-quality audio or video — slow down, hang up, and contact the person through a channel you already know is real. Check if your contact details are in scam targeting databases with the Email Breach Checker.
Quick Check
You receive a video call from what appears to be your company’s CFO on Microsoft Teams, with two other familiar-looking colleagues also present. The CFO asks you to urgently authorise a £180,000 payment to a new supplier before the end of the day. What should you do?
Frequently Asked Questions
What are AI scams?
AI scams use artificial intelligence tools — voice cloning, deepfake video, automated chatbots, and AI-generated phishing emails — to make fraud more convincing and scalable. The technology allows criminals to impersonate real people with authentic-sounding voices and realistic-looking video, and to send highly personalised phishing messages to millions of people simultaneously. The financial losses from AI-assisted scams have reached billions annually globally.
How do I know if a voice call is a clone?
Increasingly, you can’t tell by listening. The safest approach is procedural: hang up and call back on a number you already have for the person (from your contacts, the back of their business card, or a directory — not a number provided in the suspicious call). Any legitimate person will understand this verification step for requests involving money or sensitive information. If the caller resists or creates urgency around calling back, that’s a warning sign.
Are deepfake video calls detectable?
Current deepfake tools have some detectable artifacts — blurring around the hairline, unnatural eye movement, difficulty with rapid head movements. You can test by asking the person on screen to turn sideways or perform an unexpected physical action. However, this is a temporary detection window — the technology is improving rapidly. The most reliable approach is the same as for voice: end the call and initiate a new one yourself on a known number rather than relying on visual inspection.
What should I do if I’ve been scammed by an AI scam?
Report to Action Fraud (UK: actionfraud.police.uk, 0300 123 2040) or the FBI’s IC3 (ic3.gov) in the US. Contact your bank immediately if a payment was made — some banks can recall transfers if you act quickly. Don’t transfer additional money even if the scammer promises to return what was taken or says it’s needed to “unlock” the first transfer. Document everything: save messages, call logs, account numbers, and website URLs as evidence for the report.
→ Related
How to Spot AI Deepfakes 2026 — Detection Guide
How Hackers Use Social Engineering 2026
Further Reading
How to Spot AI Deepfakes 2026 — The full detection guide for deepfake video, voice clones, and AI-generated images. Detection tools, visual tells, and the verification approaches that work as the technology improves.
Social Engineering 2026 — The complete 7-method social engineering framework. All six AI scam types above map to the social engineering methods covered in this guide.
Phishing URL Scanner — Check any suspicious link from an AI-generated phishing email before clicking. Identifies phishing infrastructure and newly registered domains used in targeted campaigns.
Action Fraud (UK) — The UK’s national reporting centre for fraud and cybercrime. Report AI scams here and find guidance on getting money back after fraud.
Mr Elite
Owner, SecurityElites.com
The pattern I see across every AI scam type is the same: urgency engineered to prevent verification. The voice clone creates panic so you don’t pause to call back on the number you already have. The deepfake video call adds social proof so you don’t question it. The AI phishing email creates a deadline so you don’t check the sender domain. The defence against all of them is the same: build the habit of verification before action, and make that habit automatic enough that urgency doesn’t override it. My recommendation to every client: one pause to call back on a known number defeats every voice clone scam. One look at the sender domain defeats most phishing. The technology is sophisticated. The defence doesn’t have to be.
Founder of SecurityElites and creator of the SE-ARTCP credential. Working penetration tester focused on AI red teaming, prompt injection research, and LLM security education.