SECURITY AWARENESS
UPDATED 2026
⚠️ SHARE WITH YOUR FAMILY
SECURITYELITES.COM

3 sec
Time needed to clone a voice from audio
$1.1B+
Lost to imposter scams, including voice scams, reported in the US in a recent year
70%
Of people cannot reliably detect AI-cloned voices
100%
Preventable with the codeword method in this guide

📞

Your phone rings. It is your daughter’s number. You answer, and her voice — her exact voice, the one you have known for twenty years — says she has been in an accident. She is in hospital. She needs you to wire $3,000 immediately, before the insurance claim can be processed. She is crying. She sounds terrified. It is not her.

AI voice cloning technology has advanced to the point where criminals can replicate a person’s voice convincingly from as little as three seconds of audio — a sample pulled from an Instagram story, a TikTok video, a voicemail greeting, or a YouTube clip. The resulting clone can be used in real time to impersonate that person in a live phone call, complete with emotional cues and natural conversational responses.

The Federal Trade Commission has documented a sharp rise in AI-assisted voice scam reports since 2023, with losses accelerating as the technology becomes cheaper and more accessible. This guide explains exactly how these scams work, every warning sign to recognise them, and a complete protection plan for you, your parents, and your business.


How AI Scam Calls Actually Work — The Technology Behind the Threat

Voice cloning is not science fiction and it is not experimental. It is a commercially available technology that has existed in professional audio production for years, and it became accessible to ordinary consumers — and criminals — starting around 2022. Understanding how it works removes the mystique and helps you recognise what to look for.


HOW AN AI VOICE CLONE SCAM IS EXECUTED — STEP BY STEP

🎯
STEP 1

Target Selection & Voice Harvesting
Criminals identify a target to impersonate. They collect voice audio from public sources: Instagram videos, TikTok, YouTube, Facebook Live, podcast appearances, voicemail greetings, or news footage. Only 3–30 seconds of clear audio is needed. They also research the victim’s family structure via social media to know who to call.

🤖
STEP 2

AI Voice Clone Generation
The audio sample is uploaded to a voice AI tool — several consumer-facing services exist for legitimate creative purposes, but the underlying technology can be misused. The AI analyses vocal patterns, pitch, cadence, accent, and emotional tone, generating a synthetic voice model that can produce any new words or phrases in the target’s voice in real time or as pre-recorded audio.
⚠️ Research shows most people cannot reliably distinguish cloned voices from real ones in audio-only settings

📞
STEP 3

The Call — Creating Panic and Urgency
The criminal calls the victim using a spoofed number (often appearing as the impersonated person’s real number) and plays the cloned voice — either live via real-time voice conversion or using pre-recorded scripted audio. The scenario always creates extreme urgency: arrest, accident, kidnapping, or medical emergency requiring immediate money. The victim is told not to hang up, not to contact other family members, and to act immediately.

💸
STEP 4

Money Extraction — Gift Cards, Wire Transfers, Crypto
Payment is requested in forms that are difficult or impossible to reverse: gift card codes (Amazon, Google Play, iTunes), wire transfers to overseas accounts, or cryptocurrency. Sometimes a second person joins the call posing as a police officer, lawyer, or hospital administrator to add legitimacy. Victims who have already sent money are often re-contacted with additional demands.

AI Voice Clone Scam — Full Attack Flow. Four steps: harvest voice from social media, generate clone with AI, call family member with emergency story, extract money in irreversible form. The entire process can be prepared in under an hour. The call itself lasts 5–15 minutes. The combination of a familiar voice and manufactured panic bypasses rational decision-making in most victims.

SCAM TYPE 1
The Family Emergency (Grandparent Scam)

The family emergency scam is the most emotionally devastating AI scam call variant. It targets the natural protective instincts of parents and grandparents who will do anything to help a family member in distress. Before AI voice cloning, this scam relied on actors impersonating family members convincingly — a difficult task. With voice cloning, the caller sounds exactly like the real person.

🎭 A Documented Real-World Scenario

An elderly woman receives a call. It is her grandson’s voice — unmistakably his accent, his way of speaking. He says he was in a car accident in a foreign city and was arrested at the scene. He is scared. He is crying. He needs $4,500 for a lawyer before he can be released.

A second person then takes the phone — he says he is the lawyer. He gives instructions for purchasing Google Play gift cards and reading the codes over the phone. He tells her not to tell anyone in the family to avoid embarrassing her grandson.

She purchases $4,500 in gift cards and reads the codes. Her grandson, she later discovers, was at home the entire time and never made any call. His voice was cloned from Instagram reels he posted months earlier.

This scenario is a composite based on case types reported to the FTC and documented by consumer protection organisations. No real individual is identified.

The FTC has reported that imposter scams — of which AI voice scams are a growing subset — resulted in reported losses exceeding $1.1 billion in a recent reporting period. Older adults are disproportionately targeted, though business professionals and people of all ages have been victimised.

🛡️ This specific scam is stopped instantly by the codeword method. A pre-agreed family safe word — one only the real person would know — defeats voice cloning completely. See the codeword section below for full setup instructions.

SCAM TYPE 2
CEO Fraud and Business Voice Phishing

Business-targeted AI voice scams are growing rapidly as criminals realise that the financial stakes in corporate environments are dramatically higher. The typical attack clones the voice of a senior executive — a CEO, CFO, or managing director — whose voice is publicly available from earnings calls, conference presentations, media appearances, or LinkedIn posts.

A documented case type: a finance employee receives a WhatsApp voice message followed by a call from what sounds exactly like their CEO. The CEO says they are in a confidential acquisition negotiation and need an urgent wire transfer processed immediately. The employee has heard this voice in company-wide calls for years. The transfer is made before anyone verifies. In documented cases reported by cybersecurity firms, single transfers in these attacks have ranged from $35,000 to several million dollars.

Business Voice Scam Red Flags
⚠️ Request involves urgent, large wire transfer
⚠️ Executive requests unusual secrecy
⚠️ Contact comes via personal phone / WhatsApp
⚠️ Normal approval processes are bypassed
⚠️ Request is time-pressured (“before market opens”)
⚠️ Cannot reach the executive by normal channels

Defence: every financial request made by phone — regardless of how convincing the voice — requires independent verification through a separate channel before any action is taken. Call the executive’s office number directly. Send a confirmation email. Follow the two-person authorisation policy.

SCAM TYPE 3
Virtual Kidnapping with AI Voice

Virtual kidnapping scams have existed without AI for years — a criminal calls a parent claiming to have kidnapped their child and demands a ransom. What made them relatively easy to defeat was that the “victim” on the phone sounded unconvincing. AI voice cloning closes this gap. The “kidnapped” family member now sounds exactly like themselves — crying, frightened, pleading for help.

The mechanics: criminals research a family through social media, identifying a young adult who is travelling or temporarily unreachable. They call the parents when the person is most likely to be unavailable for direct contact — on a flight, at a remote location, or in a meeting. If the parent cannot immediately reach their child, the manufactured panic becomes overwhelming.

The immediate response that defeats this scam every time: Hang up and call your family member directly on their saved number. Do not call any number the caller provides. If they are truly in danger, they will answer or their voicemail will confirm their identity. Do not let the caller tell you what to do — they are constructing a scenario where panic overrides rational action.

10 Warning Signs — How to Spot an AI Voice Clone Call in Real Time


10 WARNING SIGNS OF AN AI VOICE CLONE SCAM CALL
EXTREME URGENCY
The situation cannot wait. Every minute matters. You must act NOW. A genuine emergency can survive a 60-second call-back to verify. A scam cannot.

🤫
SECRECY DEMANDED
“Don’t tell anyone else in the family.” This prevents you from making a 10-second verification call. It is the scammer’s most important tool.

🎁
GIFT CARDS REQUESTED
No legitimate legal, medical, or bail process accepts Google Play, iTunes, or Amazon gift card codes. Ever. No exceptions.

📵
REFUSES HANG-UP AND CALL-BACK
“You can’t hang up or it makes the situation worse.” The real person would want you to verify. The scammer cannot survive verification.

🤖
UNNATURAL VOICE QUALITY
Current AI voice clones often lack natural breathing, mouth sounds, subtle hesitations, and background noise. The voice may sound slightly “too smooth” or flat in emotional inflection.

📹
REFUSES VIDEO CALL
“The connection is too bad” or “I can’t do video right now.” Ask for a FaceTime or video call. Audio-only clones cannot pass a video verification.

🔢
UNFAMILIAR PHONE NUMBER
The call comes from a number you do not recognise — or from a number that looks like the real person’s but has one different digit. Number spoofing is common in these attacks.

👮
THIRD PARTY JOINS THE CALL
A “police officer,” “lawyer,” “doctor,” or “bail bondsman” takes over. This adds false authority and is a scripted part of the scam.

🌍
FOREIGN LOCATION / UNUSUAL CONTEXT
The claimed incident is in a city or country you know the person is not visiting. Criminals often do not research this detail carefully.

🔑
CANNOT ANSWER THE CODEWORD
Ask for your pre-agreed family codeword. The real person answers immediately. The AI voice clone has no access to private knowledge. Evasion, confusion, or anger confirms the scam.

10 Warning Signs of an AI Voice Clone Scam — Learn these. Share this image with elderly family members. Print it and keep it near the phone. Any one of these signals warrants immediate verification before taking any action; several appearing together means hang up immediately.

The Codeword Method — The Most Effective Defence Against AI Voice Scams

AI can clone a voice. It cannot clone knowledge. A pre-agreed secret codeword between family members is information that only the real person possesses — and that no amount of voice technology can replicate. If a caller claiming to be a family member cannot provide the codeword, it is not them. The conversation ends. You call them back on their saved number. This takes 15 seconds and defeats the scam entirely.


FAMILY CODEWORD SYSTEM — COMPLETE SETUP GUIDE

✅ CHOOSING YOUR CODEWORD
✓ Something memorable but not guessable
✓ Not a public word (not your pet’s name if public)
✓ A random combination works well: colour + noun
✓ Agree on it in person, not by text or email
✓ Set a different word per family relationship if preferred
✗ Not your address, birthday, or anniversary
✗ Not something you have posted online
✗ Not something in your social media bio

📞 HOW TO USE IT IN A CALL
1. Caller claims to be a family member in distress
2. Stay calm. Say: “I just need to ask you the codeword so I know it’s really you.”
3. Real person: answers immediately without hesitation
4. Scammer: becomes evasive, angry, or claims to have forgotten
5. Wrong answer or evasion: hang up immediately
6. Call the real person on their saved number to confirm they are safe

Action step: After reading this article, call every family member — especially parents and grandparents — and establish a codeword today. This conversation takes two minutes and provides permanent protection against AI voice clone scams.

Family Codeword System — The most effective defence against AI voice cloning scams. Agree on a secret word in person. Ask for it calmly whenever you receive an unexpected emergency call. The real person will know it immediately. A scammer using a cloned voice cannot. Establish one with your family today.

Complete Family Protection Plan — Every Measure Ranked by Impact

1
Establish a family codeword today — in person
Agree on a secret word with every family member. This is the single highest-impact protective measure available. Takes two minutes to set up. Provides permanent protection.
IMPACT: 🟢🟢🟢🟢🟢 Maximum

2
The call-back rule — hang up and call back on a saved number
If any caller creates urgency about a family member, hang up and call that person directly on the number saved in your contacts. Do not call any number provided by the original caller. This verification takes 20 seconds.
IMPACT: 🟢🟢🟢🟢🟢 Maximum

3
Reduce public voice exposure — audit your social media
Review your Instagram, TikTok, Facebook, YouTube, and LinkedIn for videos containing your voice or the voices of family members. Consider making personal accounts private. Be especially mindful of content posted by or about elderly relatives who may not manage their own privacy settings.
IMPACT: 🔵🔵🔵🔵⚪ High (reduces targeting)

4
Brief elderly family members — specifically and directly
Do not forward this article and assume they will read it. Call them. Explain in plain language: “Technology now lets criminals fake voices perfectly. If you get a call from someone who sounds like me asking for money, hang up and call me directly. No matter how scared they seem.” This conversation is the most important part of this article.
IMPACT: 🔵🔵🔵🔵⚪ High (protects most vulnerable)

5
Register with the Do Not Call Registry and use call-screening apps
Register your number at donotcall.gov (US). Enable your carrier’s spam call screening feature. Apps like Hiya, Truecaller, and First Orion help identify likely scam numbers before you answer. Google’s Call Screen feature on Pixel phones uses AI to screen unknown callers before you pick up.
IMPACT: 🟡🟡🟡⚪⚪ Moderate (reduces exposure volume)


If You Are Targeted — Immediate Action Steps

Emergency Response — AI Voice Scam
1
Hang up and stay calm
Panic is the scammer’s weapon. Take 30 seconds to breathe. You have not committed to anything. You are still in control of the situation.
2
Call the person being impersonated directly
Use their saved number in your contacts. Do not use any number provided in the scam call. In virtually every documented case, the real person is completely safe and unaware of the call being made in their name.
3
If money was sent — act immediately
Gift cards: call the issuer’s fraud line immediately — some codes can be blocked before they are redeemed. Bank wire: call your bank immediately and report the unauthorised transfer — recovery is possible if you act within hours. Cryptocurrency: extremely difficult to recover — document everything and report immediately.
4
Report to the FTC and local law enforcement
File a report at reportfraud.ftc.gov (US). File a report with your local police (required for bank fraud recovery in many cases). Save any call logs, voicemails, or text messages related to the scam.
5
Share the experience — it protects others
Report the number to spam databases via your carrier. Share your experience with family and friends. Every person who knows the warning signs is a harder target. Scammers rely on shame keeping victims silent.

Protecting Your Business from AI Voice Fraud

For business owners, finance teams, and executives: the most effective protection is a clear, written policy that no financial action — regardless of how it is requested — can be authorised by phone call alone. Implement these controls and share them with every team member who handles financial requests:

📋 POLICY: Two-Channel Verification
Any financial request made by phone must be confirmed via a second, independent channel — email to a known address, in-person confirmation, or a call-back to an office number already on file. No exceptions for urgency.

👥 POLICY: Two-Person Authorisation
Any wire transfer above a defined threshold requires approval from two separate authorised individuals. A single phone instruction from a CEO or CFO — however convincing the voice — cannot meet this requirement alone. A minimal software sketch of this rule appears after these policies.

🎓 TRAINING: Awareness Programme
Brief every employee who handles financial requests about voice cloning technology. The awareness alone — knowing the technology exists — dramatically reduces susceptibility. Conduct simulated tests annually.

📞 VERIFY: Establish Callback Protocols
Maintain an up-to-date internal directory of direct phone numbers for executives. Any financial call from a senior executive is verified by calling their direct line from the directory — not the number that called you.
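
For finance teams whose payment workflow already runs through internal tooling, the two-person rule above can also be enforced in software rather than by convention alone. The sketch below is a minimal illustration under assumed names: the $10,000 threshold, the WireRequest structure, and the approve/may_execute functions are hypothetical examples for this article, not features of any particular payment platform.

```python
# Minimal sketch of the two-person authorisation rule for outgoing wires.
# The threshold, names, and workflow here are illustrative assumptions,
# not a reference to any real payment system. Adapt to your written policy.

from dataclasses import dataclass, field

TWO_PERSON_THRESHOLD = 10_000  # example threshold in dollars; set per your policy


@dataclass
class WireRequest:
    amount: float
    beneficiary: str
    requested_by: str                             # who asked for the transfer
    approvals: set = field(default_factory=set)   # distinct approvers recorded so far

    def approve(self, approver: str) -> None:
        # The requester never counts as an approver, and a phone
        # instruction alone never adds anything to this set.
        if approver != self.requested_by:
            self.approvals.add(approver)

    def may_execute(self) -> bool:
        # Below the threshold one approval suffices; at or above it,
        # two separate authorised individuals must have signed off.
        required = 2 if self.amount >= TWO_PERSON_THRESHOLD else 1
        return len(self.approvals) >= required


if __name__ == "__main__":
    req = WireRequest(amount=250_000, beneficiary="Acme Holdings",
                      requested_by="ceo@example.com")
    req.approve("cfo@example.com")
    print(req.may_execute())   # False: one approval is not enough at this amount
    req.approve("controller@example.com")
    print(req.may_execute())   # True: two distinct approvers recorded
```

The point is the policy the sketch encodes, not the code itself: the person requesting the transfer can never be one of its approvers, and a convincing voice on the phone adds nothing to the approval count.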

The Action This Article Requires Is Simple
Put Down This Article and Call Your Parents.
Establish the Codeword Today.

Everything in this guide becomes useless if you read it and move on. The one action that changes your family’s vulnerability to AI voice scams is a two-minute phone call. Make it today. Share this article with people who matter to you.

Frequently Asked Questions — AI Scam Calls

How do AI voice cloning scam calls work?
Criminals collect 3–30 seconds of audio from someone’s social media, YouTube, or voicemail, feed it into an AI voice synthesis tool to generate a clone, then call family members using that cloned voice to create an emergency scenario (accident, arrest, kidnapping) and demand irreversible payments via gift cards, wire transfer, or cryptocurrency.
What are the warning signs of an AI scam call?
Extreme urgency that cannot wait, refusal to let you hang up and call back, requests for gift cards or wire transfers, a third party joining the call to pose as an authority figure, refusal of a video call, a voice that sounds slightly flat or lacks natural breathing, and inability to answer a pre-agreed family codeword.
What is a voice codeword and how does it protect against AI scam calls?
A family codeword is a pre-agreed secret word only real family members know. AI can clone a voice but cannot clone private knowledge. Ask any caller claiming to be a family member for the codeword — the real person answers immediately. Establish one with your family today after reading this article.
What should I do if I receive a suspected AI scam call?
Stay calm. Ask for the codeword. If no codeword exists, say you will call back and hang up. Call the person being impersonated on their saved number. Do not send any money, gift cards, or cryptocurrency before verifying. If money was already sent, contact your bank and the gift card issuer immediately, then report to reportfraud.ftc.gov.
Can AI scam calls be used against businesses?
Yes — CEO fraud using cloned executive voices is an increasingly documented attack. Criminals clone voices from earnings calls or public presentations and call finance staff requesting urgent wire transfers. Businesses must implement two-channel verification and two-person authorisation for all financial requests regardless of how convincing the voice sounds.

Mr Elite
Founder, SecurityElites.com | Security Researcher | Educator

As a security professional, the technology that concerns me most is not the sophisticated state-level tools — it is the consumer-accessible ones that lower the barrier to fraud dramatically. AI voice cloning has done that. The threat is real, documented, and accelerating. But it has a complete, simple defence that requires no technology, no app, and no expertise: a codeword agreed on in person. Two minutes of conversation between you and your family members makes this attack fail every time. Please make that call today.
