The Voice Deepfake Shield: Personal Security Prompt
Create a comprehensive personal and family defense plan against AI voice cloning scams, vishing attacks, and social engineering that exploits synthetic media.
Prompt
The Voice Deepfake Shield
Context
Voice cloning crossed the "indistinguishable threshold" in early 2026: three seconds of audio is now enough to generate a convincing clone with natural intonation, breathing, and emotion. AI-powered vishing (voice phishing) attacks are up 400% year-over-year. You are a cybersecurity advisor specializing in synthetic media threats, helping an individual or family build practical defenses.
User Profile
[DESCRIBE YOUR SITUATION, e.g., "family of 4, parents are 60+, kids are teenagers", "solo founder who does podcasts", "executive with public speaking footage online"]
Defense Plan
1. Threat Assessment
Based on the user's profile, assess:
Voice exposure level: How much of your voice is publicly available? (podcasts, YouTube, social media, conference talks, voicemail greetings)
Attack surface: Who in your circle is most vulnerable to being fooled? (elderly parents, children, assistants, employees)
Value targets: What could an attacker gain? (financial transfers, confidential info, account access, reputational damage)
Risk tier: Low / Medium / High / Critical
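The three assessment factors above can be combined into a rough tier with a simple scoring heuristic. This is an illustrative sketch only; the weights and thresholds below are assumptions, not an established standard, and should be tuned to your own situation.

```python
# Illustrative risk-tier heuristic. Weights and thresholds are
# assumptions for the sketch, not an established scoring standard.

def risk_tier(voice_exposure_seconds, vulnerable_contacts, high_value_targets):
    """Combine the three assessment factors into a rough tier."""
    score = 0
    # More public audio means easier, higher-quality cloning.
    if voice_exposure_seconds >= 600:
        score += 3
    elif voice_exposure_seconds >= 60:
        score += 2
    elif voice_exposure_seconds > 0:
        score += 1
    # Each vulnerable contact (elderly parent, child, assistant) adds risk.
    score += min(vulnerable_contacts, 3)
    # Each value target (wire authority, account access, reputation) adds risk.
    score += min(high_value_targets, 3)
    if score >= 7:
        return "Critical"
    if score >= 5:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# A podcaster with two elderly parents and two value targets:
print(risk_tier(1200, 2, 2))  # prints Critical
```

A higher tier simply means the later sections deserve stricter settings (lower transfer limits, more frequent safe-word rotation).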
2. Family Safe Word Protocol
Design a verification system:
Safe word: A random, non-guessable word or phrase that every family member/close contact knows. NEVER say it on a recorded call or in any public setting.
Challenge-response pairs: questions such as "What did we have for dinner on [specific date]?", things only the real person would know.
Duress signal: A different word that means "I'm being coerced, call the police."
Rotation schedule: Change safe words quarterly.
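The safe-word and duress logic above can be sketched in a few lines. In real life this check happens in conversation, not in software; the words and response strings here are placeholders, purely to make the decision order (duress first, then safe word, then reject) explicit.

```python
# Minimal sketch of the safe-word protocol. The words below are
# placeholders; a real family would pick its own and rotate them.

SAFE_WORD = "blue-walrus-picnic"     # known only to the family
DURESS_WORD = "grandmas casserole"   # signals "I'm being coerced"

def verify_caller(spoken_phrase):
    """Check duress first: a coerced caller may say both words."""
    phrase = spoken_phrase.strip().lower()
    if DURESS_WORD in phrase:
        return "DURESS: end the call and contact the police"
    if SAFE_WORD in phrase:
        return "verified"
    return "unverified: hang up and call back on a saved number"
```

Note the ordering: the duress word is checked before the safe word, so a coerced family member who is forced to give the safe word can still trigger the alarm.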
3. Communication Hygiene
Specific, actionable rules:
Callback rule: If anyone calls requesting money, access, or urgent action, hang up and call them back on a number you already have saved. Never use the number they give you.
3-second rule: Assume any unexpected call from a known voice could be cloned. Verify before acting.
No voice-only wire transfers: Any financial transaction over $[AMOUNT] requires a secondary channel confirmation (text + call, or in-person).
Voicemail hardening: Consider removing or shortening voicemail greetings to reduce clonable audio.
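The "no voice-only wire transfers" rule above is really a two-channel policy, and can be stated precisely as a small check. The $500 threshold and channel names are placeholders standing in for your own [AMOUNT] and verification methods.

```python
# Sketch of the secondary-channel rule for transfers. The limit
# and channel labels are placeholders, not recommendations.

TRANSFER_LIMIT = 500  # dollars; substitute your own [AMOUNT]

def transfer_allowed(amount, channels_confirmed):
    """Require two independent confirmation channels above the limit."""
    if amount <= TRANSFER_LIMIT:
        # Small amounts: one confirmed channel is enough.
        return len(set(channels_confirmed)) >= 1
    # e.g. {"callback_on_saved_number", "text"} counts as two channels.
    return len(set(channels_confirmed)) >= 2
```

The key property is independence: a voice call plus a text message must travel different paths, so an attacker who cloned the voice still has to compromise a second channel.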
4. Digital Footprint Audit
Walk through:
What voice samples exist publicly? List platforms and estimate total seconds of audio.
Can any be removed or made private?
Are there video interviews, podcast appearances, or conference talks? These are the richest cloning sources.
Audit social media voice notes, Instagram stories, and TikTok clips, and decide what stays.
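The audit steps above boil down to a simple inventory: list each platform, estimate the seconds of audio, and mark what can be removed. The entries below are made-up examples you would replace with your own findings.

```python
# Hedged sketch of a footprint inventory. Platforms and durations
# are example data, not a real audit.

voice_samples = [
    {"platform": "Conference talk (YouTube)", "seconds": 1800, "removable": False},
    {"platform": "Podcast episode",           "seconds": 3600, "removable": False},
    {"platform": "Voicemail greeting",        "seconds": 15,   "removable": True},
]

total = sum(s["seconds"] for s in voice_samples)
reducible = sum(s["seconds"] for s in voice_samples if s["removable"])
print(f"Public audio: {total}s, removable: {reducible}s")
```

Even when most audio cannot be removed, the tally shows where your exposure comes from and which sources (often voicemail greetings and voice notes) are the easiest wins.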
5. Technical Defenses
Recommend tools and settings:
Caller authentication apps that flag AI-generated audio
Bank/financial institution settings: add voice verification PINs, set transfer limits, enable multi-factor authentication for all transactions
Email/messaging β enable advanced phishing protection where available
For organizations: Consider deepfake detection APIs (Pindrop, Resemble AI Detect) on customer-facing voice channels
6. Incident Response Plan
If you suspect a voice deepfake attack:
Do NOT engage further. Hang up immediately.
Document: time, number, what was said, what was requested.
Verify the real person's safety through a different channel.
Report to: local law enforcement, FTC (reportfraud.ftc.gov), your bank's fraud line.
If financial damage occurred: freeze affected accounts within 30 minutes.
Output
Deliver a personalized, printable security plan (Markdown format) that the user can share with their family. Use plain language: it must be understandable by a 70-year-old grandparent and a 15-year-old teenager alike. Include a one-page cheat sheet summary at the end.