Synthetic Identity Fraud & Synthetic Voices: How They’re Reshaping Contact Center Risk Profiles
By Milind Borkar, Illuma
Contact centers have always stood at the intersection of convenience and risk – working to deliver fast, effortless service to real customers while keeping increasingly sophisticated fraudsters out. Today that balance is under pressure from two related threats: synthetic identities (engineered profiles made of real + fake data) and synthetic voices (AI-generated or cloned speech that impersonates real people). Together, these tools let fraudsters create fake identities and even sound like real people on the phone – making it easier and faster to launch attacks that contact centers must now defend against.
What Are Synthetic Identities and Synthetic Voices?
Synthetic identities are “Frankenstein” identities stitched together from stolen or public Personally Identifiable Information (PII) – Social Security numbers, dates of birth, addresses – plus fabricated names, emails, phone numbers, and activity patterns. Fraudsters use them to open accounts, pass automated Know Your Customer (KYC) checks, build credit history, and then exploit those accounts for loans, refunds, or fraud intermediaries. These identities can be long-lived and difficult to trace because they aren’t tied to a single real victim in a straightforward way.
Synthetic voices are audio clones or voice models generated by AI. They range from quick, low-quality impersonations to extremely convincing deepfakes created from mere seconds of source audio. Criminals use them for social engineering: to authorize transactions, convince agents to change account details, or attempt to bypass voice-based authentication. Recent industry analyses predict sharp increases in deepfake and voice fraud across contact centers.
How Attackers Combine Them
- Create a believable persona. Build an account using a synthetic identity (a real Social Security number + a fake name + a fabricated credit or activity history).
- Train or source a voice. Use scraped audio, public videos, or a cheap voice-cloning service to generate a voice match for the fabricated persona.
- Social-engineer the contact center. Call support, present the synthetic identity, use the cloned voice to match the victim’s profile or to persuade agents to reset passwords, approve payouts, or change linked contact details.
- Cash out. Move funds, request refunds, transfer benefits, or create downstream fraud chains (returns, credit card chargebacks, loan fraud). The attack can be repeated at scale because the identities and voices are reusable.
Why They’re Hard to Detect
- No single red flag. Synthetic identities are designed to look like plausible, low-risk customers: mixed real/fake data, patched credit/activity histories, and carefully crafted social traces. Traditional rules that look for exact matches to stolen IDs often miss them.
- Automation accelerates scale. Generative AI dramatically reduces the time and cost to fabricate identities and voices – what once took months now takes minutes. That makes routine defenses obsolete if they aren’t automated and adaptive.
- Voice authenticity is a moving target. High-quality voice synthesis can mimic subtle prosody and cadence. Simple playback detection or reliance on short text-dependent phrases can be defeated. Analysts expect deepfake voice fraud to grow sharply year-over-year.
Smarter Voice Security Strategies for a Trusted Contact Center
No single safeguard can stop synthetic identity and voice fraud. True protection comes from layered, intelligence-driven defenses that verify authenticity at every step and make attacks slow, difficult, and costly to execute.
- Strengthen verification with advanced voiceprint intelligence. Illuma’s AI-driven AudioPrint™ intelligence continuously authenticates callers in the background of natural conversation – recognizing each speaker’s unique vocal characteristics while detecting signs of synthesis or manipulation. This passive approach delivers frictionless verification for legitimate callers and stops fraudulent callers in real time.
- Add adaptive multi-factor authentication that adjusts to risk. Static rules can’t keep up with evolving fraud tactics. Adaptive MFA dynamically layers additional verification – such as device checks or contextual risk scoring – only when anomalies arise. This keeps interactions fast and seamless for trusted users while adding protection where it’s needed most.
- Build deepfake resistance into every layer of defense. Attackers now use AI-generated voices and synthetic identities to exploit traditional systems. Illuma’s deepfake-resistant technology analyzes subtle audio cues, spectral fingerprints, and behavioral inconsistencies that even trained ears can miss. Combined with ongoing model retraining and human + AI collaboration, this creates a defense that evolves as fast as the threat does.
- Empower agents through human + AI collaboration. Even the best automation benefits from human oversight. Illuma blends intelligent automation with agent-assist tools that flag suspicious activity and enforce secure workflows. This partnership between people and technology reduces social-engineering success and strengthens every frontline interaction.
- Unite detection, insight, and response. Fraud doesn’t happen in isolation. Integrating voice, device, and behavioral intelligence across the customer journey enables faster detection of synthetic patterns and a coordinated response. When signals are shared across systems, the entire organization becomes more resilient – and every conversation becomes more trusted.
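To make the adaptive, risk-based approach above concrete, here is a minimal sketch of step-up decision logic that layers a voiceprint-match score, a synthesis-detection score, a device check, and request sensitivity. Every name, threshold, and weight here is a hypothetical illustration for this article – not Illuma’s actual scoring model, product API, or recommended tuning:

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    voice_match: float       # 0.0-1.0 similarity to the enrolled voiceprint (assumed signal)
    synthesis_score: float   # 0.0-1.0 likelihood the audio is AI-generated (assumed signal)
    known_device: bool       # caller ID / device previously seen on this account
    sensitive_request: bool  # e.g., payout, password reset, change of linked contact details

def decide(signals: CallSignals) -> str:
    """Return 'allow', 'step_up', or 'block' from layered risk signals.

    Thresholds and weights are illustrative assumptions, not calibrated values.
    """
    # Strong evidence of a cloned/synthetic voice ends the interaction outright.
    if signals.synthesis_score > 0.8:
        return "block"

    risk = 0.0
    risk += 1.0 - signals.voice_match            # weak voiceprint match raises risk
    risk += 0.0 if signals.known_device else 0.4 # unfamiliar device adds risk
    risk += 0.3 if signals.sensitive_request else 0.0

    if risk < 0.5:
        return "allow"    # low risk: keep the call frictionless
    if risk < 1.0:
        return "step_up"  # anomalies present: layer in extra verification (MFA)
    return "block"

# A trusted caller on a known device sails through; an unfamiliar device
# plus a sensitive request triggers step-up verification instead.
print(decide(CallSignals(0.95, 0.05, True, False)))  # → allow
print(decide(CallSignals(0.80, 0.20, False, True)))  # → step_up
```

The design point is the one the bullet makes: extra friction is applied only when the combined signals look anomalous, so legitimate callers rarely see it while attackers face escalating barriers.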
The Cost of Inaction
Voice deepfakes and synthetic identities are no longer future threats – they’re here. Fraud volumes and sophistication are climbing fast, with industry forecasts projecting contact center exposure in the tens of billions of dollars if defenses don’t keep pace. Regulators are taking notice, and organizations that rely on outdated, static identity controls risk growing losses, reputational damage, and compliance failures.
Resilience Through Layered Intelligence
Synthetic identities and synthetic voices are potent because they exploit disjointed defenses: convincing data in one place, convincing audio in another, and a human agent in the middle. The antidote is integrated, adaptive defenses that operate across the full customer journey: continuous and passive authentication during conversations, real-time deepfake resistance, and human + AI collaboration to make every interaction trusted and seamless.
At Illuma, we believe the future of contact-center security must be friction-free for legitimate customers and airtight against AI-assisted attack chains – continuous voice verification, layered security, and agent protection working together. If your contact center still treats voice as a weak or static control, now is the time to evolve: attackers already have the tools – it’s your turn to raise the bar.
Connect with Illuma to learn more.


