
Voice-Clone Spoofing in Financial Fraud

Voice cloning has been used in financial scams to target both private individuals and large corporate entities.

What Is Voice Cloning, and How Can It Be Used in Financial Fraud?

Zimbabwe’s vice-president claims his voice was cloned as part of a smear campaign

Voice cloning is a machine learning-based technology that can plausibly mimic a person's voice. With the help of neural models, it is possible to imitate the timbre, intonation, accent, and even the emotion inherent in human speech.

Voice cloning features in spoofing attacks that work much like phishing. The difference is that instead of deceptively crafted emails, criminals use synthetic voice samples to pass as a corporate superior, a business partner, a family member in distress, a bank employee, or an authority figure.

Mechanisms of AI Voice Cloning in Financial Fraud

Voice cloning relies on Artificial Intelligence (AI) and follows a simple algorithm:

  1. The target’s voice is sampled.
  2. The samples are fed to the AI model.
  3. The model extracts the acoustic characteristics of the audio.
  4. The model synthesizes the cloned voice, shaping white noise until it matches the acoustic profile extracted from the samples.

There are various sources to get a voice sample from: social media, voice notes, secretly recorded speech, etc.
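The pipeline above can be illustrated with a deliberately simplified sketch. Real cloning models learn far richer representations; here the "voice sample" is a synthetic harmonic tone, the "acoustic characteristics" are just band energies of its spectrum, and the synthesis step is reduced to shaping white noise to match that envelope. All names and parameters are illustrative.

```python
import numpy as np

SR = 16_000  # sample rate in Hz (assumption for this sketch)

def make_sample(f0=140.0, dur=1.0):
    """Stand-in for a recorded voice sample: a harmonic tone at pitch f0."""
    t = np.arange(int(SR * dur)) / SR
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))

def spectral_envelope(x, n_bands=64):
    """Step 3: summarize the audio's acoustic profile as band energies."""
    mag = np.abs(np.fft.rfft(x))
    return np.array([b.mean() for b in np.array_split(mag, n_bands)])

def resynthesize(envelope, dur=1.0, seed=0):
    """Step 4: shape white noise so its spectrum matches the target envelope."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(SR * dur))
    spec = np.fft.rfft(noise)
    gains = np.repeat(envelope, int(np.ceil(len(spec) / len(envelope))))[:len(spec)]
    return np.fft.irfft(gains * spec, n=len(noise))

sample = make_sample()
env = spectral_envelope(sample)
fake = resynthesize(env)

# The shaped noise now has an envelope resembling the original sample's
similarity = np.corrcoef(env, spectral_envelope(fake))[0, 1]
print(round(similarity, 2))
```

The point of the sketch is only that matching a compact acoustic profile is enough to make two otherwise unrelated signals sound spectrally alike; production systems model pitch contours, phonemes, and prosody on top of this.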

A malicious actor typically turns to a voice-cloning-as-a-service (VCaaS) platform. These platforms are sometimes free, though with lower-quality output. More tech-savvy fraudsters can train their own models using a Recurrent Neural Network or a Generative Adversarial Network (RNN or GAN). Though this method yields more convincing speech, it is time-consuming, as training can require hours of speech samples, transcription and data cleaning, considerable computational power, and so on.

ZSE-VITS is a zero-shot voice-cloning model capable of emulating emotions in speech

Notable Incidents of Voice Clone Spoofing in the Financial Sphere

The first known instance of a voice-cloning attack took place in 2019, when a UK energy firm lost $243,000 after a phone call in which attackers used a synthesized CEO's voice to instruct staff to transfer the money. The incident brought wide attention to voice phishing, or “vishing.”

  1. Virtual Kidnapping

The incident happened in January 2023, when Jennifer DeStefano, a mother of two daughters from Arizona, received a call in which she heard her older daughter desperately pleading for help. A man posing as a kidnapper demanded ransom money and warned her not to contact the authorities.

However, it turned out that the allegedly kidnapped girl was safe at a ski resort. As Mrs. DeStefano later commented, “[her] voice was so real, and her crying and everything was so real.” According to the FBI, the call may have come from Mexico, the origin of many similar calls.

DeStefano family who experienced a scam attack featuring voice cloning

  2. Newfoundland Cases (Grandchild Scheme)

As reported by the Royal Newfoundland Constabulary, at least eight seniors were defrauded of a combined $200,000 in a “grandchild scheme” over the course of three days. In each case, a senior citizen was phoned by their “grandchild,” who claimed in a panic to have been arrested after a car accident. The goal was to extract “bail money.”

  3. Cloning a Company Director’s Voice

During a corporate acquisition deal, an unnamed U.A.E. company fell victim to a voice spoofing attack that cost it $35 million. The scenario was again a CEO scam: a manager was instructed to transfer money to a specific bank account. The call was corroborated with artfully crafted emails that appeared to come from the company’s superiors.

  4. Chief Financial Officer’s Deepfake Incident (CFO Scam)

A CFO scam worth $25 million was orchestrated in January 2024. It was a more elaborate hoax: the fraudsters staged an entire video conference featuring convincing deepfakes of the CFO’s colleagues, supposedly crafted from publicly available footage that was later modified.

  5. Breaking into a Bank Account

In 2023, a Vice journalist managed to bypass his bank’s voice recognition security system by recreating his own voice with the publicly available platform ElevenLabs. After playing back the passphrase “My voice is my password” in the mimicked voice, he gained access to the account details.
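A toy model shows why a static-passphrase voiceprint check can be fooled by an accurate clone. This is not how any real bank's system works: the "embedding" here is just normalized spectral band energies, and the genuine and cloned voices are synthetic tones, so it is only a sketch under those assumptions.

```python
import numpy as np

def voiceprint(audio, n_bands=128):
    """Toy speaker embedding: unit-normalized band energies of the recording."""
    mag = np.abs(np.fft.rfft(audio))
    bands = np.array([b.mean() for b in np.array_split(mag, n_bands)])
    return bands / np.linalg.norm(bands)

def verify(enrolled, attempt, threshold=0.9):
    """Accept if the cosine similarity of the two voiceprints clears the bar."""
    return float(enrolled @ attempt) >= threshold

rng = np.random.default_rng(1)
t = np.arange(16_000) / 16_000

# "Enrolled" voice: a tone with a particular pitch profile
genuine = np.sin(2 * np.pi * 130 * t) + 0.4 * np.sin(2 * np.pi * 260 * t)
enrolled = voiceprint(genuine)

# An accurate clone reproduces the same spectral profile (plus slight noise)...
clone = genuine + 0.02 * rng.standard_normal(t.size)
# ...while an unrelated voice has a different pitch profile
other = np.sin(2 * np.pi * 210 * t) + 0.4 * np.sin(2 * np.pi * 420 * t)

print(verify(enrolled, voiceprint(clone)))   # clone is accepted
print(verify(enrolled, voiceprint(other)))   # unrelated voice is rejected
```

The check distinguishes different voices, but anything that reproduces the enrolled voice's acoustic profile closely enough, including a synthetic clone, passes it. That is why similarity-only voice authentication needs liveness detection on top.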

  6. Double Call to the Bank

An unsuccessful scam targeted a Florida-based investor and Bank of America in 2023. After the legitimate account holder’s call to the bank, a second call came in, instructing the bank manager to wire the money elsewhere. Apparently, the target’s calls were being monitored to find a window of opportunity and make the fraud look less suspicious.

Countermeasures and Prevention of Cloned Voice Spoofing Attacks

To avoid cloned voice scams, it is suggested to:

  • Employ verification protocols, such as calling the person back on an independently verified number.
  • Avoid exposing your voice online.
  • After a suspicious call, immediately contact the person whose identity may have been impersonated.
  • Pay attention to odd quirks in the caller’s speech: timbre distortion, robotic or unnatural intonation, abrupt phrasing, unusually short sentences, and so on.
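The first point, verification protocols, can be made concrete with a minimal out-of-band callback rule: a payment request is approved only after the requester has been reached on a number from the organization's own directory, never on the number the call came from. All contact names and numbers below are hypothetical.

```python
from dataclasses import dataclass

# Directory of independently verified contacts (e.g., the company phonebook),
# never populated from an incoming call. Entries are illustrative.
KNOWN_CONTACTS = {"cfo": "+1-555-0100"}

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    callback_confirmed: bool = False

def confirm_out_of_band(request, reached_number):
    """Mark the request verified only if we called the requester back
    on the number from our own directory."""
    if reached_number == KNOWN_CONTACTS.get(request.requester):
        request.callback_confirmed = True
    return request.callback_confirmed

def approve(request):
    """A wire transfer goes through only after out-of-band confirmation."""
    return request.callback_confirmed

req = PaymentRequest(requester="cfo", amount=35_000_000)
print(approve(req))                                    # False: a voice alone is not enough
confirm_out_of_band(req, reached_number="+1-555-0100")
print(approve(req))                                    # True after the independent callback
```

The design choice worth noting is that the cloned voice never enters the decision: approval depends only on a channel the attacker does not control.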

A dose of healthy skepticism also helps mitigate the threat.

To read about voice liveness, its place in antispoofing, and the challenges facing emerging technologies, see our next article.
