Notable Cases of Spoofing Attacks and Bypassing Liveness Detection

Liveness detection is still vulnerable to spoofing, as a number of successful presentation attacks have demonstrated.

General Methods of Bypassing Liveness Detection

Liveness detection is a necessary component of today’s verification systems. However, the spread of biometric verification to consumer devices (the Internet of Things) and smartphones has led to a rise in biometric spoofing, which currently manifests in numerous Presentation Attacks (PAs).

PAs range in sophistication, execution, and the damage they produce. Tools and methods are mostly determined by the type of sensor attacked. Fraudsters replicate facial features, fingerprints, voices, and in some cases even eyes. Certain systems have proven highly vulnerable: Face ID was reportedly unlocked using glasses with black dots taped to them, which imitated eyes. Another experiment showed that an iPhone X could be unlocked with a 3D mask that cost only $150 to make.

Presentation Attack Instruments (PAIs) vary significantly as well: from printed photos to sophisticated face or voice manipulations and realistic masks made from silicone elastomers that can simulate human skin.

At the same time, bypassing or hacking is also practiced in biometric spoofing. It employs injection or data-swapping attacks, which tamper with the signal received by a sensor or manipulate the biometric data stored inside the system. Such attacks target compromised or stolen devices, intercepted internet traffic, and servers that store data essential for verification.
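
As a defensive illustration of the injection and data-swapping surface, a capture device can authenticate the payload it sends so that data swapped in transit is rejected. The following is a minimal Python sketch, assuming a shared HMAC key between sensor and server; the function names and payload format are hypothetical, and real systems would keep per-device keys in secure hardware and use proper device attestation.

```python
import hashlib
import hmac
import os
import time

# Hypothetical shared secret; a real sensor would keep a per-device key
# in secure hardware (TPM, Secure Enclave), never in application code.
DEVICE_KEY = os.urandom(32)

def sign_capture(image_bytes: bytes) -> dict:
    """Sensor side: tag the raw capture with a timestamp and an HMAC."""
    timestamp = str(int(time.time())).encode()
    tag = hmac.new(DEVICE_KEY, timestamp + image_bytes, hashlib.sha256).hexdigest()
    return {"image": image_bytes, "timestamp": timestamp, "tag": tag}

def verify_capture(payload: dict, max_age_s: int = 30) -> bool:
    """Server side: reject tampered (data swap) or stale (replay) payloads."""
    expected = hmac.new(DEVICE_KEY, payload["timestamp"] + payload["image"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, payload["tag"]):
        return False  # image or timestamp was altered after capture
    return int(time.time()) - int(payload["timestamp"]) <= max_age_s

payload = sign_capture(b"...raw frame bytes...")
payload["image"] = b"...attacker-chosen frame..."  # simulated data-swapping attack
assert verify_capture(payload) is False
```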


Face Spoofing Attacks

There are multiple ways and means to perform a face spoofing attack. Common tools include digital face manipulation, face synthesis from scratch, morphing, 2D/3D masks, cutouts, and so on. Some methods follow a complex, multi-step procedure that at times successfully circumvents liveness detection.

One such method allows a fraudster to complete digital onboarding with a fabricated identity and involves seven steps in total:

  1. Synthetic face. To obtain a picture of a nonexistent person’s face, any of a number of generators can be used: Thispersondoesntexist.com, 100,000 Faces, Virtual Models by Rosebud AI, Generated Photos, and others; a minimal fetch sketch follows this list.
  2. Face swapping. The synthetic face is then placed on a random individual’s photo to make it look more believable. This can be done with traditional software like Photoshop or a mobile editor like FaceApp, B612, MSQRD, Face Swap Live, and others.
  3. Polishing. The photo undergoes extra doctoring to make it look more organic and erase visible artifacts left by editing.
  4. Multiplication. At this stage, a few more copies of the fake photo are created with different expressions, eye directions, or head positions. This step is necessary if the security system is challenge-based. It is achievable with apps like Mug Life, Kaedim Platform, Colorful Studio Beta, etc.
  5. Fake ID. The photo and its altered copies are placed on an ID template.
  6. Onboarding. An application like Digital Onboarding Toolkit is downloaded to register the fake document through a legitimate flow. The fraudster performs all the actions required by the active verification system simply by showing the altered copies of the main spoof photo to the camera.
  7. Completion. The onboarding system recognizes and registers the fake document, as demonstrated during the experiment.
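
To illustrate the first step above: at the time of writing, Thispersondoesntexist.com serves a freshly generated face image at its root URL, which is also how researchers commonly collect synthetic faces for spoof-detection test sets. A minimal Python sketch (assuming the third-party requests package and that the site’s behavior hasn’t changed) looks like this:

```python
import requests

# Each request to the root URL returns a new GAN-generated face (observed
# behavior at the time of writing; the site may change or rate-limit).
response = requests.get(
    "https://thispersondoesntexist.com",
    headers={"User-Agent": "pad-research/0.1"},  # some hosts reject empty agents
    timeout=10,
)
response.raise_for_status()

with open("synthetic_face.jpg", "wb") as f:
    f.write(response.content)
print(f"Saved {len(response.content)} bytes")
```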

However, the success of this method also depends on which liveness detection techniques the system employs. State-of-the-art solutions can effectively spot 2D objects that lack the depth and other natural properties of a human face.
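
A classical baseline for catching such flat presentations is micro-texture analysis: recaptured faces (prints, screens) exhibit different local texture statistics than live skin. The sketch below loosely follows the well-known LBP-plus-classifier approach; the stand-in random data, crop size, and LBP parameters are assumptions for illustration, and production systems rely on much stronger cues (depth, reflectance, challenge-response).

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Micro-texture descriptor: normalized histogram of uniform LBP codes."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Stand-in data so the sketch runs end to end; real training would use
# labeled grayscale face crops from a PAD dataset such as Replay-Attack.
rng = np.random.default_rng(0)
live_crops = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(20)]
spoof_crops = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(20)]

X = np.array([lbp_histogram(c) for c in live_crops + spoof_crops])
y = np.array([1] * len(live_crops) + [0] * len(spoof_crops))
clf = SVC(kernel="rbf").fit(X, y)

def looks_live(gray_face: np.ndarray) -> bool:
    """1 = live skin texture, 0 = likely 2D recapture (print or screen)."""
    return bool(clf.predict([lbp_histogram(gray_face)])[0])
```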

A different technique also employs a synthetic face, together with Avatar SDK for sculpting its 3D copy and an OBS virtual camera for performing an injection-type attack. However, this approach doesn’t target the liveness detection system directly; rather, it exploits backend weaknesses.
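
A coarse client-side mitigation against such injection is to check whether the selected capture device presents itself as a known virtual camera. Below is a minimal, Windows-only sketch assuming the third-party pygrabber package (a DirectShow wrapper); since device names are ultimately attacker-controlled, this is a weak signal, and robust defenses verify capture integrity on the server side instead.

```python
# Windows-only: enumerate DirectShow capture devices and flag names
# associated with virtual cameras. Requires: pip install pygrabber
from pygrabber.dshow_graph import FilterGraph

# Non-exhaustive, assumed list of telltale device-name substrings.
VIRTUAL_CAMERA_MARKERS = ("obs virtual", "virtual camera", "manycam", "snap camera")

def suspicious_devices() -> list[str]:
    names = FilterGraph().get_input_devices()  # e.g. ["Integrated Camera", "OBS Virtual Camera"]
    return [n for n in names if any(m in n.lower() for m in VIRTUAL_CAMERA_MARKERS)]

if flagged := suspicious_devices():
    print("Possible injection source(s):", flagged)
```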


Voice Spoofing Attacks

Voice spoofing is currently considered the most effective fraud technique, as it is simple to orchestrate, relies on social engineering, and remains hard to detect. Commonly, there are two attack scenarios: speech synthesis and voice conversion.

Scenario 1. Fraudsters employ a deep learning-based voice-cloning tool to imitate the target’s voice. After ample training, such a tool can produce fairly realistic output, even ‘picking up’ subtle nuances of the target’s speech: accent, intonation, vocal range, tempo-rhythm, and so forth.

What worsens the situation is that voice-mimicking tools are freely available online. A well-known example is CyberVoice (Steosvoice), which was used to mimic Doug Cockle’s voice acting for a Witcher 3 fan mod. Similar solutions include Replica, Resemble.ai, Speechelo, Adobe Voco (unreleased), and others. Training samples are usually taken from the target’s social media or recorded during a phone call.

Scenario 2. Voice conversion allows changing vocal characteristics of an uttered phrase without altering its linguistic content. In other words, a malicious actor can disguise their voice as someone else’s to convey any given message.
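
Real voice conversion relies on neural models trained toward a specific target speaker, but the core idea (changing vocal characteristics while leaving the words intact) can be illustrated with a crude pitch manipulation. The sketch below uses the librosa and soundfile packages; the file names are placeholders, and no convincing conversion quality should be expected from it.

```python
import librosa
import soundfile as sf

# Placeholder input: any short speech recording.
y, sr = librosa.load("recorded_phrase.wav", sr=None)

# Shift the pitch by four semitones. The linguistic content (the words)
# is untouched; only the vocal characteristics change. Neural voice
# conversion does this far more convincingly, matching a target voice.
y_disguised = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)

sf.write("disguised_phrase.wav", y_disguised, sr)
```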

At least two instances of successful voice spoofing attacks have been reported. The first documented incident took place in 2019. The target was a manager at a British energy firm who received an ‘urgent’ call from the company’s Germany-based CEO. During the call, the manager was instructed to wire €220,000 ($231,583) to a Hungarian bank account.

As noted, the realism of the fake voice was reinforced by the unexpectedness of the call and the overall stressfulness of the situation. Saurabh Shintre of the security company Symantec commented: "When you create a stressful situation like this for the victim, their ability to question themselves for a second <...> goes away".

The second incident resulted in more serious financial damage, with an unnamed U.A.E. company losing $35 million. The attack scheme was somewhat similar: malicious actors replicated the voice of a company director residing in Dubai and called a Hong Kong bank to confirm an acquisition and approve a multimillion-dollar transfer. Interestingly, the attack was backed by a number of forged emails, allegedly coming from the company’s legal representative. Classified by the FBI as Business Email Compromise (BEC), this tactic was in use years before deepfakes became publicly known.

Deepfake Attacks

To an extent, voice spoofing is a type of deepfake attack, as it employs deep learning for realistic speech synthesis. However, the term ‘deepfake’ mostly refers to fabricated videos featuring the likeness of real people, or to a live video stream in which someone else’s appearance is exploited.

This attack type has been envisioned in detail by many researchers. However, real-life incidents involving deepfake video attacks either haven’t been documented or remain undisclosed. This could partly be explained by the fact that video deepfake attacks are costly and aren’t always effective, especially when aimed at real people: according to an MIT study, context awareness can be used to reveal deepfakes.

One of the serious threats posed by deepfakes is misinformation. As reported by the World Economic Forum, deepfakes have gone from "a technology that began as little more than a giggle-inducing gimmick" to a serious political force capable of swaying elections through AI-powered fakery. For example, deepfake allegations triggered civil unrest in Gabon, which nearly resulted in a coup d'état.

Interpol notes that deepfakes can be used to create "a false narrative apparently originating from trusted sources", which applies broadly across the criminal spectrum. Schemes such as phishing or identity fraud can greatly benefit from this.

Another threat is reputation manipulation, which can result in tremendous losses for an individual or an organization. For instance, a fake statement made on behalf of a company or its CEO can cause its stock value to collapse.

References

  1. Face spoof detection test at FaceTec
  2. A review of iris anti-spoofing
  3. Masks, Animated Pictures, Deepfakes…—Learn How Fraudsters Can Bypass Your Facial Biometrics
  4. Hackers just broke the iPhone X's Face ID using a 3D-printed mask
  5. Materials used to simulate physical properties of human skin
  6. Spoof of Innovatrics Liveness
  7. Random Face Generator (This Person Does Not Exist)
  8. Spoof BioID with easy generated deepfake face
  9. CyberVoice
  10. An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft
  11. Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find
  12. Business Email Compromise
  13. Deepfake used for altering a music video
  14. Deepfake Detection by Human Crowds, Machines, and Machine-informed Crowds
  15. How misinformation helped spark an attempted coup in Gabon