General Overview
In liveness taxonomy, a Presentation Attack (PA) is the presentation of a certain physiological trait — a fingerprint, face, or voice — with the aim of bypassing a biometric security system. Numerous sources also use the term spoofing to describe the issue. These attacks are typically separated into two categories:
- Impostor attack. A perpetrator attempts to be recognized and authorized as someone else.
- Concealment attack. A perpetrator seeks to conceal their identity from facial recognition or other identification systems.
PAs involve a wide range of tools known as Presentation Attack Instruments (PAIs). The choice of PAI is dictated by the modality of the targeted biometric system. PAIs therefore range from simple face cutouts or videos replayed on a tablet to artificially synthesized voices, realistic silicone masks, prosthetic eyes, and even a deceased person’s body parts.

Presentation Attack Detection (PAD) was introduced in response to the threat of biometric spoofing. Its basis was initially outlined in the ISO/IEC 30107-1 standard, which established a framework and terminology for PA detection.
Statistics on Presentation Attacks are rarely collected or released to the public. However, their number, intensity, and creativity appear to grow every year. According to ID.me, 80,000 attempts were made to fool its face recognition system across 26 American states between June 2020 and January 2021.
According to Statista, 10.3 million attacks were conducted against Internet of Things (IoT) devices in October 2020 alone. Even though these are generally classified as malware attacks, a portion of them may be PAs, as IoT gadgets have proved vulnerable to Presentation Attacks.

Types of Presentation Attacks
One proposed PA taxonomy divides Presentation Attacks into accidental and deliberate. However, the former category may seem disputable, as the term attack essentially implies an adversarial action. The Antispoofing Wiki presents the following classification:
Non-malicious attacks
This class includes incidents in which the presentation of a biometric trait causes interference (a false alarm, misidentification, and so on) without any malicious intent on the subject’s side. Such incidents usually occur due to contact lenses, makeup, plastic surgery, or other medically prescribed or cosmetic artifacts.
One such episode took place in 2017, when a group of Chinese women could not return home after undergoing plastic surgery because their faces could no longer be identified. Even though identification was carried out by human personnel, such a scenario could potentially recur with an Automatic Border Control (ABC) system.
Malicious attacks
This class includes pre-planned, deliberate PAs performed to bypass a biometric system. They can be further separated into subclasses:
- With synthetic PAIs. These attacks can involve prosthetic body parts, synthesized images, deepfake videos, textured eye lenses, occluded face images, and others. Such PAIs can be static, dynamic, or mixed.
- With human-based PAIs. This class implies that genuine biometric traits are presented to a system. Imitation and mimicry also belong here, but only if they do not employ specialized software or other synthesis tools. These PAIs include live or dead body parts, an impersonated voice, manually forged handwriting, etc.
Additionally, PAs can be classified according to the targeted modalities:
Facial attacks
Facial attacks cover a wide range of techniques, from printed photos to digital face manipulations. The PAIs typically include:
- Masks. Masks may range in quality and sophistication, including elaborate silicone masks equipped with heat emitters to simulate facial warmth.
- Deepfakes. With the help of AI, it is possible to morph and swap faces in both static pictures and dynamic footage. Moreover, tools similar to Face2Face allow an attacker to ‘wear’ a deepfake like a mask in real time, which can be used to bypass an active liveness detection system.
- Printed photos. A rudimentary approach, in which a printed photo is presented to a system’s sensor.
- Replay attacks. A digital photo, prerecorded video or deepfake footage is replayed from a high-definition screen to the system’s camera.
- 3D Printed Faces. Advanced 3D printing technology can be used to create lifelike replicas of faces. These models can exhibit facial textures, structures, and features that may trick some facial recognition systems.
- Projected Faces. Digital projectors can be used to project a face onto a blank mask or even onto another person's face. When carefully executed, this could potentially fool some facial recognition systems by mimicking the person's face.
- Face Morphing. The blending of different facial features to create new identities. Criminals might combine their own facial features with those of their intended targets, confusing facial recognition systems.
- Virtual Reality Avatars. As VR technology improves, so too does the realism of VR avatars. These virtual representations may be used to simulate the likeness of a specific person, potentially bypassing some facial recognition systems.
- Cosmetic Surgery. Some individuals may go to extreme lengths, such as cosmetic surgery, to alter their facial features enough to evade detection by facial recognition systems. This is a more drastic and less accessible method, but it is not out of the realm of possibility.
- Makeup and Hairstyling. Certain types of makeup and hairstyles can alter a person's appearance significantly enough to evade facial recognition. Artists and experts in disguise have been using these methods for centuries, and the techniques can also be employed against modern technology.

It is worth noting that facial morphing (blending the faces of two or more individuals into one) is sometimes treated as a separate attack scenario. It is most often used for producing fake documents or enabling illegal border crossing.
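At its simplest, a morph is an alpha blend of two aligned face images; production-grade morphing tools additionally warp facial landmarks toward a shared geometry before blending. Below is a minimal sketch of the blending step only, assuming the inputs are already aligned, equally sized images:

```python
import numpy as np

def blend_morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Naive face morph: pixel-wise alpha blend of two aligned
    face images (H x W x 3, uint8). Real morphing tools also warp
    landmark geometry toward a shared average before blending."""
    assert face_a.shape == face_b.shape, "images must be aligned and equally sized"
    mixed = alpha * face_a.astype(np.float32) + (1.0 - alpha) * face_b.astype(np.float32)
    return mixed.clip(0, 255).astype(np.uint8)

# At alpha of roughly 0.5, the result can plausibly match BOTH contributors
# in a face recognition system, which is what makes morphed documents dangerous.
```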
Voice attacks
Voice spoofing comprises impersonation and artificial voice attacks. The former implies that a fraudster mimics idiosyncrasies innate to the target’s speech: intonation, tempo, pronunciation and other prosodic features, lexical preferences, logopaedic nuances, etc.

A more advanced and dangerous technique employs voice cloning based on deep learning. By analyzing a speaker's voice and encoding its characteristics into a vector embedding (as in SV2TTS), a voice-cloning tool can produce a highly realistic result. It can be successfully used to spoof both an Automatic Speaker Verification (ASV) system and a human listener.
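To illustrate why cloning threatens ASV: a typical verification pipeline reduces each utterance to a fixed-length speaker embedding and accepts a probe whose embedding is close enough to the enrolled one. Here is a minimal sketch of that decision rule, assuming a hypothetical embed() function that stands in for a real speaker encoder such as the one used in SV2TTS:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.75) -> bool:
    """Accept the probe utterance if its embedding lies close enough to
    the enrolled speaker's embedding. The threshold value is illustrative;
    real systems tune it on held-out data."""
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical usage: embed() stands in for a real speaker encoder
# (e.g., the d-vector encoder in SV2TTS); it is NOT a real library call.
# enrolled = embed("genuine_enrollment.wav")
# probe    = embed("cloned_attack.wav")
# print(verify(enrolled, probe))  # a good clone may pass this check
```

A cloned voice that lands close to the enrolled embedding passes this check, which is why ASV systems increasingly pair verification with dedicated spoofing countermeasures.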

Other attack types
Other biometric modalities are also vulnerable to spoofing attacks:
Fingerprint attacks
Historically, it has been possible to steal a target’s fingerprints through various machinations: coercion, threats, remote fingerprint hijacking with a malicious mobile application, and even photography. The stolen fingerprints can then be replicated with gelatin, play dough, or even sodium alginate-based and hydrocolloid dental impression materials.
The severed finger scenario is also mentioned in the literature, although it appears to be ineffective.

Iris attacks
This type of spoofing aims at simulating the intricate patterns of a human iris. It ranges from zero-effort attack scenarios and the use of iris photos to mimicking a target’s iris with a textured lens or a prosthetic eye. The cadaver eye attack is poorly explored, but it is suggested that it can succeed, since iris scanning mostly analyzes the surface of the eye.
Retina attacks
It appears that retinal spoofing would mostly employ the same tactics as iris attacks, namely presenting a photo of the target’s eyes. At the same time, this modality is more challenging to fool, as it requires a high level of cooperation from the subject and relies on a highly unique blood vessel pattern for recognition.
Handwritten signature attacks
Forging a person’s signature is also a type of biometric PA; such forgeries include zero-effort, blind, static, dynamic, and regained varieties.

Presentation Attack Detection
PAD is an essential component of today’s biometric solutions. It performs two central tasks:
- Preventing the wrong person from accessing a system that may contain valuable and sensitive data, or from gaining control over a remotely administered system or device.
- Allowing the right person to complete authorization safely and quickly.
Liveness detection is by far the most vital element of PAD. It analyzes liveness cues — skin texture, oxygen saturation, craniofacial depth and structure, fingerprint deformation, and others — to reach a verdict on whether a presented biometric trait is genuine.
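As a concrete example of a facial liveness cue: active checks often ask the subject to blink, and a blink can be detected by tracking the eye aspect ratio (EAR) over video frames; a printed photo or a still replay never blinks. Below is a minimal sketch, assuming six eye landmarks per frame supplied by an external face-landmark detector (the threshold value is illustrative):

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six eye landmarks ordered as in the common 68-point
    face-landmark convention: horizontal corner points (0, 3) and
    vertical point pairs (1, 5) and (2, 4). The ratio drops sharply
    when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return float(vertical / (2.0 * horizontal))

def count_blinks(ear_series, closed_threshold: float = 0.21) -> int:
    """Count open-to-closed transitions across per-frame EAR values.
    The threshold is illustrative and would be tuned per camera."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks

# A static PAI (printed photo, paused replay) yields a flat EAR series
# and zero blinks, which an active liveness check can reject.
```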
Presentation Attack Detection Metrics
Numerous guidelines, standards, and metrics have been proposed to ensure accurate and successful antispoofing in terms of PAD. For example, facial spoofing detection is covered by protocols such as the PVID requirement rule set for Remote Identity Verification Service Providers, BSI guidelines, and the more general eIDAS Regulation.
In turn, these initiatives rely heavily on ISO antispoofing standards such as ISO/IEC 19795-1 and ISO/IEC 30107-3. These standards define such metrics as the Attack Presentation Classification Error Rate (APCER), Bona Fide Presentation Classification Error Rate (BPCER), and Average Classification Error Rate (ACER), among others.
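In short, APCER is the share of attack presentations wrongly accepted as bona fide, BPCER is the share of bona fide presentations wrongly rejected as attacks, and ACER averages the two. Below is a minimal sketch of the arithmetic (illustrative only; note that ISO/IEC 30107-3 computes APCER per PAI species and reports the worst case, whereas this sketch pools all attacks):

```python
def pad_metrics(labels, decisions):
    """labels: 'attack' or 'bonafide' ground truth per presentation.
    decisions: True if the PAD system classified it as bona fide.

    APCER = attacks accepted as bona fide / total attacks
    BPCER = bona fide rejected as attacks / total bona fide
    ACER  = (APCER + BPCER) / 2
    """
    attacks = [d for l, d in zip(labels, decisions) if l == "attack"]
    bonafide = [d for l, d in zip(labels, decisions) if l == "bonafide"]
    apcer = sum(attacks) / len(attacks)                    # attacks that got through
    bpcer = sum(not d for d in bonafide) / len(bonafide)   # genuine users rejected
    return apcer, bpcer, (apcer + bpcer) / 2

# Example: 2 of 4 attacks accepted (APCER = 0.5) and 1 of 4 bona fide
# presentations rejected (BPCER = 0.25), giving ACER = 0.375.
labels    = ["attack", "attack", "attack", "attack",
             "bonafide", "bonafide", "bonafide", "bonafide"]
decisions = [True, True, False, False, True, True, True, False]
print(pad_metrics(labels, decisions))  # (0.5, 0.25, 0.375)
```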
To learn more about these metrics, see: https://antispoofing.org/antispoofing-performance-metrics-types-and-details/
References
- Black bank robbery suspect 'wore white old man mask to con police'
- ISO/IEC 30107-1:2016 Information technology — Biometric presentation attack detection — Part 1: Framework
- Number of Internet of Things (IoT) malware attacks worldwide from 2020 to 2021, by month
- Faces Are the Next Target for Fraudsters
- A Survey in Presentation Attack and Presentation Attack Detection
- Term attack
- Unrecognisable After Plastic Surgery, Chinese Women Detained At Airport
- Detecting Morphing Attacks through Face Geometry Features
- Prosodic features
- Introduction to Voice Presentation Attack Detection and Recent Advances
- Real-Time Voice Cloning on GitHub
- Voice Cloning Using Deep Learning
- German Defense Minister von der Leyen's fingerprint copied by Chaos Computer Club
- Alginate impressions: A practical perspective
- Why Dead Fingers (Usually) Can't Unlock a Phone
- This fake finger could help make our fingerprint scanners more secure
- Presentation Attacks in Signature Biometrics: Types and Introduction to Attack Detection
- Publication Of The Requirement Rule Set For Remote Identity Verification Service Providers
- BSI
- eIDAS Regulation
- ISO/IEC 19795-1:2021 Information technology — Biometric performance testing and reporting — Part 1: Principles and framework
- ISO/IEC 30107-3:2017 Information technology — Biometric presentation attack detection — Part 3: Testing and reporting