Presentation Attacks: Types, Instruments and Detection

From Antispoofing Wiki

Presentation attacks are considered the main threat that can jeopardize a biometric system.

General Overview

A Presentation Attack (PA) is the presentation of a certain physiological trait — fingerprint, face, voice — aimed at bypassing a biometric security system. Numerous sources also use the term spoofing to describe the issue. These attacks are commonly divided into two categories:

  • Impostor attack. A perpetrator attempts to be recognized and authorized as someone else.
  • Concealment attack. A perpetrator seeks to conceal their identity from facial recognition or other identification systems.

PAs involve a wide scope of tools known as Presentation Attack Instruments, or PAIs. The choice of PAI is dictated by the modality of the targeted biometric system, so their repertoire can range from simple face cutouts or videos replayed on a tablet to artificially synthesized voices, realistic silicone masks, prosthetic eyes, and even a deceased person’s body parts.



Presentation Attack Detection (PAD) was introduced in response to the spoofing threat. Its basis was first outlined in the ISO/IEC 30107-1 standard, which established a framework and terminology for PA detection.

Statistics on Presentation Attacks are rarely collected or released to the public. However, their number, intensity and creativity appear to grow every year. According to ID.me, 80,000 attempts were made to fool its face recognition system across 26 U.S. states between June 2020 and January 2021.

According to Statista, 10.3 million attacks were conducted against Internet of Things (IoT) devices in October 2020 alone. Though generally classified as malware attacks, some of them may have been PAs, as IoT gadgets have proven vulnerable to Presentation Attacks.


Types of Presentation Attacks

A proposed PA taxonomy divides Presentation Attacks into accidental and deliberate. However, the former category may seem disputable, as the term attack essentially implies an adversarial action.

Non-malicious attacks

This class includes incidents in which the presentation of a biometric trait causes interference: a false alarm, misidentification, and so on. At the same time, no malicious intent is present on the subject’s side. Such incidents typically occur due to contact lenses, makeup, plastic surgery or other medically prescribed or cosmetic artifacts.

A similar episode took place in 2017, when a group of Chinese women could not return home after undergoing plastic surgery because their faces could not be identified. Even though identification was carried out by human personnel, such a scenario could potentially reoccur with an Automatic Border Control (ABC) system.

Malicious attacks

This class includes pre-planned, deliberate PAs performed to bypass a biometric system. In turn, they can be separated into further subclasses:

  • With synthetic PAIs. These involve prosthetic body parts, synthesized images, deepfake videos, textured eye lenses, occluded face images, and others. They can be static, dynamic or mixed.
  • With human-based PAIs. This class implies that live or dead biometric traits are presented to a system. Imitation or mimicry can also be included here, but only if it does not employ specialized synthesis software or other such tools. The PAIs can be live or dead body parts, an impersonated voice, manually forged handwriting, etc.

Additionally, PAs can be classified according to the targeted modalities:

Facial attacks

Facial attacks cover a wide scope of techniques: from printed photos to digital face manipulations. The PAIs typically include:

  • Masks. Masks may range in quality and sophistication, including elaborate silicone masks equipped with heat emitters to simulate facial warmth.
  • Deepfakes. With the help of AI, it’s possible to morph and swap faces in both static pictures and dynamic footage. In addition, tools akin to Face2Face allow ‘wearing’ a deepfake image like a mask in real time, which can be used to bypass an active liveness detection system.
  • Printed photos. A rudimentary approach, in which a printed photo is presented to a system’s sensor.
  • Replay attacks. A digital photo, prerecorded video or deepfake footage is replayed from a high-definition screen to the system’s camera.



It’s worth noting that facial morphing — blending the faces of two or more individuals into one — is sometimes treated as a separate attack scenario. It is most often used for producing fraudulent documents or for illegal border crossing.

Voice attacks

Voice spoofing comprises impersonation and artificial voice attacks. The former implies that a fraudster mimics idiosyncrasies of the target’s speech: intonation, tempo, pronunciation and other prosodic features, lexical preferences, articulation quirks, etc.



A more menacing technique employs voice cloning based on deep learning. By analyzing a speaker's voice and encoding its characteristics into a vector embedding (as in SV2TTS), a voice-cloning tool can produce a highly realistic result. It can be used to spoof both an Automatic Speaker Verification (ASV) system and a human listener.
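The threat to ASV systems can be illustrated with a minimal sketch of embedding-based speaker verification: the system compares a test utterance's embedding to the enrolled one via cosine similarity, so a clone whose embedding lands close to the target's is accepted. The vectors, threshold and function names below are illustrative assumptions, not the output or API of a real encoder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, test, threshold=0.75):
    """Accept the speaker if the test embedding is close enough to the
    enrolled one. A successful voice clone produces an embedding that
    exceeds this threshold despite being synthetic."""
    return cosine_similarity(enrolled, test) >= threshold

enrolled_voice = [0.9, 0.1, 0.3]     # hypothetical target-speaker embedding
cloned_voice = [0.85, 0.15, 0.35]    # embedding of a synthesized imitation

print(verify(enrolled_voice, cloned_voice))  # True: the clone is accepted
```

Because the verifier sees only the embedding, any synthesis method that reproduces the target's vocal characteristics closely enough will pass, regardless of how the audio was produced.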


Other attack types

Other biometric modalities are also vulnerable to spoofing attacks:

Fingerprint attacks

In practice, it’s possible to steal a target’s fingerprints through various machinations: coercion, threats, remote fingerprint hijacking with a malicious mobile application, and even photography. The fingerprints can then be replicated with gelatin, play dough, or even sodium alginate-based or hydrocolloid dental impression materials.

The severed-finger scenario is also mentioned in the research literature, although it appears ineffective.


Iris attacks

This type of spoofing aims to simulate the intricate patterns of a human iris. It ranges from zero-effort attacks and the use of iris photos to mimicking a target’s iris with a textured lens or a prosthetic eye. The cadaver eye attack is poorly explored, but it is suggested that it can be successful, since iris scanning analyzes mostly the surface of the eye.

Retina attacks

Retinal spoofing would likely employ the same tactics as iris attacks, namely presenting a photo of a target’s eyes. At the same time, this modality is more challenging to forge, as it requires a high level of cooperation from the subject and relies on a highly unique blood vessel pattern for recognition.

Handwritten signature attacks

Forging a person’s signature is also a biometric PA type, which includes zero-effort, blind, static, dynamic, and regained forgeries.


Presentation Attack Detection

PAD is an essential component of today’s biometric solutions. It performs two central tasks:

  • Preventing a wrong person from accessing a system that may contain valuable and sensitive data or gaining control over a remotely administered system/device.
  • Allowing the right person to complete authorization safely and quickly.

Liveness detection is by far the most vital element of PAD. It analyzes liveness cues — skin texture, oxygen saturation, craniofacial depth and structure, fingerprint deformation, and others — to determine whether a presented biometric trait is genuine.
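Conceptually, a liveness detector combines several cue scores into one verdict. The following is a minimal sketch of such score fusion; the cue names, weights and threshold are made-up illustrations, not a standardized PAD algorithm.

```python
# Illustrative weights for three liveness cues (assumed, not standardized).
CUE_WEIGHTS = {
    "skin_texture": 0.4,
    "depth_consistency": 0.35,
    "micro_motion": 0.25,
}

def liveness_verdict(cue_scores, threshold=0.6):
    """Each cue score lies in [0, 1]; the weighted sum is compared to a
    threshold to decide between a bona fide presentation and an attack."""
    fused = sum(CUE_WEIGHTS[cue] * score for cue, score in cue_scores.items())
    return "bona fide" if fused >= threshold else "attack"

# A flat printed photo: plausible skin texture, but no depth and no motion.
print(liveness_verdict({"skin_texture": 0.7,
                        "depth_consistency": 0.1,
                        "micro_motion": 0.0}))  # attack
```

The design point is that no single cue decides the outcome: a printed photo may score well on texture yet fails overall because it contributes nothing on depth or micro-motion.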

Presentation Attack Detection Metrics

Numerous guidelines, standards and metrics have been proposed to ensure accurate and successful PAD. For example, facial spoofing detection is covered by protocols such as the French requirements for Remote Identity Verification Service Providers (PVID), BSI guidelines, and the more general eIDAS Regulation.

In turn, these initiatives rely heavily on ISO antispoofing standards such as ISO/IEC 19795-1 and ISO/IEC 30107-3. These standards define metrics such as the Attack Presentation Classification Error Rate (APCER), Bona Fide Presentation Classification Error Rate (BPCER), and Average Classification Error Rate (ACER), among others.
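These error rates are straightforward proportions: APCER is the fraction of attack presentations wrongly accepted as bona fide, BPCER is the fraction of genuine presentations wrongly rejected, and ACER is their average. A minimal sketch of the computation, with illustrative scores and threshold:

```python
def apcer(attack_scores, threshold):
    """Attack Presentation Classification Error Rate: fraction of attack
    presentations classified as bona fide (score at or above threshold)."""
    errors = sum(1 for s in attack_scores if s >= threshold)
    return errors / len(attack_scores)

def bpcer(bona_fide_scores, threshold):
    """Bona Fide Presentation Classification Error Rate: fraction of
    genuine presentations classified as attacks (score below threshold)."""
    errors = sum(1 for s in bona_fide_scores if s < threshold)
    return errors / len(bona_fide_scores)

def acer(attack_scores, bona_fide_scores, threshold):
    """Average Classification Error Rate: mean of APCER and BPCER."""
    return (apcer(attack_scores, threshold)
            + bpcer(bona_fide_scores, threshold)) / 2

# Illustrative liveness scores (higher = more likely bona fide).
attacks = [0.10, 0.20, 0.55, 0.30]    # one attack passes at threshold 0.5
bona_fide = [0.90, 0.80, 0.45, 0.70]  # one genuine user is rejected

print(apcer(attacks, 0.5))            # 0.25
print(bpcer(bona_fide, 0.5))          # 0.25
print(acer(attacks, bona_fide, 0.5))  # 0.25
```

Raising the threshold lowers APCER at the cost of a higher BPCER, which is why both rates are reported together rather than a single accuracy figure.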

References

  1. Black bank robbery suspect 'wore white old man mask to con police'
  2. ISO/IEC 30107-1:2016 Information technology — Biometric presentation attack detection — Part 1: Framework
  3. Number of Internet of Things (IoT) malware attacks worldwide from 2020 to 2021, by month
  4. Faces Are the Next Target for Fraudsters
  5. A Survey in Presentation Attack and Presentation Attack Detection
  6. Term attack
  7. Unrecognisable After Plastic Surgery, Chinese Women Detained At Airport
  8. Detecting Morphing Attacks through Face Geometry Features
  9. Prosodic features
  10. Introduction to Voice Presentation Attack Detection and Recent Advances
  11. Real-Time Voice Cloning on GitHub
  12. Voice Cloning Using Deep Learning
  13. German Defense Minister von der Leyen's fingerprint copied by Chaos Computer Club
  14. Alginate impressions: A practical perspective
  15. Why Dead Fingers (Usually) Can't Unlock a Phone
  16. This fake finger could help make our fingerprint scanners more secure
  17. Presentation Attacks in Signature Biometrics: Types and Introduction to Attack Detection
  18. Publication Of The Requirement Rule Set For Remote Identity Verification Service Providers
  19. BSI
  20. eIDAS Regulation
  21. ISO/IEC 19795-1:2021 Information technology — Biometric performance testing and reporting — Part 1: Principles and framework
  22. ISO/IEC 30107-3:2017 Information technology — Biometric presentation attack detection — Part 3: Testing and reporting