Facial Liveness Detection Challenges

Both institutions and companies host ongoing facial liveness challenges to bolster their detection abilities.

Definition, Overview and Goals

Liveness detection challenges are public competitions organized to expose vulnerabilities of biometric systems, identify the most promising algorithms and models, and adapt them to hostile and uncontrolled environments. They also help introduce new training techniques and test existing datasets.

Facial recognition (FR), one of the least invasive biometric modalities, is already deployed in 98 countries. As the need for quick and frictionless user authentication grows, FR requires more reliable — and fail-proof — solutions. This is especially crucial when dealing with issues such as partial facial occlusions, criminals attempting to evade facial identification, and demographic bias.

Example of a partial occlusion spoofing attack

A cavalcade of face antispoofing challenges dedicated to facial liveness detection has emerged, each focusing on pitfalls surrounding the technology: face morphing, deepfake attacks, impersonation, obfuscation, mobile face recognition, etc. Accordingly, the attack types, countermeasures, and challenges of facial antispoofing are reviewed below.

Methods Overview

Industry-level facial recognition is subject to antispoofing certification: ISO/IEC 30107 and Fast Identity Online (FIDO) standards. They define Presentation Attacks (PAs), exemplify attack methods, provide testing guidelines, and set various testing rates: False Acceptance Rate (FAR), Average Classification Error Rate (ACER), Bona Fide Presentation Classification Error Rate (BPCER), and others.

In most cases, these standards serve as a framework for facial recognition challenges and focus on two principal groups of FR technology: hardware and software-based.
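The error rates these standards define can be computed directly from binary decisions. Below is a minimal Python sketch following the standard ISO/IEC 30107-3 definitions of APCER, BPCER, and ACER; the function name and label convention are illustrative, not taken from the standard:

```python
# Illustrative sketch: ISO/IEC 30107-3 style error rates from binary decisions.
# Label convention (ours): 1 = bona fide presentation, 0 = presentation attack.
def error_rates(labels, decisions):
    """decisions: 1 = accepted as bona fide, 0 = rejected."""
    attacks = [d for l, d in zip(labels, decisions) if l == 0]
    bona_fide = [d for l, d in zip(labels, decisions) if l == 1]
    # APCER: fraction of attack presentations wrongly accepted as bona fide
    apcer = sum(attacks) / len(attacks)
    # BPCER: fraction of bona fide presentations wrongly rejected
    bpcer = sum(1 - d for d in bona_fide) / len(bona_fide)
    # ACER: average of the two classification error rates
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer
```

For example, a system that accepts one of four attacks and rejects one of four bona fide users scores APCER = BPCER = ACER = 25%.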

Hardware-based

This group of FR solutions is characterized by its immense robustness against spoofing attacks.

Thanks to sophisticated equipment — such as near-infrared (NIR) sensors — hardware-based solutions can spot discrepancies between a real human face and a spoofing artifact quite easily, relying on accurate craniofacial-structure and multispectral analysis.

The biggest drawback of hardware-based liveness detection is that it is costly, hard to deploy, and difficult to integrate into mobile devices.

Software-based

Software-based approaches rely on algorithms capable of analyzing skin texture, frequency content, liveness cues, and other facial parameters. They are divided into Passive and Active solutions. An Active approach issues challenges that a user must solve, such as blinking or nodding, while Passive methods focus on non-invasive liveness detection, which also makes them harder to reverse engineer.
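The active challenge-response flow described above can be sketched as a simple protocol: the server issues randomized prompts and accepts the user only if every observed response matches. This is a hypothetical minimal sketch; the challenge set, session length, and `detect_response` callback are illustrative assumptions, not any vendor's API:

```python
import random

# Hypothetical active-liveness session: randomized challenges (blink,
# nod, head turns) must each be matched by the observed user response.
CHALLENGES = ["blink", "nod", "turn_left", "turn_right"]

def run_active_session(detect_response, n_challenges=3):
    """detect_response(challenge) -> observed action label (stub in practice
    backed by a video-analysis model)."""
    for _ in range(n_challenges):
        challenge = random.choice(CHALLENGES)
        if detect_response(challenge) != challenge:
            return False  # mismatch: treat the attempt as a presentation attack
    return True  # all randomized challenges passed: accept as live
```

Randomizing the prompt order is the point of the design: a replayed video cannot anticipate which challenge will be issued next.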

The Passive approach combines a variety of techniques, including blinking analysis with the Conditional Random Fields (CRFs), image distortion analysis, discriminative image patch analysis, and other various approaches based on Convolutional Neural Networks (CNNs): ResNet-50, AlexNet, hybrid HOG-CNN, etc.

This type of facial liveness detection is more flexible: these solutions are based on deep learning algorithms that can be fine-tuned according to new threats, conditions, and goals.

Note: Even though these two groups can be developed and examined independently, software and hardware solutions often overlap.

Contests

An array of facial liveness detection challenges are hosted to spotlight different attack methodologies.

Competitions on Countermeasures to 2D Facial Spoofing Attacks

The first of these competitions was held at the International Joint Conference on Biometrics (IJCB) in 2011 and was arguably the first FR liveness detection challenge. Being ahead of its time, it struggled with a lack of public datasets and comparative research on the topic.

Samples from the PRINT-ATTACK and REPLAY-ATTACK datasets

Despite these challenges, IJCB laid the groundwork for future endeavors in the facial liveness field. The main attack specimens were two-dimensional spoofing attacks drawn from the PRINT-ATTACK dataset, which contains 200 bona fide and 200 spoofed videos. The algorithms featured in the challenge were AMILAB, CASIA, IDIAP, SIANI, UNICAMP, and UOULU. The best results were shown by the AMILAB and UNICAMP solutions.

Algorithms featured in the 2011 IJCB challenge
The 2011 IJCB challenge results

The event was followed by IJCB 2013, which expanded the attack repertoire with video replays. It was based on the REPLAY-ATTACK database, part of which was captured with a MacBook Air 13 under mediocre lighting to simulate a hostile environment. The best solutions in this edition were submitted by CASIA and LNMIIT.

Algorithms featured in the 2013 IJCB challenge
The 2013 IJCB challenge results

Competition on Mobile Face PAD in Mobile Scenarios

In 2017, a Presentation Attack Detection (PAD) contest was held in which participants faced facial authentication PAs typical of mobile devices. It introduced the OULU-NPU dataset, which contains 4,950 samples of bona fide and print/replay spoofed material captured with mobile devices of varying types.

The dataset approximates real-life conditions: volunteers were asked to behave as they normally would during facial authentication, and varying illumination levels added an extra element of realism.

OULU-NPU dataset samples

Participants received a baseline method based on a color texture technique. Featured solutions included Massy HNU, VSS, MixedFASNet, and others. Results revealed that hand-crafted features enhanced with suitable color spaces did well on generalization and attack detection, even in different environment conditions.

List of Mobile Face PAD in Mobile Scenarios participants

ChaLearn Looking at People Challenges

The ChaLearn challenges began in 2014 and initially focused on human pose recovery, gesture recognition, and similar tasks. From 2019 to 2021, the competition concentrated on facial antispoofing. One of its signature components was the behemoth data corpus CASIA-SURF, which contains 21,000 videos from 1,000 volunteers. The data spans three modalities:

  • Depth.
  • Infrared.
  • RGB spectrum.

It also offers a multitude of potential attack scenarios. For instance, volunteers were instructed to present a printed photo of their face to the camera with the eyes cut out. Subsequent attack types featured additional cutout regions.

ChaLearn Looking at People 2019 participants
Attack scenarios and categories featured in the CASIA-SURF dataset

The 2019 competition proposed a baseline employing squeeze-and-excitation fusion, which boosts the feature representational ability of the modalities by "explicitly modeling the interdependencies among different convolutional channels". Solutions by VisionLabs, RedSense, and Feather were the top three, owing their success to ensemble learning.

Sample preprocessing used in the CASIA-SURF dataset
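The squeeze-and-excitation idea behind the baseline can be illustrated with a few lines of NumPy: channels are globally pooled ("squeeze"), passed through a small bottleneck ("excitation"), and the resulting sigmoid gates rescale each channel. This is a generic sketch of the mechanism, not the CASIA-SURF baseline itself; the weight shapes and reduction ratio are illustrative:

```python
import numpy as np

# Generic squeeze-and-excitation channel reweighting (illustrative sketch).
def se_reweight(feats, w1, w2):
    """feats: (C, H, W) feature map; w1: (C, C//r) and w2: (C//r, C)
    are the learned bottleneck weights (r = reduction ratio)."""
    squeeze = feats.mean(axis=(1, 2))                # (C,) global average pool
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid gate, (C,)
    return feats * gates[:, None, None]              # channel-wise rescale
```

In a multi-modal setting, concatenating RGB, depth, and IR feature maps along the channel axis before this step lets the gates learn which modality's channels to emphasize.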

See the CVPR 2020 and ICCV 2021 challenge reports to learn more about the ChaLearn 2020 and 2021 results.

CelebA-Spoof Challenge

The CelebA-Spoof Challenge was hosted in 2020 and featured its own colossal spoofing dataset with 625,537 image samples of 10,177 individuals. The content covers various digital manipulations, 43 rich facial attributes, and varying environments and illumination.

A sample 'medley' from the CelebA-Spoof dataset

The competition attracted 19 teams. The winning solution was a multi-stage method, which included:

  • Spoof modeling. An ensemble of models predicts a spoof cue for every test sample.
  • Spoof fusion. Heuristic voting combines the multiple scores into an accurate final decision.
Architecture of the winning solution
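A score-fusion step of this kind can be sketched in a few lines. The snippet below is a hedged illustration of heuristic voting over ensemble spoof scores, not the published CelebA-Spoof method: the threshold, majority rule, and mean tie-breaker are all assumptions:

```python
# Illustrative heuristic voting over per-model spoof scores in [0, 1].
# Returns True if the sample is judged a spoof. Threshold and tie-break
# rule are assumptions for the sketch, not the winning team's values.
def fuse_scores(scores, threshold=0.5):
    votes = sum(s >= threshold for s in scores)       # models voting "spoof"
    if votes * 2 != len(scores):                      # clear majority exists
        return votes * 2 > len(scores)
    return sum(scores) / len(scores) >= threshold     # tie-break on mean score
```

Ensembling this way tends to suppress the idiosyncratic errors of any single model, which is consistent with ensemble learning driving the top entries.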

LivDet-Face 2021

LivDet is a series of ongoing liveness detection competitions. LivDet-Face, launched in 2021, was the first edition dedicated specifically to facial antispoofing. It consists of two segments, Image and Video PAs, but offers no specific training dataset: the goal of the event is to find methods that generalize "to uncertain circumstances". The 2021 contest featured a rich variety of Presentation Attack Instruments (PAIs): 3D masks of varying quality, laptop displays, photo cutouts, etc.

Team Fraunhofer IGD proposed the winning solution in the Image category with the lowest ACER of 16.47% and a BPCER of 5.33%. In the Video category, the winner was FaceMe with an ACER of 13.81% and a BPCER of 14.29%. The general trend of the contest showed that the winning methods performed much better against low-quality PAIs.

PAIs used in LivDet-Face 2021

FAQ

Why is facial liveness detection important?

Facial liveness detection helps prevent a number of threats, from cybercrime to real-life attacks.

Antispoofing techniques and facial liveness detection play an important role in security and financial systems. They help prevent Presentation Attacks (PAs), which imitate a facial representation to gain access to sensitive information, system controls, or material valuables.

Threats may vary from simple money theft to more sinister and complicated crimes such as hijacking vehicles remotely, money laundering, or falsifying materials to spread disinformation. 

Facial liveness detection techniques keep evolving with time, leading to highly advanced methods. They are employed widely: from business and retail to various institutions such as airport terminals and police stations. To read more about how facial liveness detection works, check out the facial liveness detection article archive.

References

  1. Mapped: The State of Facial Recognition Around the World
  2. Example of a partial occlusion spoofing attack
  3. ISO/IEC 30107-1:2016. Information technology — Biometric presentation attack detection — Part 1: Framework
  4. FIDO
  5. Face Spoof Attack Recognition Using Discriminative Image Patches
  6. Face Recognition Using Hybrid HOG-CNN Approach
  7. International Joint Conference On Biometrics (IJCB 2022)
  8. Samples from the PRINT-ATTACK and REPLAY-ATTACK datasets
  9. PRINT-ATTACK. Subset of Replay-Attack Dataset
  10. Competition on counter measures to 2-D facial spoofing attacks
  11. The Replay-Attack Database for face spoofing
  12. Review of Face Presentation Attack Detection Competitions
  13. A Competition on Generalized Software-based Face Presentation Attack Detection in Mobile Scenarios
  14. The Oulu-NPU face presentation attack detection database
  15. Multi-modal Face Anti-spoofing Attack Detection Challenge at CVPR2019
  16. ChaLearn Looking at People @ ECCV2014: Challenge and Workshop on Pose Recovery, Action and Gesture Recognition
  17. CASIA-SURF
  18. A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing
  19. Face Anti-spoofing (Presentation Attack Detection) Challenge@CVPR2020
  20. Face Anti-spoofing (Presentation Attack Detection) Challenge@ICCV2021
  21. CelebA-Spoof
  22. CelebA-Spoof Challenge 2020 on Face Anti-Spoofing: Methods and Results
  23. LivDet 2023 Competition Overview
  24. Face Liveness Detection Competition (LivDet-Face) - 2021