Facial Liveness Detection Challenges
Definition, Overview and Goals
Liveness detection challenges are public competitions organized to expose vulnerabilities of biometric systems, benchmark the most promising algorithms and models, adapt them to hostile and uncontrolled environments, and introduce training and testing datasets.
Facial recognition (FR), one of the least invasive biometric modalities, is deployed in 98 countries. As the need for quick and frictionless user authentication grows, FR requires more reliable and failproof solutions, especially when dealing with issues such as partial facial occlusions, wanted criminals trying to evade facial identification, and demographic bias.
A wide range of anti-spoofing challenges dedicated to facial liveness detection exists today. They focus on the pitfalls surrounding the technology: face morphing, deepfake attacks, impersonation, obfuscation, mobile face recognition, etc.
Methods Overview
Industry-level facial recognition is subject to anti-spoofing certification under the ISO/IEC 30107 and Fast Identity Online (FIDO) standards. These define Presentation Attacks (PAs), exemplify attack methods, provide testing guidelines, and set various error metrics: False Acceptance Rate (FAR), Average Classification Error Rate (ACER), Bona Fide Presentation Classification Error Rate (BPCER), and others.
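To make these metrics concrete, here is a minimal Python sketch of how ACER is derived from the two per-class error rates defined in ISO/IEC 30107-3; the function name, threshold, and toy scores are illustrative, not part of any standard.

```python
import numpy as np

def pad_metrics(scores, labels, threshold=0.5):
    """Compute ISO/IEC 30107-3 style error rates.

    scores : liveness scores in [0, 1], higher = more likely bona fide
    labels : 1 for bona fide presentations, 0 for presentation attacks
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    attack = labels == 0
    bona_fide = labels == 1

    # APCER: attack presentations wrongly accepted as bona fide
    apcer = np.mean(scores[attack] >= threshold)
    # BPCER: bona fide presentations wrongly rejected as attacks
    bpcer = np.mean(scores[bona_fide] < threshold)
    # ACER: average of the two error rates
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer

# Toy example with synthetic scores
apcer, bpcer, acer = pad_metrics(
    scores=[0.9, 0.8, 0.3, 0.2, 0.6, 0.1],
    labels=[1, 1, 1, 0, 0, 0],
)
print(f"APCER={apcer:.2f}, BPCER={bpcer:.2f}, ACER={acer:.2f}")
```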
In most cases, these standards serve as a framework for facial recognition challenges. The challenges, in turn, focus on two principal groups of FR technology: hardware-based and software-based.
Hardware-based
This group of FR solutions is characterized by its strong robustness to spoofing attacks. Thanks to sophisticated equipment, such as near-infrared (NIR) sensors, hardware-based systems can spot discrepancies between a real human face and a spoofing artifact with relative ease, relying on accurate craniofacial structure and multispectral analysis. The biggest drawback of hardware-based liveness detection is that it is costly, hard to deploy, and difficult to integrate into mobile devices.
Software-based
Software-based approaches rely on algorithms capable of analyzing skin texture, frequency content, liveness cues, and other facial parameters. They are separated into Active and Passive solutions. While an Active approach is based on challenges that a user must solve, such as blinking or nodding, Passive methods detect liveness without user interaction, which also makes them harder to reverse engineer.
This group combines a variety of techniques, including blink analysis with Conditional Random Fields (CRFs), image distortion analysis, discriminative image patch analysis, and various approaches based on Convolutional Neural Networks (CNNs): ResNet-50, AlexNet, hybrid HOG-CNN, and others.
This type of facial liveness detection is more flexible: the underlying deep learning algorithms can be fine-tuned to new threats, conditions, and goals.
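As a hedged sketch of what such a software-based, passive solution can look like in practice, the snippet below fine-tunes an ImageNet-pretrained ResNet-50 as a binary live-vs-spoof classifier. The architecture choice, hyperparameters, and `train_step` helper are illustrative only (assuming a recent torchvision release), not a description of any specific competition entry.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-50 and replace the classifier head
# with a single logit: >0 means "bona fide", <0 means "presentation attack".
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(faces, labels):
    """faces: (N, 3, 224, 224) cropped face tensors; labels: (N,) 1=live, 0=spoof."""
    optimizer.zero_grad()
    logits = model(faces).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```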
Note: Even though these two groups can be developed and examined independently, software and hardware solutions often overlap.
Contests
An array of facial liveness detection challenges has been hosted, spotlighting various attack methodologies.
Competitions on Countermeasures to 2-D Facial Spoofing Attacks
The International Joint Conference on Biometrics (IJCB) is an ongoing event that introduced, in 2011, perhaps the first FR liveness detection challenge. Being the earliest, it struggled with the lack of public datasets and comparative research dedicated to the topic.
At the same time, IJCB laid the groundwork for future endeavors in the field. The main attack specimens were two-dimensional spoofing attacks drawn from the PRINT-ATTACK dataset, which contains 200 bona fide videos and 200 spoofing videos. The algorithms featured in the challenge were AMILAB, CASIA, IDIAP, SIANI, UNICAMP, and UOULU. The best results were shown by the AMILAB and UNICAMP solutions.
The event was followed by the IJCB 2013 competition, which extended the attack repertoire with video replays. It was based on the REPLAY-ATTACK database, a portion of whose samples were captured with a MacBook Air 13 to simulate a hostile environment with mediocre lighting. The best solutions were submitted by CASIA and LNMIIT.
Competition on Mobile Face PAD in Mobile Scenarios
In 2017, a Presentation Attack Detection (PAD) contest was held in which participants faced facial authentication PAs typical of mobile devices. For that purpose, the OULU-NPU dataset was introduced. It contains 4,950 bona fide and print/replay attack videos captured with mobile devices of varying price ranges.
The dataset approximates real-life scenarios: volunteers were asked to behave as they would during a normal facial authentication process. In addition, varying illumination levels add an extra touch of realism.
Participants received a baseline method based on a color texture technique. Featured solutions included Massy HNU, VSS, MixedFASNet, and others. The results revealed that hand-crafted features, enhanced with suitable color spaces, generalized well and detected attacks even under differing environmental conditions.
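The snippet below is a minimal sketch of the color-texture idea behind such baselines: uniform LBP histograms are extracted from each channel of a YCbCr face crop and concatenated into a feature vector. The exact color spaces, LBP parameters, and classifier used in the actual baseline may differ.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_lbp_features(face_bgr, P=8, R=1):
    """Concatenate uniform-LBP histograms over the YCbCr channels of a face crop."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    feats = []
    for channel in cv2.split(ycrcb):
        lbp = local_binary_pattern(channel, P, R, method="uniform")
        # Uniform LBP with P points yields P + 2 distinct codes
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)  # e.g. fed to an SVM live/spoof classifier
```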
ChaLearn Looking at People Challenges
The ChaLearn challenges began in 2014, focusing on human pose recovery, gesture recognition, and related tasks. From 2019 to 2021, the competition concentrated on facial anti-spoofing. One of its signature components was the large-scale CASIA-SURF dataset, which contains 21,000 videos from 1,000 volunteers. The data corpus covers three modalities:
- Depth.
- Infrared.
- RGB spectrum.
In addition, it offers a multitude of attack scenarios. For instance, volunteers were instructed to present to the camera a printed photo of their face with the eye regions cut out. Subsequent attack types feature additional cut-out regions.
The 2019 competition proposed a baseline that employed squeeze-and-excitation fusion, capable of boosting the feature representational ability of the modalities by "explicitly modelling the interdependencies among different convolutional channels". Solutions by VisionLabs, RedSense, and Feather took the top three places, owing much of their success to ensemble learning.
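To clarify the squeeze-and-excitation mechanism referenced by the baseline, here is a minimal PyTorch sketch of an SE block that reweights convolutional channels. It illustrates the general idea only and is not the exact fusion module used in the challenge baseline.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # "squeeze" to (N, C, 1, 1)
        self.fc = nn.Sequential(              # "excitation" bottleneck
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                           # channel-wise reweighting

# In a multi-modal setting, RGB/depth/IR feature maps can be concatenated
# along the channel axis and passed through such a block before fusion.
features = torch.randn(4, 256, 14, 14)
print(SEBlock(256)(features).shape)  # torch.Size([4, 256, 14, 14])
```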
See the challenge reports referenced below to learn more about the ChaLearn 2020 and 2021 results.
CelebA-Spoof Challenge
The CelebA-Spoof Challenge was hosted in 2020 and featured its own colossal spoofing dataset with 625,537 image samples of 10,177 individuals. The data covers various digital manipulations, 43 rich attributes, as well as varying environments and illumination.
The competition attracted 19 teams. The winning solution is a complex method, which includes:
- Spoof modelling. An ensemble of models is combined to predict the spoof cue of every test sample.
- Spoof fusion. Heuristic voting is applied to combine the multiple scores into an accurate final prediction (a minimal sketch of this kind of fusion follows below).
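The snippet below is a hedged illustration of this kind of ensemble score fusion; the weighting scheme, thresholds, and voting heuristic are generic examples, not the winning team's actual rules.

```python
import numpy as np

def fuse_spoof_scores(model_scores, weights=None, threshold=0.5, min_votes=0.5):
    """Combine per-model spoof scores for one test sample.

    model_scores : spoof probabilities from the individual models, in [0, 1]
    weights      : optional per-model weights (e.g. derived from validation ACER)
    Returns True if the sample is flagged as a spoof.
    """
    scores = np.asarray(model_scores, dtype=float)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights, float)

    weighted_mean = np.average(scores, weights=weights)   # soft fusion
    vote_fraction = np.mean(scores >= threshold)          # hard voting

    # Heuristic: flag as spoof if either the averaged score or the
    # fraction of agreeing models crosses its threshold.
    return weighted_mean >= threshold or vote_fraction >= min_votes

print(fuse_spoof_scores([0.9, 0.4, 0.7], weights=[2.0, 1.0, 1.0]))  # True
```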
LivDet-Face 2021
LivDet is a series of ongoing liveness detection competitions, and LivDet-Face 2021 was the first dedicated to facial anti-spoofing. It consisted of two segments, Image and Video PAs, but offered no specific dataset. The goal of the event was to find methods that generalize "to uncertain circumstances". The contest featured a rich variety of Presentation Attack Instruments (PAIs): 3D masks of wavering quality, laptop displays, photo cutouts, etc.
Team Fraunhofer IGD proposed the winning solution in the Image category with the lowest ACER of 16.47% and a BPCER of 5.33%. In the Video category, the winner was FaceMe with an ACER of 13.81% and a BPCER of 14.29%. The general trend of the contest showed that the winning methods performed much better against low-quality PAIs.
FAQ
Why is facial liveness detection important?
Facial liveness detection helps prevent a number of threats: from cybercrime to real-life attacks.
Anti-spoofing techniques and facial liveness detection play an important role in security and financial systems. They help prevent so-called Presentation Attacks (PAs), which are launched to gain access to sensitive information, control functions, or material valuables.
Threats range from simple money theft to remote vehicle hijacking, money laundering, or falsified materials such as deepfake videos that can cause social unrest. Facial liveness detection techniques keep evolving, leading to highly advanced methods. They are employed widely: from business and retail to institutions such as airport terminals and police stations.
References
- Mapped: The State of Facial Recognition Around the World
- Example of a partial occlusion spoofing attack
- ISO/IEC 30107-1:2016. Information technology — Biometric presentation attack detection — Part 1: Framework
- FIDO
- Face Spoof Attack Recognition Using Discriminative Image Patches
- Face Recognition Using Hybrid HOG-CNN Approach
- International Joint Conference On Biometrics (IJCB 2022)
- Samples from the PRINT-ATTACK and REPLAY-ATTACK datasets
- PRINT-ATTACK. Subset of Replay-Attack Dataset
- Competition on counter measures to 2-D facial spoofing attacks
- The Replay-Attack Database for face spoofing
- Review of Face Presentation Attack Detection Competitions
- A Competition on Generalized Software-based Face Presentation Attack Detection in Mobile Scenarios
- The Oulu-NPU face presentation attack detection database
- Multi-modal Face Anti-spoofing Attack Detection Challenge at CVPR2019
- ChaLearn Looking at People @ ECCV2014: Challenge and Workshop on Pose Recovery, Action and Gesture Recognition
- CASIA-SURF
- A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing
- Face Anti-spoofing (Presentation Attack Detection) Challenge@CVPR2020
- Face Anti-spoofing (Presentation Attack Detection) Challenge@ICCV2021
- CelebA-Spoof
- CelebA-Spoof Challenge 2020 on Face Anti-Spoofing: Methods and Results
- LivDet 2023 Competition Overview
- Face Liveness Detection Competition (LivDet-Face) - 2021