Definition, Overview and Goals
Liveness detection challenges are public competitions organized to expose vulnerabilities in biometric systems, to test and identify the most promising algorithms and models, and to adapt them to hostile and uncontrolled environments. They also play an important role in introducing training techniques and testing existing datasets.
Facial recognition (FR), one of the least invasive biometric modalities, is already deployed in 98 countries. As the need for quick and frictionless user authentication grows, FR requires more reliable, fail-safe solutions. This is especially crucial when dealing with issues such as partial facial occlusions, criminals attempting to evade facial identification, demographic bias, and so on.
A wave of face antispoofing challenges dedicated to facial liveness detection has emerged, each focusing on pitfalls surrounding the technology: face morphing, deepfake attacks, impersonation, obfuscation, mobile face recognition, and more. The types, countermeasures, and challenges of facial antispoofing are therefore reviewed accordingly below.
Industry-level facial recognition is subject to antispoofing certification: ISO/IEC 30107 and Fast Identity Online (FIDO) standards. They define Presentation Attacks (PAs), exemplify attack methods, provide testing guidelines, and set various testing rates: False Acceptance Rate (FAR), Average Classification Error Rate (ACER), Bona Fide Presentation Classification Error Rate (BPCER), and others.
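These rates are straightforward to compute from classification counts. A minimal sketch, assuming scores in [0, 1] where higher means more likely bona fide; the threshold and score convention are illustrative, while the ACER definition (the mean of APCER and BPCER) follows ISO/IEC 30107-3:

```python
def pad_error_rates(attack_scores, bona_fide_scores, threshold):
    """Compute PAD error rates following ISO/IEC 30107-3 conventions.

    Scores above `threshold` are classified as bona fide (live).
    """
    # APCER: fraction of attack presentations wrongly accepted as bona fide
    apcer = sum(s > threshold for s in attack_scores) / len(attack_scores)
    # BPCER: fraction of bona fide presentations wrongly rejected as attacks
    bpcer = sum(s <= threshold for s in bona_fide_scores) / len(bona_fide_scores)
    # ACER: the average of the two classification error rates
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer
```

Lowering the threshold trades BPCER for APCER, which is why competitions usually report both alongside the combined ACER.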
In most cases, these standards serve as a framework for facial recognition challenges and focus on two principal groups of FR technology: hardware- and software-based.
Hardware-based FR solutions are characterized by their immense robustness when it comes to spoofing attacks.
Thanks to sophisticated equipment — such as Near-infrared region (NIR) sensors — hardware-based solutions are capable of spotting discrepancies between a real human face and a spoofing artifact quite easily. This is possible due to accurate craniofacial structure and multispectral analysis.
The biggest drawback of hardware-based liveness detection is that it is costly, hard to deploy, and difficult to integrate into mobile devices.
Software-based approaches rely on algorithms capable of analyzing skin texture, frequency content, liveness cues, and other facial parameters. They are separated into Passive and Active solutions. An Active approach incorporates challenges that a user must solve, such as blinking or nodding, while Passive methods perform liveness detection without user interaction, giving attackers fewer observable cues to reverse-engineer.
The Passive approach combines a variety of techniques, including blinking analysis with the Conditional Random Fields (CRFs), image distortion analysis, discriminative image patch analysis, and other various approaches based on Convolutional Neural Networks (CNNs): ResNet-50, AlexNet, hybrid HOG-CNN, etc.
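As an illustration of a blink cue, here is a minimal sketch using the common eye-aspect-ratio (EAR) heuristic; the landmark ordering, threshold, and frame count are illustrative assumptions, not the method used in any of the competitions discussed here:

```python
import math

def eye_aspect_ratio(landmarks):
    """EAR from six eye landmarks (p1..p6): p1/p4 are the horizontal
    corners, p2/p3 the upper lid, p6/p5 the lower lid."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    # Ratio of vertical eye opening to horizontal eye width
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def detect_blink(ear_sequence, threshold=0.2, min_frames=2):
    """Flag a blink when EAR stays below `threshold` for at least
    `min_frames` consecutive frames (both values are illustrative)."""
    run = 0
    for ear in ear_sequence:
        if ear < threshold:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0
    return False
```

A replayed photo keeps the EAR flat, while a live face produces a brief dip, which is the signal both active prompts and passive blink analysis look for.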
This type of facial liveness detection is more flexible: these solutions are based on deep learning algorithms that can be fine-tuned according to new threats, conditions, and goals.
Note: Even though these two groups can be developed and examined independently, software and hardware solutions often overlap.
Numerous facial liveness detection challenges have been hosted to spotlight different attack methodologies.
Competitions on Countermeasures to 2D Facial Spoofing Attacks
The International Joint Conference on Biometrics (IJCB) is an ongoing event that hosted what was perhaps the first FR liveness detection challenge in 2011. Being ahead of its time, the challenge struggled with a lack of public datasets and comparative research dedicated to the topic.
Despite these obstacles, the competition laid the groundwork for future endeavors in the facial liveness field. The main attack specimens were two-dimensional spoofing attacks drawn from the PRINT-ATTACK dataset, which contains 200 bona fide and 200 spoofed videos. The participating algorithms were AMILAB, CASIA, IDIAP, SIANI, UNICAMP, and UOULU, with the AMILAB and UNICAMP solutions showing the best results.
The event was followed by IJCB 2013, which expanded the attack repertoire with video replays. It was based on the REPLAY-ATTACK database, part of which was captured with a 13-inch MacBook Air to simulate a hostile environment with mediocre lighting. The best solutions were submitted by CASIA and LNMIIT.
Competition on Mobile Face PAD in Mobile Scenarios
In 2017, a Presentation Attack Detection (PAD) contest was held in which participants faced facial authentication PAs typical of mobile devices. It introduced the OULU-NPU dataset, which contains 4,950 bona fide and print/replay attack videos captured with mobile devices of varying types.
The dataset approximates real-life scenarios: volunteers were asked to behave as they normally would during facial authentication. Varying illumination levels were also introduced to add an extra element of realism.
Participants received a baseline method based on a color texture technique. Featured solutions included Massy HNU, VSS, MixedFASNet, and others. The results revealed that hand-crafted features enhanced with suitable color spaces generalized well and detected attacks even under varying environmental conditions.
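The color texture idea can be sketched as follows: convert the face image to a luminance/chrominance space and describe each channel with a local binary pattern (LBP) histogram. This is a simplified illustration of the general technique, not the exact competition baseline:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """ITU-R BT.601 RGB -> YCbCr conversion (img: HxWx3 floats in [0, 255])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def lbp_histogram(channel):
    """Basic 8-neighbour local binary pattern, 256-bin normalized histogram."""
    h, w = channel.shape
    center = channel[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    # Eight neighbours, each contributing one bit of the LBP code
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def color_texture_features(img):
    """Concatenate LBP histograms of the Y, Cb, and Cr channels."""
    ycbcr = rgb_to_ycbcr(img.astype(float))
    return np.concatenate([lbp_histogram(ycbcr[..., i]) for i in range(3)])
```

The resulting 768-dimensional feature vector would typically be fed to a classifier such as an SVM; spoofing artifacts tend to leave characteristic texture traces in the chrominance channels that are harder to see in plain RGB.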
ChaLearn Looking at People Challenges
The ChaLearn challenges began in 2014 and focused on human pose recovery, gesture recognition, etc. From 2019 to 2021, the competition concentrated on facial antispoofing. One of its signature components was the massive data corpus CASIA-SURF, which contains 21,000 videos from 1,000 volunteers. The data spans three modalities:
- RGB spectrum.
- Depth maps.
- Infrared (IR) spectrum.
It also offers a multitude of potential attack scenarios. For instance, volunteers were instructed to present to a camera a printed photo of their face with the eyes cut out; subsequent attack types added cutouts in more facial regions.
The 2019 competition proposed a baseline employing squeeze-and-excitation fusion, which boosts the feature representational ability of the modalities by "explicitly modeling the interdependencies among different convolutional channels". Solutions by VisionLabs, RedSense, and Feather took the top three spots, owing their success to ensemble learning.
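The squeeze-and-excitation mechanism itself is compact: globally pool each channel, pass the result through a small bottleneck MLP, and rescale the channels by the resulting weights. A minimal numpy forward-pass sketch (the weights are assumed inputs here; in a real network they are learned):

```python
import numpy as np

def squeeze_excitation(feature_map, w1, b1, w2, b2):
    """Squeeze-and-excitation recalibration of a CxHxW feature map.

    w1: (C/r, C) and w2: (C, C/r) are the bottleneck weights,
    where r is the reduction ratio.
    """
    # Squeeze: global average pooling collapses each channel to one scalar
    z = feature_map.mean(axis=(1, 2))                 # shape (C,)
    # Excitation: bottleneck MLP models inter-channel dependencies
    h = np.maximum(w1 @ z + b1, 0.0)                  # ReLU, shape (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))          # sigmoid, shape (C,)
    # Scale: reweight each channel by its learned importance
    return feature_map * s[:, None, None]
```

In a multi-modal fusion setting, the RGB, depth, and IR feature maps can be concatenated along the channel axis before such a block, letting the network learn which modality's channels to emphasize per sample.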
The CelebA-Spoof Challenge was hosted in 2020 and featured its own colossal spoofing dataset with 625,537 image samples of 10,177 individuals. The samples present various digital manipulations, 43 rich facial attributes, and varying environments and illumination.
The competition attracted 19 teams. The winning solution was a complex method, which included:
- Spoof modeling. An ensemble of models predicts a spoof cue for every test sample.
- Spoof fusion. Heuristic voting combines the multiple scores into an accurate final decision.
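A toy sketch of how such a two-stage scheme might combine per-model spoof scores; the voting heuristic below is illustrative, not the winners' actual method:

```python
def fuse_scores(model_scores, threshold=0.5):
    """Heuristic fusion of per-model spoof scores for one test sample.

    model_scores: floats in [0, 1], one per model (higher = more likely
    spoof). Combines a majority vote with a mean-score fallback.
    """
    votes = sum(s > threshold for s in model_scores)
    mean_score = sum(model_scores) / len(model_scores)
    # A majority of models flagging spoof decides; otherwise fall
    # back to thresholding the mean score
    if votes > len(model_scores) / 2:
        return True, mean_score
    return mean_score > threshold, mean_score
```

Ensembling of this kind is a recurring theme across the challenges covered here: averaging or voting over diverse models tends to smooth out each model's blind spots against unseen attack types.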
LivDet is a series of ongoing liveness detection competitions; its 2021 edition, LivDet-Face, was the first dedicated specifically to facial antispoofing. It consists of two segments, Image and Video PAs, but prescribes no specific dataset. The goal of the event is to find methods that generalize "to uncertain circumstances". The 2021 contest featured a rich variety of Presentation Attack Instruments (PAIs): 3D masks of varying quality, laptop displays, photo cutouts, etc.
Team Fraunhofer IGD proposed the winning solution in the Image category with the lowest ACER of 16.47% and BPCER of 5.33%. In the Video category, the winner was FaceMe with the ACER of 13.81% and BPCER of 14.29%. The general trend of the contest showed that the winning methods performed much better against the low-quality PAIs.
Why is facial liveness detection important?
Facial liveness detection helps prevent a number of threats, from cybercrime to real-life attacks.
Antispoofing techniques and facial liveness detection play an important role in security and financial systems. They help prevent Presentation Attacks (PAs), which imitate a facial representation to gain access to sensitive information, system controls, or material valuables.
Threats may vary from simple money theft to more sinister and complicated crimes such as hijacking vehicles remotely, money laundering, or falsifying materials to spread disinformation.
Facial liveness detection techniques keep evolving with time, leading to highly advanced methods. They are employed widely: from business and retail to various institutions such as airport terminals and police stations. To read more about how facial liveness detection works, check out the facial liveness detection article archive.