Video Injection Attacks

A video injection attack is a malicious technique in which the video stream is tampered with internally, by means of software or hardware, to fool an authentication system.

Definition & Overview

In biometric security, an injection attack is a type of indirect attack in which a bona fide biometric trait is digitally replicated and presented to a security system at the internal, software level (although hardware-level attacks are also possible). This tricks a recognition system into making a specific decision. Among other uses, it can serve as a tool for biometric spoofing.

Example of a video injection attack (hardware level)

Video injection attacks follow a similar algorithm; the only distinction is that they employ photos or video footage as a replica of a targeted person. This media is covertly presented to the authentication system (hence the term injection), which then decides to authorize or reject the subject. Injection attacks are not classified as Presentation Attacks (PAs), since the footage is not physically presented to the camera.

As noticed by Antispoofing.org, video injection attacks have become more common since the advent of smart devices. Remote identity proofing (RIDP) and onboarding greatly rely on user verification conducted with the help of smartphone cameras. Additionally, the onset of deepfake technology has made these injection attacks highly insidious.

In liveness detection taxonomy, it is suggested that challenge-based systems are at a higher risk, since digital face manipulations can effectively bypass them. For example, a technology akin to Face2Face, based on a Recurrent Neural Network (RNN), enables a perpetrator to copy a target's face and literally use it as a digital mask (the so-called face re-enactment). This solution copies head rotation, movement and facial expressions in real time, which can subvert a challenge-based system.

Availability of deepfake tools — such as Generative Adversarial Networks stockpiled on GitHub — further aggravates the seriousness of the issue. As reported by Sensity, the overall amount of deepfakes produced and circulated has shown a nearly exponential growth since at least 2019.

General Scheme of Biometric Systems & Types of Injection Attacks

As summarized by the ISO/IEC 30107-1 standard, a biometric system typically has 9 attack points (or vulnerability points). In turn, these points serve to classify attack types, which are divided into Direct and Indirect.

Animaze software allows controlling digital 3D puppets in real time akin to Face2Face

General Biometric System Scheme

The following attack points are outlined according to ISO/IEC 30107 standard:

Type 1 — Sensor attack

Type 1 is classified as the only Direct attack. In this scenario, a replicated biometric feature known as Presentation Attack Instrument (PAI) — facial mask, falsified fingerprint, fake iris — is presented to a system’s sensor physically. If the targeted biometric system is not equipped with a liveness detection component, it cannot prevent this attack.

Type 2 — Communication channel attack

Starting with type 2, attacks are classified as indirect. At this stage the communication channel connecting the sensor and feature extraction components is threatened.

Type 3 — Feature extraction module attack

This type of attack is aimed at the feature extraction module to manipulate the biometric feature values (biometric template) and fool the system.

Type 4 — Feature value theft

This attack type implies that a culprit can steal a bona fide person’s biometric feature values when they are transferred from the feature extractor to the matcher.

Type 5 — Matcher attack

In this case a matching algorithm can be sabotaged to make it deliver a specific matching score: a high one for an impostor attack or a low one for a concealment attack.

Type 6 — Biometric template attack

In this attack, a perpetrator attempts to intercept the biometric template transferred from the system database to the matcher. The template can then be stolen, replaced or tampered with.

Type 7 — Database attack

A Type 7 attack indicates that the content of the system's database can be tampered with by modifying or deleting the existing models or adding new ones.

Type 8 — The match score attack

To affect the decision-making algorithm, an attacker tampers with the match score. The score, set between 0 and 100%, is transmitted between the matcher and the decision module; by altering it, an attacker can obtain a decision in their favor.

Type 9 — Decision attack

Finally, an attacker can affect the verdict reached by the decision module. This is done by tampering with the messages transferred between the decision module and the application device. This attack scenario is considered the most critical due to the binary nature of the match decision process.
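Why Types 8 and 9 are so attractive to attackers can be illustrated with a toy decision module (a sketch only; the names and the threshold value below are hypothetical): the final accept/reject verdict reduces to comparing a match score against a threshold, so intercepting either the score in transit or the one-bit verdict itself is enough to subvert the whole pipeline.

```python
# Toy illustration of the Type 8 (match score) and Type 9 (decision)
# attack points. All names and the threshold are hypothetical.

THRESHOLD = 0.80  # accept the subject if the match score >= 80%

def decision_module(match_score: float) -> bool:
    """Binary verdict produced from the matcher's score."""
    return match_score >= THRESHOLD

# Bona fide flow: a genuine user scores high, an impostor scores low.
print(decision_module(0.93))  # genuine user -> True
print(decision_module(0.41))  # impostor     -> False

# Type 8: the attacker overwrites the score in transit, so the
# untouched decision module now accepts the impostor.
tampered_score = 0.99
print(decision_module(tampered_score))  # -> True

# Type 9: the attacker skips the matcher entirely and forges the
# single accept/reject bit sent to the application device.
forged_verdict = True
print(forged_verdict)  # -> True
```

The single-bit nature of the final verdict is exactly why the standard rates the Type 9 point as the most critical: there is no redundancy left to cross-check.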

Injection Attack Types

Injection video attacks are separated into Software-level and Hardware-level.

Software-level

Software-level attacks, popularly referred to as 'hacking', involve tampering with non-malicious software or masquerading malware as a bona fide application. One possible scenario involves modifying an original APK file by inserting new JavaScript elements into it with the penetration-testing tool Frida. Interestingly, the attacked application can be running at that moment.

Then, a culprit can hook the gadget's camera and overwrite the video stream by injecting photos or videos, including deepfakes prepared in advance. This step is especially important if the perpetrator needs to bypass a challenge-based system.
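Conceptually, the hook replaces the function that delivers camera frames with one that replays pre-recorded media. The sketch below is a pure-Python illustration of that idea (all names are hypothetical; a real attack would hook native camera APIs, e.g. via Frida): the verification pipeline trusts whatever frame source it is handed, so swapping the source substitutes injected footage for the live feed.

```python
# Conceptual sketch of a software-level injection. The verification
# pipeline consumes frames from whatever source it is given, so
# replacing the source swaps the live camera feed for pre-recorded
# media. All names are hypothetical illustrations.

def live_camera_frames():
    """Stands in for the device's genuine camera stream."""
    for i in range(3):
        yield f"live-frame-{i}"

def injected_frames():
    """Pre-recorded (e.g. deepfake) footage prepared in advance."""
    for i in range(3):
        yield f"deepfake-frame-{i}"

def verify_identity(frame_source):
    """Toy pipeline: trusts and consumes the frames it receives."""
    return [frame for frame in frame_source]

print(verify_identity(live_camera_frames()))  # genuine capture
print(verify_identity(injected_frames()))     # hooked: injected media
```

The pipeline itself is unchanged in both calls, which is what makes software-level injection hard to detect from inside the application: nothing in the code path distinguishes a hooked source from the real camera.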

Video injection attack algorithm aimed at bypassing a liveness detection system

Hardware-level

This approach has two central steps:

  • Using a module that converts an HDMI stream to MIPI CSI.
  • Connecting this module to an LCD controller board to finalize the attack device.

After the device is ready, it can be used to masquerade the HDMI output coming from a PC as a native video stream of an attacked device.

Hardware level attack schematic
Example of an LCD controller board

As reported, the hardware-level approach is superior to the software-level one: it provides lower latency, avoids focus blur and HSL color-space loss, mitigates frequency response distortion, remains compatible with multiple applications, imports fake media in real time, and so on.

Protection Against Video Injection Attacks

A proposed solution employs a modified version of the WebAuthn attestation and authentication protocol supported in web browsers (see the GitHub page). The core idea relies on public and private keys. The algorithm includes these steps:

  • Once the media has been captured, its hash is signed with a private key on the capturing device.
  • When the media is submitted, its hash is computed again on the server side with the same hash function, and the signature is verified with the corresponding public key.
  • If the verified hash matches the hash of the received media, the media is considered authentic and free of tampering.
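The hash-and-sign steps above can be sketched as follows. Python's standard library has no asymmetric cryptography, so this sketch substitutes an HMAC over a shared secret for the WebAuthn-style private/public key pair; all names are hypothetical, and a real deployment would sign on the device with a private key and verify on the server with the public key.

```python
import hashlib
import hmac

# Simplified sketch of the hash-and-sign attestation scheme. An HMAC
# with a shared secret stands in for the asymmetric key pair used by
# the real WebAuthn-based protocol.

SECRET_KEY = b"device-attestation-key"  # hypothetical shared secret

def sign_media(media: bytes) -> bytes:
    """Device side: hash the captured media and sign the digest."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).digest()

def verify_media(media: bytes, signature: bytes) -> bool:
    """Server side: recompute the hash and check the signature."""
    digest = hashlib.sha256(media).digest()
    expected = hmac.new(SECRET_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

frame = b"captured video frame bytes"
sig = sign_media(frame)
print(verify_media(frame, sig))                 # True: authentic
print(verify_media(b"injected deepfake", sig))  # False: tampered
```

Because the signature is bound to the media captured on the device, footage injected later in the pipeline fails verification even if it is visually convincing.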

Remote Image Attestation serves to further validate authenticity of the media submitted by confirming the website identity, its Trusted Computing Base (TCB) level, and so on.

Schematic of the photo authenticity protection model

Experiments

An experiment was conducted in which doctored photos, several deepfake types and a morphed video were used as injection attack tools. The experiment focused on bypassing both passive and active security systems. Interestingly, the tested system was enhanced with additional security layers dedicated to injection attacks.

Experiment results are presented below:

Results for attacks reconstructed live and results for all challenge orders prepared in advance

It appears that even specialized security solutions can be at least partly vulnerable to video injection attacks. In the case of regular systems that lack such elements, the success rate of injection attacks will be much higher.

Other Cases of Injection Attacks

Video injection attacks are observed in some other fields as well.

False Image Injection Prevention in Medicine

A peculiar example shows a visual injection attack in which one of the wisdom teeth was hidden on purpose. It is noted that the consequences can be far more detrimental if scans of vital organs are modified and replaced with malicious intent. Various examples of falsified medical data further prove how vital anti-spoofing for IoT is.

Authentic (left) and tampered (right) dental photos

Injection Attacks in Automotive Ethernet-Based Networks

Autonomous vehicles and in-vehicle networks (IVNs) are also vulnerable to injection attacks, among which are content-addressable memory (CAM) table overflow, fuzzing, replay attacks, command injection, and other scenarios. To prevent the potential threat, the authors suggest IDS monitoring of AVTP streams to exclude injections, collecting automotive Ethernet packets, and other techniques.
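One simple heuristic behind such IDS monitoring can be sketched as follows: AVTP packets carry an 8-bit sequence counter, so replayed or injected packets often show up as duplicates or gaps in that counter. The function below is a minimal, hypothetical illustration of this check; the CNN-based IDS cited in the references inspects full packet payloads rather than just sequence numbers.

```python
# Minimal sketch of one IDS heuristic for AVTP streams: the 8-bit
# sequence counter should increase by exactly 1 (mod 256) from packet
# to packet, so duplicates and gaps hint at replay or injection.
# A hypothetical illustration only; real IDSes inspect full payloads.

def find_sequence_anomalies(seq_nums):
    """Return the indices of packets whose sequence number does not
    follow the previous one by exactly 1 modulo 256."""
    anomalies = []
    for i in range(1, len(seq_nums)):
        expected = (seq_nums[i - 1] + 1) % 256
        if seq_nums[i] != expected:
            anomalies.append(i)
    return anomalies

normal = [253, 254, 255, 0, 1, 2]       # counter wraps cleanly at 255
replayed = [253, 254, 255, 255, 0, 1]   # packet 255 injected twice
print(find_sequence_anomalies(normal))    # []
print(find_sequence_anomalies(replayed))  # [3]
```

The modulo arithmetic matters: a naive `seq[i] > seq[i-1]` check would falsely flag the legitimate wrap-around from 255 back to 0.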

Example of an injection attack, which can result in a fatal road accident
Architecture of the system that prevents injection attacks in self-driving vehicles

References

  1. Face2Face: Real-time Face Capture and Reenactment of RGB Videos
  2. Awesome-GANS-and-Deepfakes on GitHub
  3. Report 2019: The State Of Deepfakes
  4. ISO/IEC 30107-1:2016 Information technology — Biometric presentation attack detection — Part 1: Framework
  5. Animaze by FaceRig
  6. Video injection attacks on remote digital identity verification solution using face recognition
  7. Frida Tutorial 1
  8. Video injection attack algorithm aimed at bypassing a liveness detection system
  9. Biometric Authentication Under Threat: Liveness Detection Hacking
  10. Example of an LCD controller board
  11. Ensuring the Authenticity and Fidelity of Camera data
  12. WebAuthn protocol
  13. Webauthn.io on GitHub
  14. False image injection prevention using iChain
  15. Convolutional Neural Network-based Intrusion Detection System for AVTP Streams in Automotive Ethernet-based Networks