Video Injection Attacks

From Antispoofing Wiki

Video injection attacks are a serious threat to biometric systems as they can be harder to detect than regular Presentation Attacks.

Definition & Overview

In biometric security, an injection attack is a type of indirect attack in which a bona fide biometric trait is digitally replicated and fed to a security system at the internal, software level, tricking the recognition system into making a specific decision. (Hardware-level attacks are also possible.)
Video injection attacks follow this scheme. The only distinction is that they employ photos or video footage as a replica of the targeted person. The media is covertly fed to the authentication system (hence the term "injection"), which then decides to authorize or reject the subject. Such an attack is not classified as a Presentation Attack (PA), since the footage is never physically shown to the camera.

Video injection attacks have proliferated with the spread of smart devices. Remote identity proofing (RIDP) and onboarding rely heavily on user verification performed through smartphone cameras, and the rise of deepfake technology makes this attack scenario especially insidious.

It is suggested that even challenge-based systems are at risk, since digital face manipulations allow bypassing them. For example, a technology akin to Face2Face, based on a Recurrent Neural Network (RNN), enables a perpetrator to copy a target's face and literally use it as a digital mask, the so-called face re-enactment. Such a solution copies head rotation, movement, and facial expressions in real time, which can subvert a challenge-based system.

The availability of deepfake tools, such as Generative Adversarial Networks stockpiled on GitHub, further aggravates the issue. As Sensity reports, the total number of deepfakes has been growing nearly exponentially since at least 2019.

General Scheme of Biometric Systems & Types of Injection Attacks

As summarized in the ISO/IEC 30107-1 standard, a biometric system typically has 9 attack points (or vulnerability points), which in turn serve to classify attacks into Direct and Indirect types.


General Biometric System Scheme

The following attack points are outlined:

Type 1 — Sensor attack

Type 1 is classified as the only Direct attack. In this scenario, a replicated biometric feature known as Presentation Attack Instrument (PAI) — facial mask, falsified fingerprint, fake iris — is presented to a system’s sensor physically. If the attacked solution is not equipped with a liveness detection component, it cannot prevent this attack.

Type 2 — Communication channel attack

Starting with Type 2, attacks are classified as Indirect. At this stage, the threatened element is the communication channel connecting the sensor to the feature extraction module.

Type 3 — Feature extraction module attack

This type is aimed at the feature extraction module to manipulate the biometric feature values (biometric template) and fool the system.

Type 4 — Feature value theft

This attack type implies that a culprit can steal the bona fide person’s biometric feature values when they are transferred from the feature extractor to the matcher.

Type 5 — Matcher attack

In this case a matching algorithm can be sabotaged to make it deliver a specific matching score: a high one for an impostor attack or a low one for a concealment attack.

Type 6 — Biometric template attack

A perpetrator attempts to intercept the biometric template transferred from the system database to the matcher in order to steal, replace, or tamper with it.

Type 7 — Database attack

A Type 7 attack targets the content of the system's database, which can be tampered with by modifying or deleting the existing models or adding new ones.

Type 8 — The match score attack

To affect the decision-making algorithm, an attacker tampers with the match score, a value between 0 and 100% transmitted from the matcher to the decision module. This way, the attacker can obtain a decision in their favor.

Type 9 — Decision attack

Finally, an attacker can alter the verdict reached by the decision module. This is done by tampering with the message transferred between the decision module and the application device. This scenario is considered the most critical due to the binary nature of the match decision process.
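The roles of the match score and the final verdict described in Types 8 and 9 can be illustrated with a minimal sketch (the threshold value and function names here are assumptions for illustration, not from the source):

```python
# Conceptual sketch of the final stages of a biometric pipeline: the
# matcher emits a score in [0, 100] and the decision module applies a
# threshold. An attacker who can rewrite the score in transit (Type 8)
# or flip the one-bit verdict (Type 9) controls the outcome.

THRESHOLD = 80.0  # assumed operating point; real systems tune this


def decide(match_score: float) -> bool:
    """Decision module: accept iff the score clears the threshold."""
    return match_score >= THRESHOLD


# Honest flow: an impostor scores low and is rejected.
impostor_score = 23.5
assert decide(impostor_score) is False

# Type 8 attack: the score is tampered with in transit.
tampered_score = 99.0
assert decide(tampered_score) is True

# Type 9 attack: the binary verdict itself is flipped after the
# decision module, so even a correct score no longer matters.
forged_verdict = not decide(impostor_score)
assert forged_verdict is True
```

The binary verdict explains why Type 9 is rated the most critical attack point: flipping a single bit suffices, whereas earlier stages require forging richer data.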

Injection Attack Types

Injection video attacks are separated into Software-level and Hardware-level.

Software-level

Software-level attacks, popularly referred to as 'hacking', involve tampering with non-malicious software or masquerading malware as a bona fide application. One possible scenario involves modifying an original APK file by inserting new JavaScript elements with the penetration-testing tool Frida. Notably, the attacked application can be running at that moment.

The culprit can then hook the gadget's camera and overwrite the video stream by injecting photos or videos, including deepfakes prepared in advance. This step is especially important if the perpetrator needs to bypass a challenge-based system.
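Conceptually, the hook swaps the frame source the verification app reads from. The sketch below is a pure-Python illustration of that substitution point; all names are hypothetical and it does not use the actual Frida API:

```python
# Conceptual illustration of a camera hook: the verification logic asks
# a frame source for frames, and the hook replaces the real source with
# one that replays pre-recorded media (e.g. a deepfake video).

from typing import Iterator


def real_camera() -> Iterator[str]:
    """Stands in for the device camera's live stream."""
    while True:
        yield "live-frame"


def injected_stream(prepared_media: list) -> Iterator[str]:
    """Replays frames prepared in advance by the attacker."""
    yield from prepared_media


def capture(frame_source: Iterator[str], n: int) -> list:
    """The verification app reads n frames from whatever source it is handed."""
    return [next(frame_source) for _ in range(n)]


# Normal operation: frames come from the sensor.
assert capture(real_camera(), 2) == ["live-frame", "live-frame"]

# After the hook, the app unknowingly reads the attacker's media.
fake = injected_stream(["deepfake-frame-1", "deepfake-frame-2"])
assert capture(fake, 2) == ["deepfake-frame-1", "deepfake-frame-2"]
```

Because the substitution happens behind the camera interface, the application's own logic runs unchanged, which is what makes software-level injection hard to detect from within the app.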


Hardware-level

This approach has two central steps:

  • Using a module that converts an HDMI stream to MIPI CSI.
  • Connecting this module to an LCD controller board to finalize the attack device.

Once the device is ready, it can be used to masquerade the HDMI output from a PC as the native video stream of the attacked device.

As reported, the hardware-level approach is superior to the software-level one: it provides lower latency, avoids focus blur and HSL color-space loss, mitigates frequency-response distortion, remains compatible with multiple applications, imports fake media in real time, and so on.

Protection Against Video Injection Attacks

A proposed solution employs a modified version of the WebAuthn attestation and authentication protocol supported in web browsers (see the GitHub page). The core idea relies on a public/private key pair. The algorithm includes these steps:

  • Once the media has been captured, its hash is signed with the private key.
  • When the media is submitted, its hash is computed again on the server side with the same hash function, and the signature is verified with the public key.
  • If the verified signature corresponds to the recomputed hash, the media is authentic and hasn't been tampered with.
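The flow above can be sketched in a few lines of Python. Since the standard library offers no asymmetric primitives, HMAC-SHA-256 with a shared key stands in here for the private-key signature and public-key verification; this is a deliberate simplification of the WebAuthn scheme, and the key material is hypothetical:

```python
import hashlib
import hmac

# Sketch of the hash-sign-verify flow. HMAC-SHA-256 with a shared key
# stands in for the asymmetric sign/verify pair (WebAuthn proper uses a
# device-held private key and a server-registered public key).

KEY = b"device-attestation-key"  # hypothetical key material


def sign_media(media: bytes) -> bytes:
    """Capture side: hash the media and sign the digest."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(KEY, digest, hashlib.sha256).digest()


def verify_media(media: bytes, signature: bytes) -> bool:
    """Server side: recompute the hash and check the signature."""
    digest = hashlib.sha256(media).digest()
    expected = hmac.new(KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


frame = b"captured-video-frame"
sig = sign_media(frame)
assert verify_media(frame, sig)                  # untampered media passes
assert not verify_media(b"injected-frame", sig)  # injected media fails
```

Any frame substituted after capture fails verification, because its recomputed hash no longer matches the one that was signed at capture time.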

Remote Image Attestation further validates the authenticity of the submitted media by confirming the website's identity, its Trusted Computing Base (TCB) level, and so on.


Experiments

An experiment was conducted in which doctored photos, several deepfake types, and a morphed video were used as injection attack tools. The experiment focused on bypassing both passive and active security systems. Notably, the tested system was enhanced with additional security layers dedicated to injection attacks.

Experiment results are presented below:



It appears that even specialized security solutions can be at least partly vulnerable to video injection attacks. For regular systems that lack such elements, success rates will be much higher.

Other Cases of Injection Attacks

Video injection attacks are observed in some other fields as well.

False Image Injection Prevention in Medicine

A peculiar example shows a visual injection attack in which one of the wisdom teeth was hidden on purpose. It is noted that the consequences can be far more detrimental if scans of vital organs are modified or replaced with malicious intent.


Injection Attacks in Automotive Ethernet-Based Networks

Autonomous vehicles and in-vehicle networks (IVNs) are also vulnerable to injection attacks, among which are Content-addressable memory (CAM) table overflow, fuzzing, replay attacks, command injection, and other scenarios. To prevent such threats, the authors suggest IDS monitoring of AVTP streams to detect injections, collecting automotive Ethernet packets, and other techniques.



References

  1. Face2Face: Real-time Face Capture and Reenactment of RGB Videos
  2. Awesome-GANS-and-Deepfakes on GitHub
  3. Report 2019: The State Of Deepfakes
  4. ISO/IEC 30107-1:2016 Information technology — Biometric presentation attack detection — Part 1: Framework
  5. Animaze by FaceRig
  6. Video injection attacks on remote digital identity verification solution using face recognition
  7. Frida Tutorial 1
  8. Video injection attack algorithm aimed at bypassing a liveness detection system
  9. Biometric Authentication Under Threat: Liveness Detection Hacking
  10. Example of an LCD controller board
  11. Ensuring the Authenticity and Fidelity of Camera data
  12. WebAuthn protocol
  13. Webauthn.io on GitHub
  14. False image injection prevention using iChain
  15. Convolutional Neural Network-based Intrusion Detection System for AVTP Streams in Automotive Ethernet-based Networks