

Standardizing Detection of Deepfakes: Why Experts Say It’s Important

With deepfakes and disinformation spreading at an alarming rate, experts are setting their sights on standardized detection methods.

Definition & Problem Overview

Deepfake media have increased at an alarming rate in recent years. According to expert reports, the volume of such media, including facial deepfakes, doubles every six months, as the tools to produce fabricated content become increasingly available to amateurs and the general public. At this rate, it may be only a matter of months before you start to doubt the authenticity of everything you see online.

For instance, Adobe released Project Morpheus, software that allows users to change facial attributes in videos: facial expressions, hair thickness, apparent age, and so on. The program is aimed at ordinary end users and offers simple functionality.

Project Morpheus demonstration: facial attributes of a video changed using Adobe's tool

Deepfakes, however, are produced with the specific purpose of replicating a person's likeness, using various tools and techniques along with a base of source materials. Current deepfake detection tools are largely limited to what is represented in their training datasets, meaning they perform far worse when shown unfamiliar media.

Whenever a withheld test dataset or undisclosed manipulation methods are used, a deepfake detector can fail. For example, the most accurate detection method proposed at the Deepfake Detection Challenge identified only 65% of the falsified media in a previously unseen test set.

Deepfake production is highly accessible today

One way to combat newly emerging deepfakes, alongside detection competitions, is to establish a universal standard for identifying fabricated media. The need for such regulation is becoming urgent. Currently, two initiatives have been proposed that could hinder the distribution of deepfakes and curb the disinformation they spread.

The Content Authenticity Initiative (CAI)

The Content Authenticity Initiative is a collaborative effort by a group of institutions including Adobe, the University of California, the BBC, the New York Times, and the nonprofit organization Witness, which specializes in protecting the human right to free and unbiased information.

Announcement, background

CAI began in 2019 as a joint initiative of Adobe, the New York Times, and Twitter. The list of participants has since grown, and more companies, organizations, and educational institutions are welcome to join the cause.

Description

CAI outlines three important steps to be taken to mitigate deepfake threats:

  • Detection. The first step covers the many deepfake detection methods proposed today. They should not only detect altered digital media, be it visual or audio, but also help identify whether it was manipulated for malicious purposes.
  • Education. Content creators — filmmakers, video game developers, vloggers — should understand that disinformation is dangerous. Therefore, creative tools that allow virtually anyone to doctor digital media should be used responsibly. These ideas must be taught to regular viewers and listeners as well.
  • Content attribution. The most important step is being able to trace the origins of the source media, according to CAI's whitepaper. Creators must be equipped with simple-to-use tools for providing details on authorship, which in turn makes spotting manipulated media much easier. (This technique is also referred to as a provenance check.)

According to the initiative, most media files circulate the web without any metadata (such as EXIF, XMP, or VRA) whatsoever. This happens for various reasons: authors seeking to conceal their identity, illegal copying of the source file, and so on.
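As a minimal illustration of this (assuming the Pillow library and a hypothetical local file photo.jpg), the sketch below prints whatever EXIF tags an image still carries; for most files re-shared online, it finds none:

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical input file; most images re-shared online carry no EXIF at all
img = Image.open("photo.jpg")
exif = img.getexif()

if not exif:
    print("No EXIF metadata found: provenance cannot be traced from tags")
for tag_id, value in exif.items():
    # Map numeric tag IDs (e.g. 271) to readable names (e.g. "Make")
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```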

Deepfake-like tools are often used for benign purposes, such as restoring color to a famous Beatles photograph

To prevent this, CAI seeks to establish an intuitive platform or tool that will "tie" digital assets to their attribution data. This will enable fact-checkers, moderators, journalists, and investigators to identify forgeries with relative ease.
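A hedged sketch of what such "tying" could look like at its simplest, using Pillow's PNG text chunks as a stand-in (the field names are invented for illustration and are not part of any CAI specification):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative attribution fields; a real CAI tool would use a signed, standardized manifest
meta = PngInfo()
meta.add_text("Author", "Jane Doe")
meta.add_text("Source", "https://example.org/original-artwork")
meta.add_text("Tool", "ExampleEditor 1.0")

img = Image.open("artwork.png")
img.save("artwork_attributed.png", pnginfo=meta)  # attribution now travels with the file
```

Plain text chunks like these can be stripped or forged, which is precisely why CAI pairs attribution data with cryptographic protection, as described below.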

Execution

At the same time, CAI faces a series of challenges:

  • Technical difficulties. It may be difficult to insert attribution data, as media files are created on a variety of platforms and with a large assortment of tools: Adobe Photoshop alone has a huge number of rival and free alternatives.
  • Privacy concerns. Some creators may discard the idea, as it can potentially violate their privacy.
  • Lack of standardization. As mentioned previously, a universal metadata format does not exist. It is unclear whether software developers will comply and include a metadata extension that ensures proper content attribution.
  • Workflow disruption. If content attribution requires an extra step, it is likely that creators will skip it.

To avoid these problems, CAI suggests that its main principles be observed: Interoperability, Global Accessibility, Overarching Goals, Privacy, (minimized) Cost Burden, and others. In addition, the introduction of novel technology should be "sparing": whenever an older solution is applicable, it should be applied.

From a technical point of view, the verifiable metadata will be embedded via a cryptographic protocol, further secured by trusted certificates and certification authorities. It is suggested that CAI members will be able to create their own Trust Lists of certificates.
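A minimal sketch of the signing idea, using the Python cryptography library with Ed25519 keys (the claim fields are invented; the actual CAI manifest format is more elaborate):

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the key would live inside a camera or editing tool, with its
# certificate chained to a certification authority on a Trust List
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = open("photo.jpg", "rb").read()  # hypothetical asset
claim = {
    "assertion": "captured-by: example-camera",  # invented assertion
    "content_hash": hashlib.sha256(image_bytes).hexdigest(),
}
payload = json.dumps(claim, sort_keys=True).encode()
signature = private_key.sign(payload)

# Verifier side: recompute the hash and check the signature; any edit to the
# image or to the claim makes one of these checks fail
assert hashlib.sha256(image_bytes).hexdigest() == claim["content_hash"]
public_key.verify(signature, payload)  # raises InvalidSignature if tampered
print("Claim verified")
```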

Claim and assertion embedded in an image via CAI protocol

Interestingly, it is also suggested that the content creator will not be identified as a certificate holder. Instead, hardware and software manufacturers will assume that position.

To further ensure content authenticity, CAI will employ digital identities: unique pieces of data that identify a person or organization online. A digital identity includes tokens and identifiers that allow it to be recognized in many contexts: a geographical region, a multimedia application, a specific community, etc.

Since many Uniform Resource Identifiers (URIs) are based on popular identity formats (WebID, OpenID, ORCiD), providing an anonymous identity as part of CAI is possible, thus preserving an author's privacy.

The OpenID Connect protocol suite is used to preserve online identities
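For illustration, OpenID Connect carries identity claims as a JSON Web Token (JWT): three base64url segments separated by dots. The toy sketch below builds and reads an unsigned token with invented claim values; note that the sub (subject) claim can be a pseudonym, which is what makes anonymous identities feasible:

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(segment: str) -> dict:
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Toy, unsigned token; a real OpenID Connect ID token is signed by the provider
header = {"alg": "none", "typ": "JWT"}
payload = {"iss": "https://id.example.org", "sub": "creator-42"}  # invented values
token = f"{b64url_encode(header)}.{b64url_encode(payload)}."

header_b64, payload_b64, _signature = token.split(".")
print(b64url_decode(header_b64))   # {'alg': 'none', 'typ': 'JWT'}
print(b64url_decode(payload_b64))  # {'iss': 'https://id.example.org', 'sub': 'creator-42'}
```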

Other proposed techniques of digital identity preservation include digital signatures, Cryptographic Message Syntax (CMS), the XMP metadata standard, Distributed Ledger Technology, and so on.

The Coalition for Content Provenance and Authenticity (C2PA)

The Coalition for Content Provenance and Authenticity (C2PA) is an alliance that brings together CAI members and the similar Project Origin, led by Microsoft. The goal of C2PA is to introduce and establish universal provenance standards that will protect video, audio, image, and document files from tampering.

C2PA's official logo. The anti-spoofing initiative was launched as a remedy against disinformation in media.

According to C2PA, this can be achieved through the introduction of an open standard that will be easily accessible to online platforms, digital creators, and common users. C2PA suggests a number of cases in which its solution can be employed:

  • Journalistic work. The origin of a photo, video footage, or digital document is important in debunking fakes. For instance, C2PA-enhanced cameras will prevent photos taken with them from being stolen and doctored for malicious purposes.
  • Halting disinformation. Deepfakes and cheapfakes rely on social media and instant messengers as their main proliferation channels, according to researchers Paris and Donovan. The authenticity of media shared online can be confirmed or denied with a free C2PA app.
  • Integrity protection. Media integrity can be secured with C2PA certificates. In turn, this benefits journalistic investigations, scientific research, and so on.
  • Brand value enhancement. News agencies can greatly benefit from C2PA, as they will be able to publish content much faster thanks to authenticity confirmation of their materials. Analytical, consulting, financial, and other companies can also take advantage of C2PA.

Additionally, the C2PA standard will help in revealing retouched images, splicing, recontextualization, and other cheapfake techniques that are in wide use.

"Kidnapper cheapfake" videos in India caused lynching and mass violence

Difference between CAI and C2PA

CAI intends to create an accessible and simple tool that will help preserve authorship and locate the origins of an altered media file. C2PA, on the other hand, seeks to establish a universal ecosystem where provenance verification is possible in the first place.

C2PA acts as an alliance that brings together existing ideas, solutions, and technologies to minimize the threat of disinformation and create an effective, accessible remedy against deepfakes.

One of the ways to create such an ecosystem is extensive collaboration with news agencies, chipmakers, independent journalists, activists, educators, and hardware manufacturers. In short, a global effort is required to achieve meaningful protection against malicious deepfakes, and coordinating it is one of C2PA's primary goals.

FAQ

Originality vs. Authenticity

Authenticity is a key parameter for liveness detection in biometric recognition.

Originality and authenticity appear to be similar parameters; however, there is a subtle difference between them. Originality describes something that never existed before, whereas authenticity describes something that may already exist while retaining a degree of uniqueness.

When it comes to liveness detection, it is important to focus on authenticity. This characteristic implies the biometric uniqueness of a user, as well as their unique behaviors and habits. The latter can be quite important in IoT antispoofing. For example, unusual use of domestic appliances or uncontrolled shopping via a voice assistant can indicate that the recognition system has been hacked and is being used by an impostor.
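A toy sketch of that idea, flagging a purchase far outside a user's established pattern (the data and the three-sigma rule are invented for illustration):

```python
import statistics

# Hypothetical daily voice-assistant purchase totals for one user (USD)
history = [12.0, 0.0, 8.5, 15.0, 0.0, 9.9, 11.2, 14.3, 0.0, 10.5]

def looks_anomalous(amount: float, past: list[float], z_cutoff: float = 3.0) -> bool:
    """Flag a purchase that deviates sharply from the user's habit."""
    mean = statistics.fmean(past)
    spread = statistics.stdev(past)
    return abs(amount - mean) > z_cutoff * spread

print(looks_anomalous(9.0, history))    # False: consistent with habit
print(looks_anomalous(450.0, history))  # True: possible impostor at the assistant
```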

What is originality?

Originality is a group of traits inherent only to a specific object.

Originality represents a group of characteristics exclusively possessed by a specific object. In antispoofing, originality refers to the anatomical traits of a given person: fingerprints, face, vocal timbre and intonation, as well as other traits that cannot be altered (such as cranial geometry). Even though the terms originality and uniqueness overlap, they are not identical: an object can be both unique and original, but it does not have to be unique to remain original. Originality is, along with liveness, one of the critical parameters in antispoofing systems.

What is authenticity?

Authenticity is proof that a user accessing a system is actually the legitimate person they claim to be.

Authenticity is a parameter that guarantees:

  • A person who tries to access a system is a living human.
  • They are also the exact same person they claim to be.

In simple terms, authenticity rests on parameters such as liveness and originality. Through them, authenticity remains the key factor protecting a system against malicious actors who carry out presentation attacks (PAs). Along with biometric features, authenticity also covers aspects of human behavior: shopping habits, voice assistant requests, or credit card transactions form stable patterns, each unique to a single person.
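A minimal sketch of that decision logic, with hypothetical threshold values (real systems tune them on evaluation data):

```python
from dataclasses import dataclass

@dataclass
class BiometricSample:
    liveness_score: float  # 0..1 from a presentation-attack detector
    match_score: float     # 0..1 similarity to the enrolled template

# Hypothetical thresholds for illustration only
LIVENESS_THRESHOLD = 0.90
MATCH_THRESHOLD = 0.80

def is_authentic(sample: BiometricSample) -> bool:
    """Authenticity requires both liveness AND an identity match."""
    return (sample.liveness_score >= LIVENESS_THRESHOLD
            and sample.match_score >= MATCH_THRESHOLD)

print(is_authentic(BiometricSample(0.97, 0.91)))  # True: live and matching
print(is_authentic(BiometricSample(0.40, 0.95)))  # False: likely presentation attack
```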

Is there any deepfake detection standardization?

Deepfake standardization is a set of measures brought in to minimize the deepfake threat.

Two deepfake detection standards are currently in development: the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA).

CAI was proposed by Adobe in cooperation with the British Broadcasting Corporation (BBC), Truepic, and other organizations. Its goal is to set up a universal platform that protects digital content from tampering. This can be done by introducing universal metadata that cannot be 'erased' from the authentic file.

C2PA seeks to complement CAI's vision, as well as attract other promising ideas and solutions, thus helping to build a universal ecosystem to stop the global spread of disinformation and assist antispoofing.

What is Content Authenticity Initiative (CAI)?

CAI is an initiative/standard to protect media content from malicious altering.

The Content Authenticity Initiative (CAI) is a deepfake detection standard proposed by Adobe in collaboration with the BBC, the New York Times, Microsoft, and others. Its primary goal is to create a simple, publicly available solution for securing the original authorship of media content anywhere in the world.

The initiative seeks to stop online disinformation, the spread of deepfakes and cheapfakes, and other malicious practices. According to its vision, this can be done by making the origins of media content traceable. The technology will be based on cryptography and metadata, rendering media spoofing powerless against protected data.

What is Coalition for Content Provenance and Authenticity (C2PA)?

C2PA is a coalition of companies, media agencies and foundations that seeks to stop disinformation.

The Coalition for Content Provenance and Authenticity (C2PA) is an alliance organized by Microsoft together with Sony, Nikon, the RIAA, Truepic, and others. The goal of the alliance is to create a universal environment where digital content can be:

  • Secured with universally available means, such as metadata.
  • Protected from a legal point of view.
  • Easily traced back to the original.

C2PA seeks to unite the existing ideas in the area by integrating Content Authenticity Initiative (CAI) and other standards. This is going to help mitigate threats of deepfakes, online fraud, disinformation spread, and biometric attacks.

What is the difference between CAI and C2PA?

CAI and C2PA are two different initiatives that complement each other through integration.

CAI and C2PA intend to create standardization that will help detect deepfakes, trace the original content and diminish disinformation threats. CAI offers an out-of-the-box solution that will be available for creators, fact-checkers, journalists and investigators around the world to protect and verify data.

C2PA, on the other hand, strives to consolidate existing ideas, solutions and standards to create a universal ecosystem where digital data will be traceable, confirmable and protectable. C2PA plays the role of a framework dedicated to fact-checking and antispoofing measures.

With media alteration methods so widely available to the general public – and so easy to use – "cheapfakes" are invading social media. Read more about them in the next article.

References

  1. Report: the number of expert-crafted video deepfakes doubles every six months
  2. Project Morpheus on Adobe Labs
  3. New Standards for Deepfake Detection Unveiled Amidst U.S. Election Security Concerns
  4. Easy Deepfake Tutorial: DeepFaceLab 2.0 Quick96
  5. Content Authenticity Initiative
  6. Witness
  7. Metadata on Wikipedia
  8. The world owes Yoko an apology! 10 things we learned from The Beatles: Get Back
  9. Claim and assertion embedded in an image via CAI protocol
  10. Digital Identity
  11. WebID on Wikipedia
  12. OpenID
  13. ORCiD
  14. Cryptographic Message Syntax (CMS)
  15. C2PA Founding Press Release
  16. Project Origin
  17. C2PA’s official logo
  18. C2PA Explainer
  19. Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence
  20. "Kidnapper cheapfake" in India caused lynching and mass violence