Deepfake Detection Standardization: Origin, Goals and Implementation

The emergence of deepfakes has created a need for universally accepted ways to combat and neutralize them.

Definition & Problem Overview

Deepfake media have shown an alarming increase in recent years. According to expert reports, the number of deepfakes produced doubles every six months, as the tools for creating such fabricated media become increasingly available to amateurs and the general public.

For instance, Adobe has presented Project Morpheus — a tool that allows changing facial attributes in videos: facial expressions, hair thickness, apparent age, and so on. The program is aimed at ordinary end users and offers deliberately simple functionality.



The issue has another important aspect: deepfakes are produced with different tools and techniques and draw on varied source material. Current detection tools are mostly effective at spotting fake media that resembles the samples in their own training datasets.

Whenever a withheld test dataset or an undisclosed manipulation method is used, a deepfake detector can fail. For example, the most accurate detection method proposed at the Deepfake Detection Challenge identified only 65% of all falsified media.



As a result, universal standards for detecting fabricated media have become an urgent need. Currently, two initiatives have been proposed that could hinder the distribution of deepfakes and mitigate the threats they pose.

The Content Authenticity Initiative (CAI)

The Content Authenticity Initiative is a collaborative effort by a group of institutions including Adobe, the University of California, the BBC, The New York Times, and the nonprofit organization Witness, which specializes in protecting the human right to free and unbiased information.

Announcement, background

CAI was launched in 2019 by Adobe, The New York Times, and Twitter. The list of participants has since grown, and more companies, organizations, and educational institutions are welcome to join the cause.

Description

CAI outlines three important steps to be taken to mitigate deepfake threats:

  • Detection. The first step covers the many deepfake detection methods proposed today. These methods should not only spot altered digital media, whether visual or audio, but also help determine whether it was manipulated for malicious purposes.
  • Education. Content creators — filmmakers, video game developers, vloggers — should understand that disinformation is dangerous. Therefore, creative tools that allow virtually anyone to doctor digital media should be used responsibly. These ideas must be taught to regular viewers and listeners as well.
  • Content attribution. The most important step, according to CAI's whitepaper, is being able to trace the origins of the source media. Creators must be equipped with simple-to-use tools for recording authorship details, which in turn makes spotting manipulated media much easier (this technique is also referred to as a provenance check).

According to the initiative, most media files circulate around the web without any metadata — such as EXIF, XMP, or VRA — whatsoever. This happens for various reasons: authors seeking to conceal their identity, illegal copying of the source file, and so on.
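
The gap is easy to observe in practice. The minimal sketch below, assuming the Pillow imaging library and a hypothetical file name, checks whether an image carries any EXIF metadata at all:

    # Checks for the presence of EXIF metadata in an image file.
    # Assumes the Pillow library; "photo.jpg" is a hypothetical file name.
    from PIL import Image

    def has_exif(path: str) -> bool:
        with Image.open(path) as img:
            exif = img.getexif()   # returns an empty Exif mapping if none
            return len(exif) > 0

    print(has_exif("photo.jpg"))

Run against images downloaded from social networks, such a check will often return False, since many platforms strip metadata on upload.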



To prevent this, CAI seeks to establish an intuitive platform or tool that will "tie" digital assets to attribution data. This will enable fact-checkers, moderators, journalists, and investigators to identify forgeries with relative ease.
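
As an illustration of this "tying" idea, the sketch below binds a cryptographic hash of a file to basic authorship details (the field names are illustrative, not CAI's actual schema):

    # Builds a minimal attribution record bound to the exact bytes of an
    # asset. Field names are illustrative, not the CAI's actual schema.
    import hashlib
    import json
    from datetime import datetime, timezone

    def make_attribution_record(path: str, author: str, tool: str) -> dict:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "asset_sha256": digest,        # binds the record to these bytes
            "author": author,
            "creation_tool": tool,
            "created_at": datetime.now(timezone.utc).isoformat(),
        }

    record = make_attribution_record("photo.jpg", "Jane Doe", "ExampleCam 1.0")
    print(json.dumps(record, indent=2))

Because any modification of the file changes its hash, a fact-checker can immediately see that a doctored copy no longer matches its attribution record.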

Execution

At the same time, CAI faces a series of challenges:

  • Technical difficulties. Inserting attribution data may prove difficult, as media files are created on a variety of platforms and with a large assortment of tools: Adobe Photoshop alone has countless commercial and free alternatives.
  • Privacy concerns. Some creators may reject the idea, as it can potentially violate their privacy.
  • Lack of standardization. As mentioned previously, a universal metadata format does not exist. It is unclear whether software developers will comply and include a metadata extension that ensures proper content attribution.
  • Workflow disruption. If content attribution requires an extra step, creators are likely to skip it.

To avoid these problems, CAI stipulates that its main principles be observed: Interoperability, Global Accessibility, Overarching Goals, Privacy, (minimized) Cost Burden, and others. In addition, the introduction of novel technology should be "sparing": whenever an older solution is applicable, it should be applied.

From a technical point of view, the verifiable metadata will be embedded via a cryptographic protocol and further secured by trusted certificates and certification authorities. It is suggested that members of the CAI will be able to create their own Trust Lists of certificates.
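
A minimal sketch of this scheme, using the cryptography package and an Ed25519 key in place of full X.509 certificate chains, shows how a signed record is accepted only if the signer appears on a verifier's Trust List:

    # Signs an attribution record and verifies it against a Trust List.
    # An Ed25519 key stands in for a full certificate; this is a concept
    # sketch, not the actual CAI/C2PA protocol.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signer_key = Ed25519PrivateKey.generate()   # e.g. held by a camera maker
    payload = json.dumps({"asset_sha256": "0" * 64, "author": "Jane Doe"}).encode()
    signature = signer_key.sign(payload)

    # Each verifier curates its own Trust List of acceptable signers.
    trust_list = [signer_key.public_key()]

    def is_trusted(payload: bytes, signature: bytes) -> bool:
        for public_key in trust_list:
            try:
                public_key.verify(signature, payload)  # raises if forged
                return True
            except InvalidSignature:
                continue
        return False

    print(is_trusted(payload, signature))   # True: signer is on the list

Removing the signer's key from the Trust List, or altering a single byte of the payload, makes verification fail; this is exactly the property certification is meant to provide.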



Interestingly, it is also suggested that content creators will not be identified as certificate holders. Instead, hardware and software manufacturers will assume that position.

To further ensure content authenticity, CAI will employ digital identities — unique pieces of data that identify a person or organization online. A digital identity includes tokens and identifiers that allow it to be recognized in various contexts: a geographical region, a multimedia application, a specific community, and so on.

As many Uniform Resource Identifiers (URIs) are based on popular identity formats — WebID, OpenID, ORCiD — an anonymous identity can be provided as part of CAI, thus preserving an author's privacy.
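
One way to picture such an anonymous identity is to derive a stable pseudonym from a key pair, so an author can be recognized consistently across signed records without disclosing a legal name (the URI scheme below is invented for illustration):

    # Derives a stable pseudonymous identifier from a public key.
    # The "id:anon:" URI scheme is made up for this example only.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    author_key = Ed25519PrivateKey.generate()
    public_bytes = author_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    pseudonym = hashlib.sha256(public_bytes).hexdigest()[:16]
    print(f"id:anon:{pseudonym}")   # the same key always yields the same URI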



Other proposed techniques of digital identity preservation include digital signatures, Cryptographic Message Syntax (CMS), the XMP metadata standard, Distributed Ledger Technology, and so on.

The Coalition for Content Provenance and Authenticity (C2PA)

The Coalition for Content Provenance and Authenticity (C2PA) is an alliance that brings together CAI members and the similar Project Origin, an effort led by Microsoft. The goal of C2PA is to introduce and establish universal provenance standards that will protect video, audio, image, and document files from tampering.



According to C2PA, this can be achieved through an open standard that is easily accessible to online platforms, digital creators, and ordinary users. C2PA suggests a number of cases in which its solution can be employed:

  • Journalistic work. The origin of a given photo, video, or digital document is important in debunking fakes. For instance, C2PA-enabled cameras will make it evident when photos taken with them have been stolen and doctored for malicious purposes.
  • Halting disinformation. According to researchers Paris and Donovan, deepfakes and cheapfakes rely on social media and instant messengers as their main proliferation channels. The authenticity of media shared online can be confirmed or denied with a free C2PA app.
  • Integrity protection. Media integrity can be secured with C2PA certificates. In turn, this can benefit journalistic investigations, scientific research, and so on.
  • Brand value enhancement. News agencies can greatly benefit from C2PA, as authenticity confirmation will let them publish verified content much faster. Analytical, consulting, financial, and other companies can also take advantage of C2PA.

Additionally, the C2PA standard will help reveal retouched images, splicing, recontextualization, and other widely used cheapfake techniques.
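
To see why provenance exposes such edits, consider a toy hash chain of edit records (with illustrative field names, not the real C2PA manifest format): every declared edit appends a record, and any undeclared change leaves the file out of sync with the last record.

    # A toy provenance chain: each edit record stores the new asset hash
    # and the hash of the previous record. Undeclared edits break the chain.
    # Field names are illustrative, not the real C2PA manifest schema.
    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def append_edit(chain: list, new_bytes: bytes, action: str) -> None:
        prev = sha256(repr(chain[-1]).encode()) if chain else None
        chain.append({"asset_sha256": sha256(new_bytes),
                      "action": action,
                      "prev_record_sha256": prev})

    def verify(chain: list, final_bytes: bytes) -> bool:
        # Authentic only if the file matches the last declared state.
        return bool(chain) and chain[-1]["asset_sha256"] == sha256(final_bytes)

    chain = []
    append_edit(chain, b"original photo", "captured")
    append_edit(chain, b"cropped photo", "cropped")
    print(verify(chain, b"cropped photo"))       # True: the edit was declared
    print(verify(chain, b"secretly retouched"))  # False: undeclared change

A real provenance standard also signs each record, but even the bare chain shows how recontextualized or spliced media fails verification.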


Difference between CAI and C2PA

CAI intends to create an accessible and simple tool that will help preserve authorship and locate the origins of an altered media file. C2PA, on the other hand, seeks to establish a universal ecosystem where provenance verification is possible in the first place.

C2PA acts as an alliance that brings together existing ideas, solutions, and technologies to minimize the threat of disinformation and create an effective, accessible remedy against deepfakes.

One way to create such an ecosystem is extensive collaboration with news agencies, chipmakers, independent journalists, activists, educators, and hardware manufacturers. In short, a global effort is required to achieve lasting protection against malicious deepfakes, and fostering that effort is one of C2PA's primary goals.

References

  1. Report: the number of expert-crafted video deepfakes doubles every six months
  2. Project Morpheus on Adobe Labs
  3. New Standards for Deepfake Detection Unveiled Amidst U.S. Election Security Concerns
  4. Easy Deepfake Tutorial: DeepFaceLab 2.0 Quick96
  5. Content Authenticity Initiative
  6. Witness
  7. Introducing the Content Authenticity Initiative
  8. Metadata on Wikipedia
  9. The world owes Yoko an apology! 10 things we learned from The Beatles: Get Back
  10. Claim and assertion embedded in an image via CAI protocol
  11. Digital Identity
  12. WebID on Wikipedia
  13. OpenID
  14. ORCiD
  15. Cryptographic Message Syntax (CMS)
  16. C2PA Founding Press Release
  17. Project Origin
  18. C2PA’s official logo
  19. C2PA Explainer
  20. Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence
  21. "Kidnapper cheapfake" in India caused lynching and mass violence