Cheapfakes: Definition, Overview and Potential Threats

From Antispoofing Wiki

Cheapfakes: Origin & Application

Cheapfakes (a portmanteau of "cheap" and "deepfake") are a class of fake media that is easy and quick to produce, especially for amateur users. Unlike traditional deepfakes, cheapfakes do not require highly specific skills such as coding, manual neural network tuning, or post-production in applications like Adobe After Effects.

Deepfakes first rose to public attention in 2017, when an eponymous subreddit was created. Initially used for producing fake pornography, the technique remained the preserve of technically skilled actors. In the following years, however, numerous apps and tools were designed that make basic deepfake techniques, such as face swapping, accessible to a broader audience.

Experts highlight that cheapfakes can be highly dangerous for the targeted public. Because they take little effort and cost to fabricate, cheapfakes can elevate blackmail, bullying, defamation, fraud and other threats typically posed by digital manipulation to a new level.

Definition & Overview

In essence, a cheapfake (or shallowfake) is a manipulation of audiovisual content performed with cheap, accessible and easy-to-use software, or without any software at all. Hence its nickname: the poor man's deepfake.

Such manipulations involve merging two pieces of existing media into one or creating fully synthetic material from scratch. The term was proposed by Britt Paris and Joan Donovan. In their work Deepfakes and Cheap Fakes, they note that cheapfakes can serve to sabotage widely accepted norms of truth and evidence.

Among the typical tools used for their production, researchers name Photoshop, video-editing software such as Adobe Premiere or Sony Vegas Pro, apps equipped with real-time filters capable of altering a user's appearance, and in-camera effects.

The most common cheapfake techniques are:

  • Relabeling
  • Face swapping
  • Speeding/slowing
  • Footage editing

One of the most popular cheapfake techniques is recontextualization. This method extracts a certain element, such as a person's statement, gesture or action, and presents it in a completely different context.

An especially notable hazard lies in the viral nature of cheapfakes. Paris and Donovan note that media proliferation is easy to orchestrate via channels such as instant messengers, creating a favorable "breeding ground" for spreading disinformation.

One of the early instances of a cheapfake causing real-life consequences took place in 2018. A number of short videos began circulating in WhatsApp chats in India, presenting completely unrelated incidents as the actions of "kidnappers" and "organ harvesters". The videos caused outrage in rural areas of India, resulting in mob violence against random local residents and outsiders. The hoax was based on the typical recontextualization principle.

The 2020 US presidential election saw a flood of cheapfakes. As reported by CREOpoint, they had been viewed 120 million times during the election campaign. A cheapfake tactic was also used against Filipino politician Leila de Lima in 2019, with fake videos circulating on online streaming platforms that depicted her involvement in drug dealing.

That said, political deepfakes and cheapfakes often fail to manipulate public opinion. This can be ascribed to the critical thinking and contextual awareness that help viewers spot fake media of even the finest quality.

Difference Between Cheapfakes & Deepfakes

The core difference between deepfakes and cheapfakes is that the latter require no sophisticated tools, methods or skills. There are three main groups of differences to observe:

  • Creation. Cheapfakes are often produced manually, using applications like Photoshop or Movie Maker, and are aimed at restructuring and recontextualizing already existing media: a photo, video or audio recording. A deepfake, by contrast, is made with more advanced and automated tools such as neural networks.
  • Deep learning factor. Unlike deepfakes, shallowfakes do not rely upon machine/deep learning. This results in rather unpolished and rough-edged media.
  • Artificial Intelligence (AI) usage. Cheapfakes mostly do not use AI-powered tools, or employ them only to a limited extent. This may be because AI requires high computing power that is not easily accessible to shallowfake creators.

While the listed factors generally hold, cheapfakes can sometimes also rely on more sophisticated tools and techniques. In such cases, the deepfake and cheapfake production pipelines overlap to some degree.

Similarities Between Deepfakes & Shallowfakes

There are also a number of traits shared by deepfakes and cheapfakes:

  • Shared goal. Both types of media aim at distorting reality and producing something that was not done or said in real life.
  • Shared basis. Fake media is usually based on a real prototype that can be altered with different tools: from a video editor to a sophisticated CNN.
  • Shared dangers. Cheapfakes and deepfakes are equally dangerous when it comes to information security, as both can be used for deception, libel and disinformation.
  • AI usage. In some cases, fraudsters can employ easy-to-use AI tools to speed things up. Among them are FakeApp, Zao, FaceApp, Wombo, DeepFaceLab, and others. It’s possible that in the future, tools similar to Adobe Voco will allow unskilled users to produce falsified media of the highest quality at home.

It’s worth mentioning that more similarities may arise as both technologies keep evolving.

Cheapfake Motives, Examples & Techniques

Motives for cheapfake creation range from entertainment and online "trolling" to serious criminal attempts, as well as deliberate misinformation and slander. Cheapfakes are often used to sow discord between certain groups or to provoke political or social unrest.

One example features the US wrestler and actor Dwayne Johnson. In an artfully edited video, he appears to be singing insults at Hillary Clinton, although in reality the song was addressed to his fellow wrestler Vickie Guerrero as part of a wrestling act. The video caused a backlash when Johnson was seen supporting the Democrats later in 2020.

Another instance of political propaganda was spotted in 2021. Allegedly, a photo showed a Serbian woman forced to flee Kosovo by NATO's bombing. In reality, the picture showed an Albanian refugee standing in line to enter Macedonia during the Yugoslav wars.

Mostly, simple techniques are used for producing cheapfakes:

  • Editing. This includes rearranging frames and audio clips, as well as splicing in foreign elements, replication, rotation, etc. It is used to make it appear that things which never happened in reality were said or done.
  • Face swapping. Achieved with simple AI algorithms or tools like Photoshop, it allows extracting a target's face and "putting" it on another person. This method is often used by fraudsters who aim to spoof biometric systems.
  • Tempo alteration. It involves speeding up or slowing down video footage.
  • Photoshopping. Doctoring images through cutting, cloning and similar manipulations.
  • Recontextualizing. Extracting a certain element from media and delivering it in a completely different light/context.
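Of the techniques above, tempo alteration needs almost no tooling at all. A minimal sketch of the idea in plain Python (the `retime` helper and the frame list are illustrative stand-ins, not taken from any real editor):

```python
def retime(frames, factor):
    """Return a re-timed frame sequence.

    factor < 1 slows the clip down (source frames repeat),
    factor > 1 speeds it up (source frames are skipped) --
    the whole "speeding/slowing" trick in a few lines.
    """
    n_out = int(len(frames) / factor)
    return [frames[min(int(i * factor), len(frames) - 1)] for i in range(n_out)]

clip = list(range(10))        # stand-in for 10 video frames
slowed = retime(clip, 0.5)    # 20 frames: every source frame shown twice
sped = retime(clip, 2.0)      # 5 frames: every other source frame dropped
```

Played back at the original frame rate, the slowed sequence makes speech and movement appear sluggish, which is how the widely reported "drunken Pelosi" clip was produced.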

Even though some methods, like photoshopping, are well known, they are still widely used by fraudsters.

Cheapfake Detection Methods

A number of effective cheapfake detection methods have been proposed:

  • Camera fingerprint. When a camera acquires an image, it leaves artifacts that are unique to each sensor, such as photo-response non-uniformity (PRNU) noise. If certain objects in the picture lack these artifacts, the image has likely been tampered with. This method involves denoising algorithms, high-pass filters, etc.

  • Compression clues. After doctoring, a JPEG image goes through a second compression. This double compression (double quantization) leaves a number of artifacts that can serve as clues, especially in tampered regions. A similar technique can be applied to videos.
  • Editing artifacts analysis. When new elements are inserted into a video or an image, they require additional tweaking to fit into the context. It implies resampling, rotation, replication, coloring, blurring, gluing, and so on. In turn, these procedures leave detectable artifacts.
  • Localization. This focuses on pixel characteristics rather than the entire image. It includes patch-focused analysis, segmentation-like and detection-like approaches, which can spot manipulated pixel regions.
  • Audio detection. Audio cheapfake detection is based on analyzing wideband oscillation signals (the Gibbs effect). This method can decide whether the audio has been tampered with, while also differentiating the relevant clues from normal transients and other sonic artifacts.
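As a rough illustration of the camera-fingerprint idea, the sketch below checks block-wise correlation between an image's noise residual and a reference PRNU pattern. It is a toy model under strong assumptions: the reference fingerprint is assumed known, a simple mean filter stands in for a proper denoiser, and all function names are invented for the example.

```python
import numpy as np

def noise_residual(img, k=3):
    """High-pass residual: the image minus a local k-by-k mean.
    Real PRNU pipelines use stronger (e.g. wavelet) denoisers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    smooth = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(k) for dx in range(k)
    ) / (k * k)
    return img - smooth

def block_correlation(residual, fingerprint, block=8):
    """Normalized correlation with a reference fingerprint, per block.
    Spliced-in regions lack the sensor's PRNU and score near zero."""
    h, w = residual.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = residual[y:y + block, x:x + block].ravel()
            b = fingerprint[y:y + block, x:x + block].ravel()
            a, b = a - a.mean(), b - b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            scores.append(float(a @ b / denom) if denom else 0.0)
    return scores
```

On a synthetic image that carries the fingerprint everywhere except one pasted-in flat patch, the patch's block score drops sharply relative to the rest, which is the signal a real detector thresholds.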

Moreover, some CNNs can be used for preprocessing, denoising or applying high-pass filters to uncover evidence. The two-branch method employs a CNN architecture capable of analyzing low-level and high-level features at the same time; Siamese networks can analyze and compare camera-based features to detect splicing, and so on.
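The double-quantization clue mentioned above is easy to reproduce numerically. The sketch below uses synthetic stand-ins for DCT coefficients and made-up quantization steps (5 for the first save, 3 for the second); when the first step is larger than the second, the recompressed histogram develops periodically empty bins that a detector can flag:

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Simulate saving with step q1, editing, then re-saving with step q2."""
    once = np.round(coeffs / q1) * q1       # first compression
    return np.round(once / q2).astype(int)  # second compression

rng = np.random.default_rng(1)
coeffs = rng.normal(0, 20, 100_000)           # stand-in DCT coefficients
single = np.round(coeffs / 3).astype(int)     # compressed once, step 3
double = double_quantize(coeffs, 5, 3)        # step 5, then step 3

hist_single = np.bincount(single - single.min())
hist_double = np.bincount(double - double.min())
```

Here, for instance, the quantized values 1 and 4 can never occur after the 5-then-3 double quantization (only multiples of 5 survive the first pass), while the singly compressed histogram has no such gaps.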

