The rise of deepfakes is causing concern among intelligence professionals. Here's how researchers can keep their verification skills current as the technology advances.
Since the term “deepfake” was coined in 2017, the ability of artificial intelligence to use deep learning to generate lifelike images and videos has steadily improved. Deepfakes vary in quality and may involve face swapping, voice replication or lip-syncing to audio. The videos can make it appear that a politician or celebrity said or did something that never happened, with a likeness the general public can barely distinguish from the real thing.
The rise of deepfake technology has caused concern among the intelligence community (IC) and human rights organizations alike. Use cases range from professional film production to sinister exploitation, and AI-generated deepfakes are increasingly used in cybercrime, according to iProov. As the technology improves, identifying deepfakes will become more difficult, and mastering detection more imperative for researchers.
How deepfakes are used
Despite the concern over deepfake technology, it does have beneficial uses. In film production, deepfakes can match an actor’s face to a stunt double's movements, or avoid the need for humans to perform risky stunts at all. They can also be used for comedy and even in witness protection programs. The account @deeptomcruise on TikTok is a high-quality, benevolent parody account that, as the handle suggests, is forthcoming about its origin rather than trying to trick people.
[Embedded TikTok video from @deeptomcruise; original sound by Metaphysic.ai]
But with the good comes the bad, and sinister uses concern many in the IC, law enforcement and fact-checkers. The ability to use deepfakes to spread disinformation by impersonating a political figure, expert or newscaster is a growing fear. The technology is also ripe for committing fraud, including costly scams via deepfake voice impersonations.
Yet most deepfakes are exploitative, used to create pornographic films from the likenesses of celebrities without their consent. While fears about disinformation and fraud dominate the conversation, a 2019 study found that over 96% of deepfakes were of a non-consensual, degrading nature.
Targets of deepfakes tend to be celebrities and politicians, but as the technology improves and becomes more widespread, identity theft and fraud targeting everyday people will become an increasing risk.
Shallowfakes
In contrast to deepfakes, shallowfakes (or cheapfakes) are made by editing an existing video with slight changes. That could mean altering the background or adding something to the image; it could also mean adjusting the speech to make it appear someone said something they did not, or speeding up or slowing down the footage, as in the sketch below. This kind of manipulation is quick to create and far less costly than a deepfake, while still capable of causing substantial harm.
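To illustrate how little effort a shallowfake takes, here is a minimal sketch that slows a clip to half speed simply by rewriting its unaltered frames at a lower frame rate. It assumes the opencv-python package; the file names are hypothetical.

```python
# A minimal sketch of a "shallowfake" speed manipulation: the frames are
# untouched, but rewriting them at half the original frame rate makes the
# subject appear to move and speak at half speed. Assumes opencv-python;
# the file names are hypothetical.
import cv2

source = cv2.VideoCapture("original.mp4")
fps = source.get(cv2.CAP_PROP_FPS)
width = int(source.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(source.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Same frames, half the frame rate: playback is slowed to 50%.
slowed = cv2.VideoWriter(
    "slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps / 2, (width, height)
)
while True:
    ok, frame = source.read()
    if not ok:  # end of the source video
        break
    slowed.write(frame)

source.release()
slowed.release()
```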
Spotting fake content
The keys to identifying deepfakes are similar to the analysis researchers already perform. Researching the individual behind the post is the first step in assessing whether something is fake. Is it a real or verified account? Is the profile picture a real person, AI-generated or a stock image? Simple reverse image search tactics can help a researcher quickly identify the original source of a photo and whether it is being used in earnest. The same tactics work for video: freeze frames can be extracted and reverse-searched, as in the sketch below. Geolocation information may offer another way to gain insight into the images.
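Here is a minimal sketch of that freeze-frame workflow, assuming the opencv-python package; the file name suspect_clip.mp4 and the five-second sampling interval are hypothetical.

```python
# A minimal sketch of pulling freeze frames from a video so each still
# can be run through a reverse image search. Assumes the opencv-python
# package; "suspect_clip.mp4" and the 5-second interval are hypothetical.
import cv2

def extract_frames(video_path: str, every_n_seconds: float = 5.0) -> list[str]:
    """Save one still frame every N seconds and return the file names."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video (or unreadable file)
            break
        if index % step == 0:
            name = f"frame_{index:06d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    for still in extract_frames("suspect_clip.mp4"):
        print(still)  # upload each still to a reverse image search engine
```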
Deepfake video technology is generating concern, but as the fakes grow more lifelike, the technology to spot them is advancing too. Intel, for example, has announced that its detection tool FakeCatcher can spot deepfakes with 96% accuracy by analyzing traces of blood flow.
Another component of countering deepfake disinformation is educating the public. Awareness of deepfakes can help media consumers be skeptical of videos they see online or from unofficial sources.
Researchers can practice determining whether something is a deepfake via the website Detect Fakes, a Massachusetts Institute of Technology Media Lab project that challenges users to discern AI manipulation from the truth. It is a great way for researchers to hone their detection skills.
When investigating nefarious deepfakes, researcher anonymity is crucial. Malicious actors could be tracking and exploiting your digital fingerprint.
For more information on researching videos and images, check out an in-depth conversation on fact-checking images on NeedleStack. To conduct image research with Silo for Research, try our EXIF viewer utility.
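As a rough illustration of the kind of metadata an EXIF viewer surfaces, the sketch below dumps an image's EXIF tags using the Pillow library (an assumption; this is not Silo's own utility, and the file name is hypothetical).

```python
# A rough illustration of the metadata an EXIF viewer surfaces, using the
# Pillow library (an assumption; this is not Silo's own utility) and a
# hypothetical file name.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(image_path: str) -> dict:
    """Return an image's EXIF tags as a {tag name: value} dictionary."""
    exif = Image.open(image_path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    # Fields such as DateTime, Make, Model and GPSInfo can corroborate
    # or contradict a photo's claimed origin.
    for tag, value in dump_exif("evidence.jpg").items():
        print(f"{tag}: {value}")
```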