
The first step in fighting against deepfakes is knowing how to spot them. There is currently no 100% reliable technical way of detecting a fake, given that the majority of images are retouched. Some AI-assisted detection tools exist, but they reveal only the crudest fakes. The first step is always to think critically. Ask yourself whether the information is credible, and whether anyone would have an interest in publishing the photo or video in question. If what the person in a video says or does is shocking or important, the media will cover it; if no reputable news source mentions it, it may be a deepfake.

There are also a number of purely material elements that can be observed when determining whether an image or video is fake.

How to spot deepfakes?

The simplest elements we can observe are very specific parts of the body and how natural they look.

  • Eye movement: especially in videos, unblinking eyes are very revealing. Algorithms learn from photos, and in photos found online, people rarely have their eyes closed. Eye movements also tend to track whoever is speaking, which is difficult to reproduce realistically.
  • Facial expressions: These can appear unnatural and may not express emotion at all.
  • Body movements: if they’re jerky, if certain parts don’t move in sync with others, this can help you spot a fake video. The authors of less elaborate deepfakes focus mainly on the face, so it’s not uncommon for body movements or positions to be unnatural enough to make it easy to spot.
  • Shadows: The way light hits the face can easily reveal a fake image or video, as shadows will fall in a way that’s inconsistent with the scenery/position.
  • Hair that’s too perfect: deepfakes tend to produce images with flawless hairstyles; the algorithm doesn’t necessarily recognize, and therefore doesn’t learn to reproduce, unruly hair.
  • Unnatural teeth: currently, the algorithms don’t seem to be able to generate slightly imperfect teeth, as this would require going tooth by tooth; deepfakes therefore have a “denture effect”.
  • Inconsistent sounds: deepfakers generally pay more attention to the image than to the sound, so artificial-sounding audio or mismatched background noise can give a fake away.
  • Slow motion: playing a video in slow motion can reveal desynchronization between speech and lip movements.
  • Looking at the image on a larger screen: Deepfakes are often designed for people watching from their phones. On a larger screen, such as a computer monitor, details may be easier to see.
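The blinking cue above is also the basis of an automated heuristic researchers use: the eye aspect ratio (EAR), which compares the vertical opening of the eye to its width and drops toward zero when the eye closes. The sketch below shows only the ratio itself; the landmark coordinates are invented for illustration, and in real use they would come from a face-landmark detector such as dlib or MediaPipe.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio from six eye landmarks (p1..p6):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    High when the eye is open, near zero when it is closed."""
    p1, p2, p3, p4, p5, p6 = landmarks

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative coordinates, not output from a real detector:
open_eye = [(0, 0), (3, 2), (6, 2), (9, 0), (6, -2), (3, -2)]
closed_eye = [(0, 0), (3, 0.3), (6, 0.3), (9, 0), (6, -0.3), (3, -0.3)]

BLINK_THRESHOLD = 0.2  # a common rule of thumb; tune per video

print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)    # open eye
print(eye_aspect_ratio(closed_eye) < BLINK_THRESHOLD)  # closed eye
```

Counting how often the ratio dips below the threshold over a clip gives a rough blink rate, which can then be compared with the natural human rate of roughly 15-20 blinks per minute.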

It is also possible to use reverse image search to find similar images online; for videos, there is currently no publicly available reverse video search tool, although searching on individual frames can serve as a workaround.
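As a practical note on reverse image search: for an image that is already hosted online, a search link can be built directly. The sketch below uses Google's long-standing `searchbyimage` endpoint (which now redirects to Google Lens); the image URL is a made-up example, and services such as TinEye and Bing Visual Search work similarly.

```python
from urllib.parse import quote

def reverse_search_url(image_url):
    """Build a Google reverse-image-search link for a publicly hosted image.
    The image URL must be percent-encoded to survive as a query parameter."""
    return "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")

# Hypothetical suspect image:
print(reverse_search_url("https://example.com/suspect-photo.jpg"))
```

Opening the printed link in a browser shows where else the image (or a close variant) appears, which often exposes the original, unmanipulated source.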

Other possible approaches to fight deepfakes

Facebook launched the Deepfake Detection Challenge in 2019, which aimed to bring together digital companies as well as universities to develop detection tools ahead of the 2020 election. But the project never really got off the ground and seems to have stalled ever since, notably after Facebook refused to remove videos even though they had been detected as deepfakes.

Other avenues to be explored in order to prevent the proliferation of deepfakes are, as always, international cooperation between governments on legislation, and raising public awareness, particularly through education programs for young people.

When it comes to cybersecurity, companies need to apply a “zero-trust” approach: systematically verify all information, and train and encourage employees to do the same by providing them with tools and designated contacts to help them check the veracity of information.

Come back in May for our series of articles on lesser-known facts about piracy. In the meantime, if you have a film, series, software or e-book to protect, don’t hesitate to call on our services by contacting one of our account managers; PDN has been a pioneer in cybersecurity and anti-piracy for over ten years, and we’re bound to have a solution to help you. Happy reading and see you soon!
