What Is a Deepfake Video?
A deepfake video is a synthetic video in which AI replaces one person's face with another's — typically inserting a real person's likeness into existing pornographic footage without their knowledge or consent.
Deepfake videos are created by training a generative AI model on images of a target person, then mapping that person's facial geometry onto each frame of a source video. The result is a video that appears to show the target person performing acts they never performed. The process now takes minutes with consumer software. In 2023, Sensity AI reported over 95,000 deepfake videos published online; nearly all depicted non-consensual intimate content. ScanErase's detection system scans for face embeddings across video content — finding deepfake videos that traditional hash-matching systems miss because each AI rendering is technically unique.
Key facts about this term
- Each deepfake video render is technically unique. Unlike copied files, each AI-rendered deepfake is a new, unique file. Traditional content moderation systems that rely on file hashes cannot detect them; biometric face-embedding detection is required.
- Source videos are often taken from legitimate platforms. Perpetrators often source the base pornographic footage from adult platforms and then overlay the victim's face. This is why the resulting deepfake may not appear in the source platform's hash database.
- Deepfake videos are explicitly covered by the TAKE IT DOWN Act. Congress specifically addressed deepfake videos in the legislative text. Platforms cannot argue that AI-generated content is not "real" content to avoid their removal obligations.
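The first key fact can be illustrated with a minimal sketch. The hashes, embedding values, and threshold below are invented for illustration and are not ScanErase's actual detection pipeline; the point is only that two renders of the same deepfake differ at the byte level (so exact hashes never match), while face embeddings extracted from them stay numerically close.

```python
import hashlib
import math

def file_hash(data: bytes) -> str:
    # Exact-match fingerprint: any single-byte change yields a different hash.
    return hashlib.sha256(data).hexdigest()

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Two renders of the same deepfake are distinct files (illustrative bytes):
render_a = b"frame-bytes-render-1"
render_b = b"frame-bytes-render-2"
print(file_hash(render_a) == file_hash(render_b))  # False: hash matching fails

# Hypothetical face embeddings of the same person, one from each render:
emb_a = [0.12, 0.80, 0.55]
emb_b = [0.11, 0.82, 0.53]
print(cosine_similarity(emb_a, emb_b) > 0.95)  # True: embeddings stay close
```

This is why a hash database can only catch re-uploads of a known file, while an embedding comparison can catch every fresh rendering of the same face.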
Frequently asked questions
How does ScanErase detect deepfake videos rather than just photos?
ScanErase extracts facial frames from video content and compares the face embeddings against your biometric reference. This works whether the content is a directly posted photo or a deepfake video.
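The per-frame comparison described above can be sketched as follows. This is an illustrative outline, not ScanErase's implementation: the function names, the embedding values, and the 0.9 threshold are all assumptions, and a real pipeline would decode frames and run a face-embedding model rather than take precomputed vectors.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def video_matches_reference(frame_embeddings, reference, threshold=0.9):
    # Flag the video if ANY sampled frame's face embedding is close enough
    # to the biometric reference: one matched frame is enough evidence
    # that the target's face appears in the video.
    return any(cosine_similarity(e, reference) >= threshold
               for e in frame_embeddings)

reference = [0.2, 0.9, 0.4]          # hypothetical biometric reference
sampled_frames = [
    [0.9, 0.1, 0.1],                 # a different face: low similarity
    [0.21, 0.88, 0.41],              # overlaid target face: near-match
]
print(video_matches_reference(sampled_frames, reference))  # True
```

Because the check runs per frame, it catches a deepfake even when the target's face appears in only part of the video.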
What if the deepfake video was created from my public social media photos?
This is the most common scenario. Perpetrators source target images from public profiles. The TAKE IT DOWN Act covers the output — the deepfake itself — regardless of what photos were used to create it.
Find and remove your photos now
Upload a photo. We scan 2.4 billion face embeddings and send legal removal notices in 48 hours.
Remove photos now
ScanErase