The word 'deepfake' combines 'deep learning' and 'fake.' Early deepfakes required significant computational resources and expertise; today, smartphone apps can generate convincing face-swaps in seconds. The vast majority of deepfake content on the internet is non-consensual sexual material featuring women. Research by Sensity AI found that 96% of deepfake videos online are non-consensual pornography. The 2025 TAKE IT DOWN Act was partly motivated by the explosion of deepfake intimate imagery and explicitly covers AI-generated material in its definition of non-consensual intimate visual depictions.

Key facts about this term

  1. Deepfakes use generative AI to create realistic likenesses. Modern deepfakes use diffusion models or generative adversarial networks (GANs) to produce highly realistic images or videos. A single photograph of a person can be enough to generate thousands of synthetic intimate images.
  2. Deepfake intimate imagery is covered by federal law. The TAKE IT DOWN Act explicitly covers AI-generated and synthetic intimate images. You do not need to have an authentic intimate image of yourself to have been victimized by deepfake NCII.
  3. Biometric detection identifies deepfakes across platforms. ScanErase uses facial recognition and image fingerprinting to find deepfake content featuring your likeness, even when the AI output has been modified, compressed, or re-uploaded.
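To illustrate why fingerprinting survives modification and re-compression, here is a minimal sketch of perceptual hashing (specifically an "average hash"), one common image-fingerprinting technique. This is an assumption-laden illustration, not ScanErase's actual algorithm: the function names, tiny 2×2 pixel grids, and noise values are all invented for the example.

```python
# Minimal sketch of perceptual hashing (average hash), a common image-
# fingerprinting technique. Illustrative only -- not ScanErase's actual
# detection pipeline, which is not public.

def average_hash(pixels):
    """Fingerprint a grayscale pixel grid as a list of bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid's mean,
    # so small shifts in pixel values rarely flip any bits.
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]        # hypothetical source image
recompressed = [[12, 195], [218, 35]]    # same image after mild compression
unrelated = [[200, 10], [30, 220]]       # a different image

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))     # 4
```

The compressed copy changes every pixel value slightly, yet its fingerprint is identical to the original's, while the unrelated image differs in every bit. Production systems use far larger grids and more robust hashes, but the principle is the same: match the fingerprint, not the exact file bytes.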

Frequently asked questions

Do I need an original intimate image to have been targeted by a deepfake?

No. Deepfakes require only a photo of your face — which is publicly available on social media for most people. Your face can be placed onto someone else's body without any intimate image of you ever existing.

How can I tell if an image of me is a deepfake?

Common deepfake tells include unnatural skin texture, lighting inconsistencies, blurred edges around the hairline, and facial geometry errors. However, high-quality deepfakes may be indistinguishable to the naked eye. ScanErase's detection system uses biometric analysis rather than visual inspection.