Diffusion models are trained on billions of images to learn to reverse a 'diffusion' process that gradually adds noise to an image. At inference, the model starts from pure noise and removes it step by step, guided by a text prompt, to generate a new image. Applied to intimate content, a fine-tuned diffusion model can generate photorealistic intimate images of a real individual from nothing more than facial reference photos. The resulting images, despite being entirely synthetic, are treated as equivalent to authentic NCII under the TAKE IT DOWN Act.
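
To make the mechanism concrete, below is a minimal sketch of the reverse denoising loop in a DDPM-style sampler. The `model` noise-prediction network is hypothetical, the linear beta schedule is illustrative, and prompt conditioning is omitted; real samplers (DDIM, ancestral variants, guided sampling) differ in detail but share this start-from-noise, denoise-step-by-step structure.

```python
import torch

def sample(model, shape, num_steps=1000, device="cpu"):
    """Sketch of DDPM-style reverse diffusion.

    `model(x, t)` is a hypothetical network that predicts the noise
    present in x at timestep t; the schedule below is illustrative.
    """
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # start from pure noise
    for t in reversed(range(num_steps)):
        eps = model(x, t)  # predicted noise at step t
        # Subtract the predicted noise (DDPM posterior mean).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            # Re-inject a small amount of noise for all but the final step.
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```

In a text-to-image system, the prompt conditions `model` at every step, steering the denoising toward images matching the description.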

Key facts about this term

  1. Fine-tuning makes diffusion models target-specific. A general-purpose diffusion model can be fine-tuned on as few as 10-20 photos of a specific person to generate highly realistic intimate images of that person. This is the most common technical method used to create deepfake NCII.
  2. Open-source diffusion models are widely accessible. Stable Diffusion and its derivatives are freely downloadable and can be run locally, outside any platform oversight, which makes technical prevention difficult without legal mechanisms.
  3. The output is covered regardless of the model used. Whether generated by a commercial API or a locally run open-source model, intimate imagery of real, identifiable individuals is covered by the TAKE IT DOWN Act.

Frequently asked questions

Can AI image generation companies be held liable for NCII?

Under the TAKE IT DOWN Act, platform obligations center on hosting and distribution rather than on generation tools. However, commercial image-generation companies that knowingly facilitate NCII creation may face separate civil liability.

How does ScanErase detect diffusion-model-generated NCII?

ScanErase uses biometric face embedding comparison, matching the facial geometry in any image against your reference photos, rather than trying to detect the technical artifacts of diffusion model outputs.
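
As an illustration, the sketch below shows the general shape of face-embedding matching using the open-source face_recognition library. ScanErase's actual models, thresholds, and pipeline are not public, so everything here is an assumption about the general approach, not its implementation.

```python
import numpy as np
import face_recognition  # open-source library; stands in for ScanErase's internal models

def is_biometric_match(reference_paths, candidate_path, threshold=0.6):
    """Return True if any face in the candidate image matches the reference faces.

    Illustrative only: 0.6 is the face_recognition library's conventional
    default distance cutoff, not a known ScanErase parameter.
    """
    # Build 128-dimensional embeddings from the user's reference photos.
    references = []
    for path in reference_paths:
        image = face_recognition.load_image_file(path)
        references.extend(face_recognition.face_encodings(image))
    if not references:
        return False

    # Compare every face found in the candidate image against the references.
    candidate = face_recognition.load_image_file(candidate_path)
    for encoding in face_recognition.face_encodings(candidate):
        distances = face_recognition.face_distance(references, encoding)
        # A small embedding distance means the facial geometry matches,
        # regardless of whether the image is authentic or synthetic.
        if np.min(distances) < threshold:
            return True
    return False
```

Matching on facial geometry rather than on generation artifacts means the same check works whether an image came from a camera, a commercial API, or a locally run open-source model.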