Deepfakes: everything you need to know
AI-generated images and videos depicting real people without their consent are among the fastest-growing categories of online harm. The TAKE IT DOWN Act, signed into law in May 2025 with platform removal obligations taking effect in May 2026, covers them explicitly. Here is how to understand, identify, and remove them.
Understand deepfakes
What deepfakes are and how the technology works.
Take action
Legal rights, removal steps, and what to do right now.
What the law says
The TAKE IT DOWN Act, enacted in May 2025, amended the Communications Act of 1934 (47 U.S.C.) to add Section 223A, which criminalizes publishing non-consensual intimate imagery. The Act separately obligates covered platforms to remove such imagery within 48 hours of receiving a compliant notice, with the removal requirement enforced by the Federal Trade Commission. The Act's definition of protected content explicitly includes synthetically generated or altered visual depictions, meaning AI-generated content is covered in exactly the same way as authentic photographs.
Before the Act, deepfakes occupied a grey area in US federal law. Some states had specific deepfake statutes, but there was no unified federal removal mechanism. The TAKE IT DOWN Act resolved this by creating a single federal pathway that applies to all covered platforms, regardless of where the content is hosted.
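Mechanically, the 48-hour clock starts when a platform receives a compliant notice. The sketch below models that workflow; the field names are illustrative assumptions (loosely based on how such notices are commonly summarized), not the statutory text, and nothing here is legal advice.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class RemovalNotice:
    """Hypothetical shape of a removal request; fields are assumptions."""
    requester_signature: str   # physical or electronic signature
    content_url: str           # where the imagery appears
    good_faith_statement: str  # statement that the depiction is non-consensual
    contact_info: str          # how the platform can reach the requester
    received_at: datetime      # when the platform received the notice

    def is_complete(self) -> bool:
        # A compliant notice needs every identifying field populated.
        return all([self.requester_signature, self.content_url,
                    self.good_faith_statement, self.contact_info])

    def removal_deadline(self) -> datetime:
        # The platform must act within 48 hours of receipt.
        return self.received_at + timedelta(hours=48)

notice = RemovalNotice(
    requester_signature="J. Doe",
    content_url="https://example.com/post/123",
    good_faith_statement="I did not consent to this depiction.",
    contact_info="jdoe@example.com",
    received_at=datetime(2026, 6, 1, 9, 0, tzinfo=timezone.utc),
)
print(notice.is_complete())
print(notice.removal_deadline().isoformat())
```

The point of the sketch is the deadline arithmetic: the 48-hour window is measured from receipt of a complete notice, which is why an incomplete notice does not start the clock.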
Why biometric scanning is required
Deepfakes are generated from a reference image, typically scraped from public social media. The generation process entirely replaces the image's pixel-level data, so standard reverse image search, which matches pixel patterns, returns no results. What remains constant is your facial geometry. Biometric scanning maps the geometry of your face and searches for matches regardless of whether the found image is an authentic photograph or AI-generated content.
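The contrast above can be shown in a toy example: a deepfake shares no pixel data with the reference photo, so a pixel digest never matches, while a vector of facial-geometry ratios (a simplified stand-in for the landmark embeddings a real biometric matcher would use) stays nearly identical. All values below are invented for illustration.

```python
import hashlib
import math

def pixel_fingerprint(pixels: bytes) -> str:
    """What naive reverse image search compares: a digest of raw pixel data."""
    return hashlib.sha256(pixels).hexdigest()

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two geometry vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical data: the original photo vs. an AI regeneration of it.
original_pixels = bytes([10, 200, 37, 142, 90, 11])
deepfake_pixels = bytes([88, 14, 201, 63, 7, 250])   # entirely regenerated

# Landmark ratios (e.g. eye spacing / nose length) survive generation.
original_geometry = [1.62, 0.95, 2.10, 1.33]
deepfake_geometry = [1.61, 0.96, 2.09, 1.34]

# Pixel-level matching finds nothing...
print(pixel_fingerprint(original_pixels) == pixel_fingerprint(deepfake_pixels))
# ...while geometry matching scores a near-perfect similarity.
print(cosine_similarity(original_geometry, deepfake_geometry))
```

A real system compares high-dimensional face embeddings against an index of crawled images, but the principle is the same: match on what generation preserves, not on the pixels it rewrites.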
Frequently asked questions
What is a deepfake?
An AI-generated image or video that depicts a real person. It is synthesized from reference photos, typically scraped from public social media, rather than captured by a camera.
Does the TAKE IT DOWN Act cover deepfakes?
Yes. The Act's definition of protected content explicitly includes synthetically generated or altered visual depictions, so AI-generated imagery is covered the same way as authentic photographs.
Can standard reverse image search find deepfakes of me?
Usually not. Reverse image search matches pixel patterns, and generation replaces those entirely. Biometric scanning matches on facial geometry, which generation preserves.
How long does deepfake removal take?
Once a covered platform receives a compliant notice, it must remove the content within 48 hours.
Found a deepfake of yourself?
Upload a photo. We biometrically scan 200+ platforms and dispatch 48-hour removal notices for every match.
Start removal scan