YouTube is stepping up its fight against one of the most troubling uses of AI: deepfake videos that impersonate real people. The company announced it is expanding its likeness detection technology to a pilot group of journalists, government officials, and political candidates, a move aimed at protecting public figures from AI-generated impersonation.
The feature works somewhat like Content ID for faces. Participants submit a short video and a government ID so the system can learn their likeness. Once enrolled, YouTube scans uploads for AI-generated videos that mimic their appearance. If such content appears, the individual can review it and potentially request its removal.
A new shield against AI impersonation
YouTube first introduced likeness detection for creators in the YouTube Partner Program last year. The company now believes the next priority is protecting public figures whose identities are often used in misinformation campaigns, especially around elections and political discourse.
Deepfakes have become increasingly realistic thanks to generative AI tools, making it easier to create convincing videos of people saying or doing things they never actually did. In politics and journalism, that kind of fabrication can have serious consequences, from misinformation to reputational damage. The system isn't a simple "delete button," however. YouTube says removal requests will still be subject to its existing privacy and moderation guidelines, meaning some videos may remain online if they qualify as parody, satire, or legitimate commentary.
Interestingly, YouTube says the original rollout to creators didn’t lead to many takedowns. Most detected content turned out to be relatively benign, though the company expects the situation to be different for public figures and political leaders who face a higher risk of targeted deepfake attacks.
For now, the program will remain limited to influential individuals rather than the general public. But the expansion signals a broader shift across the tech industry: moving quickly to build guardrails before AI-generated media becomes impossible to distinguish from reality.