YouTube announced Tuesday that it is expanding its likeness detection technology – which identifies AI-generated deepfakes – to a pilot group of government officials, political candidates, and journalists.
The tool first rolled out in October 2025, initially available to YouTube Partner Programme creators. To enrol, participants must provide a video of themselves along with government identification. YouTube then notifies them via YouTube Studio when deepfake videos matching their appearance are detected, and they can flag the content for potential removal.
The system works similarly to YouTube’s existing Content ID technology but scans for a person’s likeness rather than copyrighted audio or video. Detection does not guarantee removal – YouTube says it will continue to allow parody and satire and will evaluate each case individually.
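YouTube has not published how the matching works, but systems of this kind are commonly built on embedding comparison: a reference embedding is computed from the enrolled video, embeddings are extracted from frames of uploaded content, and close matches are surfaced for human review rather than removed automatically. A minimal sketch of that general pattern, with hypothetical names, toy vectors in place of real face embeddings, and an assumed similarity threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_for_review(reference, frame_embeddings, threshold=0.85):
    """Return indices of frames whose embedding matches the enrolled
    reference closely enough to warrant human review (not removal)."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(reference, emb) >= threshold]

# Toy 3-dimensional vectors standing in for real face embeddings.
reference = [0.9, 0.1, 0.4]
frames = [
    [0.88, 0.12, 0.41],  # near-identical: flagged
    [0.1, 0.9, 0.2],     # a different person: ignored
]
print(flag_for_review(reference, frames))  # [0]
```

The review step matters: as with Content ID, a match is a trigger for evaluation, which is how parody and satire can survive detection.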
The expansion to political and civic figures comes ahead of the US midterm elections. YouTube VP of Government Affairs Leslie Miller said the initiative is designed to protect the integrity of public discourse. The company is also exploring voice impersonation detection and is considering allowing people to monetise their likeness in detected content, following the Content ID model. Data provided during enrolment will not be used to train Google’s generative models.
The expansion has an obvious logic: synthetic media of politicians and journalists carries higher misinformation risk than synthetic media of most content creators, and platforms face mounting regulatory pressure to do something measurable about it. What it doesn’t resolve is the detection gap for the public figures who aren’t enrolled, the creators who can’t get on a pilot list, or the platforms that don’t have YouTube’s technical resources. The deepfake problem is considerably larger than the population of verified political candidates and credentialled journalists.



