YouTube is expanding its new “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people in the entertainment industry, the company announced on Tuesday.
The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue.
Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.
The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring.
Now YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent. The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool.
Use of the likeness detection tool does not require entertainers to have their own YouTube channels.
Instead, the feature scans for AI-generated content to detect visual matches of an enrolled participant’s face. Users can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire content under its rules.
In the future, the technology will support audio as well, the company says.
Relatedly, YouTube has been advocating for similar protections at the federal level through its support for the NO FAKES Act, legislation that would regulate the use of AI to create unauthorized re-creations of an individual’s voice or visual likeness.
The company hasn’t yet said how many AI deepfake removals the tool has handled so far, but noted in March that the number of removals was still “very small.”