YouTube is ramping up its efforts to tackle the growing threat of AI-generated deepfakes by introducing new detection technologies. These advancements will alert creators and publishers whenever their face or voice is used in another video, aiming to curb misrepresentation and misinformation on the platform.
As the use of generative AI surges, deepfakes depicting artists and politicians through convincing computer-generated replicas have become increasingly common. In response, YouTube has developed a “synthetic-singing identification technology” that will let creators and publishers detect and manage AI-generated content simulating their singing voices.
The company explained, “This technology uses audio matching to highlight likely fakes and copies, enabling artists and publishers to better manage false depictions of their work.”
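YouTube has not published implementation details, but the broad idea behind audio matching is to reduce each recording to a compact spectral fingerprint and then compare fingerprints for similarity. A minimal sketch of that idea in Python follows; the function names, the 32-band pooling, and the 0.9 threshold are illustrative assumptions, not a description of YouTube’s actual system.

```python
import numpy as np

def spectral_fingerprint(audio: np.ndarray, frame_len: int = 1024,
                         hop: int = 512) -> np.ndarray:
    """Reduce a mono waveform to a coarse fingerprint: the log-magnitude
    spectrum of each frame, pooled into 32 frequency bands."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([audio[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    log_mag = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))
    # Pool fine frequency bins into 32 coarse bands so that small
    # re-encoding artifacts do not destroy the match.
    usable = (log_mag.shape[1] // 32) * 32
    return log_mag[:, :usable].reshape(n_frames, 32, -1).mean(axis=2)

def match_score(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Mean cosine similarity between time-aligned fingerprint frames."""
    n = min(len(fp_a), len(fp_b))
    a, b = fp_a[:n], fp_b[:n]
    sims = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)
    return float(sims.mean())

# Hypothetical usage: flag a likely vocal copy when similarity is high.
# reference = load_waveform("artist_vocals.wav")   # placeholder loader
# candidate = load_waveform("suspect_upload.wav")
# if match_score(spectral_fingerprint(reference),
#                spectral_fingerprint(candidate)) > 0.9:
#     print("Possible copied or synthetic vocal - route for review")
```

A production system would also need to tolerate pitch shifting, time stretching, and partial matches, all of which this toy comparison deliberately ignores.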
The move has been welcomed by the music industry, which has long battled copyright violations. Many music publishers now have dedicated teams scouring the web to police these infringements, and YouTube’s new tool offers another resource in this fight.
In addition to protecting musical content, YouTube is also working on a tool to detect AI-generated content that uses real people’s faces. This will give celebrities and talent agents more control over how their likenesses are used, and political parties, another frequent target of deepfakes, may benefit as well.
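Here too, YouTube has described the goal rather than the mechanism, but likeness detection of this kind typically compares face embeddings produced by a recognition model. A hedged sketch, assuming the embedding vectors have already been extracted by some upstream model (the names and the 0.8 threshold are hypothetical):

```python
import numpy as np

def likeness_match(reference_embedding: np.ndarray,
                   video_face_embeddings: list[np.ndarray],
                   threshold: float = 0.8) -> bool:
    """Return True if any face found in a video is close enough, by
    cosine similarity, to a person's reference embedding."""
    ref = reference_embedding / np.linalg.norm(reference_embedding)
    for emb in video_face_embeddings:
        if float(ref @ (emb / np.linalg.norm(emb))) >= threshold:
            return True
    return False
```

In practice the hard part sits upstream: detecting and cropping faces at platform scale, and deciding what similarity score justifies alerting a rights holder.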
These features will expand YouTube’s existing copyright protection systems, which the platform says are already widely used. “Since 2007, Content ID has provided granular control to rightsholders across their entire catalogs on YouTube, with billions of claims processed every year,” YouTube said. “We’re committed to bringing this same level of protection and empowerment into the AI age.”
As copyright enforcement on the platform grows stricter, rights holders and talent representatives gain more control over their clients’ likenesses, a shift that has strengthened YouTube’s relationship with the publishing industry. YouTube is also working to give creators more control over how third parties, including AI developers, may use their content, and plans to release more details on this process later this year.
These updates are expected to set a new standard across the industry as platforms increasingly adopt AI-detection systems to protect real people’s likenesses and voices from misuse.