YouTube is tightening its restrictions around the misuse of AI-generated content. The video-sharing platform already requires creators to label realistic content made using generative AI tools.
Building on that, you can now report AI-generated content that looks or sounds like you if it was made without your consent or knowledge. In an updated support page (via TechCrunch), YouTube explained the factors it will weigh when acting on a complaint:
We will consider a variety of factors when evaluating the complaint, such as:
- Whether the content is altered or synthetic
- Whether the content is disclosed to viewers as altered or synthetic
- Whether the person can be uniquely identified
- Whether the content is realistic
- Whether the content contains parody, satire or other public interest value
- Whether the content features a public figure or well-known individual engaging in a sensitive behavior such as criminal activity, violence, or endorsing a product or political candidate
You can start a Privacy Complaint Process to tell YouTube that someone has used AI to alter or create synthetic content that looks or sounds like you. However, merely filing a complaint may not get the content removed from YouTube.
YouTube says the content will qualify for removal if it depicts "a realistic altered or synthetic version of your likeness." When filing a report, you need to make sure that "you are uniquely identifiable within the content you seek." In other words, "there must be enough information in the video that allows others to recognize you."
After you file a privacy complaint, YouTube gives the uploader 48 hours to act on it. If the private information in question isn't edited out or the video isn't removed within that window, YouTube starts its review process.
Keep in mind that YouTube requires first-party claims, with some exceptions, such as when the person whose privacy is being violated is a minor, is a vulnerable individual, or doesn't have access to the internet.
It was reported last year that bad actors used AI-generated content to spread malware through YouTube. As part of the scheme, accounts of existing YouTube creators (usually those with over 100,000 subscribers) that had been compromised in data leaks were hijacked to reach a larger audience and make the malicious uploads seem legitimate.