The rise of generative AI has also meant we are seeing more and more art and videos created by AI tools like DALL-E 3, Bing Image Creator, and others. We have also seen these tools used to create "deepfakes": videos made to fool people into thinking they are real footage filmed in the real world.
The biggest user-created video service remains Google's YouTube. Today, the company announced that over the coming months, it plans to help its billions of users find out if a video they are viewing was made with generative AI tools.
In a blog post, Google stated:
Specifically, we’ll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do.
The new labels will be added to a video's description panel. If such a video deals with what Google feels are "sensitive topics", a similar label will be added directly to the video player. That will include content made with YouTube’s own generative AI products.
YouTube content creators will be given some time to learn about these generative AI video labeling requirements before they are rolled out. Creators who don't apply the labels when required could have their videos removed or their accounts suspended. Videos that violate YouTube's Community Guidelines will be removed even if they are properly labeled as being made with generative AI.
Google will also add a way for people to request that an AI-made video which "simulates an identifiable individual" be removed from YouTube. The same option will be available for people who find music on the service that replicates an artist's voice.