YouTube just announced a new policy requiring content creators to explicitly disclose when their videos contain realistic content that was altered or generated with artificial intelligence (AI) tools.
This covers, for example, AI-generated videos that convincingly depict events that never happened, or that show people doing or saying things they never did. The platform's aim is to keep viewers from being misled by increasingly capable AI.
Creators who repeatedly choose not to disclose this information may face consequences such as “content removal, suspension from the YouTube Partner Program, or other penalties.”
The changes have not yet taken effect, but YouTube says it will work with creators to make sure they clearly understand the new requirements.
Once the disclosure requirements roll out, YouTube will add a label to the video description panel indicating that the video contains altered or synthetic content. For particularly sensitive topics, a more prominent label will be used.
A label alone, however, may not be enough to mitigate the risk of harm on certain topics. If a video violates YouTube’s Community Guidelines, it can still be removed regardless of whether it is accurately labeled as AI-generated content.
Videos created using YouTube’s generative AI products and features will be distinctly labeled as altered or synthetic.
YouTube will also allow the removal, through the platform’s privacy request process, of AI-generated or “other synthetic or altered content” that simulates an identifiable individual, including their face or voice. Not every request will result in a takedown, but users will have some recourse.
Additionally, music partners will be able to request the removal of AI-generated music that mimics an artist’s distinctive singing or rapping voice.