To address the growing volume of AI-generated video on its platform, YouTube is changing its upload policy. The platform will soon require creators to disclose when they upload AI-generated content that looks realistic.
The requirement will cover all kinds of content, including long-form videos made with artificial intelligence tools. The policy has not yet taken effect, but the Google-owned video-sharing platform is expected to roll it out soon, possibly in the first quarter of next year.
The rule targets depictions of events that never actually happened, as well as footage of people saying or doing things they never did. In a blog post, a YouTube representative said such disclosure is especially important for sensitive topics such as health, elections, and public relations.
For digitally altered content, creators will have to select an option that displays a warning label in the video description disclosing the AI manipulation.
Beyond this disclosure, YouTube also plans to add a feature letting users request the removal of AI-manipulated content. A similar removal option is planned for music content.