YouTube labels AI-generated content to fight misinformation

YouTube has unveiled a new tool in Creator Studio that requires creators to disclose to viewers when their videos contain realistic content made with generative AI or other synthetic media.

The move, YouTube said, comes in response to growing concerns about the potential for deepfakes and other manipulated content to mislead viewers.

YouTube first announced this approach in November 2023. The platform stated: “When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

The video platform further explained: “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts, and public health crises, or public officials. Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties. We’ll work with creators before this rolls out to make sure they understand these new requirements.”

YouTube’s new labels: here’s the breakdown.

What needs a label? Creators will need to disclose content that could be mistaken for a real person, place, or event. This includes digitally altered footage, synthetically generated voices, or realistic scenes depicting fictional events.

What doesn’t? Videos that are clearly unrealistic (like cartoons or fantastical scenes featuring unicorns), that use special effects, or that only rely on AI for minor tasks like captions are exempt.

Why the labels?

Transparency: YouTube wants viewers to know if what they’re watching has been manipulated.

Trust Building: By requiring disclosure, YouTube hopes to create a more trustworthy platform for both creators and viewers.

According to YouTube, examples of content that needs a label include videos that:

Use the likeness of a realistic person: This includes digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.

Alter footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality.

Generate realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.

How will viewers see the labels?

Most videos will have a label in the description box.

Videos on sensitive topics (health, news, elections) will have a more prominent label on the video itself.

The rollout will begin on mobile and eventually reach desktops and TVs.

Regarding enforcement, YouTube said it will eventually take action against creators who consistently fail to disclose. In some cases, YouTube may also add a label itself if the content has the potential to mislead or confuse people.

YouTube further revealed that it is working with other companies to improve transparency around digital content. The video service is also developing a way for people to request the removal of AI-generated content that impersonates them.

YouTube said these labels are expected to roll out in the weeks ahead.

By requiring disclosure and working toward better content-removal tools, YouTube says it is taking a step toward a future where AI enhances creativity without misleading viewers. The move is emblematic of a broader push across social media platforms to combat misinformation.

For instance, X (formerly Twitter) has tackled the problem with Community Notes, a feature designed to counter inaccurate and misleading information. Community Notes initially launched in 2021 for U.S. users only but gained broader adoption beginning in March 2022. These notes appear on viral, high-visibility content to inform readers about a topic and clear up misinformation.

Meta, the parent company of Facebook, Threads, and Instagram, also revealed in February that it will begin labeling AI-generated images based on industry standards, aiming to give users transparency about a piece of content’s origin. The initiative follows collaborations with industry partners to establish common technical standards for identifying AI content. Meta’s own Meta AI feature already labels the photorealistic images it generates, and the company plans to extend this labeling to content generated by other companies’ AI tools in the coming months, in line with industry best practices.
