
Meta Will Label AI-Generated Content Starting in May

Tech Giant Asks Creators to Use 'Made With AI' Label to Identify Content
Facebook will start labeling a wider range of generative AI content. (Image: Shutterstock)

Meta will slap a "made with AI" label on generative artificial intelligence content posted on its social media sites starting in May, a change the social media giant says will result in more content carrying a warning for users.


The company's new policy requires content creators to self-declare when audio, video or image content they post was made using generative AI. The company will also look for "industry standard AI image indicators" any time users upload content to Facebook, Instagram or Threads.
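Meta has not published its detection pipeline, but one widely used indicator of this kind is the IPTC Digital Source Type value "trainedAlgorithmicMedia," which generators that follow the IPTC guidance embed in an image's metadata. The Python sketch below is an illustrative assumption, a crude byte-level scan for that marker, not Meta's actual check; a production system would parse the XMP/IPTC structures properly and also look for invisible watermarks.

    from pathlib import Path

    # IPTC Digital Source Type term that marks wholly AI-generated media.
    TRAINED_ALGORITHMIC_MEDIA = b"trainedAlgorithmicMedia"

    def looks_ai_generated(image_path: str) -> bool:
        """Crude heuristic: scan the raw file bytes for the IPTC marker."""
        return TRAINED_ALGORITHMIC_MEDIA in Path(image_path).read_bytes()

    if __name__ == "__main__":
        import sys
        for path in sys.argv[1:]:
            verdict = "AI marker found" if looks_ai_generated(path) else "no marker"
            print(f"{path}: {verdict}")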

The change comes after the Meta Oversight Board in February urged the company to update its policy for "manipulated media," writing that Facebook's policy, formulated in 2020, was at once too permissive and too restrictive. That policy required Meta to remove content manipulated to show someone saying something they didn't say. After four years of advancements in generative AI, it is now "equally important to address manipulation that shows a person doing something they didn't do," Meta Vice President of Content Policy Monika Bickert said in a blog post.

Starting in July, Facebook will no longer take down deepfake videos unless they violate other Meta community standards against voter interference, bullying and harassment, or violence and incitement. The Oversight Board "argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards," Bickert said.

The social media giant argued that leaving up and labeling AI-generated content, even if it has a "particularly high risk of materially deceiving the public on a matter of importance," is better than removing it. "This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere," Bickert said.

Other technology giants have taken steps to make it easier to identify whether a visual has been manipulated. Google last year introduced an "about this image" feature that helps users trace an image's history in search results to determine whether it was manipulated. YouTube, like Meta, offers a self-labeling mechanism that lets creators check a box before posting to declare that their media contains AI-generated or synthetic material.

The Coalition for Content Provenance and Authenticity, a Microsoft-backed industry body, created a technical standard known as C2PA that attaches a "nutrition label" to visual and audio content, encoding details about the origins of a piece of content.
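That label travels inside the file itself: C2PA manifests are stored in JUMBF containers, which JPEG images carry in APP11 marker segments. The sketch below is a minimal presence check built on those spec-level assumptions, not an official C2PA tool; it walks a JPEG's segments and reports whether such a container exists. Actually validating the manifest's cryptographic signatures requires a full C2PA SDK.

    import struct

    def has_c2pa_manifest(jpeg_path: str) -> bool:
        """Walk JPEG marker segments and report whether any APP11 segment
        carries a JUMBF box, the container C2PA uses for its manifest."""
        with open(jpeg_path, "rb") as f:
            if f.read(2) != b"\xff\xd8":          # SOI marker: not a JPEG
                return False
            while True:
                marker = f.read(2)
                if len(marker) < 2 or marker[0] != 0xFF:
                    return False                  # truncated or malformed file
                if marker[1] in (0xD9, 0xDA):     # EOI or start-of-scan: stop
                    return False
                size = f.read(2)
                if len(size) < 2:
                    return False
                (length,) = struct.unpack(">H", size)
                payload = f.read(length - 2)
                # APP11 (0xFFEB) segment containing a JUMBF ("jumb") box.
                if marker[1] == 0xEB and b"jumb" in payload.lower():
                    return True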

The move to mark AI-generated content has a measurable impact: studies show that social media users are less likely to believe or share content labeled as misleading.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



