Meta Plans To Label AI-Generated Content In May


Meta announced on Friday that it will start labelling AI-generated content in May, aiming to address concerns over deepfakes.

The social media giant said it would stop removing manipulated images and audio that do not otherwise violate its rules, opting instead to label and contextualise such content in order to preserve freedom of speech.

The move follows its Oversight Board’s urging to revamp Meta’s approach to manipulated media, given advancements in AI and the potential for widespread disinformation, particularly during crucial election periods around the world.

Meta’s new “Made with AI” labels will apply to AI-altered content, with a prominent label for highly misleading content.

“We agree that providing transparency and additional context is now the better way to address this content,” Meta’s Vice President of Content Policy, Monika Bickert, said in a blog post.


“The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labelling,” she added.

The new labelling approach stems from an agreement reached in February among major tech companies and AI stakeholders to combat manipulated content designed to mislead voters.

Meta, Google, and OpenAI had previously committed to employing a unified watermarking standard to tag images produced by their AI applications.

Meta disclosed that the rollout will unfold in two phases: labelling of AI-generated content will commence in May 2024, while removal of manipulated media solely under the previous policy will halt in July.

Under the new standard, content, even if manipulated using AI, will remain accessible on the platform unless it violates other Community Standards, such as those prohibiting hate speech or voter interference.

Punch
