Meta Board Urges Update of Deepfake Rules for AI Age

Meta’s oversight board on Thursday urged the tech giant to update its rules on pornographic deepfakes, moving beyond outdated “Photoshop” terminology to reflect advances in artificial intelligence (AI).

The independent board, often considered the top authority for Meta’s content moderation decisions, made this recommendation after reviewing two cases involving deepfake images of high-profile women in India and the United States.

In one case, a deepfake shared on Instagram remained online despite a complaint, while in the other, a fake image was removed from another Meta platform. Both decisions were appealed to the board. The board determined that both deepfakes violated Meta’s rule against “derogatory sexualized photoshop,” a term it said needs to be clarified so the public can better understand it.

The board noted that Meta defines the term as covering manipulated images that are sexualized in ways likely to be unwanted by the people depicted.

The term “Photoshop” comes from Adobe’s image-editing software, released in 1990, which became synonymous with image manipulation. The board, however, found the reference “too narrow” given modern technology such as generative AI, which can create images or videos from simple text prompts.

The oversight board recommended that Meta explicitly prohibit AI-created or AI-manipulated non-consensual sexual content. While Meta has agreed to follow the board’s decisions on specific content moderation cases, it treats the board’s policy recommendations as non-binding.
