Meta set to label AI-generated images from other companies
Meta Platforms said it will begin detecting and labelling images generated by other companies’ artificial intelligence services in the coming months, relying on invisible markers embedded within the files to identify them.
Meta said it will apply the labels to any content carrying the embedded markers when it is posted to Facebook, Instagram and Threads. The company already labels content generated using its own AI tools.
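Meta’s announcement does not spell out what the markers look like, but industry labelling efforts of this kind typically combine an invisible watermark with standard provenance metadata, such as the IPTC “digital source type” field whose value “trainedAlgorithmicMedia” denotes generative-AI output. As a rough illustration only, and not Meta’s actual detection pipeline, the Python sketch below checks an image’s XMP metadata packet for that value; the function name, the AI_MARKER constant and the simple string scan are illustrative assumptions.

    import sys
    from PIL import Image  # pip install Pillow (getxmp() also needs defusedxml)

    # IPTC controlled-vocabulary value denoting generative-AI output.
    AI_MARKER = "trainedAlgorithmicMedia"

    def has_ai_provenance_marker(path: str) -> bool:
        """Return True if the image's XMP metadata mentions the IPTC
        'trainedAlgorithmicMedia' digital-source-type value."""
        with Image.open(path) as img:
            # getxmp() is only defined for formats that can carry XMP
            # (e.g. JPEG, PNG, TIFF, WebP); fall back to an empty dict.
            xmp = getattr(img, "getxmp", dict)()
        # Crude but sufficient for a sketch: scan the parsed packet as text.
        return AI_MARKER in str(xmp)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            verdict = "marker found" if has_ai_provenance_marker(path) else "no marker"
            print(f"{path}: {verdict}")

A metadata check like this is easy to defeat, since the fields can be stripped when a file is re-saved, which is presumably why the companies also embed markers invisibly within the files themselves.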
The aim, as outlined by Nick Clegg, the company’s president of global affairs, is to alert users that these images, often closely resembling real photos, are indeed digital creations.
Once the new system is up and running, Meta will extend the labels to images created on services operated by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock and Alphabet’s Google, Clegg said.
The announcement provides an early glimpse into an emerging system of standards technology companies are developing to mitigate the potential harms associated with generative AI technologies, which can spit out fake but realistic-seeming content in response to simple prompts.
The approach builds off a template established over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.
In an interview, Clegg said he was confident the companies could reliably label AI-generated images, but acknowledged that tools for marking audio and video content are more complicated to build and still in development.
“Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow,” Clegg said.
In the interim, he added, Meta would start requiring people to label their own altered audio and video content and would apply penalties if they failed to do so. Clegg did not describe the penalties.
He added there was currently no viable mechanism to label written text generated by AI tools like ChatGPT.
A Meta spokesman declined to say whether the company would apply labels to generative AI content shared on its encrypted messaging service WhatsApp.
On Monday, Meta’s independent oversight board criticized the company’s policy on misleadingly doctored videos, deeming it too narrow. The board suggested that instead of removal, such content should be labeled for clarity.
Clegg said the board was right, adding that Meta’s existing policy “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before.”
He cited the new labelling partnership as evidence that Meta was already moving in the direction the board had proposed.