Canada Elections: Meta Introduces AI Disclosure Rules for Political Ads


Meta Platforms has introduced new transparency measures requiring advertisers to disclose the use of artificial intelligence (AI) or digital techniques in creating or altering political and social issue advertisements ahead of Canada’s upcoming federal elections.

The policy aims to curb misinformation by ensuring transparency in ads that feature photorealistic images, videos, or realistic-sounding audio that depict real people or events in a misleading way.

New Disclosure Requirements

Under the new rules, advertisers must declare when AI-generated or manipulated content is used in political ads, particularly if:

  • A real person is portrayed saying or doing something they never did.
  • A fictitious individual is created or an event that never happened is depicted.
  • Genuine events are digitally altered to mislead audiences.


These measures align with Meta's broader efforts to tackle misinformation in political advertising. The company previously extended its ban on new political ads after the U.S. elections to limit the spread of false information during the post-election period.

Meta’s Efforts to Curb AI-Driven Misinformation

In 2023, Meta prohibited political campaigns and advertisers in regulated industries from using its generative AI tools in advertising. The company is now reinforcing these measures by ensuring AI-generated content in political ads is clearly identified.

The new directive reflects growing concerns over AI’s role in spreading disinformation, particularly in election campaigns worldwide. By enforcing these disclosure requirements, Meta aims to enhance transparency and maintain the integrity of political communications in the digital space.

The disclosure rules are expected to take effect in the lead-up to Canada’s elections, providing voters with clearer insights into the authenticity of political advertisements.
