
Meta restricts political advertisers from using its generative AI advertising tools

Meta, the parent company of Facebook, has announced new restrictions on the use of its generative AI advertising tools by political campaigns and advertisers in regulated sectors. This decision aims to curb the spread of election misinformation through AI-generated content. While Meta’s advertising guidelines already prohibit ads with false information, there were no specific regulations in place for AI-generated content until this policy update.

The new restrictions were disclosed on Meta’s help center and target advertisers running campaigns related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services. Advertisers in these regulated sectors will not be allowed to use generative AI features. Meta says the restrictions will help it better understand potential risks and build the right safeguards for the use of generative AI in ads that touch on sensitive topics within regulated industries.

This decision by Meta comes on the heels of the company’s expansion of AI-powered advertising tools, enabling advertisers to create ad content instantly based on text prompts. Initially available to a select group of advertisers, these tools are set to be rolled out globally in the coming year.

Alphabet’s Google has also introduced similar generative AI ad tools, but with additional safeguards: it plans to block “political keywords” from being used as prompts and will require election-related ads to include disclosures when they contain synthetic content.

Nick Clegg, Meta’s top policy executive, emphasized the importance of updating rules regarding generative AI in political advertising. He expressed concerns about the potential for AI interference in the 2024 elections and called for a focus on election-related content across various platforms.

In response to the broader challenges associated with AI-generated content, Meta has implemented various measures. These include blocking its AI virtual assistant from creating photo-realistic images of public figures, working on watermarking AI-generated content, and placing limitations on misleading AI-generated videos.

Meta’s independent Oversight Board is actively examining the company’s approach to AI-generated content, particularly in cases involving manipulated videos, such as a doctored video featuring U.S. President Joe Biden.
