Facebook’s parent company, Meta (META.O), will bar political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, a company spokesperson said on Monday. The move denies access to tools that lawmakers have warned could accelerate the spread of false information ahead of elections.
Meta publicly disclosed the decision on Monday night, after this report was published, through updates to its help center. Its advertising standards contain no rules specifically governing AI, but they do prohibit ads with content that the company’s fact-checking partners have debunked.
“Advertisers running campaigns that qualify as ads for housing, employment, credit or social issues, elections or politics, or related to health, pharmaceuticals, or financial services aren’t currently permitted to use these Generative AI features,” the company said in a note appended to several pages explaining how the tools work, “even though we continue to test new Generative AI ad creation tools in Ads Manager.”
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries,” added the statement.
The policy update comes a month after Meta, the world’s second-largest digital ad platform, announced it was beginning to expand advertisers’ access to AI-powered advertising tools that can instantly create backgrounds, adjust images, and generate variations of ad copy in response to simple text prompts.
The tools were initially made available to only a small group of advertisers beginning in the spring. The company said at the time that it was on track to roll them out to all advertisers worldwide next year.
Meta and other tech companies have raced to launch generative AI ad products and virtual assistants in recent months, following last year’s debut of OpenAI’s ChatGPT chatbot, which can produce human-like written responses to questions and other prompts.
The companies have so far disclosed little about the safety guardrails they plan to place on those systems, making Meta’s decision on political ads one of the industry’s most significant AI policy choices to date.
Alphabet’s (GOOGL.O) Google, the largest digital advertising company, debuted similar image-customizing generative AI ad tools last week. A Google official told Reuters that the company plans to keep politics out of those products by blocking a list of “political keywords” from being used as prompts.
Google also plans to update its policies in mid-November to require that election-related ads disclose whether they contain “synthetic content that inauthentically depicts real or realistic-looking people or events.”
TikTok and Snapchat owner Snap (SNAP.N) both prohibit political advertisements, while X, formerly known as Twitter, has not released any generative AI advertising tools.
Nick Clegg, Meta’s top policy executive, said that the use of generative AI in political advertising is “clearly an area where we need to update our rules.”
Speaking ahead of a recent AI safety summit in the UK, he warned that governments and tech companies alike should prepare for the technology to be used to interfere in the 2024 elections, and called for a particular focus on election-related content “that moves from one platform to the other.”
Clegg previously told Reuters that Meta was blocking its user-facing Meta AI assistant from creating photo-realistic images of public figures. This summer, Meta committed to developing a system to “watermark” content generated by artificial intelligence (AI).
Meta prohibits misleading AI-generated video in all content, including organic, unpaid posts, with an exception for parody and satire.
Meta’s independent Oversight Board said last month that it would examine the wisdom of that approach in a case involving a doctored video of US President Joe Biden, which Meta said it left up because it was not AI-generated.