Facebook

Report March 2025

Submitted
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
We recognize that the widespread availability and adoption of generative AI tools may have implications for how we identify and address disinformation on our platforms. We also acknowledge that, under the AIA, certain AI techniques are considered purposefully deceptive or manipulative if they impact people's behavior and decision-making abilities and are reasonably likely to cause significant harm.

We want people to know when they see posts that have been made with AI. In early 2024, we announced a new approach for labeling AI-generated organic content. An important part of this approach relies on industry-standard indicators that other companies include in content created using their tools, which help us assess whether something was created using AI.

In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent has always been to help people know when they see content that was made with AI, and we’ve continued to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with people’s expectations.

For organic content that we detect was only modified or edited by AI tools, we moved the “AI info” label to the post’s menu. We still display the “AI info” label for content we detect was generated by an AI tool, and we share whether the content is labeled because of industry-shared signals or because someone self-disclosed.

In September 2024, we also began rolling out “AI Info” labels on ad creative images using a risk-based framework. When an image is created or significantly edited with our generative AI creative features in our advertiser marketing tools, a label will appear in the three-dot menu or next to the “Sponsored” label. When these tools result in the inclusion of an AI-generated photorealistic human, the label will appear next to the Sponsored label (not behind the three-dot menu). 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
In January 2025, we began gradually rolling out “AI Info” labels on ad creative videos using a risk-based framework. When a video is created or significantly edited with our generative AI creative features in our advertiser marketing tools, a label will appear in the three-dot menu or next to the “Sponsored” label. When these tools result in the inclusion of an AI-generated photorealistic human, the label will appear next to the Sponsored label (not behind the three-dot menu).
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
Facebook
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
We address potential abuses from AI-generated content in two primary ways: (1) we remove content that violates our Community Standards regardless of how it was generated; and (2) our third-party fact-checkers can rate content that is false and misleading regardless of how it was generated. 

In February 2024, Meta’s Oversight Board provided feedback on our approach to manipulated media, arguing that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a “less restrictive” approach to manipulated media, such as labels with context.

We agree that providing transparency and additional context is now the better way to address this content. In May 2024, we began labelling AI-generated or AI-edited content with the label “Made with AI”, based on industry-aligned standards for identifying AI as well as on users self-declaring AI-influenced content. While we work with companies across the industry to improve the process so our labelling approach better matches our intent, we’ve updated the “Made with AI” label to “AI info” across our apps, which people can click for more information. These labels cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labelling in its feedback.

If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context.

In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent has always been to help people know when they see content that was made with AI, and we’ve continued to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with people’s expectations.

For content that we detect was only modified or edited by AI tools, we are moving the “AI info” label to the post’s menu. We will still display the “AI info” label for content we detect was generated by an AI tool, and we will share whether the content is labeled because of industry-shared signals or because someone self-disclosed.