Facebook

Report March 2025

Submitted
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
As mentioned in our baseline report, we continue to enforce our policies to combat the spread of misinformation.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 18 covers the current practices for Facebook in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
Facebook
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
As mentioned in our baseline report, we work to prevent the spread of harmful content, including misinformation, through Meta’s technologies as well as through human review teams. 

In our baseline report we mentioned that our Content Distribution Guidelines outline some of the most significant reasons why content receives reduced distribution in Feed. In 2023 we summarised the changes we have made to the Content Distribution Guidelines and detailed specific adjustments to the types of content we demote, for example removing the guideline on posts from broadly untrusted news publishers because we no longer use it as a ranking signal.

QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
As mentioned in previous reports, Facebook system cards help people understand how AI shapes their product experiences and provide insights into how the Feed ranking system dynamically works to deliver a personalised experience on Facebook. 

These cards provide detail on how our systems work in a way that is accessible to those who don’t have deep technical knowledge. In June 2023, we released 14 system cards for Facebook; there are now 15 system cards for Facebook, which are periodically updated. They give information about how our AI systems rank content, some of the predictions each system makes to determine what content might be most relevant, and the controls users can use to help customise their experience. They cover Feed, Stories, Reels and other surfaces where people go to find content from the accounts or people they follow. The system cards also cover AI systems that recommend “unconnected” content from people, groups, or accounts they don’t follow. A more detailed explanation of the AI behind content recommendations is available here.
To give a further level of detail beyond what’s published in the system cards, we have shared the types of inputs – known as signals – as well as the predictive models these signals inform, which help determine what content users may find most relevant from their network on Facebook. Users can find these signals and predictions in the Transparency Centre, along with how frequently they tend to be used in the overall ranking process. 
We also use signals to help identify harmful content, which we remove as we become aware of it, as well as to help reduce the distribution of other types of problematic or low-quality content in line with our Content Distribution Guidelines.
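
To illustrate the general pattern described above (per-item predictions derived from signals, combined into a single relevance score, with demotions applied to low-quality content), the following is a minimal, hypothetical Python sketch. The signal names, predictions, weights and demotion values are illustrative assumptions and do not represent Facebook’s actual ranking systems or parameters.

# Hypothetical sketch of signal-based content ranking with integrity demotions.
# All signal names, weights and thresholds are illustrative assumptions,
# not Meta's actual models or parameters.

from dataclasses import dataclass


@dataclass
class Candidate:
    post_id: str
    predicted_like: float        # output of a hypothetical "will the user like this?" model
    predicted_comment: float     # output of a hypothetical "will the user comment?" model
    predicted_share: float       # output of a hypothetical "will the user share?" model
    demotion_factor: float = 1.0  # below 1.0 if flagged as low-quality or problematic content


def relevance_score(c: Candidate) -> float:
    """Combine model predictions into a single relevance score (illustrative weights)."""
    base = 1.0 * c.predicted_like + 2.0 * c.predicted_comment + 1.5 * c.predicted_share
    return base * c.demotion_factor


def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidate posts by descending relevance score."""
    return sorted(candidates, key=relevance_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Candidate("post_a", 0.8, 0.1, 0.05),
        Candidate("post_b", 0.6, 0.4, 0.20),
        Candidate("post_c", 0.9, 0.3, 0.30, demotion_factor=0.5),  # demoted content
    ])
    print([c.post_id for c in feed])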


QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
As mentioned in our baseline report, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it: 
  • We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes.
  • For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. As part of that effort, we partner with third-party fact-checking organisations to review and rate the accuracy of the most viral content on our platforms.
  • We also provide resources to increase media and digital literacy so people can decide what to read, trust and share themselves.
Regarding the impact of our fact-checking labels on people who have already demonstrated an intent to share the fact-checked content: on average, 47% of people on Facebook in the EU do not complete the reshare after receiving a warning from Meta that the content has been fact-checked. 

SLI 18.1.1
Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.
Rate of reshare non-completion among unique attempts by users to reshare content on Facebook that was treated with a fact-checking label, in EU Member States, from 01/07/2024 to 31/12/2024.
Country | Reduction of prevalence of disinformation | Reduction of views/impressions of disinformation | Increase in visibility of authoritative information | Other relevant metrics
Austria | 41% | 0 | 0 | 0
Belgium | 46% | 0 | 0 | 0
Bulgaria | 51% | 0 | 0 | 0
Croatia | 46% | 0 | 0 | 0
Cyprus | 56% | 0 | 0 | 0
Czech Republic | 35% | 0 | 0 | 0
Denmark | 40% | 0 | 0 | 0
Estonia | 35% | 0 | 0 | 0
Finland | 39% | 0 | 0 | 0
France | 54% | 0 | 0 | 0
Germany | 42% | 0 | 0 | 0
Greece | 51% | 0 | 0 | 0
Hungary | 54% | 0 | 0 | 0
Ireland | 44% | 0 | 0 | 0
Italy | 54% | 0 | 0 | 0
Latvia | 39% | 0 | 0 | 0
Lithuania | 47% | 0 | 0 | 0
Luxembourg | 43% | 0 | 0 | 0
Malta | 58% | 0 | 0 | 0
Netherlands | 41% | 0 | 0 | 0
Poland | 43% | 0 | 0 | 0
Portugal | 58% | 0 | 0 | 0
Romania | 35% | 0 | 0 | 0
Slovakia | 39% | 0 | 0 | 0
Slovenia | 37% | 0 | 0 | 0
Spain | 56% | 0 | 0 | 0
Sweden | 45% | 0 | 0 | 0
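
For clarity, the following hypothetical Python sketch shows one way the reshare non-completion rate reported above could be computed: the share of unique reshare attempts on fact-check-labelled content that were not completed after the warning was shown. The event structure and field names are illustrative assumptions, not an actual data schema.

# Hypothetical computation of the reshare non-completion rate.
# Field names and the event structure are illustrative assumptions.

def non_completion_rate(reshare_attempts: list[dict]) -> float:
    """Share of unique reshare attempts on fact-check-labelled content
    that were not completed after the warning was shown."""
    labelled = [a for a in reshare_attempts if a["fact_check_label_shown"]]
    if not labelled:
        return 0.0
    abandoned = sum(1 for a in labelled if not a["reshare_completed"])
    return abandoned / len(labelled)


# Example: 47 of 100 labelled attempts abandoned, matching the ~47% EU average cited above.
attempts = (
    [{"fact_check_label_shown": True, "reshare_completed": False}] * 47
    + [{"fact_check_label_shown": True, "reshare_completed": True}] * 53
)
print(f"{non_completion_rate(attempts):.0%}")  # prints 47%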