Facebook

Report September 2025

Submitted
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.2, Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
As mentioned in our baseline report, we continue to enforce our policies to combat the spread of misinformation.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 18 covers the current practices for Facebook in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
As mentioned in our baseline report, our policies and approach to tackling misinformation, which are summarised in QRE 18.1.3, are published in our Transparency Centre.

These include specific actions taken against actors that repeatedly share misinformation. We take action against Pages, groups, accounts and domains that repeatedly share or publish content that is rated False or Altered, content that is near-identical to what fact-checkers have debunked as False or Altered, or content we enforce against under our policy on vaccine misinformation. If Pages, groups, accounts or websites repeatedly share such content, their distribution will be reduced.

In 2023, we updated our penalty system to restrict accounts that violate our Community Standards on the platform. For most violations, a user's first strike will result in a warning with no further restrictions. If Meta removes additional posts that go against the Community Standards in the future, we will apply additional strikes to the account, and the user may lose access to some features for longer periods.

These restrictions generally only apply to Facebook accounts, but they may also be extended to Pages that represent an individual, such as a celebrity or political figure. (Note that while we count strikes on both Facebook and Instagram, these restrictions only apply to Facebook accounts).

If content that users have posted goes against our more severe policies, such as our policies on dangerous individuals and organisations or adult sexual exploitation, the user may receive additional, longer restrictions on the use of certain features.

For most violations, if the user continues to post content that goes against the Community Standards after repeated warnings and restrictions, we will disable the account.

These policies apply across all EU Member States.


SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
Number of unique contents that were removed from Facebook for violating our harmful health misinformation or voter or census interference policies in EU Member States from 01/01/2025 to 30/06/2025.

Country determined by the inferred location of the user responsible for the content.

*Meta's policies to tackle false claims about COVID-19 that could directly contribute to the risk of imminent physical harm changed in June 2023, following advice from Meta's independent Oversight Board. We now only remove this content in countries with an active COVID-19 public health emergency declaration (during the reporting period, no country had an active declaration). This change has impacted our enforcement metrics on removals for this reporting period but does not change our overall approach to fact-checking. These changes are an expected part of fluctuating content trends online.*
Country | Unique contents removed | Metric 1 (impact of action taken) | Metric 2 (impact of action taken) | Metric 3 (impact of action taken)
Austria | 13 | 0 | 0 | 0
Belgium | 5 | 0 | 0 | 0
Bulgaria | 10 | 0 | 0 | 0
Cyprus | 1 | 0 | 0 | 0
Czech Republic | 10 | 0 | 0 | 0
Denmark | 2 | 0 | 0 | 0
Estonia | 1 | 0 | 0 | 0
Finland | 1 | 0 | 0 | 0
France | 59 | 0 | 0 | 0
Germany | 319 | 0 | 0 | 0
Greece | 28 | 0 | 0 | 0
Hungary | 4 | 0 | 0 | 0
Ireland | 1 | 0 | 0 | 0
Italy | 68 | 0 | 0 | 0
Lithuania | 1 | 0 | 0 | 0
Luxembourg | 1 | 0 | 0 | 0
Malta | 2 | 0 | 0 | 0
Netherlands | 12 | 0 | 0 | 0
Poland | 107 | 0 | 0 | 0
Portugal | 33 | 0 | 0 | 0
Romania | 25 | 0 | 0 | 0
Slovakia | 3 | 0 | 0 | 0
Spain | 21 | 0 | 0 | 0
Sweden | 4 | 0 | 0 | 0