As mentioned in our baseline report, our policies and approach to tackling misinformation are published in our Transparency Centre.
These include specific actions taken against actors that repeatedly share misinformation. We take action against Pages, groups, accounts and domains that repeatedly share or publish content that is rated False or Altered, content that is near-identical to what fact-checkers have debunked as False or Altered, or content we enforce against under our policy on vaccine misinformation. If Pages, groups, accounts or websites repeatedly share such content, their distribution will be reduced.
Our penalty system for restricting accounts that violate our Community Standards on the platform can be found here. For most violations, a user's first strike will result in a warning with no further restrictions. If we remove additional posts that go against the Community Standards in the future, we will apply additional strikes to the account, and the user may lose access to some features for longer periods of time.
These restrictions generally apply only to Facebook accounts, but they may also be extended to Pages that represent an individual, such as a celebrity or political figure. Note that while we count strikes on both Facebook and Instagram, the resulting restrictions apply only to Facebook accounts.
If a user posts content that goes against our more severe policies, such as those on dangerous individuals and organisations or adult sexual exploitation, they may receive additional, longer-lasting restrictions on certain features.
For most violations, if the user continues to post content that goes against the Community Standards despite repeated warnings and restrictions, we will disable the account.
These policies apply across all EU Member States.