As mentioned in our baseline report, our policies and approach to tackling misinformation are published in our Transparency Centre.
These include specific actions taken against actors that repeatedly violate our policies. We take action against accounts that repeatedly share or publish content that fact-checkers have rated False or Altered, content near-identical to what fact-checkers have debunked as False or Altered, or content we enforce against under our policy on vaccine misinformation. Accounts that repeatedly share such content will see their distribution reduced.
For most violations, the user’s first strike results in a warning with no further restrictions. If Meta removes additional posts that go against the Community Standards, we apply additional strikes to the account, and the user may lose access to some features for longer periods.
If content that users have posted goes against our more severe policies, such as our policy on dangerous individuals and organisations or on adult sexual exploitation, the user may face additional, longer restrictions on certain features.
For most violations, if the user continues to post content that goes against the Community Standards after repeated warnings and restrictions, we will disable the account.
These policies apply across all EU Member States.