Meta

Report March 2025


Executive summary

We are pleased to share our fifth report under the 2022 EU Code of Practice on Disinformation, which also draws from our work with the Code’s Taskforce.

The aim of this report is to provide the latest updates, covering July to December 2024, on how Meta approaches misinformation and disinformation in the European Union. Where relevant, we have also included pertinent updates that occurred after the reporting period. Highlights include:

  • Elections: We have aligned this report with Meta’s post-elections report, which covers the Legislative Elections in France. We have also included information about the Presidential and Parliamentary Elections in Romania in the National Elections chapter, which provides an overview of our work, including our core policies, processes, and implementation.

  • Media literacy:

    • National Elections: In preparation for the French legislative elections, Meta invested in media literacy by launching a campaign on its platforms, Facebook and Instagram. This initiative aimed to raise awareness among French users about the tools and processes Meta employs to combat misinformation, prevent electoral interference, and protect electoral candidates. Running from 20 June to 4 July 2024, just before the second round of elections, the campaign reached 2.1 million users in France and generated 10.6 million impressions. Additionally, Meta collaborated with the European Fact-Checking Standards Network (EFCSN) and the European Disability Forum (EDF) to educate users on identifying AI-generated and digitally altered media.

    • Fraud and Scams: Meta launched a campaign to raise awareness of fraud and scams. The campaign ran in several EU markets, including France, Germany, Poland, Romania, Belgium, and Spain, and used a range of relevant channels, including Meta’s platforms (Facebook and Instagram) and third-party platforms. The campaign featured ads from Facebook, Instagram, and WhatsApp, emphasising our commitment to user safety.

  • CIB trends and Doppelganger: As a result of our ongoing aggressive enforcement against recidivist efforts by Doppelganger, its operators have been forced to keep adapting and making tactical changes in an attempt to evade takedowns, as indicated in our Quarterly Adversarial Threat Report for Q3 2024. These changes have degraded the quality of the operation’s efforts.

  • Researcher data access: As part of our ongoing efforts to enhance the Meta Content Library tool and incorporate feedback from researchers, we’ve made searching more efficient by adding exact phrase matching. Researchers can now also share editable content producer lists with their peers, enabling quick filtering of public data from specific content producers on Facebook and Instagram.


  • Labelling AI-generated images for increased transparency: In H2 2024, we rolled out a change to the “AI info” labels on our platforms so that they better reflect the extent of AI used in content. Our intent is to help people know when they see content that was made with AI, and we continue to work with companies across the industry to improve our labelling process so that labels on our platforms are more in line with people’s expectations.


Here are a few of the figures which can be found throughout the report:

  • From 01/07/2024 to 31/12/2024, we removed over 5.1 million ads from Facebook and Instagram in EU member states, of which over 87,000 were removed for violating our misinformation policy.

  • From 01/07/2024 to 31/12/2024, we labelled over 810,000 ads on both Facebook and Instagram with “paid for by” disclaimers in the EU.

  • We removed 2 networks in Q3 2024 and 1 network in Q4 2024 for violating our Coordinated Inauthentic Behaviour (CIB) policy; these networks targeted, or had the potential to target, one or more European countries. We also took steps to remove fake accounts, prioritising the removal of fake accounts that seek to cause harm. In Q3 2024, we took action against 1.1 billion fake accounts on Facebook globally, and in Q4 2024, 1.4 billion. We estimate that fake accounts represented approximately 3% of our worldwide monthly active users (MAU) on Facebook during both Q3 and Q4 2024.

  • In July-December 2024, we worked through our global fact-checking programme so that our independent fact-checking partners could continue to quickly review and rate false content on our apps. We’ve partnered with 29 fact-checking organisations covering 23 different languages in the EU. On average, 46% of people on Instagram and 47% of people on Facebook in the EU who start to share fact-checked content do not complete this action after receiving a warning that the content has been fact-checked.

  • From 01/07/2024 to 31/12/2024, over 150,000 distinct fact-checking articles on Facebook were used to both label and reduce the virality of over 27 million pieces of content in the EU. On Instagram, over 43,000 distinct articles were used to both label and reduce the virality of over 1 million pieces of content in the EU.


As currently drafted, this report addresses the practices implemented for Facebook, Instagram, Messenger, and WhatsApp within the EU during the reporting period of H2 2024. In alignment with Meta's public announcements on 7 January 2025, we will continue to evaluate the applicability of these practices to Meta products. We will also regularly review the appropriateness of making adjustments in response to changes in our practices, such as the deployment of Community Notes.



Crisis 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Reporting on the service’s response during a crisis

[War of aggression by Russia on Ukraine]


As outlined in our benchmark report, we took a variety of actions with the objectives of:

  • Helping to keep people in Ukraine and Russia safe: We’ve added several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.
  • Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.
  • Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts. 
  • Transparency around state-controlled media: We have been working hard to tackle disinformation from Russia coming from state-controlled media. Since March 2022, we have been globally demoting content from Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites.
  • In addition to these global actions, in Ukraine, the EU and the UK, we have restricted access to Russia Today, Sputnik, NTV/NTV Mir, Rossiya 1, REN TV, Perviy Kanal and others.
  • On 15 June 2024, we added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included Voice of Europe, RIA Novosti, Izvestia and Rossiyskaya Gazeta.
  • On 17 September 2024, we expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.

[Israel - Hamas War]
In the spirit of transparency and cooperation, we share below details of some of the specific steps we are taking to respond to the Israel - Hamas War.

Mitigations in place
[War of aggression by Russia on Ukraine]

Our main strategies are in line with what we outlined in our benchmark report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), tools to help our community access crucial resources, transparency around state-controlled media, and monitoring for and taking action against any coordinated inauthentic behaviour.


This means (as outlined in previous reports) we will continue to: 

  • Monitor for coordinated inauthentic behaviour and other adversarial networks (see Commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period)
  • Enforce our Community Standards
  • Work with fact-checkers
  • Strengthen our engagement with local experts and governments in the Central and Eastern Europe region


[Israel - Hamas War]
In the wake of the 07/10/2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as balancing those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom. The following are some examples of the specific steps we have taken:

Taking Action on Violating Content:


Safety and Security:
  • In addition, our teams have detected and taken down a cluster of activity linked to a Coordinated Inauthentic Behaviour (CIB) network that we attributed to Hamas and removed in 2021. These fake accounts attempted to re-establish their presence on our platforms.
  • In Q3 2024, we also removed 15 Facebook accounts, 15 Pages, and 6 accounts on Instagram for violating our policy against coordinated inauthentic behaviour. This network originated in Lebanon and primarily targeted Israel. It posted original content in Hebrew about news and geopolitical events in Israel with generic hashtags like #Israel, #Jerusalem, and #Netanyahu, among others. This included posts about Israel’s dependence on US support, claims that Israeli people are leaving the country, claims of food shortages in Israel, and criticism of the Israeli government and its military strikes in the Middle East.
  • We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.

Reducing the Spread of Misinformation:
  • We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. When they rate something as false, we move this content lower in Feed so fewer people see it. 
  • We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
  • We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 
  • We also have limits on message forwarding, and we label forwarded messages that did not originate with the sender so people are aware that the information comes from a third party.

User Controls:
We continue to provide tools to help people control their experience on our apps and protect themselves from content they don’t want to see. These include but aren’t limited to:
  • Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
  • Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
  • Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.
  • Show More, Show Less: This gives people direct control over the content they see on Facebook. 
  • Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. 
  • Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. 

More detail on these tools can be found in the chapter sections below.

Oversight Board cases: 
The Oversight Board remains another avenue for review of Meta’s crisis response, and during the reporting period the Board reviewed and decided on 2 cases relating to the Israel - Hamas War. Details of these cases can be found on the Oversight Board’s website.

Policies and Terms and Conditions
War of aggression by Russia on Ukraine

Policy
No further policy updates since our benchmark report.

Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes. There are no substantial changes to report on for this period. 

Israel - Hamas War
For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. Under the Dangerous Organisations and Individuals policy, this includes, inter alia: removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call for release, in which case we allow it with a Mark as Disturbing (MAD) interstitial; and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta also has some further discretionary policies which may be applied when content is escalated to us.