WhatsApp

Report March 2025


Executive summary

We are pleased to share our fifth report under the 2022 EU Code of Practice on Disinformation, which also draws from our work with the Code’s Taskforce.

The aim of this report is to provide the latest updates, for July to December 2024, on how Meta approaches misinformation and disinformation in the European Union. Where relevant, we have also included pertinent updates that occurred after the reporting period. Highlights include: 

  • Elections: We have aligned this report with Meta’s post-elections report, which covers the Legislative Elections in France. We have also included information about the Presidential and Parliamentary Elections in Romania in the National Elections chapter, which provides an overview of our work, including information on our core policies, processes, and implementation. 

  • Media literacy

    • National Elections: In preparation for the French legislative elections, Meta invested in media literacy by launching a campaign on its platforms, Facebook and Instagram. This initiative aimed to raise awareness among French users about the tools and processes Meta employs to combat misinformation, prevent electoral interference, and protect electoral candidates. Running from 20 June to 4 July 2024, just before the second round of elections, the campaign reached 2.1 million users in France and generated 10.6 million impressions. Additionally, Meta collaborated with the European Fact-Checking Standards Network (EFCSN) and the European Disability Forum (EDF) to educate users on identifying AI-generated and digitally altered media.

    • Fraud and Scams: Meta launched a campaign to raise awareness of fraud and scams. The campaign ran in several EU markets, including France, Germany, Poland, Romania, Belgium, and Spain, and used a range of relevant mediums, including Meta’s own platforms (Facebook and Instagram) as well as third-party platforms. The campaign featured ads from Facebook, Instagram, and WhatsApp, emphasising our commitment to user safety.

  • CIB trends and Doppelganger: As a result of our ongoing aggressive enforcement against recidivist efforts by Doppelganger, its operators have been forced to keep adapting and making tactical changes in an attempt to evade takedowns, as indicated in our Quarterly Adversarial Threat report for Q3 2024. These changes have led to the degradation of the quality of the operation’s efforts.

  • Researcher data access: As part of our ongoing efforts to enhance the Meta Content Library tool and incorporate feedback from researchers, we've made searching more efficient by adding exact phrase matching. Researchers can now also share editable content producer lists with their peers, enabling quick filtering of public data from specific content producers on Facebook and Instagram.

  • Labelling AI-generated images for increased transparency: In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent is to help people know when they see content that was made with AI, and we continue to work with companies across the industry to improve our labelling process so that labels on our platforms are more in line with people’s expectations.


Here are a few of the figures which can be found throughout the report:

  • From 01/07/2024 to 31/12/2024, we removed over 5.1 million ads from Facebook and Instagram in EU member states, of which over 87,000 were removed for violating our misinformation policy.


  • From 01/07/2024 to 31/12/2024, we labelled over 810,000 ads on both Facebook and Instagram with “paid for by” disclaimers in the EU.

  • We removed 2 networks in Q3 2024 and 1 network in Q4 2024 for violating our Coordinated Inauthentic Behaviour (CIB) policy, each of which targeted, or had the potential to target, one or more European countries. We also took steps to remove fake accounts, prioritising the removal of fake accounts that seek to cause harm. In Q3 2024, we took action against 1.1 billion fake accounts, and in Q4 2024, we took action against 1.4 billion fake accounts on Facebook globally. We estimate that fake accounts represented approximately 3% of our worldwide monthly active users (MAU) on Facebook during both Q3 2024 and Q4 2024. 

  • In July–December 2024, we continued to operate our global fact-checking programme, enabling our independent fact-checking partners to quickly review and rate false content on our apps. We've partnered with 29 fact-checking organisations covering 23 different languages in the EU. On average, 46% of people on Instagram and 47% of people on Facebook in the EU who start to share fact-checked content do not complete this action after receiving a warning that the content has been fact-checked. 


  • Between 01/07/2024 and 31/12/2024, over 150,000 distinct fact-checking articles on Facebook in the EU were used to both label and reduce the virality of over 27 million pieces of content in the EU. On Instagram, over 43,000 distinct articles in the EU were used to both label and reduce the virality of over 1 million pieces of content in the EU. 


As currently drafted, this report addresses the practices implemented for Facebook, Instagram, Messenger, and WhatsApp within the EU during the reporting period of H2 2024. In alignment with Meta's public announcements on 7 January 2025, we will continue to evaluate the applicability of these practices to Meta products. We will also regularly review whether adjustments are appropriate in response to changes in our practices, such as the deployment of Community Notes.

