Meta

Report March 2025

Submitted

Executive summary

We are pleased to share our fifth report under the 2022 EU Code of Practice on Disinformation, which also draws from our work with the Code’s Taskforce.

The aim of this report is to provide the latest updates, covering July to December 2024, on how Meta approaches misinformation and disinformation in the European Union. Where relevant, we have also included pertinent updates that occurred after the reporting period. Highlights include: 

  • Elections: We have aligned this report with Meta’s post-elections report, which covers the Legislative Elections in France. We have also included information about the Presidential and Parliamentary Elections in Romania in the National Elections chapter, which provides an overview of our work, including information on our core policies, processes, and implementation. 

  • Media literacy

    • National Elections: In preparation for the French legislative elections, Meta invested in media literacy by launching a campaign on its platforms, Facebook and Instagram. This initiative aimed to raise awareness among French users about the tools and processes Meta employs to combat misinformation, prevent electoral interference, and protect electoral candidates. Running from 20 June to 4 July 2024, just before the second round of elections, the campaign reached 2.1 million users in France and generated 10.6 million impressions. Additionally, Meta collaborated with the European Fact-Checking Standards Network (EFCSN) and the European Disability Forum (EDF) to educate users on identifying AI-generated and digitally altered media.

    • Fraud and Scams: Meta launched a campaign to raise awareness of fraud and scams. The campaign ran in several EU markets, including France, Germany, Poland, Romania, Belgium, and Spain, and used a range of relevant media, including Meta’s platforms (Facebook and Instagram) and other third-party platforms. The campaign featured ads from Facebook, Instagram, and WhatsApp, emphasizing our commitment to user safety.

  • CIB trends and Doppelganger: As a result of our ongoing aggressive enforcement against recidivist efforts by Doppelganger, its operators have been forced to keep adapting and making tactical changes in an attempt to evade takedowns, as indicated in our Quarterly Adversarial Threat report for Q3 2024. These changes have led to the degradation of the quality of the operation’s efforts.

  • Researcher data access: As part of our ongoing efforts to enhance the Meta Content Library tool and incorporate feedback from researchers, we've made searching more efficient by adding exact phrase matching. Researchers can now also share editable content producer lists with their peers, enabling quick filtering of public data from specific content producers on Facebook and Instagram.


  • Labelling AI-generated images for increased transparency: In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent is to help people know when they see content that was made with AI, and we continue to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with people’s expectations.


Here are a few of the figures which can be found throughout the report:

  • From 01/07/2024 to 31/12/2024, we removed over 5.1 million ads from Facebook and Instagram in EU member states, of which over 87,000 were removed for violating our misinformation policy.

  • From 01/07/2024 to 31/12/2024, we labelled over 810,000 ads on both Facebook and Instagram with “paid for by” disclaimers in the EU.

  • We removed 2 networks in Q3 2024 and 1 network in Q4 2024 for violating our Coordinated Inauthentic Behaviour (CIB) policy; these networks targeted, or had the potential to target, one or more European countries. We also took steps to remove fake accounts, prioritising the removal of fake accounts that seek to cause harm. In Q3 2024, we took action against 1.1 billion fake accounts, and in Q4 2024, we took action against 1.4 billion fake accounts on Facebook globally. We estimate that fake accounts represented approximately 3% of our worldwide monthly active users (MAU) on Facebook during both Q3 and Q4 2024. 

  • In July-December 2024, we continued to work through our global fact-checking programme, enabling our independent fact-checking partners to quickly review and rate false content on our apps. We've partnered with 29 fact-checking organisations covering 23 different languages in the EU. On average, 46% of people on Instagram and 47% of people on Facebook in the EU who start to share fact-checked content do not complete this action after receiving a warning that the content has been fact-checked. 

  • Between 01/07/2024 and 31/12/2024, over 150,000 distinct fact-checking articles on Facebook were used to both label and reduce the virality of over 27 million pieces of content in the EU. On Instagram, over 43,000 distinct articles were used to both label and reduce the virality of over 1 million pieces of content in the EU. 


As currently drafted, this report addresses the practices implemented for Facebook, Instagram, Messenger, and WhatsApp within the EU during the reporting period of H2 2024. In alignment with Meta's public announcements on 7 January 2025, we will continue to evaluate the applicability of these practices to Meta products. We will also regularly review the appropriateness of making adjustments in response to changes in our practices, such as the deployment of Community Notes.



Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1, Measure 26.2, Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
As mentioned in our previous reports, Meta rolled out the Content Library and API tools to provide access to near real-time public content on Instagram. Details about the content, such as the number of reactions, shares, comments and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content through a graphical user interface (UI) or a programmatic API. 

Together, these tools provide comprehensive access to publicly accessible content across Facebook and Instagram.

Individuals, including journalists, affiliated with qualified institutions and pursuing scientific or public interest research topics can apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR’s Social Media Archives (SOMAR) Virtual Data Enclave.
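To illustrate the kind of querying these tools support, the sketch below shows exact phrase matching and filtering by a shared content producer list over a small local sample. It is purely illustrative: the data model, function names, and sample records are hypothetical and do not reflect Meta's actual Content Library API or client libraries.

```python
from dataclasses import dataclass

@dataclass
class PublicPost:
    producer: str  # e.g. the name of a public Page or account (hypothetical)
    text: str
    views: int

# Hypothetical sample of public content; not real data.
POSTS = [
    PublicPost("NewsPageA", "Fact-check: the claim about turnout is false", 1200),
    PublicPost("NewsPageB", "Election turnout reaches record high", 800),
    PublicPost("NewsPageA", "Weather update for Paris", 300),
]

def search_exact_phrase(posts, phrase, producer_list=None):
    """Return posts whose text contains the exact phrase (case-insensitive),
    optionally restricted to a shared list of content producers."""
    phrase = phrase.lower()
    results = []
    for post in posts:
        # Producer-list filtering, as described for shared producer lists.
        if producer_list is not None and post.producer not in producer_list:
            continue
        # Exact phrase matching on the post text.
        if phrase in post.text.lower():
            results.append(post)
    return results

# Filter to one producer and search for an exact phrase.
matches = search_exact_phrase(POSTS, "turnout", producer_list={"NewsPageA"})
```

In practice, researchers would issue such queries through the Content Library UI or the programmatic API rather than over local data; the sketch only conveys the filtering semantics described above.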

Meta continues to publish reports with relevant data regarding content on Instagram via its Transparency Centre, where we shared our quarterly reports throughout 2024. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We continue to add new features and functionality to Meta Content Library, including improvements to the application process for access to the research tools. In addition, we regularly seek feedback from the research community to inform critical updates. 
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
Instagram
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
As mentioned in our baseline report, we publish a wide range of regular reports on our Transparency Centre, including to give our community visibility into how we enforce our policies and respond to requests: https://transparency.fb.com/data/. We also publish extensive reports on our findings about coordinated behaviour in our newsroom, and we have a dedicated public website hosting our Ad Library tools.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
Ad Library Tools: The dedicated website for the Ad Library allows users to search all of the ads currently running across Meta technologies. For all ads currently running on Meta technologies, it shows the ad content and basic information, such as when the ad started running and which advertiser is running it. For ads that have run anywhere in the European Union in the past year, it includes additional transparency specific to the EU. For ads about social issues, elections or politics that have run in the past seven years, it shows the ad content, the basic information, and additional transparency about spend, reach and funding entities.

As mentioned in our baseline report, we publish numerous reports on our Transparency Centre: 
  • Community Standards Enforcement Report: We publish this report publicly in our Transparency Centre on a quarterly basis to more effectively track our progress and demonstrate our continued commitment to making our services safe and inclusive. The report shares metrics on how we are doing at preventing and taking action on content that goes against our Community Standards (against 12 policies on Instagram). 
  • Quarterly Adversarial Threat Report: We share publicly our findings about coordinated inauthentic behaviour (CIB) that we detect and remove from our platforms. As part of our quarterly adversarial threat reports, we publish information about the networks we take down to make it easier for people to see the progress we’re making in one place.