Meta

Report September 2025

Executive summary

We are pleased to share our sixth report under the 2022 EU Code of Conduct on Disinformation, which also draws from our work with the Code’s Taskforce. In accordance with the subscription form submitted by Meta Platforms Ireland Limited (Meta) in January 2025, this report is being submitted by Meta in respect of the Facebook, Messenger, and Instagram services and on behalf of WhatsApp Ireland Limited in respect of the WhatsApp messaging service. 

The aim of this report is to provide an update on how Meta approached misinformation and disinformation in the European Union between January and June 2025. Where relevant, we have also included pertinent updates that occurred after the reporting period. Highlights include:

  • Elections: The National Elections chapter provides an overview of our work on elections within the EU, detailing our core policies, processes, and implementation strategies. It outlines our comprehensive approach to elections, which continued for European elections held in the first half of 2025. The election responses covered in this report include the parliamentary elections in Germany, the presidential and presidential runoff elections in Romania, the parliamentary elections in Portugal, and the presidential elections in Poland.

  • Expanding GenAI Transparency for Meta’s Ads Products: We began gradually rolling out “AI Info” labels on ad creative videos using a risk-based framework. When a video is created or significantly edited with our generative AI creative features in our advertiser marketing tools, a label will appear in the three-dot menu or next to the “Sponsored” label. We plan to share more information on our approach to labelling ad images made or edited with non-Meta generative AI tools. We will continue to evolve our approach to labelling AI-generated content in partnership with experts, advertisers, policy stakeholders and industry partners as people’s expectations and the technology change.

  • Media literacy: Meta published its first Media Literacy Annual Plan on 21 July 2025, which sets out our current approach to media literacy and the products and features we make available to users of Facebook and Instagram. It also provides details on specific media literacy initiatives run by Meta, including our work on digital citizenship, our media literacy lessons in Get Digital, We Think Digital and Soy Digital, and our election literacy programs.

  • Coordinated Inauthentic Behaviour trends: We are sharing insights into a covert influence operation that we disrupted in Romania at the beginning of 2025. We detected and removed this campaign before it was able to build authentic audiences on our apps.

Here are a few of the figures which can be found throughout the report:

  • From 01/01/2025 to 30/06/2025, we removed over 5 million ads from Facebook and Instagram in EU Member States, of which over 83,000 were removed for violating our misinformation policy.


  • From 01/01/2025 to 30/06/2025, we labelled over 1.2 million ads on both Facebook and Instagram with “paid for by” disclaimers in the EU.

  • We removed 1 network for violating our Coordinated Inauthentic Behaviour (CIB) policy that targeted, or potentially targeted, one or more European countries. We also took steps to remove fake accounts, prioritising the removal of fake accounts that seek to cause harm. In Q1 2025, we took action against 1 billion fake accounts, and in Q2 2025, we took action against 687 million fake accounts on Facebook globally. We estimate that fake accounts represented approximately 3% of our worldwide monthly active users (MAU) on Facebook during Q1 2025 and 4% during Q2 2025.


This report addresses the practices implemented for Facebook, Instagram, Messenger, and WhatsApp within the EU during the reporting period of H1 2025. In alignment with Meta's public announcements on 7 January 2025, we will continue to evaluate the applicability of these practices to Meta products. We will also regularly review the appropriateness of making adjustments in response to changes in our practices, such as the deployment of Community Notes.




Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
War of aggression by Russia on Ukraine

As outlined in our benchmark report, we took a variety of actions with the objectives of:
  • Helping to keep people in Ukraine and Russia safe: We’ve added several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.
  • Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.
  • Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts. 
  • Transparency around state-controlled media: We have been working hard to tackle disinformation from Russia coming from state-controlled media. Since March 2022, we have been globally demoting content from Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites.
  • In addition to these global actions, in Ukraine, the EU and UK, we have restricted access to Russia Today (globally), Sputnik, NTV/NTV Mir, Rossiya 1, REN TV and Perviy Kanal and others.
  • On 15 June 2024, we added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included: Voice of Europe, RIA Novosti, Izvestia, Rossiyskaya Gazeta.
  • On 17 September 2024, we expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.

[Israel - Hamas War]

In the spirit of transparency and cooperation, we share below the details of some of the specific steps we are taking to respond to the Israel - Hamas War.
Mitigations in place
[War of Aggression by Russia on Ukraine]


Our main strategies are in line with what we outlined in our benchmark report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), tools to help our community access crucial resources, transparency around state-controlled media and monitoring/taking action against any coordinated inauthentic behaviour.


This means (as outlined in previous reports) we will continue to: 

  • Monitor for coordinated inauthentic behaviour and other adversarial networks (see commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period).
  • Enforce our Community Standards.
  • Work with fact-checkers.
  • Strengthen our engagement with local experts and governments in the Central and Eastern Europe region.


[Israel - Hamas War]

In the wake of the 07/10/2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as balancing those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom, as well as in our 2023 Annual Human Rights report. The following are some examples of the specific steps we have taken:

Taking Action on Violating Content:


Safety and Security:
  • Our teams detected and removed a cluster of Coordinated Inauthentic Behaviour (CIB) activity attributed to Hamas, which we first disrupted in 2021. These fake accounts attempted to re-establish their presence on our platforms.
  • In early 2025, we removed 17 accounts on Facebook, 22 Facebook Pages and 21 accounts on Instagram for violating our CIB policy. This network originated in Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey. Fake accounts – some of which were detected and disabled by our automated systems prior to our investigation – were used to post content (including in Groups), to manage Pages, and to comment on the network’s own content, likely to make it appear more popular than it was. Many of these accounts posed as female journalists and pro-Palestine activists. The operation also used popular hashtags such as #palestine, #gaza, #starbucks and #instagram in its posts, as part of its spammy tactics and in an attempt to insert itself into the existing public discourse.
  • We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.

Reducing the Spread of Misinformation:
  • We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. When they rate something as false, we move this content lower in Feed so fewer people see it. 
  • We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
  • We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 
  • We also have limits on message forwarding, and we label messages that did not originate with the sender so people are aware that the information comes from a third party.

User Controls:
We continue to provide tools to help people control their experience on our apps and protect themselves from content they don’t want to see. These include but aren’t limited to:
  • Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
  • Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
  • Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.
  • Show More, Show Less: This gives people direct control over the content they see on Facebook. 
  • Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. 
  • Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations.

Policies and Terms and Conditions
Outline any changes to your policies
War of Aggression by Russia on Ukraine

Policy
No further policy updates since our benchmark report.

Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes. There are no substantial changes to report on for this period.


Israel - Hamas War

For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. Under the Dangerous Organisations and Individuals policy, this includes, inter alia: removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call to release, in which case we allow it with a Mark as Disturbing (MAD) interstitial; and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta has some further discretionary policies which may be applied when content is escalated to us.