Meta

Report March 2026

Submitted

Your organisation description

Transparency Centre

Commitment 34

To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.

We signed up to the following measures of this commitment

Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) co-funded the Transparency Centre website’s development, to ensure transparency and accountability around the implementation of this Code.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 34.1

Signatories establish and maintain the common Transparency Centre website, which will be operational and available to the public within 6 months from the signature of this Code.

Facebook, Instagram, WhatsApp, Messenger

Measure 34.2

Signatories provide appropriate funding, for setting up and operating the Transparency Centre website, including its maintenance, daily operation, management, and regular updating. Funding contribution should be commensurate with the nature of the Signatories' activity and shall be sufficient for the website's operations and maintenance and proportional to each Signatories' risk profile and economic capacity.

Facebook, Instagram, WhatsApp, Messenger

Measure 34.3

Relevant Signatories will contribute to the Transparency Centre's information to the extent that the Code is applicable to their services.

Facebook, Instagram, WhatsApp, Messenger

Measure 34.4

Signatories will agree on the functioning and financing of the Transparency Centre within the Task-force, to be recorded and reviewed within the Task-Force on an annual basis.

Facebook, Instagram, WhatsApp, Messenger

Measure 34.5

The Task-force will regularly discuss the Transparency Centre and assess whether adjustments or actions are necessary. Signatories commit to implement the actions and adjustments decided within the Task-force within a reasonable timeline.

Facebook, Instagram, WhatsApp, Messenger

Commitment 35

Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.

We signed up to the following measures of this commitment

Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) commits to upload its reports on the Transparency Centre in due course. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 35.1

Signatories will list in the Transparency Centre, per each Commitment and Measure that they subscribe to, the terms of service and policies that their service applies to implement these Commitments and Measures.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.2

Signatories provide information on the implementation and enforcement of their policies per service, including geographical and language coverage.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.3

Signatories ensure that the Transparency Centre contains a repository of their reports assessing the implementation of the Code's commitments.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.4

In crisis situations, Signatories use the Transparency Centre to publish information regarding the specific mitigation actions taken related to the crisis.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.5

Signatories ensure that the Transparency Centre is built with state-of-the-art technology, is user-friendly, and that the relevant information is easily searchable (including per Commitment and Measure). Users of the Transparency Centre will be able to easily track changes in Signatories' policies and actions.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.6

The Transparency Centre will enable users to easily access and understand the Service Level Indicators and Qualitative Reporting Elements tied to each Commitment and Measure of the Code for each service, including Member State breakdowns, in a standardised and searchable way. The Transparency Centre should also enable users to easily access and understand Structural Indicators for each Signatory.

Facebook, Instagram, WhatsApp, Messenger

Commitment 36

Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.

We signed up to the following measures of this commitment

Measure 36.1 Measure 36.2 Measure 36.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) will both upload this report in due course and support other signatories in their efforts to upload their own reports.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) will upload all future reports in due course.

Measure 36.1

Signatories provide updates about relevant changes in policies and implementation actions in a timely manner, and in any event no later than 30 days after changes are announced or implemented.

Facebook, Instagram, WhatsApp, Messenger

Measure 36.2

Signatories will regularly update Service Level Indicators, reporting elements, and Structural Indicators, in parallel with the regular reporting foreseen by the monitoring framework. After the first reporting period, Relevant Signatories are encouraged to also update the Transparency Centre more regularly.

Facebook, Instagram, WhatsApp, Messenger

Measure 36.3

Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.

Facebook, Instagram, WhatsApp, Messenger

QRE 36.1.1 (for the Commitments 34-36)

With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

We continue to upload our reports according to the approved deadlines.

QRE 36.1.2 (for the Commitments 34-36)

Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The administration of the Transparency Centre website has been transferred fully to the community of the Code’s signatories, with VOST Europe taking the role of developer.

SLI 36.1.1 (for the Commitments 34-36)

Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.

In the period from 01/07/2025 to 31/12/2025, our signatory profile was visited 1,580 times, and our signatory reports were downloaded 9,941 times. The Transparency Centre webpage overall was visited 30,384 times.

Our company would like to provide the following data:

Country	Nr of IFCN-certified fact-checkers
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 37.1

Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.2

Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks: Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises; Cooperate and coordinate their work in special situations like elections or crisis; Agree on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes; Review the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapt them, as needed; Contribute to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation; Refine, test and adjust Structural Indicators and design mechanisms to measure them at Member State level; Agree, publish and update a list of TTPs employed by malicious actors, and set down baseline elements, objectives and benchmarks for Measures to counter them, in line with the Chapter IV of this Code.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.3

The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.4

Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.5

When needed, and in any event at least once per year the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

Facebook, Instagram, WhatsApp, Messenger

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

There have been no significant updates since the last submitted report.


Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

Facebook, Instagram, WhatsApp, Messenger

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

We invest in combating the spread of harmful content, including misinformation and disinformation, in support of our implementation of the Code.
Teams with expertise in content moderation, operations, policy design, safety, market specialists, data and forensic analysis, stakeholder and partner engagement, threat investigation, cybersecurity and product development all work on these challenges. These teams are distributed globally, and draw from the local expertise of their team members and local partners.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

This report was submitted within the required timeline.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A


Commitment 40

Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.

We signed up to the following measures of this commitment

Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

For this report, Facebook, Instagram, WhatsApp and Messenger provided QREs and SLIs across the different chapters.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Facebook, Instagram, WhatsApp and Messenger will continue to provide relevant QREs and SLIs across the chapters of this Code.

Measure 40.1

Relevant Signatories that are Very Large Online Platforms, as defined in the DSA, will report every six-months on the implementation of the Commitments and Measures they signed up to under the Code, including on the relevant QREs and SLIs at service and Member State Level.

Facebook, Instagram, WhatsApp, Messenger

Measure 40.2

Other Signatories will report yearly on the implementation of the Commitments and Measures taken under the present Code, including on the relevant QREs and SLIs, at service and Member State level.

Facebook, Instagram, WhatsApp, Messenger

Measure 40.3

Facebook, Instagram, WhatsApp, Messenger

Measure 40.4

Facebook, Instagram, WhatsApp, Messenger

Measure 40.5

Facebook, Instagram, WhatsApp, Messenger

Measure 40.6

Facebook, Instagram, WhatsApp, Messenger

Commitment 43

Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Taskforce.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Facebook, Instagram, WhatsApp and Messenger submitted their qualitative and quantitative information in the harmonised template provided.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Facebook, Instagram, WhatsApp and Messenger continue to engage with the Taskforce working group on reporting/monitoring as the template evolves.

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Meta is committed to providing reliable election information while combating misinformation across languages on our platforms. Our policies and safeguards for elections have been developed over many years and informed by our experiences of working on more than 200 elections around the world. Those experiences have resulted in the development of a robust election program, which uses mature policies, processes, and tools to both protect speech on our platform and safeguard the integrity of the elections. We continuously improve these measures to ensure they remain appropriate and responsive to emerging risks, and we have reinforced these efforts in light of the regulatory framework set out under the Digital Services Act, the Election Guidelines, and our commitments under this Code.

  1. Community Standards and Guidelines Relevant to Elections:

Our Community Standards set out strict rules for content that can and cannot be posted to our platforms. These policies cover voter interference, voter fraud, electoral violence, and misinformation, among other categories, such as, hateful conduct, coordinating harm and promoting crime, bullying and harassment. Our policies have been refined over many years, by partnering with academics, civil society, and third-party fact-checkers to find the appropriate balance between protecting people and protecting freedom of expression and information. These policies are regularly reviewed, and they are made available to the public through our Transparency Centre.

Our comprehensive approach to elections continued for the European elections held between 1 July and 31 December 2025. The election responses covered in this report include:

  1. Norway (Parliamentary) election, 9 September 2025
  2. Czech Republic (Legislative) election, 3 - 4 October 2025
  3. Ireland (Presidential) election, 24 October 2025 
  4. Netherlands, General election for the House of Representatives, 29 October 2025


  2. Our Election Risk Management Processes

We have a dedicated team responsible for driving Meta’s cross-company election integrity efforts, leveraging experts from a full range of business functions to foster a holistic approach to tackling election-related risks. Those functions include colleagues in Meta’s intelligence, data science, product and engineering, research, operations, content and public policy, and legal teams. 

Over the years, Meta has developed a comprehensive approach to mitigating relevant user risks and respecting the integrity of elections during an election period. This approach has been iterated on and matured over the course of hundreds of elections in recent years. We have processes, tools and policies in place all year round to address harmful or illegal content while protecting legitimate speech on our platforms. These have been further reinforced in light of the regulatory framework under the DSA, including the Communication from the Commission (C/2024/3014) on Commission Guidelines on the mitigation of systemic risks for electoral processes (the “Election Guidelines”).

During the reporting period for this report, we continued to work closely with a full range of external stakeholders to inform our processes and procedures ahead of elections. This included collaboration with Member State Digital Service Coordinators (DSCs), national authorities and electoral bodies, as well as taking part in the EU Code of Practice (“CoP”) Rapid Response System. As part of the rapid response system framework, we onboarded designated civil society organisations and fact-checkers to our direct escalation channels to report time-sensitive content, accounts or trends that could threaten the integrity of the electoral process.



Mitigations in place

Overview of Cooperation with External Stakeholders and Election Integrity Efforts
Meta engages with a full range of external stakeholders to inform our processes and procedures as part of our day-to-day business, and this practice continued during our election preparation and integrity efforts for Norway, the Czech Republic, Ireland and the Netherlands. Meta values the networks and channels we have with our external stakeholders to work together in identifying risks on our platforms, and as such, we have welcomed many of the Election Guidelines’ recommendations on cooperation and points of contact with national authorities, civil society organisations, and others.

Norway Parliamentary Election

External engagement and election preparation efforts began early, including engagements with the national security authority (Nasjonal sikkerhetsmyndighet), the Organization for Security and Co-operation in Europe (OSCE) and the Ministry of Digitalisation and Public Governance. We also conducted training in the Norwegian Parliament for political parties in May 2025 to provide further information on our policies and reporting channels.

Voter Information Units and Election Day Information Features

We remain focused on providing users with reliable election information while combating misinformation across languages. That is why we continue to connect people with details about the election in their Member State through in-app notifications, where legally permitted. We proactively point users to reliable information on the electoral process through in-app ‘Voter Information Units’ (VIU) and ‘Election Day Information’ reminders (EDR).

Facebook
  • VIU Reach: Over 2.8 million
  • EDR Reach: Over 2.0 million

Instagram
  • VIU Reach: Over 1.9 million
  • EDR Reach: Over 1.4 million

Czech Republic - Legislative Election

External engagement and election preparation efforts began early, including several engagements with stakeholders across government, such as the Ministry of Internal Affairs and the Ministry of Foreign Affairs. Meta also participated in roundtables organised by the Digital Service Coordinator (DSC) with representatives of the European Commission, the Czech government, civil society organisations and law enforcement agencies. We also onboarded the Czech Telecommunication Office to our direct regulatory reporting channel and provided on-the-ground training to Czech authorities on our policies and reporting channels.

As an active member of the EU Code of Practice on Disinformation Taskforce’s Working Group on Elections, we took part in its Rapid Response System (RRS). Through this, we were regularly in touch with civil society organisations and partners, including the Central European Digital Media Observatory, Globsec, Demagog.cz and Alliance4Europe.

Meta also conducted comprehensive outreach to all political parties ahead of the election to ensure all candidates’ teams were aware of critical resources, policies and escalation channels for contacting Meta in case of an escalation.

Overview of partners and notifications received during the Rapid Response Implementation period (8 September to 13 October 2025):

  • Number of onboarded non-platform signatories to our direct reporting channels: 4.
  • Number of reports received during the election period: 6.

Voter Information Units and Election Day Information Features

Facebook
  • VIU Reach: Over 3.3 million
  • EDR Reach: Over 3.1 million

Instagram
  • VIU Reach: Over 2.8 million
  • EDR Reach: Over 2.6 million

Ireland Presidential Election

External engagement and election preparation efforts began early, including a roundtable hosted by CnaM in September 2025. This included a range of partners, such as representatives from the European Commission, the European Digital Media Observatory (EDMO) and An Garda Síochána. We were also regularly in touch with civil society organisations and partners, including Democracy Reporting International and Ireland’s Electoral Commission (An Coimisiún Toghcháin), which we onboarded to our direct regulatory reporting channel.

Overview of partners and notifications received during the Rapid Response Implementation period (29 September - 3 November 2025):

  • Number of onboarded non-platform signatories to our direct reporting channels: 2.
  • Number of reports received during the election period: 59.

Voter Information Units and Election Day Information Features

Facebook
  • VIU Reach: Over 2.0 million
  • EDR Reach: Over 1.5 million

Instagram
  • VIU Reach: Over 2.2 million
  • EDR Reach: Over 1.5 million

Netherlands - General election for the House of Representatives

Overview of partners and notifications received during the Rapid Response Implementation period (1 October to 5 November 2025):

  • Number of onboarded non-platform signatories to our direct reporting channels: 4.
  • Number of reports received during the election period: 1.

External engagement and election preparation efforts began early, including meetings with the Rijksvoorlichtingsdienst and roundtables with the Authority for Consumers and Markets. We also continued our collaboration with the local, independent fact-checking organisations dpa-Faktencheck and AFP as part of our election integrity efforts.

As an active member of the EU Code of Practice on Disinformation Taskforce’s Working Group on Elections, we took part in its Rapid Response System (RRS). Through this, we onboarded the Authority for Consumers and Markets (designated Digital Service Coordinator) to our direct regulatory reporting channel. We also worked closely with the European Commission and non-platform signatories (civil society organisations and fact checkers) to share elections related trends and onboard them to a direct escalation channel to report content which poses serious or systemic concerns to the integrity of the electoral process and support its prompt review.

Voter Information Units and Election Day Information Features

Facebook
  • VIU Reach: Over 5.2 million
  • EDR Reach: Over 4.4 million

Instagram
  • VIU Reach: Over 6.9 million
  • EDR Reach: Over 5.9 million

Responsible Approach to Gen AI

Meta’s approach to responsible AI is another way that we are safeguarding the integrity of elections globally, including for the EU national elections.

Community Standards, Fact-Checking, and AI Labelling:

Meta’s Community Standards and Advertising Standards apply to all content, including content generated by AI. AI-generated content is also eligible to be reviewed and rated by Meta’s third-party fact-checking partners, whose rating options allow them to address various ways in which media content may mislead people, including but not limited to media that is created or edited by AI. 

Meta labels photorealistic images created using Meta AI, as well as AI-generated images from certain content creation tools.

Meta has begun labelling a wider range of video, audio, and image content when we detect industry-standard AI image indicators or when users disclose that they are uploading AI-generated content. Meta requires people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and may apply penalties if they fail to do so. If Meta determines that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so that people have more information and context.

Continuing to Foster AI Transparency through Industry Collaboration:

Meta has also been working with other companies in the tech industry on common standards and guidelines. Meta Platforms, Inc. is a member of the Partnership on AI, for example, and signed onto the tech accord designed to combat the spread of deceptive AI content in 2024 elections globally. Meta receives information from Meta Platforms, Inc. on the progress of these initiatives, and benefits from these partnerships when addressing the risks of manipulated media.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

The measures outlined in Chapters 1 to 3 of this report were in place for the elections covered in this report. They were complemented by the prohibited ads policy outlined above. Most pertinently, under these policies, content rated false by third-party fact-checkers cannot be used in an ad under our Advertising Standards.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

As outlined in Section 6, since October 2025 Meta no longer allows political, electoral and social issue ads on our platforms in the EU, given the unworkable requirements and legal uncertainties introduced by the EU’s Transparency and Targeting of Political Advertising regulation.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

All the measures outlined in Chapters 14 to 16 of this report were in place ahead of the European national elections.

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Since 2023, researchers in Europe have had access to the Meta Content Library, enabling them to study various topics, including disinformation.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War of aggression by Russia on Ukraine

As outlined in our benchmark report, we took a variety of actions with the objectives of:

  • Helping to keep people in Ukraine and Russia safe: Since the beginning of the full-scale invasion, we have introduced several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.

  • Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.

  • Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts. 

  • Transparency around state-controlled media: We have been working hard to tackle disinformation from Russia coming from state-controlled media. Since March 2022, we have been globally demoting content from Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites.

  • In addition to these global actions, in Ukraine, the EU and UK, we have restricted access to Russia Today (globally), Sputnik, NTV/NTV Mir, Rossiya 1, REN TV and Perviy Kanal, among others.

  • We added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included: Voice of Europe, RIA Novosti, Izvestia, Rossiyskaya Gazeta, EADaily / Eurasia Daily, Fondsk, Lenta, NewsFront, RuBaltic, SouthFront, Strategic Culture Foundation, and Krasnaya Zvezda / Tvzvezda.

  • We also expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.

Israel - Hamas War

In the spirit of transparency and cooperation, we share below the details of some of the specific steps we are taking to respond to the Israel - Hamas War.


Mitigations in place

War of aggression by Russia on Ukraine

Our main strategies are in line with what we outlined in our benchmark report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), transparency around state controlled media and monitoring/taking action against any coordinated inauthentic behaviour.

This means (as outlined in previous reports) we continue to: 

  • Monitor for coordinated inauthentic behaviour and other adversarial networks (see Commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period).

  • Enforce our Community Standards  

  • Work with fact-checkers 

  • Strengthen our engagement with local experts and governments in the Central and Eastern Europe region

Israel - Hamas War

In the wake of the 07/10/2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as balancing those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom, as well as in our 2023 Annual Human Rights report. We provided an update on our actions in our 2024 annual human rights report. The following are some examples of the specific steps we have taken:

Taking Action on Violating Content:

Safety and Security:
  • Our teams detected and removed a cluster of Coordinated Inauthentic Behaviour (CIB) activity attributed to Hamas, which we first removed in 2021. These fake accounts attempted to re-establish their presence on our platforms.
  • In early 2025, we removed 17 accounts and 22 Pages on Facebook, and 21 accounts on Instagram, for violating our CIB policy. This network originated in Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey. Fake accounts – some of which were detected and disabled by our automated systems prior to our investigation – were used to post content, including in Groups, to manage Pages, and to comment on the network’s own content, likely to make it appear more popular than it was. Many of these accounts posed as female journalists and pro-Palestine activists. The operation also used popular hashtags like #palestine, #gaza, #starbucks and #instagram in its posts as part of its spammy tactics, in an attempt to insert itself into the existing public discourse.
  • We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.

Reducing the Spread of Misinformation:
  • We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP and Reuters. When they rate something as false, we move this content lower in Feed so fewer people see it. 
  • We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
  • We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 
  • We also have limits on message forwarding and we label messages that haven’t originated with the sender so people are aware that something is information from a third party.

User Controls:
We continue to provide tools to help people control their experience on our apps and protect themselves from content they don’t want to see. These include but aren’t limited to:
  • Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
  • Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
  • Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.
  • Show More, Show Less: This gives people direct control over the content they see on Facebook. 
  • Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. 
  • Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. 

Policies and Terms and Conditions

Outline any changes to your policies

War of Aggression by Russia on Ukraine

Policy
Inauthentic Behavior Community Standards

Changes (such as newly introduced policies, edits, adaptation in scope or implementation)
We updated our Inauthentic Behavior Community Standards to simplify and refine our IB and CIB policies and help uninvolved authentic communities, Pages, and Groups that are targeted, managed, or co-opted by CIB operations remain on our services.

Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes.

Israel - Hamas War

For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. This includes, inter alia, under the Dangerous Organisations and Individuals policy: removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call to release, in which case we allow it with a Mark as Disturbing (MAD) interstitial; and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta also has further discretionary policies which may be applied when content is escalated to us.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.

Measures taken to demonetise disinformation related to the crisis (Commitment 1 and Commitment 2)

As mentioned in our baseline report, our Advertising Standards prohibit ads that include content debunked by third-party fact-checkers, and advertisers that repeatedly attempt to run content rated false by fact-checkers may also face restrictions on advertising across Meta technologies.

For the monetisation of initially organic content, (1) per our Content Monetisation Policies, any content that's labelled as false by our third-party fact-checkers is ineligible for monetisation, and (2) any actor found in violation of our Community Standards, including our misinformation policies, may lose the right to monetise their content, per our Partner Monetisation Policies.

As mentioned in our baseline report, we prohibited ads or monetisation from Russian state-controlled media. Before Russian authorities blocked access to Facebook and Instagram, we paused ads targeting people in Russia, and advertisers in Russia are no longer able to create or run ads anywhere in the world.


Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of Aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

AI Generated or altered SIEP ads disclosure (Commitment 3)
The social issues, elections, and politics (SIEP) self-disclosure label will soon change from "digitally created" to "AI Info." This update more clearly indicates when AI is involved in creating or editing content, helping users better understand the type of content they’re seeing.

Advertisers must still disclose when ads about social issues, elections, or politics use AI to create or edit photorealistic images, videos, or realistic audio that depicts:

  • A real person saying or doing something they didn't.
  • A realistic-looking non-existent person.
  • A realistic event that didn't happen.
  • Altered footage of a real event.
  • A realistic, alleged event that isn't a true recording.

Disclosure is not required for immaterial AI uses (e.g., resizing, color correction). Meta will continue to enforce disclosure for AI-created or edited SIEP ads; failure to disclose the scenarios above may result in ad removal and account penalties for repeated violations.

We will continue to evolve our approach to labeling AI-generated content in partnership with experts, advertisers, policy stakeholders and industry partners as people’s expectations and the technology change.

Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If it is determined that an advertiser did not disclose as required, Meta will reject the ad. Repeated failure to disclose may result in penalties against the advertiser.

The AI disclosure policy helps inform people about digitally created or altered ads. This way, people will be more aware of the authenticity of messaging, which will help combat disinformation.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of Aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Measures taken in the context of the crisis to counter manipulative behaviours/TTCs ( Commitment 14)
As mentioned in our baseline report, we have technical teams building scaled solutions to detect and prevent these behaviours, and are partnering with civil society organisations, researchers, and governments to strengthen our defences. We also improved our detection systems to more effectively identify and block fake accounts, which are the source of a lot of the inauthentic activity.

Since the invasion began, we have shared the measures we’ve taken to help keep Ukrainians and Russians safe, our approach to misinformation and state-controlled media, and how we ensure reliable access to trusted information. As mentioned in our baseline report, our security teams took down three distinct networks in Russia targeting discourse on the war (announced here, here, and here) and have continued to monitor and enforce against Russian threat actors engaged in coordinated inauthentic behaviour (CIB). The Q4 2024 Adversarial Threat Report shared information on the continued low efficacy of the Doppelganger operation’s efforts on our apps, with most attempts to acquire fake accounts or run ads being quickly detected and blocked.

In 2025, we disrupted a coordinated inauthentic behavior network originating in Belarus and targeting Polish audiences. Our internal investigation revealed links to Belarus and Russia, indicating a coordinated foreign influence campaign. We observed that network operators strategically disseminated messaging focused on Poland's immigration policies and the country's relationships with the European Union and Ukraine.

Relevant changes to working practices to respond to the demands of the crisis situation and/or additional human resources procured for the mitigation of the crisis (Commitment 14 -16) 
As mentioned in the baseline report, throughout the war, we have mobilised our teams, technologies and resources to combat the spread of harmful content, especially disinformation and misinformation as well as adversarial threat activities such as influence operations and cyber-espionage.

We continue to work with a cross-functional team of experts from across the company, including native Ukrainian and Russian speakers, who are monitoring the platform around the clock, allowing us to respond to issues in real time.

Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.


Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of Aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.

Actions taken against dis- and misinformation content (for example deamplification, labelling, removal etc.) (Commitment 17)

State controlled media:
We continue to take the actions we outlined in our benchmark report. We have taken further action to limit the impact of state controlled media, described above.

Escalation channel: This channel continues to operate as outlined in our benchmark report.

Covert influence campaigns: We have continued to monitor for and remove recidivist attempts by coordinated inauthentic behaviour (CIB) networks that target discourse about the war in Ukraine. This covert activity is aggressive and persistent, constantly probing for weak spots across the internet, including setting up hundreds of new spoof news organisation domains.

Promotion of authoritative information, including via recommender systems and products and features such as banners and panels (Commitment 19)

We continue to see funds raised on Facebook and Instagram for nonprofits in support of humanitarian efforts for Ukraine.

We continue to work through our AI for Good program, which empowers humanitarian organizations, researchers, UN agencies, and European policymakers to make more informed decisions on how to support the people of Ukraine.

Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Warning Screens on sensitive content, Sensitive Content Control and Facebook Reduce: (Commitment 17)
The 07/10/2023 attack by Hamas was designated as a Terrorist Attack under Meta’s Dangerous Organisation and Individuals policy. Consistent with that designation, we removed all content showing identifiable victims at the moment of the attack. Following that, people began sharing this type of footage in order to raise awareness and condemn the attacks. Meta’s goal is to allow people to express themselves while still removing harmful content. In turn, we began allowing people to post this type of footage within that context only, with the addition of a warning screen to inform users that it may be disturbing. If the user’s intent in sharing the content is unclear, we err on the side of safety and remove it. 
 
However, there are additional protections in place to ensure people have choices when it comes to this content.
 
Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. We try not to recommend sensitive content in these places by default, but people can also choose to see less, to further reduce the possibility of seeing this content from accounts they don’t follow.
 
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we started treating civic content from people and Pages users follow on Facebook more like any other content in their feed, and we started ranking and showing users that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We also started recommending more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.

These actions ensure that we balance the protection of voice with removing harmful content. In this context, it has allowed for important discussion and condemnation of violence, while also empowering people to make choices in reaction to the content they see on Facebook and Instagram.
 
Hidden words Filter (Commitment 18, Commitment 19)
When turned on, Hidden Words filters offensive terms and phrases from DM requests and comments, so people never have to see them. People can customise this list, to make sure the terms they find offensive are hidden.

Hidden Words helps people choose which offensive terms and phrases to hide, so they are protected from seeing them.

Limits (Commitment 18, Commitment 19)
When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.

This tool gives people choice about DM and requests they receive, which may be important when engaging online around sensitive topics.
 
Comment Controls (Commitment 18, Commitment 19)
People can control who can comment on their posts on Facebook and Instagram and choose to turn off comments completely on a post-by-post basis.

This tool gives people control over engagement with what they post on Facebook and Instagram.

Show More, Show Less (Commitment 18, Commitment 19)
Show More, Show Less gives people direct control over the content they see on Facebook. Selecting “Show more” will temporarily increase the amount of content similar to the post a user gave feedback on, while selecting “Show less” means a user will temporarily see fewer posts like the one they gave feedback on.

This tool provides people with more direct control over what they see, which is important for protecting people's well-being during high profile crisis events. 

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of Aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.

Measures taken to support research into crisis related misinformation and disinformation (Commitment 17-25)

As mentioned in our baseline report, the AI for Good program shares privacy-protected data externally to help tackle social issues like disasters, pandemics, poverty and climate change. In support of the Ukraine humanitarian response, the program's maps have been utilized to provide valuable assistance.

We make baseline population density maps (the high resolution settlement layer) of countries surrounding Ukraine publicly available. These are among the most accurate in the world, offering 30-metre resolution and demographic breakdowns, produced by combining updated census estimates with satellite imagery (i.e., no Facebook user data).

Our Social Connectedness Index has also been used by leading researchers, including the European Commission - Joint Research Centre unit on Demography, Migration and Governance to estimate the rate at which Ukrainian refugees might seek shelter in European regions with existing Ukrainian diaspora. 


Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Content Library and API tools (Commitment 26)
As we previously reported, Meta has opened access to tools such as the Content Library and its API, which provide access to near real-time public content from Pages, Posts, Groups and Events on Facebook, as well as public content on Instagram. Details about the content, such as the number of reactions, shares, comments and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content through either a graphical user interface (UI) or a programmatic API. Together, these tools provide the most comprehensive access to publicly accessible content across Facebook and Instagram of any research tool built to date.

Individuals from qualified institutions, including journalists, who are pursuing scientific or public interest research topics are able to apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR’s Social Media Archives (SOMAR) Virtual Data Enclave.

Qualified individuals pursuing scientific or public interest research, including journalists, can gain access to the tools if they meet all the requirements.

Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of Aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Cooperation with independent fact-checkers in the crisis context, including coverage in the EU (Commitment 30-33)
As mentioned in our baseline report, for misinformation that does not violate our Community Standards, but undermines the authenticity and integrity of our platform, we work with our network of independent third-party fact-checking partners.

The details of the network are outlined under the Empowering Fact-Checkers chapter above.

As mentioned in our baseline report, our cooperation with fact-checkers is as outlined in the Fact-Checkers’ Empowerment chapter above.

In Europe, we partner with 46 fact-checking organisations, covering 36 languages. This includes 29 partners covering 26 countries and 23 different languages in the EU.

Israel - Hamas War

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Working with fact-checkers in the region and deploying keyword detection (Commitment 30)
Meta is working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, and Reuters. We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.

When they rate something as false, we move this content lower in Feed so fewer people see it.

Content Warning Labels (Commitment 31)
Meta is adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. We also have limits on message forwarding and label messages that haven’t originated with the sender so people are aware that something is information from a third party.

Meta is supporting people in the region by giving them more information to decide what to read, trust and share by adding warning labels onto relevant content.