Report March 2026
Your organisation description
Transparency Centre
Commitment 34
To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.
We signed up to the following measures of this commitment
Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 34.1
Signatories establish and maintain the common Transparency Centre website, which will be operational and available to the public within 6 months from the signature of this Code.
Measure 34.2
Signatories provide appropriate funding for setting up and operating the Transparency Centre website, including its maintenance, daily operation, management, and regular updating. Funding contributions should be commensurate with the nature of the Signatories' activity and shall be sufficient for the website's operations and maintenance and proportional to each Signatory's risk profile and economic capacity.
Measure 34.3
Relevant Signatories will contribute to the Transparency Centre's information to the extent that the Code is applicable to their services.
Measure 34.4
Signatories will agree on the functioning and financing of the Transparency Centre within the Task-force, to be recorded and reviewed within the Task-force on an annual basis.
Measure 34.5
The Task-force will regularly discuss the Transparency Centre and assess whether adjustments or actions are necessary. Signatories commit to implement the actions and adjustments decided within the Task-force within a reasonable timeline.
Commitment 35
Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.
We signed up to the following measures of this commitment
Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 35.1
Signatories will list in the Transparency Centre, per each Commitment and Measure that they subscribe to, the terms of service and policies that their service applies to implement these Commitments and Measures.
Measure 35.2
Signatories provide information on the implementation and enforcement of their policies per service, including geographical and language coverage.
Measure 35.3
Signatories ensure that the Transparency Centre contains a repository of their reports assessing the implementation of the Code's commitments.
Measure 35.4
In crisis situations, Signatories use the Transparency Centre to publish information regarding the specific mitigation actions taken related to the crisis.
Measure 35.5
Signatories ensure that the Transparency Centre is built with state-of-the-art technology, is user-friendly, and that the relevant information is easily searchable (including per Commitment and Measure). Users of the Transparency Centre will be able to easily track changes in Signatories' policies and actions.
Measure 35.6
The Transparency Centre will enable users to easily access and understand the Service Level Indicators and Qualitative Reporting Elements tied to each Commitment and Measure of the Code for each service, including Member State breakdowns, in a standardised and searchable way. The Transparency Centre should also enable users to easily access and understand Structural Indicators for each Signatory.
Commitment 36
Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.
We signed up to the following measures of this commitment
Measure 36.1 Measure 36.2 Measure 36.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 36.1
Signatories provide updates about relevant changes in policies and implementation actions in a timely manner, and in any event no later than 30 days after changes are announced or implemented.
Measure 36.2
Signatories will regularly update Service Level Indicators, reporting elements, and Structural Indicators, in parallel with the regular reporting foreseen by the monitoring framework. After the first reporting period, Relevant Signatories are encouraged to also update the Transparency Centre more regularly.
Measure 36.3
Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.
QRE 36.1.1 (for Commitments 34-36)
With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
QRE 36.1.2 (for Commitments 34-36)
Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
SLI 36.1.1 (for the Commitments 34-36)
Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.
| Country | Our company would like to provide the following data: number of IFCN-certified fact-checkers |
|---|---|
| Austria | 0 |
| Belgium | 0 |
| Bulgaria | 0 |
| Croatia | 0 |
| Cyprus | 0 |
| Czech Republic | 0 |
| Denmark | 0 |
| Estonia | 0 |
| Finland | 0 |
| France | 0 |
| Germany | 0 |
| Greece | 0 |
| Hungary | 0 |
| Ireland | 0 |
| Italy | 0 |
| Latvia | 0 |
| Lithuania | 0 |
| Luxembourg | 0 |
| Malta | 0 |
| Netherlands | 0 |
| Poland | 0 |
| Portugal | 0 |
| Romania | 0 |
| Slovakia | 0 |
| Slovenia | 0 |
| Spain | 0 |
| Sweden | 0 |
| Iceland | 0 |
| Liechtenstein | 0 |
| Norway | 0 |
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 37.1
Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services, will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.
Measure 37.2
Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks:
- Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises;
- Cooperating and coordinating their work in special situations like elections or crises;
- Agreeing on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes;
- Reviewing the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapting them, as needed;
- Contributing to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation;
- Refining, testing and adjusting Structural Indicators and designing mechanisms to measure them at Member State level;
- Agreeing on, publishing and updating a list of TTPs employed by malicious actors, and setting down baseline elements, objectives and benchmarks for Measures to counter them, in line with Chapter IV of this Code.
Measure 37.3
The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.
Measure 37.4
Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.
Measure 37.5
When needed, and in any event at least once per year, the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 40
Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.
We signed up to the following measures of this commitment
Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 40.1
Relevant Signatories that are Very Large Online Platforms, as defined in the DSA, will report every six months on the implementation of the Commitments and Measures they signed up to under the Code, including on the relevant QREs and SLIs at service and Member State level.
Measure 40.2
Other Signatories will report yearly on the implementation of the Commitments and Measures taken under the present Code, including on the relevant QREs and SLIs, at service and Member State level.
Measure 40.3
Measure 40.4
Measure 40.5
Measure 40.6
Commitment 43
Relevant Signatories commit to provide, in special situations like elections or crises, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
- Community Standards and Guidelines Relevant to Elections:
- Norway (Parliamentary) election, 9 September 2025
- Czech Republic (Legislative) election, 3 - 4 October 2025
- Ireland (Presidential) election, 24 October 2025
- Netherlands, General election for the House of Representatives, 29 October 2025
Our Election Risk Management Processes
Mitigations in place
Meta engages with a full range of external stakeholders to inform our processes and procedures as part of our day-to-day business, and this practice continued during our election preparation and integrity efforts for Norway, Czech Republic, Ireland and the Netherlands. Meta values the networks and channels we have with our external stakeholders to work together in identifying risks on our platforms, and as such, we have welcomed many of the Election Guidelines recommending cooperation and points of contact with national authorities, civil society organisations, and others.
- VIU Reach: Over 2.8 million
- EDR Reach: Over 2.0 million
- VIU Reach: Over 1.9 million
- EDR Reach: Over 1.4 million
Czech Republic - Legislative Election
Overview of partners and notifications received during the Rapid Response Implementation period (8 September to 13 October 2025):
- Number of onboarded non-platform signatories to our direct reporting channels: 4.
- Number of reports received during the election period: 6.
- VIU Reach: Over 3.3 million
- EDR Reach: Over 3.1 million
- VIU Reach: Over 2.8 million
- EDR Reach: Over 2.6 million
Ireland - Presidential Election
- Number of onboarded non-platform signatories to our direct reporting channels: 2.
- Number of reports received during the election period: 59.
- VIU Reach: Over 2.0 million
- EDR Reach: Over 1.5 million
- VIU Reach: Over 2.2 million
- EDR Reach: Over 1.5 million
Netherlands - General election for the House of Representatives
Overview of partners and notifications received during the Rapid Response Implementation period (1 October to 5 November 2025):
- Number of onboarded non-platform signatories to our direct reporting channels: 4.
- Number of reports received during the election period: 1.
External engagement and election preparation efforts began early, including meetings with the Rijksvoorlichtingsdienst and roundtables with the Authority for Consumers and Markets. We also continued our collaboration with the local, independent fact-checking organisations dpa-Faktencheck and AFP as part of our election integrity efforts.
- VIU Reach: Over 5.2 million
- EDR Reach: Over 4.4 million
- VIU Reach: Over 6.9 million
- EDR Reach: Over 5.9 million
Responsible Approach to Gen AI
Community Standards, Fact-Checking, and AI Labelling:
Meta labels photorealistic images created using Meta AI, as well as AI-generated images from certain content creation tools.
Meta has begun labelling a wider range of video, audio, and image content when we detect industry-standard AI image indicators or when users disclose that they are uploading AI-generated content. Meta requires people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and may apply penalties if they fail to do so. If Meta determines that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so that people have more information and context.
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
As outlined in our baseline report, we took a variety of actions with the objectives of:
- Helping to keep people in Ukraine and Russia safe: since the beginning of the full-scale invasion, we have introduced several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.
- Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.
- Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts.
- Transparency around state-controlled media: We have been working hard to tackle disinformation coming from Russian state-controlled media. Since March 2022, we have been globally demoting content from Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites.
- In addition to these global actions, in Ukraine, the EU and UK, we have restricted access to Russia Today (globally), Sputnik, NTV/NTV Mir, Rossiya 1, REN TV and Perviy Kanal and others.
- We added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included: Voice of Europe, RIA Novosti, Izvestia, Rossiyskaya Gazeta, EADaily / Eurasia Daily, Fondsk, Lenta, NewsFront, RuBaltic, SouthFront, Strategic Culture Foundation, and Krasnaya Zvezda / Tvzvezda.
- We also expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.
In the spirit of transparency and cooperation, we share below the details of some of the specific steps we are taking to respond to the Israel - Hamas War.
Mitigations in place
Our main strategies are in line with what we outlined in our baseline report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), transparency around state-controlled media, and monitoring and taking action against any coordinated inauthentic behaviour.
- Monitor for coordinated inauthentic behaviour and other adversarial networks (see commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period).
- Enforce our Community Standards
- Work with fact-checkers
- Strengthen our engagement with local experts and governments in the Central and Eastern Europe region
Israel - Hamas War
In the wake of the 7 October 2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as balancing those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom, as well as in our 2023 Annual Human Rights report. We provided an update on our actions in our 2024 annual human rights report. The following are some examples of the specific steps we have taken:
- We quickly established a dedicated crisis response operation staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation in real time. We explain how we deploy our Crisis Policy Protocol and manage crises in a new infographic in our 2024 annual human rights report (page 38).
- We continue to enforce our policies around Dangerous Organisations and Individuals, Violent and Graphic Content, Hateful Conduct, Violence and Incitement, Bullying and Harassment, and Coordinating Harm.
- In addition to this, our teams detected and removed a cluster of Coordinated Inauthentic Behaviour (CIB) activity attributed to Hamas in 2021, and these fake accounts have since attempted to re-establish their presence on our platforms.
- In early 2025, we removed 17 accounts on Facebook, 22 Pages on Facebook and 21 accounts on Instagram for violating our CIB policy. This network originated in Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey. Fake accounts – some of which were detected and disabled by our automated systems prior to our investigation – were used to post content, including in Groups, to manage Pages, and to comment on the network’s own content, likely to make it appear more popular than it was. Many of these accounts posed as female journalists and pro-Palestine activists. The operation also used popular hashtags such as #palestine, #gaza, #starbucks and #instagram in its posts as part of its spammy tactics, in an attempt to insert itself into existing public discourse.
- We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.
- We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP and Reuters. When they rate something as false, we move this content lower in Feed so fewer people see it.
- We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
- We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers.
- We also have limits on message forwarding and we label messages that haven’t originated with the sender so people are aware that something is information from a third party.
- Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
- Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
- Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post by post basis.
- Show More, Show Less: This gives people direct control over the content they see on Facebook.
- Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed.
- Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations.
Policies and Terms and Conditions
Outline any changes to your policies
Policy
Inauthentic Behavior Community Standards
Changes (such as newly introduced policies, edits, adaptation in scope or implementation)
We updated our Inauthentic Behavior Community Standards to simplify and refine our IB and CIB policies and help uninvolved authentic communities, Pages, and Groups that are targeted, managed, or co-opted by CIB operations remain on our services.
Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes.
Israel - Hamas War
For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. These include, inter alia, under the Dangerous Organisations and Individuals policy: removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call to release, in which case we allow it with a Mark as Disturbing (MAD) interstitial; and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta also has further discretionary policies which may be applied when content is escalated to us.
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Measures taken to demonetise disinformation related to the crisis (Commitment 1 and Commitment 2)
As mentioned in our baseline report, our Advertising Standards prohibit ads that include content debunked by third-party fact-checkers, and advertisers that repeatedly attempt to post content rated false by fact-checkers may also face restrictions on advertising across Meta technologies.
As mentioned in our baseline report, we prohibited ads or monetisation from Russian state-controlled media. Before Russian authorities blocked access to Facebook and Instagram, we paused ads targeting people in Russia, and advertisers in Russia are no longer able to create or run ads anywhere in the world.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
AI Generated or altered SIEP ads disclosure (Commitment 3)
The social issues, elections, and politics (SIEP) self-disclosure label will soon change from "digitally created" to "AI Info." This update more clearly indicates when AI is involved in creating or editing content, helping users better understand the type of content they’re seeing. Advertisers must disclose when a SIEP ad contains digitally created or altered content depicting:
- A real person saying or doing something they didn't.
- A realistic-looking non-existent person.
- A realistic event that didn't happen.
- Altered footage of a real event.
- A realistic event that allegedly occurred but is not a true recording.
Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If it is determined that an advertiser did not disclose as required, Meta will reject the ad. Repeated failure to disclose may result in penalties against the advertiser.
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Measures taken in the context of the crisis to counter manipulative behaviours/TTCs (Commitment 14)
As mentioned in our baseline report, we have technical teams building scaled solutions to detect and prevent these behaviours, and are partnering with civil society organisations, researchers, and governments to strengthen our defences. We also improved our detection systems to more effectively identify and block fake accounts, which are the source of a lot of the inauthentic activity.
In 2025, we disrupted a coordinated inauthentic behaviour network originating in Belarus and targeting Polish audiences. Our internal investigation revealed links to Belarus and Russia, indicating a coordinated foreign influence campaign. Network operators strategically disseminated messaging focused on Poland's immigration policies and the country's relationships with the European Union and Ukraine.
Relevant changes to working practices to respond to the demands of the crisis situation and/or additional human resources procured for the mitigation of the crisis (Commitments 14-16)
As mentioned in the baseline report, throughout the war, we have mobilised our teams, technologies and resources to combat the spread of harmful content, especially disinformation and misinformation as well as adversarial threat activities such as influence operations and cyber-espionage.
We continue to work with a cross-functional team of experts from across the company, including native Ukrainian and Russian speakers, who are monitoring the platform around the clock, allowing us to respond to issues in real time.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Actions taken against dis- and misinformation content (for example deamplification, labelling, removal etc.) (Commitment 17)
State-controlled media: We continue to take the actions we outlined in our baseline report. We have taken further action to limit the impact of state-controlled media, described above.
Escalation channel: This channel continues to operate as outlined in our baseline report.
Covert influence campaigns: We have continued to monitor for and remove recidivist attempts by coordinated inauthentic behaviour (CIB) networks that target discourse about the war in Ukraine. This covert activity is aggressive and persistent, constantly probing for weak spots across the internet, including setting up hundreds of new spoof news organisation domains.
Promotion of authoritative information, including via recommender systems and products and features such as banners and panels (Commitment 19)
We continue to see funds raised on Facebook and Instagram for nonprofits in support of humanitarian efforts for Ukraine.
We continue to work through our AI for Good program, which empowers humanitarian organisations, researchers, UN agencies, and European policymakers to make more informed decisions on how to support the people of Ukraine.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Warning Screens on Sensitive Content, Sensitive Content Control and Facebook Reduce (Commitment 17)
The 7 October 2023 attack by Hamas was designated as a Terrorist Attack under Meta’s Dangerous Organisations and Individuals Policy. Consistent with that designation, we removed all content showing identifiable victims at the moment of the attack. Subsequently, people began sharing this type of footage in order to raise awareness and condemn the attacks. Meta’s goal is to allow people to express themselves while still removing harmful content. We therefore began allowing people to post this type of footage within that context only, with the addition of a warning screen to inform users that it may be disturbing. If the user’s intent in sharing the content is unclear, we err on the side of safety and remove it.
However, there are additional protections in place to ensure people have choices when it comes to this content.
Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. We try not to recommend sensitive content in these places by default, but people can also choose to see less, to further reduce the possibility of seeing this content from accounts they don’t follow.
We’re continually testing how we deliver personalised experiences and have recently conducted testing around civic content. As a result, we started treating civic content from the people and Pages users follow on Facebook more like any other content in their feed, ranking and showing users that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We also started recommending more political content based on these personalised signals and are expanding the options people have to control how much of this content they see.
Hidden Words Filter (Commitment 18, Commitment 19)
When turned on, Hidden Words filters offensive terms and phrases out of DM requests and comments, so people never have to see them. People can customise this list to make sure the terms they find offensive are hidden.
This tool lets people choose which offensive terms and phrases to hide, so they are protected from seeing them.
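Conceptually, a filter like Hidden Words matches incoming comments and DM requests against a default list merged with the user's custom terms, and suppresses anything that matches. The sketch below is purely illustrative; all names and the matching logic are hypothetical assumptions, not Meta's implementation:

```python
# Hypothetical sketch of a user-customisable term filter (not Meta's code).
# Matching is case-insensitive and token-based, so "spam" hides "SPAM!"
# but does not hide "spammy".
import re

DEFAULT_HIDDEN_TERMS = {"offensiveterm1", "offensiveterm2"}  # placeholder defaults

def is_hidden(message: str, custom_terms: frozenset = frozenset()) -> bool:
    """Return True if the message contains any hidden term."""
    terms = {t.lower() for t in DEFAULT_HIDDEN_TERMS | set(custom_terms)}
    tokens = {tok.lower() for tok in re.findall(r"[\w']+", message)}
    return bool(terms & tokens)

# A hidden comment is not deleted; it is simply never shown to the recipient.
comments = ["Great post!", "total offensiveterm1 nonsense"]
visible = [c for c in comments if not is_hidden(c)]
```

The key design point the sketch illustrates is that filtering happens on the recipient's side: the sender's content still exists, but the person who configured the filter never sees it.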
Limits (Commitment 18, Commitment 19)
When turned on, Limits automatically hides DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
This tool gives people choice over the DMs and requests they receive, which may be important when engaging online around sensitive topics.
Comment Controls (Commitment 18, Commitment 19)
People can control who can comment on their posts on Facebook and Instagram and can turn off comments completely on a post-by-post basis.
This tool gives people control over engagement with what they post on Facebook and Instagram.
Show More, Show Less (Commitment 18, Commitment 19)
Show More, Show Less gives people direct control over the content they see on Facebook. Selecting “Show More” temporarily increases the amount of content similar to the post a user gave feedback on, while selecting “Show Less” means the user temporarily sees fewer posts like it.
This tool provides people with more direct control over what they see, which is important for protecting people's well-being during high profile crisis events.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.
Measures taken to support research into crisis related misinformation and disinformation (Commitment 17-25)
As mentioned in our baseline report, the AI for Good program shares privacy-protected data externally to help tackle social issues like disasters, pandemics, poverty and climate change. In support of the Ukraine humanitarian response, the program's maps have been utilized to provide valuable assistance.
We make baseline population density maps (the High Resolution Settlement Layer) of countries surrounding Ukraine publicly available. These are among the most accurate in the world, offering 30-metre resolution with demographic breakdowns, built by combining updated census estimates with satellite imagery (i.e., no Facebook user data).
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Content Library and API tools (Commitment 26)
As we previously reported, Meta has opened access to tools such as the Content Library and API tools to provide access to near real-time public content from Pages, Posts, Groups and Events on Facebook and public content on Instagram. Details about the content, such as the number of reactions, shares, comments and, for the first time, post view counts are also available. Researchers can search, explore and filter that content through either a graphical user interface (UI) or a programmatic API. Together, these tools provide the most comprehensive access to publicly-accessible content across Facebook and Instagram of any research tool built to date.
Individuals from qualified institutions, including journalists, who are pursuing scientific or public interest research topics are able to apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This first-of-its-kind partnership enables researchers to analyse data from the API in ICPSR’s Social Media Archive (SOMAR) Virtual Data Enclave.
Qualified individuals pursuing scientific or public interest research, including journalists, can gain access to the tools if they meet all the requirements.
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Cooperation with independent fact-checkers in the crisis context, including coverage in the EU (Commitment 30-33)
As mentioned in our baseline report, for misinformation that does not violate our Community Standards, but undermines the authenticity and integrity of our platform, we work with our network of independent third-party fact-checking partners.
The details of the network are outlined under the Empowering Fact-Checkers chapter above.
As mentioned in our baseline report, our cooperation with fact-checkers is as outlined in the Fact-Checkers’ Empowerment chapter above.
In Europe, we partner with 46 fact-checking organisations, covering 36 languages. This includes 29 partners covering 26 countries and 23 different languages in the EU.
Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Working with fact-checkers in the region and deploying keyword detection (Commitment 30)
Meta is working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP and Reuters. We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
When they rate something as false, we move this content lower in Feed so fewer people see it.
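As an illustration only, keyword-based grouping for fact-checker review and the subsequent Feed demotion might look like the following sketch. All keyword sets, field names, and the demotion factor are hypothetical assumptions; production systems are far more sophisticated:

```python
# Hypothetical sketch (not Meta's code): collect posts matching crisis-related
# keywords into one review queue, and demote posts already rated false.
WAR_KEYWORDS = {"hostage", "airstrike", "ceasefire"}  # illustrative keyword set

def matches_keywords(text: str, keywords: set) -> bool:
    """Case-insensitive substring match against a keyword set."""
    lowered = text.lower()
    return any(k in lowered for k in keywords)

def build_review_queue(posts: list) -> list:
    """Group keyword-matching posts in one place for fact-checkers to rate."""
    return [p for p in posts if matches_keywords(p["text"], WAR_KEYWORDS)]

def feed_score(post: dict, base_score: float) -> float:
    """Demote content rated false so fewer people see it in Feed."""
    return base_score * 0.1 if post.get("rated_false") else base_score  # 0.1 is illustrative

posts = [
    {"text": "Ceasefire talks resume", "rated_false": False},
    {"text": "Recipe of the day", "rated_false": False},
]
queue = build_review_queue(posts)  # only the ceasefire post is queued
```

The two halves mirror the two sentences above: grouping speeds up the rating step, and the score reduction lowers rated content's distribution rather than removing it.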
Content Warning Labels (Commitment 31)
Meta is adding warning labels to content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. We also limit message forwarding and label forwarded messages that did not originate with the sender, so people are aware the information comes from a third party.
Meta is supporting people in the region by adding warning labels to relevant content, giving them more information to decide what to read, trust and share.