TikTok

Report September 2025

TikTok’s mission is to inspire creativity and bring joy. With a global community of more than a billion users, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H1 2025, more than 97% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, which the Code has operated under since July 2025. This includes a range of measures to protect users, detailed on our European Online Safety Hub. Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

To read our full executive summary, download our report using the link below.

Download PDF

Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated

War of Aggression by Russia on Ukraine


The war of aggression by Russia on Ukraine (hereinafter, “War in Ukraine”) continues to present an incredibly complex and continually evolving environment. At TikTok, the safety of our people and community is of paramount importance and we work continuously to safeguard our platform.

We have set out below some of the main threats we have observed on our platform in relation to the spread of harmful misinformation and covert influence operations (CIO) related to the War in Ukraine in the reporting period. We remain committed to preventing such content from being shared in this context.

(I) Spread of harmful misinformation

We observe and take action where appropriate under our policies. Since the War in Ukraine began, we have seen false or unconfirmed claims about specific attacks and events, the development or use of weapons, the involvement of specific countries in the conflict, and statements about specific military activities, such as the direction of troop movements. We have also seen footage repurposed in a misleading way, including video game footage or unrelated footage from past events presented as current.

TikTok adopts a dynamic approach to understanding and removing misleading stories. When addressing harmful misinformation, we apply the Integrity & Authenticity policies in our Community Guidelines and take action against offending content on our platform. Our moderation teams are provided with detailed policy guidance and direction when moderating crisis-related misinformation under our misinformation policies; this includes the provision of case banks of harmful misinformation claims to support their moderation work.

(II) CIOs

We continuously work to detect and disrupt covert influence operations that attempt to establish themselves on TikTok and undermine the integrity of our platform. Our Integrity & Authenticity policies prohibit attempts to sway public opinion while also misleading our systems or users about an account’s identity, origin, approximate location, popularity, or overall purpose. We have specifically trained teams on high alert to investigate and detect CIOs on our platform. We ban accounts that try to engage in such behavior, take action on others that we assess as part of the network, and report them regularly in our Transparency Center. When we ban these accounts, any content they posted is also removed.

In the period from January to June 2025, we took action to remove a total of 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our new dedicated CIO transparency report here.  

CIOs will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform. To counter these emerging threats and stay ahead of evolving challenges, we have expert teams who focus entirely on detecting, investigating, and disrupting covert influence operations.

Israel-Hamas Conflict


TikTok acknowledges both the significance and sensitivity of the Israel-Hamas conflict (referred to as the “Conflict” throughout this section).  We understand this remains a difficult, fearful, and polarizing time for many people around the world and on TikTok. TikTok continues to recognise the need to engage in content moderation of violative content at scale while ensuring that the fundamental rights and freedoms of European citizens are respected and protected. We remain dedicated to supporting free expression, upholding our commitment to human rights, and maintaining the safety of our community and integrity of our platform during the Conflict.   

We have set out below some of the main threats both observed and considered in relation to the Conflict and the actions we have taken to address these during the reporting period. 

(I) Spread of harmful misinformation

Trust forms the foundation of our community, and we strive to keep TikTok a safe and authentic space where genuine interactions and content can thrive. TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our: Integrity & Authenticity policies in our Community Guidelines; products; practices; and external partnerships with fact-checkers, media literacy bodies, and researchers. We support our moderation teams with detailed misinformation policy guidance, enhanced training, and access to tools like our global database of previously fact-checked claims from our IFCN-accredited fact-checking partners, who help assess the accuracy of content.

We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the Conflict.


TikTok’s integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large.  This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account’s identity, approximate location, relationships, popularity, or purpose. 

We have specifically trained teams on high alert to investigate CIOs, and disrupting CIO networks has been a high priority for us in the context of the Conflict. We now provide regular updates on the CIO networks we detect and remove from our platform, including those we identify relating to the Conflict, in our dedicated CIO transparency report. Between January and June 2025, we reported one new CIO network disruption that was found to post content relating to the Conflict as a dominant theme.

We know that CIOs will continue to evolve in response to our detection efforts, and networks may attempt to reestablish a presence on our platform. That is why we continually seek to strengthen our policies and enforcement actions in order to protect our community against new types of harmful misinformation and inauthentic behaviours.
Mitigations in place

War of Aggression by Russia on Ukraine


We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.

(I) Investment in our fact-checking programme

We employ a layered approach to detecting harmful misinformation that is in violation of our Community Guidelines. 

Working closely with our fact-checking partners is a crucial part of our approach to enforcing our harmful misinformation policies on our platform. Our fact-checking programme includes coverage of the Russian, Ukrainian, and Belarusian languages. We also partner with Reuters, which is dedicated to helping us accurately fact-check content in Russian and Ukrainian.

We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This has facilitated proactive responses to high-harm trends and has ensured that our moderation teams have up-to-date guidance.

(II) Disruption of CIOs

As set out above, disrupting CIO networks has been a high priority for us in the context of the crisis. We published a list of the networks we disrupted in the relevant period within our most recently published transparency report here.

Between January and June 2025, we took action to remove a total of 7 networks (consisting of 29,245 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our dedicated CIO transparency report here.  

Countering influence operations is an industry-wide effort, in part because these operations often spread their activity across multiple platforms. We regularly consult with third-party experts, including our global Content and Safety Advisory Councils, whose guidance helps us improve our policies and understand regional context. 

(III) Restricting access to content for state-affiliated media

Since the early stages of the war, we have restricted access to content from a number of Russian state-affiliated media entities in the EU, Iceland and Liechtenstein. Our state-affiliated media policy is used to help users understand the context of certain content and to help them evaluate the content they consume on our platform. Labels have since been applied to content posted by the state-affiliated accounts of such entities in Russia, Ukraine and Belarus.

We continue the detection and labeling of state-controlled media accounts in accordance with our state-controlled media label policy globally. 

(IV) Mitigating the risk of monetisation of harmful misinformation

Political advertising has been prohibited on our platform for many years. As an additional mitigation against the risk of profiteering from the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We have also suspended TikTok in the Donetsk and Luhansk regions, and removed Livestream videos originating in Ukraine from the For You feed of users located in the EU. In addition, the ability to add new video content or Livestream videos to the platform in Russia remains suspended.

(V) Launching localised media literacy campaigns

Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We have thirteen localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, and Slovenia, developed in close collaboration with our fact-checking partners. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners, to help them identify misinformation and prevent its spread on the platform. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.

Israel-Hamas Conflict


We are continually working hard to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis. As part of our crisis management process, we launched a command centre that brings together key members of our global team of thousands of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we take action to respond to this fast-evolving crisis. Since the beginning of the Conflict, we have been:

(I) Upholding TikTok's Community Guidelines

Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, otherwise supports the attacks, or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. In H1 2025, we removed 7,589 Conflict-related videos that violated our misinformation policies.

Evolving our proactive automated detection systems in real time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.

(II) Leveraging our Fact-Checking Program

We employ a layered approach to detecting harmful misinformation that violates our Community Guidelines and our global fact-checking program is a critical part of this. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of harmful and difficult-to-verify claims. 

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content related to unfolding or emergency events that has been assessed by our fact-checkers but cannot be verified as accurate (i.e., ‘unverified content’). Mindful of how evolving events may impact the assessment of sensitive Conflict-related claims from day to day, we have implemented a process that allows our fact-checking partners to update us quickly if claims previously assessed as ‘unverified’ become verified with additional context and/or at a later stage.

(III) Scaling up our content moderation capabilities

TikTok has Arabic- and Hebrew-speaking moderators in its content moderation teams who review content and assist with Conflict-related translations. As we continue to focus on moderator care, we have also deployed additional well-being resources for our human moderation teams during this time.

(IV) Disruption of CIOs

Disrupting CIO networks has also been high-priority work for us in tackling deceptive behaviour that may cause harm to our community or society at large. As noted above, between January and June 2025, we took action to remove one network (consisting of twelve accounts in total) that was found to be related to the Conflict. We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report here.

(V) Mitigating the risk of monetisation of harmful misinformation

Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we’ve added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation. Our existing political ads policy, GPPPA labelling, and safety and civility policies help to mitigate the risk of monetisation of harmful misinformation.  

(VI) Deploying search interventions to raise awareness of potential misinformation 

To help raise awareness and to protect our users, we previously launched search interventions, which are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources and also direct them to well-being resources. In H2 2024 we continued to refine this process; in particular, we focused on improving keywords to ensure they are relevant and effective.

(VII) Adding opt-in screens over content that could be shocking or graphic

We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. Opt-in screens help prevent people from unexpectedly viewing shocking or graphic content as we continue to make public interest exceptions for some content. 

In addition, we are committed to engagement with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, and cooperation with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during these difficult times.  

Policies and Terms and Conditions
Outline any changes to your policies
Russia-Ukraine: No relevant updates in the reporting period.
In a crisis, we keep our policies under review and ensure that moderation teams have supplementary guidance.

Israel-Hamas:
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests.

Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.
Policy - 51.1.1
Russia-Ukraine:
No relevant updates in the reporting period.

Israel-Hamas
Policy updates


Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2
Russia-Ukraine:
In a crisis, we keep our policies under review and ensure that moderation teams have supplementary guidance.

Israel-Hamas:
We continue to rely on our existing, robust Integrity & Authenticity policies, which are an effective basis for tackling content related to the Conflict. As such, we have not needed to introduce any new misinformation policies for the purposes of addressing the crisis. In a crisis, we keep our policies under review and ensure that moderation teams have supplementary guidance.
Rationale - 51.1.3
Russia-Ukraine:
Our Integrity & Authenticity policies are our first line of defense in combating harmful misinformation and deceptive behaviours on our platform. 

Our Community Guidelines make clear to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. Our moderation teams are provided with detailed policy guidance and direction when moderating on war-related harmful misinformation using existing policies.

We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.

When situations such as the War in Ukraine arise, our teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied effectively in respect of content relating to the relevant crisis (in this case, the war). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.

Israel-Hamas: 
We refined and expanded our newsworthy exceptions to allow the dissemination of content documenting events from a conflict zone and legitimate political speech/criticism, while remaining sensitive to the potential harm users may experience from exposure to graphic visuals, hateful behaviours, or incitement to violence. As part of this effort, we introduced dedicated policies addressing content related to the Conflict, specifically in areas depicting hostages, human suffering, and protests. Additionally, we strengthened our policies on content that glorifies Hamas or Hezbollah and on the promotion or celebration of violent acts committed by either side of the Conflict. To further enhance platform integrity, we implemented specific Integrity & Authenticity policies for Israel-Hamas-related content, with a focus on conspiracy theories of varying severity and unsubstantiated claims.

In the context of the Conflict, we rely on our robust  Integrity & Authenticity policies as our first line of defence in combating harmful misinformation and deceptive behaviours on our platform. 

Our Community Guidelines clearly identify to our users what content we remove or make ineligible for the For You feed when it poses a risk of harm to our users or the wider public. We have also supported our moderation teams with detailed policy guidance and direction when moderating on Conflict-related harmful misinformation using existing policies.

We have specialist teams within our Trust and Safety department dedicated to the policy issue of Integrity & Authenticity, including within the areas of product and policy. Our experienced subject matter experts on Integrity & Authenticity continually keep these policies under review and collaborate with external partners and experts when understanding whether updates are required.

When situations such as the Conflict arise, these teams work to ensure that appropriate guidance is developed so that the Integrity & Authenticity policies are applied effectively in respect of content relating to the relevant crisis (in this case, the Conflict). This includes issuing detailed policy guidance and direction, including providing case banks on harmful misinformation claims to support moderation teams.
Policy - 51.1.4
Russia-Ukraine: No relevant updates in the reporting period.

Israel-Hamas: 
Feature policies
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5
Israel-Hamas:
In addition to being able to rely on our Integrity & Authenticity policies, we have made temporary adjustments to existing policies which govern certain TikTok features. For example, we have added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation.
Rationale - 51.1.6
Israel-Hamas:
Temporary adjustments have been introduced in an effort to proactively prevent certain features from being used for hateful or violent behaviour in the region.