TikTok

Report March 2025

TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate on a shared set of facts and reality when it comes to topics that impact people's safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals; this includes making sure our users have a trustworthy experience on TikTok. As part of creating a trustworthy environment, transparency is essential to enable online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our report, which can be downloaded by following the link below.

Download PDF

Crisis 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated

War of aggression by Russia on Ukraine


The war of aggression by Russia on Ukraine (hereinafter, “War in Ukraine”) continues to challenge us to confront an incredibly complex and continually evolving environment. At TikTok, the safety of our people and community is of paramount importance and we work continuously to safeguard our platform.

We have set out below some of the main threats we have observed on our platform in relation to the spread of harmful misinformation and covert influence operations (CIO) related to the War in Ukraine in the reporting period. We remain committed to preventing such content from being shared in this context.

(I) Spread of harmful misinformation

We observe and take action where appropriate under our policies. Since the War in Ukraine began, we have seen false or unconfirmed claims about specific attacks and events, the development or use of weapons, the involvement of specific countries in the conflict, and statements about specific military activities, such as the direction of troop movements. We have also seen instances of footage repurposed in a misleading way, including video-game footage or unrelated footage from past events presented as current.

TikTok adopts a dynamic approach to understanding and removing misleading stories. When addressing harmful misinformation, we apply the Integrity & Authenticity policies (I&A policies) in our Community Guidelines and take action against offending content on our platform. Our moderation teams are provided with detailed policy guidance and direction when moderating crisis-related misinformation under our misinformation policies; this includes the provision of case banks of harmful misinformation claims to support their moderation work.
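The case-bank approach described above can be pictured as a simple lookup from previously assessed claims to recommended moderation outcomes. The sketch below is purely illustrative and assumes hypothetical claim names, outcomes, and matching logic; TikTok has not published how its case banks are implemented.

```python
# Purely illustrative sketch of a "case bank" lookup; the claims, outcomes,
# and matching logic here are assumptions, not TikTok's actual tooling.

CASE_BANK = {
    # previously assessed harmful misinformation claim (normalised) -> outcome
    "video game footage presented as real combat footage": "remove",
    "unconfirmed claim about the direction of troop movements": "escalate_for_fact_check",
}

def route_flagged_claim(claim: str) -> str:
    """Return the outcome recorded in the case bank for a known claim;
    unknown claims are escalated to a human moderator, who applies the
    detailed I&A policy guidance."""
    normalised = claim.strip().lower()
    return CASE_BANK.get(normalised, "escalate_to_moderator")
```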

(II) CIOs

We continuously work to detect and disrupt covert influence operations that attempt to establish themselves on TikTok and undermine the integrity of our platform. Our I&A policies prohibit attempts to sway public opinion while also misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose. We have specifically trained teams on high alert to investigate and detect CIOs on our platform. We ban accounts that try to engage in such behavior, take action on others that we assess to be part of the network, and report them regularly in our transparency center. When we ban these accounts, any content they posted is also removed.

In the period from July to December 2024, we took action to remove a total of 9 networks (consisting of 20,002 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our new dedicated CIO transparency report here.  

CIOs will continue to evolve in response to our detection efforts, and networks may attempt to reestablish a presence on our platform.

To counter these emerging threats and stay ahead of evolving challenges, we have expert teams who focus entirely on detecting, investigating, and disrupting covert influence operations.

Israel-Hamas Conflict

TikTok acknowledges both the significance and sensitivity of the Israel-Hamas conflict (referred to as the “Conflict” throughout this section).  We understand this remains a difficult, fearful, and polarizing time for many people around the world and on TikTok. TikTok continues to recognise the need to engage in content moderation of violative content at scale while ensuring that the fundamental rights and freedoms of European citizens are respected and protected. We remain dedicated to supporting free expression, upholding our commitment to human rights, and maintaining the safety of our community and integrity of our platform during the Conflict.   

In advance of the anniversary of the Conflict, we were aware that there would be an increase in content posted relating to the Conflict, and we wanted to ensure the safety of our community and the integrity of the platform at this sensitive time. In preparation for the anniversary, a comprehensive plan was developed to address potential risks and ensure platform safety. This involved strategic coordination across multiple regions, including IL, MENA, APAC, EU, US/CA, and AMS, with a focus on mitigating high-risk content and ensuring uninterrupted service.

We remain committed to transparency throughout this time and have kept our community informed of our immediate and ongoing response through the following Newsroom post, which was last updated in October 2024, approaching the anniversary of the Conflict: Our continued actions to protect the TikTok community during the Israel-Hamas war.

We have set out below some of the main threats both observed and considered in relation to the Conflict and the actions we have taken to address these during the reporting period. 

(I) Spread of harmful misinformation

Trust forms the foundation of our community, and we strive to keep TikTok a safe and authentic space where genuine interactions and content can thrive. TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our Integrity & Authenticity policies (I&A policies) in our Community Guidelines; our products and practices; and our external partnerships with fact-checkers, media literacy bodies, and researchers. We support our moderation teams with detailed misinformation policy guidance, enhanced training, and access to tools like our global database of previously fact-checked claims from our IFCN-accredited fact-checking partners, who help assess the accuracy of content.

We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the Conflict.

(II) CIOs

TikTok's integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account's identity, approximate location, relationships, popularity, or purpose.

We have specifically trained teams on high alert to investigate CIOs, and disrupting CIO networks has been high-priority work for us in the context of the Conflict. We now provide regular updates on the CIO networks we detect and remove from our platform, including those we identify relating to the Conflict, in our dedicated CIO transparency report. Between June 2024 and December 2024, we reported 3 new CIO network disruptions that were found to post content relating to the Conflict as a dominant theme.

We know that CIOs will continue to evolve in response to our detection efforts, and networks may attempt to reestablish a presence on our platform, which is why we continually seek to strengthen our policies and enforcement actions in order to protect our community against new types of harmful misinformation and inauthentic behaviours.

Mitigations in place

War of Aggression by Russia on Ukraine

We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.

(I) Investment in our fact-checking programme

We employ a layered approach to detecting harmful misinformation that violates our Community Guidelines.

Working closely with our fact-checking partners is a crucial part of our approach to enforcing our harmful misinformation policies on our platform. Our fact-checking programme includes coverage of the Russian, Ukrainian and Belarusian languages. We also partner with Reuters, who are dedicated to fact-checking content in Russian and Ukrainian.

We also collaborate with certain of our fact-checking partners to receive advance warning of emerging misinformation narratives, which has facilitated proactive responses against high-harm trends and has ensured that our moderation teams have up-to-date guidance.

(II) Disruption of CIOs

As set out above, disrupting CIO networks has been high-priority work for us in the context of the crisis, and we published a list of the networks we disrupted in the relevant period within our most recently published transparency report here.

Between July and December 2024, we took action to remove a total of 9 networks (consisting of 20,002 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our dedicated CIO transparency report here.  

Countering influence operations is an industry-wide effort, in part because these operations often spread their activity across multiple platforms. We regularly consult with third-party experts, including our global Content and Safety Advisory Councils, whose guidance helps us improve our policies and understand regional context. 

(III) Restricting access to content for state-affiliated media

Since the early stages of the war, we have restricted access to content from a number of Russian state-affiliated media entities in the EU, Iceland and Liechtenstein. Our state-affiliated media policy is used to help users understand the context of certain content and to help them evaluate the content they consume on our platform. Labels have since been applied to content posted by the state-affiliated accounts of such entities in Russia, Ukraine and Belarus.

We continue the detection and labeling of state-controlled media accounts in accordance with our state-controlled media label policy globally. 

(IV) Mitigating the risk of monetisation of harmful misinformation

Political advertising has been prohibited on our platform for many years but, as an additional mitigation measure against the risk of monetisation off the back of the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We also suspended TikTok in the Donetsk and Luhansk regions and removed Livestream videos originating in Ukraine from the For You feed of users located in the EU. In addition, the ability to add new video content or Livestream videos to the platform in Russia remains suspended.

(V) Launching localised media literacy campaigns

Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We have thirteen localised media literacy campaigns addressing disinformation related to the War in Ukraine, developed in close collaboration with our fact-checking partners, in Austria, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia and Slovenia. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners, to help them identify misinformation and prevent its spread on the platform. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.



Israel-Hamas Conflict

We are continually working hard to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis. As part of our crisis management process, we launched a command centre that brings together key members of our global team of thousands of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in how we respond to this fast-evolving crisis. Since the beginning of the Conflict, we have been taking the actions set out below.

(I) Upholding TikTok's Community Guidelines

We rolled out refreshed Community Guidelines and began to enforce expanded hate speech and hateful behavior policies. These policies aim to better address implicit or indirect hate speech and create a safer and more civil environment for everyone. They add to our long-standing policies against antisemitism and other hateful ideologies. We also updated our hate speech policy to recognize content that uses "Zionist" as a proxy for a protected attribute when it is not used to refer to a political ideology but is instead used as a proxy for Jewish or Israeli identity. This policy was implemented early in 2024 after we observed a rise in the hateful use of the word.

We continue to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, otherwise supports the attacks, or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals are not allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. We have removed 8,765 videos in relation to the Conflict which violated our misinformation policies.

We are also evolving our proactive automated detection systems in real time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.
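As a rough illustration of how threshold-based proactive detection of this kind can work, the minimal sketch below routes a video based on an automated model score; the model, score, thresholds, and routing labels are all hypothetical assumptions rather than a description of TikTok's systems.

```python
# Hypothetical sketch of threshold-based routing for graphic or violent content;
# the thresholds and labels are illustrative assumptions only.

AUTO_REMOVE_THRESHOLD = 0.95   # very high-confidence scores are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.70  # mid-confidence scores go to a human moderator

def triage_video(graphic_score: float) -> str:
    """Route a video based on an automated 'graphic or violent' score, so that
    the most extreme content is removed before moderators or users see it."""
    if graphic_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if graphic_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"
```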

(II) Leveraging our Fact-Checking Program

We employ a layered approach to detecting harmful misinformation that violates our Community Guidelines, and our global fact-checking program is a critical part of this. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of harmful and difficult-to-verify claims.

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content related to unfolding or emergency events which has been assessed by our fact-checkers but cannot be verified as accurate (i.e., 'unverified content'). Mindful of how evolving events may impact the assessment of sensitive Conflict-related claims day-to-day, we have implemented a process that allows our fact-checking partners to update us quickly if claims previously assessed as 'unverified' become verified with additional context and/or at a later stage.
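A minimal sketch of this 'unverified content' workflow is shown below, assuming hypothetical status names and product treatments; TikTok has not published the underlying logic.

```python
# Minimal sketch of the unverified-content workflow; the statuses and
# treatments below are illustrative assumptions, not TikTok's implementation.
from enum import Enum

class ClaimStatus(Enum):
    UNASSESSED = "unassessed"
    UNVERIFIED = "unverified"  # assessed by fact-checkers but not confirmed
    VERIFIED = "verified"      # later confirmed with additional context
    FALSE = "false"            # assessed as harmful misinformation

def treatment_for(status: ClaimStatus) -> dict:
    """Map a fact-checking outcome to the treatments described above."""
    if status is ClaimStatus.FALSE:
        return {"remove": True, "warning_label": False, "share_prompt": False}
    if status is ClaimStatus.UNVERIFIED:
        # warning label plus a prompt to reconsider before sharing
        return {"remove": False, "warning_label": True, "share_prompt": True}
    return {"remove": False, "warning_label": False, "share_prompt": False}

def on_fact_check_update(new_status: ClaimStatus) -> dict:
    """Partners can update an 'unverified' assessment quickly; re-applying the
    mapping clears the label and share prompt once a claim is verified."""
    return treatment_for(new_status)
```

Under this sketch, a claim that moves from UNVERIFIED to VERIFIED would lose both the warning label and the share prompt once the updated assessment is applied.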

(III) Scaling up our content moderation capabilities

TikTok has Arabic- and Hebrew-speaking moderators in its content moderation teams who review content and assist with Conflict-related translations. As we continue to focus on moderator care, we have also deployed additional well-being resources for our human moderation teams during this time.

(IV) Disruption of CIOs

Disrupting CIO networks has also been high-priority work for us in tackling deceptive behaviour that may cause harm to our community or society at large. As noted above, between July and December 2024, we took action to remove 3 networks (consisting of 132 accounts in total) that were found to be related to the Conflict. We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report, here.

(V) Mitigating the risk of monetisation of harmful misinformation

We have made temporary adjustments to the policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we have added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation. Our existing political ads policy, our Government, Politician, and Political Party Account (GPPPA) labelling, and our safety and civility policies help to mitigate the risk of monetisation of harmful misinformation.

(VI) Deploying search interventions to raise awareness of potential misinformation 

To help raise awareness and protect our users, we have launched search interventions which are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources and also direct them to well-being resources. In H2 2024 we continued to refine this process; in particular, we focused on improving keywords to ensure they remain relevant and effective.
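As a sketch of how a keyword-triggered search intervention of this kind might be wired up, the example below matches query terms against a keyword list; the keyword list, message, and return shape are hypothetical assumptions, not TikTok's actual configuration.

```python
# Illustrative sketch of a keyword-triggered search intervention; the keyword
# list and banner contents are assumptions, not TikTok's implementation.
from typing import Optional

INTERVENTION_KEYWORDS = {"israel", "palestine", "gaza"}  # example non-violating terms

def search_intervention(query: str) -> Optional[dict]:
    """Return an intervention banner for Conflict-related searches, or None."""
    tokens = {token.strip().lower() for token in query.split()}
    if tokens & INTERVENTION_KEYWORDS:
        return {
            "message": "Pause and check your sources before sharing.",
            "links": ["well-being resources", "media literacy tips"],
        }
    return None
```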

(VII) Adding opt-in screens over content that could be shocking or graphic

We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. Opt-in screens help prevent people from unexpectedly viewing shocking or graphic content as we continue to make public interest exceptions for some content. 

In addition, we are committed to engagement with experts across the industry and civil society, such as Tech Against Terrorism and our Advisory Councils, and cooperation with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during these difficult times.  
Policies and Terms and Conditions
Outline any changes to your policies
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Please see the relevant section of TikTok's full PDF report, which is available to download at the top of this page.