TikTok

Report March 2026

TikTok’s mission is to inspire creativity and bring joy. With more than 200 million people across Europe coming to TikTok every month, including 178 million in the EU, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025.

Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Please see the sections below for information about our work under specific commitments, or download the report as a PDF.


Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
War of Aggression by Russia on Ukraine

Threats observed or anticipated at time of reporting:

Since the start of the war of aggression by Russia on Ukraine in February 2022 (the “War in Ukraine”), we have observed false or unverified claims about specific attacks and events, the development or use of weapons, the involvement of particular countries, and military activities such as troop movements. We have also seen misleadingly repurposed footage, including clips from video games, AI-generated content, or unrelated past events presented as current.
While no specific threats related to the War in Ukraine were identified or anticipated in H2 2025, we remained alert to the spread of harmful misinformation and covert influence operations (CIO), and continue working to prevent such content from being shared.


(I) Spread of harmful misinformation

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our Integrity and Authenticity policies, as well as our products, operational practices, and external partnerships with fact-checkers, media literacy organisations, and researchers.
We support our Integrity and Authenticity moderators with detailed misinformation policy guidance, enhanced training, and direct access to our IFCN-accredited fact-checking partners, who help assess the accuracy of content.
We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the War in Ukraine.

(II) CIOs

TikTok’s integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account’s identity, approximate location, relationships, popularity, or purpose. We have specially trained teams on high alert to investigate, disrupt, and remove CIO networks from our platform, and we provide regular updates in our dedicated CIO transparency reports. For advertising-related CIO measures, please refer to Chapter 2.

Israel-Hamas Conflict:
TikTok acknowledges the significance and sensitivity of the Israel–Hamas conflict (referred to as the “Conflict” in this chapter), which has been ongoing for an extended period. We recognise that it continues to be a challenging and deeply felt issue for many people around the world and on TikTok.
TikTok continues to moderate violative content at scale, while respecting and protecting the fundamental rights and freedoms of European users. We remain committed to supporting freedom of expression, upholding our commitment to human rights, and maintaining the safety and integrity of our platform during the Conflict.

Below, we outline some of the main threats, both observed and considered, in relation to the Conflict and the steps taken to address them during the reporting period.

(I) Spread of harmful misinformation

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our Integrity and Authenticity policies, as well as our products, operational practices, and external partnerships with fact-checkers, media literacy organisations, and researchers. We support our Integrity and Authenticity moderators with detailed misinformation policy guidance, enhanced training, and direct access to our IFCN-accredited fact-checking partners, who help assess the accuracy of content. 

We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the Conflict.


(II) CIOs

TikTok’s integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account’s identity, approximate location, relationships, popularity, or purpose. We have specially trained teams on high alert to investigate, disrupt, and remove CIO networks from our platform, and we provide regular updates in our dedicated CIO transparency reports. For advertising-related CIO measures, please refer to Chapter 2.


Mitigations in place
War of Aggression by Russia on Ukraine
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine. 

(I) Upholding TikTok's Community Guidelines
We continue to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.

Automated Review

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behaviour that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.

Some of the methods and technologies that support these efforts include:
  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • External partnerships: We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.


Scaling human expertise

Human insight plays a crucial role in the content moderation process, from our community or external experts, to our own safety professionals. Our teams of human safety experts speak more than 60 languages and dialects, including Russian and Ukrainian. We strive to promote a caring working environment for all TikTok employees, and especially for trust and safety professionals. We use an evidence-based approach to develop programmes and resources that support their psychological well-being, including for Trust & Safety personnel working on mis- and disinformation.

In H2 2025, we removed 1,352 videos relating to the War in Ukraine that violated our misinformation policies.

(II) Leveraging our Global Fact-Checking Program

We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organisations that support over 60 languages on TikTok, including Russian, Ukrainian, and Belarusian. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that have been reviewed by fact-checkers but cannot be verified—referred to as “unverified content.” Recognising that the situation around the War in Ukraine can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.

(III) Disruption of CIOs

Disrupting CIO networks targeting discourse related to the War in Ukraine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.


(IV) Mitigating the risk of monetisation of harmful misinformation

Political advertising has been prohibited on our platform for many years, but as an additional safeguard against profiteering from the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We also suspended TikTok in the Donetsk and Luhansk regions.

(V) Localised media literacy campaigns

Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. In close collaboration with our fact-checking partners, we run 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine, in Austria, Bosnia, Bulgaria, Czechia, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with those fact-checking partners, that help them identify misinformation and prevent its spread on the platform.

(VI) Adding opt-in screens over content that could be shocking or graphic
We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.

(VII) External engagement
We are committed to engaging with experts across the industry and civil society, and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.

Israel-Hamas Conflict:
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the Conflict. 

(I) Upholding TikTok's Community Guidelines

We continue to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, supports the attacks, or mocks victims of the violence. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.


Automated Review

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behaviour that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.

Some of the methods and technologies that support these efforts include:

  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • External partnerships: We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.

Scaling human expertise

Human insight plays a crucial role in the content moderation process, from our community or external experts, to our own safety professionals. TikTok has Arabic- and Hebrew-speaking content moderators who review content and assist with Conflict-related translations. We continue to focus on moderator care through the provision of internal training and well-being resources for T&S personnel working on mis- and disinformation.

In H2 2025, we removed 3,901 videos relating to the Conflict that violated our misinformation policies.

(II) Leveraging our Global Fact-Checking Program

We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organisations that support over 60 languages on TikTok, including Arabic and Hebrew. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that have been reviewed by fact-checkers but cannot be verified—referred to as “unverified content.” Recognising that the situation around the Conflict can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.

(III) Disruption of CIOs

Disrupting CIO networks targeting discourse related to Israel and Palestine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.

(IV) Deploying search interventions to raise awareness of potential misinformation

To help raise awareness and to protect our users, we provide in-app search interventions that are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources.

(V) Adding opt-in screens over content that could be shocking or graphic

We recognise that some content that may otherwise break our rules can be of public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.
 
(VI) External engagement

We are committed to engaging with experts across the industry and civil society, such as Tech Against Terrorism, and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.

Policies and Terms and Conditions
Outline any changes to your policies
Russia-Ukraine:
In a crisis, we keep our policies under review and ensure that our moderation teams have supplementary guidance.

Israel-Hamas:

During the reporting period, no Conflict-specific policy changes were implemented.


Policy - 51.1.1
Russia-Ukraine:
No update during the reporting period.

Israel-Hamas:
No update during the reporting period.