War of Aggression by Russia on Ukraine
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.
(I) Upholding TikTok's Community Guidelines
We continue to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.
Automated Review
We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behavior that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.
If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.
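To make this routing concrete, below is a minimal sketch of how such a decision flow can work. The thresholds, field names, and policy labels are illustrative assumptions, not TikTok's actual system; the one property the sketch takes from the text is that automated removal is reserved for the most clear-cut cases, while ambiguous signals are escalated to human review.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    AUTO_REMOVE = "auto_remove"    # clear-cut violation: removed automatically
    HUMAN_REVIEW = "human_review"  # potential violation: escalated to moderators
    PUBLISH = "publish"            # no strong signal: content goes live

@dataclass
class ModerationSignal:
    policy: str        # e.g. "violence", "hate", "harmful_misinformation"
    confidence: float  # model confidence in [0, 1]

def route(signals: list[ModerationSignal],
          auto_remove_threshold: float = 0.98,
          review_threshold: float = 0.60) -> Decision:
    """Route newly uploaded content while it is visible only to the uploader.

    Automated removal requires very high confidence (clear-cut violations);
    anything ambiguous is flagged for human review instead.
    """
    top = max(signals, key=lambda s: s.confidence, default=None)
    if top is None or top.confidence < review_threshold:
        return Decision.PUBLISH
    if top.confidence >= auto_remove_threshold:
        return Decision.AUTO_REMOVE
    return Decision.HUMAN_REVIEW

# Example: an ambiguous hate signal is escalated rather than auto-removed.
print(route([ModerationSignal("hate", 0.74)]))  # Decision.HUMAN_REVIEW
```

The key design choice the sketch illustrates is asymmetry: the bar for automated removal sits far above the bar for escalating to a human moderator.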
Some of the methods and technologies that support these efforts include:
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but also other types of content that share key contextual similarities and may require additional review (a minimal sketch of this approach follows this list).
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
- We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.
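As an illustration of the similarity-based approach above, the sketch below compares content embeddings against a bank of items already found violative. The embeddings, bank entries, and threshold are hypothetical; production systems typically rely on perceptual hashes or learned embeddings at much higher dimensionality.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "bank" of embeddings for content already found violative.
violative_bank = {
    "banned_clip_001": [0.9, 0.1, 0.3],
    "banned_clip_002": [0.2, 0.8, 0.5],
}

def flag_similar(candidate: list[float], threshold: float = 0.95) -> list[str]:
    """Return bank entries the candidate is near-identical to.

    Matches above the threshold would be queued for additional review,
    catching re-uploads and lightly edited variants of known violations.
    """
    return [name for name, emb in violative_bank.items()
            if cosine(candidate, emb) >= threshold]

print(flag_similar([0.88, 0.12, 0.31]))  # ['banned_clip_001']: a close variant
```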
Scaling human expertise
Human insight plays a crucial role in the content moderation process, from our community and external experts to our own safety professionals. Our teams of human safety experts speak more than 60 languages and dialects, including Russian and Ukrainian. We strive to promote a caring working environment for all TikTok employees, and especially for trust and safety professionals. We use an evidence-based approach to develop programmes and resources that support their psychological well-being, including for Trust & Safety personnel working on mis- and disinformation.
In H2 2025, we removed 1,352 videos related to the War in Ukraine that violated our misinformation policies.
(II) Leveraging our Global Fact-Checking Program
We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organisations that support over 60 languages on TikTok, including Russian, Ukrainian, and Belarusian. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.
To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that has been reviewed by fact-checkers but cannot be verified (referred to as “unverified content”). Recognising that the situation around the War in Ukraine can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.
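As a minimal sketch of this update process (the state names and fields are our own illustrative assumptions, not TikTok's internal schema), an unverified claim can be modeled as a state that carries the warning label until a fact-checking partner reports a verdict:

```python
from dataclasses import dataclass

@dataclass
class ClaimStatus:
    claim_id: str
    state: str = "unverified"  # "unverified" | "verified" | "clarified"

    @property
    def show_warning_label(self) -> bool:
        # Unverified claims carry a label and a share-reconsideration prompt.
        return self.state == "unverified"

def apply_partner_update(status: ClaimStatus, verdict: str) -> ClaimStatus:
    """A fact-checking partner later verifies the claim or adds context,
    which clears the warning label and the sharing prompt."""
    if verdict in ("verified", "clarified"):
        status.state = verdict
    return status

claim = ClaimStatus("claim-042")
print(claim.show_warning_label)          # True: label and share prompt shown
apply_partner_update(claim, "verified")
print(claim.show_warning_label)          # False: label removed after update
```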
(III) Disruption of covert influence operations (CIOs)
Disrupting CIO networks targeting discourse related to the War in Ukraine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.
(IV) Mitigating the risk of monetisation of harmful misinformation
Political advertising has been prohibited on our platform for many years, but as an additional safeguard against the risk of profiteering from the War in Ukraine, we prohibit Russia-based advertisers from outbound targeting of EU markets. We also suspended TikTok in the Donetsk and Luhansk regions.
(V) Localised media literacy campaigns
Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We run 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bosnia, Bulgaria, Czechia, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine, developed in close collaboration with our fact-checking partners. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared with those partners, that help them identify misinformation and prevent its spread on the platform.
(VI) Adding opt-in screens over content that could be shocking or graphic
We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.
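A minimal sketch of such a viewing gate, with hypothetical field names, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    graphic: bool = False          # flagged as shocking or graphic
    public_interest: bool = False  # kept up under a public interest exception

def render(video: Video, viewer_opted_in: bool) -> str:
    """Show an opt-in screen before graphic public-interest content,
    so nobody views it unexpectedly."""
    if video.graphic and not viewer_opted_in:
        return "opt-in screen: this video may be shocking or graphic"
    return f"playing {video.video_id}"

clip = Video("documentary-clip", graphic=True, public_interest=True)
print(render(clip, viewer_opted_in=False))  # gate shown first
print(render(clip, viewer_opted_in=True))   # plays only after explicit opt-in
```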
(VII) External engagement
We are committed to engaging with experts across the industry and civil society, and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.
Israel-Hamas Conflict
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the Conflict.
(I) Upholding TikTok's Community Guidelines
We continue to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, or that otherwise supports the attacks or mocks victims affected by the violence. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals are not allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.
Automated Review
We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behavior that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.
If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.
Some of the methods and technologies that support these efforts include:
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists (a minimal sketch of this matching approach follows this list).
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but also other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
- We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.
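To illustrate the keyword-list matching mentioned in the text-based bullet above, here is a minimal sketch that normalises common character substitutions before matching. The substitution map and placeholder keywords are assumptions for illustration, not TikTok's actual lists.

```python
import re

# Hypothetical character substitutions used to evade keyword filters.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s"}
)

# Hypothetical foundational keyword list (placeholder terms only).
KEYWORDS = {"bannedterm", "violatingphrase"}

def normalise(text: str) -> str:
    """Lowercase, undo common substitutions, and strip separators
    so spaced or obfuscated variants still match."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]", "", text)

def matches_keyword(text: str) -> bool:
    flat = normalise(text)
    return any(keyword in flat for keyword in KEYWORDS)

print(matches_keyword("B4nned-T3rm"))    # True: obfuscated variant caught
print(matches_keyword("harmless post"))  # False
```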
Scaling human expertise
Human insight plays a crucial role in the content moderation process, from our community and external experts to our own safety professionals. TikTok has Arabic- and Hebrew-speaking content moderators who review content and assist with Conflict-related translations. We continue to focus on moderator care through the provision of internal training and well-being resources for Trust & Safety personnel working on mis- and disinformation.
In H2 2025, we removed 3,901 videos related to the Conflict that violated our misinformation policies.
(II) Leveraging our Global Fact-Checking Program
We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organisations that support over 60 languages on TikTok, including Arabic and Hebrew. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.
To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that has been reviewed by fact-checkers but cannot be verified (referred to as “unverified content”). Recognising that the situation around the Conflict can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.
(III) Disruption of CIOs
Disrupting CIO networks targeting discourse related to Israel and Palestine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.
(IV) Deploying search interventions to raise awareness of potential misinformation
To help raise awareness and to protect our users, we provide in-app search interventions that are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources.
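A minimal sketch of such a trigger, using the example terms above (the message text and matching logic are illustrative assumptions):

```python
# Hypothetical list of non-violating, conflict-related search terms
# that trigger a reminder rather than any enforcement action.
INTERVENTION_TERMS = {"israel", "palestine"}

def search_notice(query: str) -> str | None:
    """Return a reminder banner for conflict-related searches, or None.

    The search itself still runs; the intervention only adds a prompt
    encouraging users to pause and check their sources.
    """
    tokens = set(query.lower().split())
    if tokens & INTERVENTION_TERMS:
        return "Take a moment to check your sources before sharing."
    return None

print(search_notice("Israel news today"))  # banner shown alongside results
print(search_notice("cat videos"))         # None: no intervention
```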
(V) Adding opt-in screens over content that could be shocking or graphic
We recognise that some content that may otherwise break our rules can be of public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.
(VI) External engagement
We are committed to engaging with experts across the industry and civil society, such as Tech Against Terrorism, and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.