War of Aggression by Russia on Ukraine
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.
(I) Investment in our fact-checking programme
We employ a layered approach to detecting harmful misinformation that violates our Community Guidelines.
Working closely with our fact-checking partners is a crucial part of our approach to enforcing our harmful misinformation policies on our platform. Our fact-checking programme includes coverage of content in Russian, Ukrainian and Belarusian. We also partner with Reuters, which is dedicated to fact-checking content in Russian and Ukrainian.
We also collaborate with some of our fact-checking partners to receive advance warning of emerging misinformation narratives, which has enabled proactive responses to high-harm trends and ensured that our moderation teams have up-to-date guidance.
(II) Disruption of covert influence operations (CIOs)
As set out above, disrupting CIO networks has been high-priority work for us in the context of the crisis, and we published a list of the networks we disrupted in the relevant period within our most recently published transparency report here.
Between July and December 2024, we took action to remove 9 networks (consisting of 20,002 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the War in Ukraine while also misleading individuals, our community, or our systems. We publish all of the CIO networks we identify and remove within our dedicated CIO transparency report here.
Countering influence operations is an industry-wide effort, in part because these operations often spread their activity across multiple platforms. We regularly consult with third-party experts, including our global Content and Safety Advisory Councils, whose guidance helps us improve our policies and understand regional context.
(III) Restricting access to content from state-affiliated media
Since the early stages of the war, we have restricted access to content from a number of Russian state-affiliated media entities in the EU, Iceland and Liechtenstein. Our state-affiliated media policy is used to help users understand the context of certain content and to help them evaluate the content they consume on our platform. Labels have since been applied to content posted by the state-affiliated accounts of such entities in Russia, Ukraine and Belarus.
We continue to detect and label state-controlled media accounts globally, in accordance with our state-controlled media label policy.
(IV) Mitigating the risk of monetisation of harmful misinformation
Political advertising has been prohibited on our platform for many years, but as an additional measure to mitigate the risk of monetisation arising from the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We also suspended TikTok in the Donetsk and Luhansk regions and removed Livestream videos originating in Ukraine from the For You feed of users located in the EU. In addition, the ability to add new video content or Livestream videos to the platform in Russia remains suspended.
(V) Launching localised media literacy campaigns
Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. We have thirteen localised media literacy campaigns addressing disinformation related to the War in Ukraine, developed in close collaboration with our fact-checking partners, in Austria, Bulgaria, Czech Republic, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia and Slovenia. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared with those partners, to help them identify misinformation and prevent its spread on the platform. We have also partnered with a local Ukrainian fact-checking organisation, VoxCheck, with the aim of launching a permanent media literacy campaign in Ukraine.
Israel-Hamas Conflict
We are continually working hard to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis. As part of our crisis management process, we launched a command centre that brings together key members of our global team of thousands of safety professionals, representing a range of expertise and regional perspectives, so that we remain agile in responding to this fast-evolving crisis. Since the beginning of the Conflict, we have been:
(I) Upholding TikTok's Community Guidelines
Rolling out refreshed Community Guidelines and beginning to enforce expanded hate speech and hateful behavior policies. These policies aim to better address implicit or indirect hate speech and create a safer and more civil environment for everyone, and they add to our long-standing policies against antisemitism and other hateful ideologies. We also updated our hate speech policy to recognize content that uses "Zionist" as a proxy for a protected attribute when it is not used to refer to a political ideology but instead as a proxy for Jewish or Israeli identity. This policy was implemented early in 2024, after we observed a rise in hateful uses of the word.
Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, otherwise supports the attacks, or mocks victims affected by the violence. If content is posted depicting a person who has been taken hostage, we will do everything we can to protect their dignity and remove content that breaks our rules. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals are not allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. We have removed 8,765 videos in relation to the Conflict which violated our misinformation policies.
Evolving our proactive automated detection systems in real time as we identify new threats; this enables us to automatically detect and remove graphic and violent content so that neither our moderators nor our community members are exposed to it.
(II) Leveraging our Fact-Checking Program
We employ a layered approach to detecting harmful misinformation which violates our Community Guidelines, and our global fact-checking program is a critical part of this. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of harmful and difficult-to-verify claims.
To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content related to unfolding or emergency events that has been assessed by our fact-checkers but cannot be verified as accurate (i.e., 'unverified content'). Mindful of how evolving events may impact the assessment of sensitive Conflict-related claims from day to day, we have implemented a process that allows our fact-checking partners to update us quickly if claims previously assessed as 'unverified' become verified with additional context and/or at a later stage.
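For illustration only, the status-update flow described above could be modelled roughly as follows; the names and statuses here (ClaimStatus, FactCheckRecord, the action strings) are assumptions for this sketch, not TikTok's actual systems.

```python
# Illustrative sketch of the 'unverified content' flow described above.
# All names are assumptions, not TikTok's actual implementation.
from dataclasses import dataclass
from enum import Enum, auto


class ClaimStatus(Enum):
    VERIFIED_ACCURATE = auto()  # fact-checkers confirmed the claim
    VERIFIED_FALSE = auto()     # fact-checkers debunked the claim
    UNVERIFIED = auto()         # assessed, but cannot (yet) be verified


@dataclass
class FactCheckRecord:
    claim_id: str
    status: ClaimStatus


def action_for(record: FactCheckRecord) -> str:
    """Map a fact-check assessment to a platform action."""
    if record.status is ClaimStatus.VERIFIED_FALSE:
        return "remove"            # harmful misinformation policy violation
    if record.status is ClaimStatus.UNVERIFIED:
        return "label_and_prompt"  # warning label + share-reconsideration prompt
    return "no_action"             # verified as accurate


def on_partner_update(record: FactCheckRecord, new_status: ClaimStatus) -> str:
    """Partners can quickly upgrade a previously 'unverified' assessment."""
    record.status = new_status
    return action_for(record)      # e.g. the warning label is lifted once verified
```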
(III) Scaling up our content moderation capabilities
TikTok has Arabic- and Hebrew-speaking moderators in its content moderation teams who review content and assist with Conflict-related translations. As we continue to focus on moderator care, we have also deployed additional well-being resources for our human moderation teams during this time.
(IV) Disruption of CIOs
Disrupting CIO networks has also been high-priority work for us in tackling deceptive behaviour that may cause harm to our community or society at large. As noted above, between July and December 2024, we took action to remove 3 networks (consisting of 132 accounts in total) that were found to be related to the Conflict. We now publish all of the CIO networks we identify and remove, including those relating to the Conflict, within our dedicated CIO transparency report here.
(V) Mitigating the risk of monetisation of harmful misinformation
Making temporary adjustments to policies that govern TikTok features in an effort to proactively prevent them from being used for hateful or violent behaviour in the region. For example, we have added additional restrictions on LIVE eligibility as a temporary measure given the heightened safety risk in the context of the current hostage situation. Our existing political ads policy, Government, Politician, and Political Party Account (GPPPA) labelling, and safety and civility policies help to mitigate the risk of monetisation of harmful misinformation.
(VI) Deploying search interventions to raise awareness of potential misinformation
To help raise awareness and to protect our users, we have launched search interventions which are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources, and direct them to well-being resources. In H2 we continued to refine this process; in particular, we focused on improving keywords to ensure they are relevant and effective.
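For illustration only, the trigger logic described above could be modelled along the following lines; the keyword list, banner payload, URL and function names are assumptions for this sketch, not TikTok's actual implementation.

```python
# Illustrative sketch of a keyword-triggered search intervention.
# Keywords, payload and names are assumptions, not TikTok's actual system.
from typing import Optional

# Non-violating, Conflict-related terms that trigger a reminder (illustrative).
INTERVENTION_KEYWORDS = {"israel", "palestine", "gaza"}

SAFETY_BANNER = {
    "message": "Pause and check your sources before sharing.",
    "resources": ["https://www.tiktok.com/safety"],  # well-being resources (example link)
}


def search_intervention(query: str) -> Optional[dict]:
    """Return a reminder banner if the query matches an intervention keyword."""
    tokens = set(query.lower().split())
    if tokens & INTERVENTION_KEYWORDS:
        return SAFETY_BANNER
    return None  # ordinary searches show no banner
```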
(VII) Adding opt-in screens over content that could be shocking or graphic
We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. Opt-in screens help prevent people from unexpectedly viewing shocking or graphic content as we continue to make public interest exceptions for some content.
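For illustration only, the interaction between opt-in screens and public interest exceptions could be sketched as follows; the fields and decision values are assumptions made for this sketch.

```python
# Illustrative sketch of opt-in screening for public-interest graphic content.
# Field and function names are assumptions, not TikTok's actual implementation.
from dataclasses import dataclass


@dataclass
class Video:
    graphic: bool          # flagged as shocking or graphic
    public_interest: bool  # documentary, educational, or counterspeech value


def render_decision(video: Video) -> str:
    """Decide how a video is presented to viewers."""
    if video.graphic and video.public_interest:
        return "show_behind_opt_in_screen"  # viewer must opt in before seeing it
    if video.graphic:
        return "remove"                     # graphic with no public-interest value
    return "show"
```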