TikTok

Report March 2025

Submitted
TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate on a shared set of facts when it comes to topics that impact people’s safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals - this includes making sure our users have a trustworthy experience on TikTok. As part of creating a trustworthy environment, transparency is essential to enable online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our report, which can be downloaded as a PDF.


Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment:
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models. 
  • Participated as members of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
    • Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
    • Participated in the EC Technical Roundtable on data access in December 2024.
  • Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, and attendance at external events where team members share their expertise and continue their professional learning. For example:
    • In the lead-up to certain elections, we invite suitably qualified external local/regional experts to present as part of our Election Speaker Series. Sharing their market expertise with our internal teams gives us insights into areas that could potentially amount to election manipulation and informs our approach to the upcoming election.
    • In June 2024, 12 members of our Trust & Safety team (including leaders of our fact-checking program) attended GlobalFact11 and participated in an on-the-record mainstage presentation answering questions about our misinformation strategy and partnerships with professional fact-checkers.
  • Continued to participate in, and co-chair, the working group on Elections.
  • In October, we sponsored, attended, and presented at Disinfo24, the annual EU DisinfoLab Conference, in Riga.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further developments in the next COPD report.
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our policies, products, practices and external partnerships with fact-checkers, media literacy bodies, and researchers.
 
(I) Removal of violating content or accounts. To reduce potential harm, we aim to remove content or accounts that violate our Community Guidelines (CGs), including our Integrity and Authenticity (I&A) policies, before they are viewed or shared by other people. We detect and take action on this content by using a combination of automation and human moderation.
  • Automated Review: We place considerable emphasis on proactive detection to remove violative content. Content uploaded to the platform is typically first reviewed by our automated moderation technology, which looks at a variety of signals across content, including keywords, images, captions, and audio, to identify violations. We work with various external experts, such as our fact-checking partners, to inform our keyword lists. If our automated moderation technology identifies content that potentially violates our policies, the content is either automatically removed from the platform or flagged for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied only where violations are the most clear-cut. We also carry out targeted sweeps of certain types of violative content, including harmful misinformation, where we have identified specific risks or where our fact-checking partners or other experts have alerted us to them.
  • Human Moderation: While some misinformation can be actioned through technology alone (for example, repetitions of previously debunked content), misinformation evolves quickly and is highly nuanced. That is why we have misinformation moderators with enhanced training and access to tools such as our global repository of previously fact-checked claims from IFCN-accredited fact-checking partners, which helps them assess the accuracy of content. We also have teams on the ground who partner with experts to prioritise local context and nuance. We may also issue guidance to our moderation teams to help them more easily spot and take swift action on violating content. Human moderation also occurs when a video gains popularity or has been reported. Community members can report violations in-app and on our website, and our fact-checking partners and other stakeholders can report potentially violating content to us directly. A simplified, illustrative sketch of this two-stage review flow is shown after this list.
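
The following is a minimal, illustrative sketch of a two-stage review flow of the kind described above, in which the most clear-cut violations are removed automatically and less clear-cut cases are routed to human moderators. It is not TikTok's actual system: the names, thresholds, and example keywords are hypothetical.

```python
# Illustrative sketch only: a simplified two-stage moderation flow.
# All names, thresholds, and keywords are hypothetical, not TikTok's actual system.
from dataclasses import dataclass, field

@dataclass
class Upload:
    video_id: str
    keywords: list[str] = field(default_factory=list)  # e.g. extracted from caption/audio
    violation_score: float = 0.0                        # 0-1 output of an automated classifier

AUTO_REMOVE_THRESHOLD = 0.95   # automated removal only for the most clear-cut violations
HUMAN_REVIEW_THRESHOLD = 0.60  # potential violations are flagged for human moderators

# Keyword lists informed by external experts such as fact-checking partners (hypothetical)
FLAGGED_KEYWORDS = {"miracle cure", "ballots destroyed"}

def route_upload(upload: Upload) -> str:
    """Return the moderation outcome for a newly uploaded video."""
    keyword_hit = any(k in FLAGGED_KEYWORDS for k in upload.keywords)
    if upload.violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"
    if keyword_hit or upload.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "queued_for_human_review"
    return "published"

print(route_upload(Upload("v1", keywords=["miracle cure"], violation_score=0.7)))
# -> queued_for_human_review
```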

(II) Safety in our recommendations. In addition to removing content that clearly violates our CGs, we have a number of safeguards in place to ensure the For You feed (as the primary access point for discovering original and entertaining content on the platform) has safety built in.

  1. For content that does not violate our CGs but may negatively impact the authenticity of the platform, we reduce its prominence on the For You feed and/or label it. The types of misinformation we may make ineligible for the For You feed are made clear to users here: general conspiracy theories, unverified information related to an emergency or unfolding event, and potential high-harm misinformation that is undergoing a fact-check. We also label accounts and content of state-affiliated media entities to empower users to consider the sources of information. Our moderators take additional precautions to review videos as they rise in popularity to reduce the likelihood of inappropriate content entering our recommender system.
  2. Providing access to authoritative information is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centres with informative resources from authoritative third parties in response to global or local events, adding public service announcements on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information.

(III) Safety by Design. Within our Trust and Safety Product and Policy teams, we have subject matter experts dedicated to integrity and authenticity. When we develop a new feature or policy, these teams work closely with external partners to ensure we are building safety into TikTok by design and reflecting industry best practice. For example:

  • We collaborate with Irrational Labs to develop and implement specialised prompts that encourage users to pause and consider before sharing unverified content (as outlined in QRE 21.3.1).
  • Yad Vashem created an enrichment program on the Holocaust for our Trust and Safety team. The five-week program aimed to give our team a deeper understanding of the Holocaust, its lessons, and misinformation related to antisemitism and hatred.
  • We worked with local/regional experts through our Election Speaker Series to ensure their insights and expertise informed our internal teams ahead of particular elections throughout 2024.
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
The For You feed is the interface users first see when they open TikTok. It is central to the TikTok experience and is where most of our users spend their time exploring the platform. User interactions act as signals that help the recommender system predict content a user is more likely to be interested in, as well as content they might be less interested in and may prefer to skip. User interactions across TikTok can impact how the system ranks and serves content.
These are some examples of information that may influence the content recommended in a user's For You feed:
  • User interactions: Content a user likes, shares, comments on, and watches in full or skips, as well as accounts of followers they follow back.
  • Content information: Sounds, hashtags, number of views, and the country in which the content was published.
  • User information: Device settings, language preference, location, time zone and day, and device type.

For most users, interaction signals, which may include the time spent watching a video, are generally weighted more heavily than other signals.
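
As a purely illustrative sketch, and not TikTok's actual ranking model, the weighting described above can be pictured as a weighted sum over normalised signals; the signal names and weights below are hypothetical, chosen only so that interaction signals carry more weight than content or device signals.

```python
# Illustrative sketch only: a toy weighted-sum interest score over the signal
# families described above. Signal names and weights are hypothetical.
SIGNAL_WEIGHTS = {
    "watch_time_ratio": 0.40,   # user interaction: share of the video watched
    "liked": 0.25,              # user interaction
    "shared": 0.15,             # user interaction
    "hashtag_affinity": 0.12,   # content information
    "language_match": 0.08,     # user information (device/language settings)
}

def predicted_interest(signals: dict[str, float]) -> float:
    """Weighted sum of normalised (0-1) signals for one candidate video."""
    return sum(weight * signals.get(name, 0.0) for name, weight in SIGNAL_WEIGHTS.items())

candidates = {
    "video_a": {"watch_time_ratio": 0.9, "liked": 1.0, "hashtag_affinity": 0.3},
    "video_b": {"watch_time_ratio": 0.4, "shared": 1.0, "language_match": 1.0},
}
ranked = sorted(candidates, key=lambda v: predicted_interest(candidates[v]), reverse=True)
print(ranked)  # video_a ranks first because interaction signals dominate the score
```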

Aside from the signals users provide by how they interact with content on TikTok, there are additional tools we have built to help them better control what kind of content is recommended to them.

  • Not interested: Users can long-press on the video in their For You feed and select ‘Not interested’ from the pop-up menu. This will let us know they are not interested in this type of content and we will limit how much of that content we recommend in their feed.
  • Video keyword filters: Users can add keywords, both words and hashtags, that they would like to filter from their For You feed (a simple illustration of this filtering is sketched after this list).
  • For You refresh: To help users discover new content, they can refresh their For You feed, enabling them to explore entirely new sides of TikTok.
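
A minimal sketch of how such keyword filtering could work is shown below. It assumes a simple caption and hashtag match and is illustrative only, with hypothetical function and field names rather than TikTok's actual implementation.

```python
# Illustrative sketch only: dropping For You feed candidates that match a user's
# keyword filters. Function and field names are hypothetical.
def apply_keyword_filters(candidates: list[dict], filtered_terms: set[str]) -> list[dict]:
    """Remove candidate videos whose caption or hashtags contain a filtered term."""
    terms = {t.lower().lstrip("#") for t in filtered_terms}

    def matches(video: dict) -> bool:
        caption = video["caption"].lower()
        tags = {t.lower().lstrip("#") for t in video["hashtags"]}
        return any(term in caption or term in tags for term in terms)

    return [v for v in candidates if not matches(v)]

feed = [
    {"caption": "Match highlights", "hashtags": ["#football"]},
    {"caption": "Quiet morning routine", "hashtags": ["#morning"]},
]
print(apply_keyword_filters(feed, {"#football"}))  # keeps only the second video
```
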
We share more information about our recommender systems in our Help Center and Transparency Center and below in our response to QRE 19.1.1.
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
We take action to prevent and mitigate the spread of inaccurate, misleading, or false content that may cause significant harm to individuals or the public at large. We do this by removing content and accounts that violate our rules, investing in media literacy and connecting our community to authoritative information, and partnering with external experts. Our I&A policies make clear that we do not allow activities that may undermine the integrity of our platform or the authenticity of our users. We remove content or accounts that involve misleading information that causes significant harm or, in certain circumstances, reduce the prominence of content. The types of misinformation we may make ineligible for the For You feed are set out in our Community Guidelines.

  • Misinformation
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
    • Moderate-harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
    • Unverified claims related to an emergency or unfolding event.
    • Potential high-harm misinformation while it is undergoing a fact-checking review.
  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.

To enforce our CGs at scale, we use a combination of automated review and human moderation. While some misinformation can be actioned through technology alone (for example, repetitions of previously debunked content), misinformation evolves quickly and is highly nuanced. Assessing harmful misinformation requires additional context and review by our misinformation moderators, who have the enhanced training, expertise, and tools to identify such content, including our global repository of previously fact-checked claims from IFCN-accredited fact-checking partners and direct access to our fact-checking partners where appropriate.

Our independent fact-checking partners do not moderate content directly on TikTok; instead, they assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. We incorporate fact-checker input into our broader content moderation efforts through:

  • Proactive insight reports that flag new and evolving claims they’re seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
  • A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions (a simple illustration of how such a repository could be consulted is sketched after this list).
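
The sketch below is purely illustrative and assumes a naive similarity match; the repository entries, verdicts, and function names are hypothetical and do not represent TikTok's or its fact-checking partners' actual tooling.

```python
# Illustrative sketch only: surfacing the closest previously fact-checked claim
# and its verdict for a moderator. Data and names are hypothetical; matching uses
# a naive string-similarity ratio rather than any production technique.
from difflib import SequenceMatcher

FACT_CHECK_REPOSITORY = [
    # (previously fact-checked claim, verdict from an independent fact-checking partner)
    ("drinking saltwater cures the flu", "false"),
    ("polling stations close at 20:00 nationwide", "true"),
]

def lookup_claim(claim: str, threshold: float = 0.6):
    """Return the closest previously fact-checked claim and its verdict, if similar enough."""
    def similarity(entry):
        return SequenceMatcher(None, claim.lower(), entry[0]).ratio()
    best = max(FACT_CHECK_REPOSITORY, key=similarity)
    return best if similarity(best) >= threshold else None

print(lookup_claim("saltwater drink cures flu"))  # surfaces the recorded 'false' verdict
```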

Working with our network of independent fact-checking organisations enables TikTok to identify and take action on misinformation and to connect our community to authoritative information around important events. This is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centres with resources from authoritative third parties in response to global or local events, adding public service announcements (PSAs) on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information.

We are also committed to civic and election integrity and to mitigating the spread of false or misleading content about an electoral or civic process. We work with national electoral commissions, media literacy bodies, and civil society organisations to ensure we provide our community with accurate, up-to-date information about an election through our in-app election information centres, election guides, search interventions, and content labels.


SLI 18.1.1
Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.
Methodology of data measurement:

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the pop-up. This metric is based on the approximate location of the users who engaged with these tools.
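
As a minimal sketch of how a per-country share cancel rate like the figures in the table below could be computed: the sketch assumes hypothetical event counts, and the numbers and field names are illustrative, not the underlying data for this report.

```python
# Illustrative sketch only: computing a share cancel rate per country from
# hypothetical counts of warning pop-ups shown and share attempts cancelled.
events = {
    # country: (pop_ups_shown, shares_cancelled) -- hypothetical counts
    "Austria": (10_000, 3_180),
    "France": (50_000, 18_550),
}

def share_cancel_rate(shown: int, cancelled: int) -> float:
    """Percentage of users who did not share the video after seeing the pop-up."""
    return round(100.0 * cancelled / shown, 1) if shown else 0.0

for country, (shown, cancelled) in events.items():
    print(f"{country}: {share_cancel_rate(shown, cancelled)}%")
```
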
Country Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop-up)
Austria 31.80%
Belgium 33.80%
Bulgaria 34.00%
Croatia 33.70%
Cyprus 32.90%
Czech Republic 29.50%
Denmark 30.20%
Estonia 28.50%
Finland 27.20%
France 37.10%
Germany 30.10%
Greece 32.10%
Hungary 31.40%
Ireland 29.60%
Italy 37.70%
Latvia 30.90%
Lithuania 30.80%
Luxembourg 33.60%
Malta 35.40%
Netherlands 27.80%
Poland 28.90%
Portugal 33.10%
Romania 30.10%
Slovakia 28.90%
Slovenia 33.30%
Spain 34.10%
Sweden 29.40%
Iceland 27.90%
Liechtenstein 19.60%
Norway 25.40%
Total EU 32.20%
Total EEA 32.10%