TikTok

Report September 2025

TikTok’s mission is to inspire creativity and bring joy. With a global community of more than a billion users, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H1 2025, more than 97% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025. This includes a range of measures to protect users, detailed on our European Online Safety Hub. Our actions under the Code demonstrate TikTok's strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Our full executive summary is available in the downloadable version of our report.

Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment:
Measure 18.1, Measure 18.2, Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Continued to improve the accuracy and overall coverage of our machine learning detection models.
  • Began testing large language models (LLMs) to further support proactive moderation at scale. Because LLMs can comprehend human language and perform highly specific, complex tasks, they help us moderate nuanced areas like misinformation by extracting specific misinformation "claims" from videos, which moderators can then assess directly or route to our fact-checking partners (a simplified sketch of this flow follows this list).
  • Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, as well as attendance at external events where team members share their expertise and continue their professional learning. For example:
  • In the lead-up to certain elections, we invite suitably qualified external local/regional experts to brief our internal teams as part of our Election Speaker Series. Their market expertise gives us insights that help us better understand activity that could amount to election manipulation, and informs our approach to the upcoming election. During the reporting period, we ran 7 Election Speaker Series sessions: 3 in EU Member States, and 4 covering Albania, Belarus, Greenland, and Kosovo.
    • Albania: Internews Kosova (Kallxo)
    • Belarus: Belarusian Investigative Center
    • Germany: Deutsche Presse-Agentur (dpa)
    • Greenland: Logically Facts
    • Kosovo: Internews Kosova (Kallxo)
    • Poland: Demagog
    • Portugal: Poligrafo
  • In June 2025, 14 members of our Trust & Safety team (including leaders of our fact-checking programme) attended GlobalFact 12. In addition to a breakout session on Footnotes, TikTok hosted a networking event with more than 80 people from our partner organisations, including staff from fact-checking partners, media literacy organisations, and Safety Advisory Councils.
  • TikTok teams and personnel also regularly participate in research-focused events. In H1 2025, we presented at the Political Tech Summit in Berlin (January), hosted Research Tools demos in Warsaw (April) and Prague (June), presented at the GNET Annual Conference (May) and the Warsaw Women in Tech Summit (June), briefed a small group of academic researchers from UCD (Dublin) in June, and attended the ICWSM conference in Copenhagen (June).
  • Continued to participate in, and co-chair, the working group on Elections.
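
The claim-extraction workflow described above can be pictured as a small triage pipeline. The sketch below is illustrative only and is not TikTok's implementation: `extract_claims` is a hypothetical stand-in for the LLM step, and the confidence-threshold routing rule is an assumption made for the example.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A discrete, checkable statement extracted from a video."""
    text: str
    confidence: float  # model confidence that the statement is a checkable claim


def extract_claims(transcript: str) -> list[Claim]:
    # Hypothetical stand-in for the LLM step: in practice a prompted LLM
    # would identify the specific "claims" a video makes. A fixed example
    # is returned here so the sketch runs end to end.
    return [Claim(text="Postal voting closes a week before election day.",
                  confidence=0.62)]


def route(claim: Claim, threshold: float = 0.8) -> str:
    # Assumed routing rule (not from the report): confidently extracted
    # claims go to moderators for direct assessment; borderline ones are
    # routed to external fact-checking partners.
    return "moderator_review" if claim.confidence >= threshold else "fact_checking_partner"


if __name__ == "__main__":
    for claim in extract_claims("...video transcript..."):
        print(f"{claim.text!r} -> {route(claim)}")
```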

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
N/A
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
N/A
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
N/A
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
N/A