TikTok

Report March 2026

TikTok’s mission is to inspire creativity and bring joy. With more than 200 million people across Europe coming to TikTok every month, including 178 million in the EU, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025.

Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Please see the sections below for information about our work under specific commitments, or download the report as a PDF.


Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
N/A
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.
Within Europe, we work with 13 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

Our partners have teams of fact-checkers who review and verify reported content. Our Integrity and Authenticity moderators then use that independent feedback to take action: where appropriate, removing false or misleading content, making it ineligible for recommendation, or labelling unverified content.

Our agreements with our partners are standardised, meaning the agreements are based on our template master services agreements and consistent with common standards and conditions. We reviewed and updated our template standard agreements as part of our annual contract renewal process.

The terms of the agreements describe:
  • The services the fact-checking partner will provide, namely that their team of fact-checkers reviews, assesses, and rates video content uploaded to their fact-checking queue, and provides regular proactive Insights Reports on general misinformation trends observed on our platform and across the industry, including new or changing industry or market trends, events, or topics that generate particular misinformation or disinformation.
  • The expected results, e.g. that the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories.
  • An option to receive proactive flagging of potentially harmful misinformation from our partners.
  • The languages in which they will provide fact-checking services.
  • The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
  • All other key terms including the applicable term and fees and payment arrangements.
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
We currently have 13 IFCN-accredited fact-checking partners across the EU, EEA, and wider Europe:

  1. Agence France-Presse (AFP)
  2. Deutsche Presse-Agentur (dpa)
  3. Demagog
  4. Facta
  5. Geofacts
  6. Faktograf
  7. Internews Kosova (Kallxo)
  8. Lead Stories
  9. Newtral
  10. Poligrafo
  11. Reuters
  12. Science Feedback
  13. Teyit

For advertising-related fact-checking partnerships, please refer to Chapter 2.

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

We can, and have, put in place temporary agreements with these fact-checking partners to provide additional EU language coverage during high-risk events such as elections or an unfolding crisis.

Outside of our fact-checking programme, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 6 temporary media literacy campaigns, delivered through our in-app Election Centers in advance of elections:
  1. Portugal Local Elections - Polígrafo
  2. Estonia Local Elections - Lead Stories
  3. Ireland Presidential Election - The Journal
  4. Portugal Presidential Election - Polígrafo
  5. Denmark Local and Municipal Elections - Sikker Digital
  6. Czechia Parliamentary Elections - Demagog.cz

Globally, we have more than 20 IFCN-accredited fact-checking partners and we keep users updated here.

QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
We have fact-checking coverage in 23 official EEA languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. 

We have fact-checking coverage in a number of other languages used in Europe which affect European users, including Georgian, Russian, Turkish, and Ukrainian, and we can request additional support in Azeri, Armenian, and Belarusian.

In terms of global fact-checking initiatives, we currently cover more than 60 languages and 130 markets across the world, thereby improving the overall integrity of the service and benefiting European users. 

In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
  • Insights reports. Our fact-checking partners provide regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.  
  • Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content on our platform that may constitute harmful misinformation, which our moderators then assess against our Community Guidelines, and to suggest prominent misinformation circulating online that may benefit from verification.
  • Fact-checking guidelines. Where relevant, we create guidelines and trending topic reminders for our moderators which are informed by previous fact checking assessments. This helps our teams leverage the insights from our fact-checking partners and supports swift and accurate decisions on flagged content regardless of the language in which the original claim was made.

Moderation teams working on dedicated misinformation queues receive enhanced training on our misinformation policies and have access to the above-mentioned tools and measures, which enables them to make accurate content decisions across Europe and globally.

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human safety experts. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behaviour that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for human review. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are more clear-cut.
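The routing logic described above can be sketched as follows. This is an illustrative outline only: the names, thresholds, and score ranges are hypothetical assumptions for the sketch, not TikTok's actual systems. It shows the flow where automated review runs before content is public, clear-cut violations are removed automatically, and borderline cases are flagged for human review.

```python
# Illustrative sketch of the pre-publication moderation decision flow.
# All names and thresholds below are hypothetical, chosen for the example.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PUBLISH = "publish"
    AUTO_REMOVE = "auto_remove"      # clear-cut violation, removed automatically
    HUMAN_REVIEW = "human_review"    # borderline case, flagged for a human expert


@dataclass
class ModerationResult:
    violation_score: float  # 0.0 (benign) .. 1.0 (certain violation)


# Hypothetical thresholds: automated removal only when the violation is clear-cut.
AUTO_REMOVE_THRESHOLD = 0.98
HUMAN_REVIEW_THRESHOLD = 0.60


def route_upload(result: ModerationResult) -> Decision:
    """Route newly uploaded content. Until a PUBLISH decision is reached,
    the content remains visible only to the uploader."""
    if result.violation_score >= AUTO_REMOVE_THRESHOLD:
        return Decision.AUTO_REMOVE
    if result.violation_score >= HUMAN_REVIEW_THRESHOLD:
        return Decision.HUMAN_REVIEW
    return Decision.PUBLISH
```

The key design point mirrored here is the safeguard in the text: automation acts on its own only at high confidence, and everything ambiguous falls through to human review.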

Some of the methods and technologies that support these efforts include:
  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • Content Credentials: We launched the ability to read Content Credentials that attach metadata to content, which we can use to automatically label AI-generated content that originated on other major platforms.
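As a minimal sketch of the "text-based" idea above, keyword lists can be expanded to catch simple variations such as spacing, punctuation, and character substitutions. Everything here is a hypothetical toy example: real detection models are far richer and context-aware, and the keyword list and substitution table are invented for illustration.

```python
# Illustrative sketch only: matching variations of flagged keywords.
# The keyword list and substitution table are hypothetical examples.
import re

# Common character substitutions used to evade keyword matching.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)

KEYWORD_LIST = {"scamcoin", "fakecure"}  # hypothetical flagged terms


def normalise(text: str) -> str:
    """Lowercase, undo common character substitutions, and strip
    everything that is not a letter."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[^a-z]", "", text)


def matches_keyword(text: str) -> bool:
    """True if the normalised text contains any flagged keyword."""
    cleaned = normalise(text)
    return any(keyword in cleaned for keyword in KEYWORD_LIST)
```

For example, `matches_keyword("Buy S c a m-C0in now!")` returns `True` because normalisation collapses the spacing, punctuation, and the "0"-for-"o" substitution back to the flagged term, while ordinary text does not match.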

Continuing to leverage the fact-checking output in this way enables us to further increase the positive impact of our fact-checking programme.