TikTok

Report March 2026

TikTok’s mission is to inspire creativity and bring joy. With more than 200 million people across Europe coming to TikTok every month, including 178 million in the EU, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme; the Code has operated under the DSA framework since July 2025.

Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Please see the sections below for information about our work under specific commitments, or download the report as a PDF.


Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measures 31.1 and 31.2, Measure 31.3, Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
N/A
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 31.1 and 31.2
31.1: Relevant Signatories that showcase User Generated Content (UGC) will integrate, showcase, or otherwise consistently use independent fact-checkers’ work in their platforms’ services, processes, and contents across all Member States and across formats relevant to the service. Relevant Signatories will collaborate with fact-checkers to that end, starting by conducting and documenting research and testing. 31.2: Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels or policy enforcement to help increase the impact of fact-checks on audiences.
Measure 31.1
TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.
QRE 31.1.1 (for Measures 31.1 and 31.2)
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 13 fact-checking partners in Europe, covering 23 EEA languages. 

As previously outlined, we place considerable emphasis on proactive detection and automated moderation technology to action violative content. For example, multi-modal large language models (LLMs) can perform complex, highly specific tasks related to visual content. We use this technology to streamline misinformation moderation by extracting specific misinformation claims from videos, which moderators can then assess directly or route to our fact-checking partners.
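
To illustrate the kind of claim-extraction and routing flow described above, the following is a minimal Python sketch. The function names, data model, confidence threshold, and routing rule are hypothetical stand-ins for the purpose of the example, not TikTok's production pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    text: str          # a discrete, checkable statement surfaced from the video
    confidence: float  # model's confidence (0-1) that this is a factual claim

def extract_claims(video_transcript: str) -> List[Claim]:
    """Stand-in for a multi-modal LLM call that surfaces checkable claims.
    A real system would analyse frames and audio as well as text."""
    # Toy heuristic: treat each sentence as a candidate claim.
    sentences = [s.strip() for s in video_transcript.split(".") if s.strip()]
    return [Claim(text=s, confidence=0.5) for s in sentences]

def route_claim(claim: Claim) -> str:
    """Send high-confidence claims straight to moderators; route ambiguous
    ones to an external fact-checking partner for assessment."""
    return "moderator_queue" if claim.confidence >= 0.8 else "fact_checker_queue"

if __name__ == "__main__":
    transcript = "The election date was moved. The voting machines were hacked."
    for claim in extract_claims(transcript):
        print(route_claim(claim), "<-", claim.text)
```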

Our Integrity and Authenticity moderators receive specialised training to assess, confirm, and take action on harmful misinformation. This includes direct access to our fact-checking partners, who help assess the accuracy of content. We also use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners determine that a fact-check is inconclusive or that content cannot be verified (which is especially common during unfolding events or crises), we apply a banner to the video informing viewers that it contains unverified content, to raise awareness of its credibility and to reduce sharing. The video may also become ineligible for recommendation into anyone's For You feed, limiting the spread of potentially misleading information.
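
The decision logic described above can be summarised in a short illustrative sketch. The verdict labels ("false", "inconclusive", "unverifiable") and action names below are assumptions made for the example, not TikTok's internal taxonomy or enforcement system.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    remove: bool
    apply_unverified_banner: bool
    eligible_for_for_you_feed: bool

def handle_fact_check_verdict(verdict: str) -> ModerationDecision:
    """Map an (illustrative) fact-checking verdict onto the actions described above."""
    if verdict == "false":
        # Confirmed harmful misinformation is removed under policy.
        return ModerationDecision(remove=True, apply_unverified_banner=False,
                                  eligible_for_for_you_feed=False)
    if verdict in ("inconclusive", "unverifiable"):
        # Unconfirmed content stays up but is labelled and not recommended.
        return ModerationDecision(remove=False, apply_unverified_banner=True,
                                  eligible_for_for_you_feed=False)
    # Verified or out-of-scope content is left untouched.
    return ModerationDecision(remove=False, apply_unverified_banner=False,
                              eligible_for_for_you_feed=True)
```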

SLI 31.1.1
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
Methodology of data measurement:

The number of fact-checked videos is the number of videos that were reviewed by one of our fact-checking partners in the relevant territory.

Number of fact-checked videos (tasks)
Austria 52
Belgium 396
Bulgaria 1,221
Croatia 135
Cyprus 21
Czechia 367
Denmark 239
Estonia 326
Finland 58
France 3,659
Germany 1,203
Greece 97
Hungary 107
Ireland 53
Italy 743
Latvia 223
Lithuania 182
Luxembourg 1
Malta 1
Netherlands 1,603
Poland 806
Portugal 344
Romania 308
Slovakia 283
Slovenia 193
Spain 349
Sweden 102
Iceland 5
Norway 190
Total EU 13,072
Total EEA 13,267
SLI 31.1.2
An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.
Methodology of data measurement: 

The number of videos removed as a result of a fact-checking assessment and the number of videos removed because of policy guidelines and known misinformation trends.. 

These metrics correspond to the numbers of removals under the misinformation policy since all of its enforcement are based on the policy guidelines and known misinformation trends. 

Number of videos removed as a result of a fact-checking assessment / Number of videos removed under the misinformation policy
Austria 12 2,612
Belgium 11 4,150
Bulgaria 166 4,828
Croatia 40 638
Cyprus 1 701
Czech Republic 20 2,855
Denmark 12 2,484
Estonia 28 527
Finland 7 1,357
France 273 37,466
Germany 216 42,642
Greece 7 4,602
Hungary 4 1,490
Ireland 3 2,613
Italy 207 18,667
Latvia 0 705
Lithuania 7 1,086
Luxembourg 0 349
Malta 0 159
Netherlands 258 14,335
Poland 128 14,770
Portugal 48 3,141
Romania 65 28,743
Slovakia 18 1,122
Slovenia 5 370
Spain 50 21,592
Sweden 3 4,159
Iceland 1 123
Liechtenstein 0 143
Norway 10 1,662
Total EU 1,589 218,163
Total EEA 1,600 220,091
SLI 31.1.3
Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.
Methodology of data measurement:

The metric we have provided shows the number of videos removed as a result of a fact-checking assessment as a percentage of the total number of videos removed for violating our harmful misinformation policy.
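
For illustration, using the figures reported under SLI 31.1.2 for Austria: 12 videos removed following a fact-checking assessment out of 2,612 total removals under the harmful misinformation policy gives 12 / 2,612 ≈ 0.46%, which appears as 0.50% in the table below after rounding. This is a worked example based on the reported figures, not an additional data point.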

Videos removed as a result of a fact-checking assessment, as a percentage of the total number of videos removed for violating the harmful misinformation policy
Austria 0.50%
Belgium 0.30%
Bulgaria 3.40%
Croatia 6.30%
Cyprus 0.10%
Czech Republic 0.70%
Denmark 0.50%
Estonia 5.30%
Finland 0.50%
France 0.70%
Germany 0.50%
Greece 0.20%
Hungary 0.30%
Ireland 0.10%
Italy 1.10%
Latvia 0.00%
Lithuania 0.60%
Luxembourg 0.00%
Malta 0.00%
Netherlands 1.80%
Poland 0.90%
Portugal 1.50%
Romania 0.20%
Slovakia 1.60%
Slovenia 1.40%
Spain 0.20%
Sweden 0.10%
Iceland 0.80%
Liechtenstein 0.00%
Norway 0.60%
Total EU 0.70%
Total EEA 0.70%