TikTok

Report March 2025

TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate on a shared set of facts when it comes to topics that impact people's safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals; this includes making sure our users have a trustworthy experience on TikTok. Transparency is essential to that environment: it enables online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our downloadable report.


Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment:
Measure 31.1, Measure 31.2, Measure 31.3, Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • In addition, expanded our fact-checking coverage to other wider-European and EU candidate countries through existing fact-checking partners:
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • Continued to expand our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners have submitted to TikTok (regardless of the original language of the relevant content).
  • Continued to conduct feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
  • Continued to participate in the working group within the Code framework on the creation of an external fact-checking repository. 
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further developments in the next COPD report.
Measure 31.2
Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.
QRE 31.2.1
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 14 fact-checking partners in Europe, covering 23 EEA languages. 

While we use machine learning models to help detect potential misinformation, our approach is to have members of our content moderation team, who receive specialised training on misinformation, assess, confirm, and take action on harmful misinformation. This includes direct access to our fact-checking partners, who help assess the accuracy of content. Our fact-checking partners are involved in our moderation process in three ways (a simplified sketch of this flow follows the list):

(i) a moderator sends a video to fact-checkers, who review it and assess the accuracy of the content by providing a rating. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data, authenticating videos and images, and more.

While content is being fact-checked, or when content cannot be substantiated through fact-checking, we may reduce the content's distribution so that fewer people see it. Fact-checkers ultimately do not take action on the content directly; instead, the moderator takes the fact-checkers' feedback on the accuracy of the content into account when deciding whether the content violates our Community Guidelines (CGs) and what action to take.

(ii) contributing to our global database of previously fact-checked claims to help our misinformation moderators make decisions. 

(iii) a proactive detection programme in which our fact-checkers flag new and evolving claims they are seeing on our platform. This enables our moderators to quickly assess these claims and remove violations.
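
To make the division of labour described in (i)-(iii) concrete, the following minimal Python sketch models how a moderator might act on a partner's rating. It is purely illustrative: the rating scale, names, and actions are assumptions made for this report and do not represent TikTok's internal systems or APIs.

from enum import Enum

class FactCheckRating(Enum):
    # Illustrative rating scale a fact-checking partner might return
    # (hypothetical values, not TikTok's actual taxonomy).
    FALSE = "false"
    MISLEADING = "misleading"
    UNSUBSTANTIATED = "unsubstantiated"  # accuracy could not be confirmed
    TRUE = "true"

def moderate_flagged_video(rating: FactCheckRating) -> str:
    """Moderator decision informed by, but independent of, the fact-checker.

    The fact-checker only supplies the rating; the moderator decides whether
    the video violates the Community Guidelines and which action to take.
    """
    if rating in (FactCheckRating.FALSE, FactCheckRating.MISLEADING):
        return "remove"        # violates the harmful misinformation policy
    if rating is FactCheckRating.UNSUBSTANTIATED:
        return "reduce_reach"  # limit distribution while claims are unverified
    return "no_action"

print(moderate_flagged_video(FactCheckRating.UNSUBSTANTIATED))  # reduce_reach

The key design point the sketch captures is the separation of roles: the rating and the enforcement decision come from different parties.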

In addition, we use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners conclude that a fact-check is inconclusive or that content cannot be confirmed (which is especially common during unfolding events or crises), we inform viewers via a banner when we identify a video with unverified content, in an effort to raise users' awareness of the credibility of the content and to reduce sharing. The video may also become ineligible for recommendation into anyone's For You feed, to limit the spread of potentially misleading information.
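
A minimal sketch of this unverified-content treatment, under the same caveat that the field names and logic are illustrative assumptions rather than TikTok's implementation:

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    fact_check_inconclusive: bool           # partner could not confirm the claims
    show_unverified_banner: bool = False    # viewer-facing warning banner
    eligible_for_for_you_feed: bool = True  # recommendation eligibility

def apply_unverified_treatment(video: Video) -> Video:
    # When a fact-check is inconclusive, warn viewers and limit spread.
    if video.fact_check_inconclusive:
        video.show_unverified_banner = True
        video.eligible_for_for_you_feed = False
    return video

video = apply_unverified_treatment(Video("v123", fact_check_inconclusive=True))
print(video.show_unverified_banner, video.eligible_for_for_you_feed)  # True False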
SLI 31.1.1 (for Measures 31.1 and 31.2)
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
Methodology of data measurement:

The number of fact-checked videos is based on the number of videos that have been reviewed by one of our fact-checking partners in the relevant territory.
Country | Number of fact-checked videos
Austria | 64
Belgium | 141
Bulgaria | 398
Croatia | 137
Cyprus | 8
Czech Republic | 200
Denmark | 175
Estonia | 84
Finland | 61
France | 1045
Germany | 837
Greece | 64
Hungary | 144
Ireland | 91
Italy | 202
Latvia | 40
Lithuania | 41
Luxembourg | 2
Malta | 0
Netherlands | 52
Poland | 622
Portugal | 59
Romania | 669
Slovakia | 138
Slovenia | 22
Spain | 407
Sweden | 158
Iceland | 1
Liechtenstein | 0
Norway | 227
Total EU | 5861
Total EEA | 6089
SLI 31.1.2 (for Measures 31.1 and 31.2)
An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.
Methodology of data measurement: 

The number of videos removed as a result of a fact-checking assessment, and the number of videos removed because of policy guidelines, known misinformation trends, and our knowledge-based repository, are attributed to the country in which the video was posted.

These metrics correspond to the number of removals under the misinformation policy, since all of its enforcement is based on the policy guidelines, known misinformation trends, and the knowledge-based repository.
Country | Videos removed as a result of a fact-checking assessment | Videos removed because of policy guidelines, known misinformation trends and knowledge-based repository
Austria | 8 | 2888
Belgium | 26 | 3902
Bulgaria | 62 | 1568
Croatia | 31 | 789
Cyprus | 0 | 511
Czech Republic | 42 | 2720
Denmark | 12 | 1455
Estonia | 2 | 319
Finland | 4 | 984
France | 166 | 44354
Germany | 177 | 50335
Greece | 8 | 4198
Hungary | 21 | 2002
Ireland | 13 | 4676
Italy | 40 | 21035
Latvia | 1 | 694
Lithuania | 0 | 520
Luxembourg | 0 | 279
Malta | 0 | 168
Netherlands | 13 | 5422
Poland | 152 | 13028
Portugal | 10 | 2629
Romania | 168 | 14103
Slovakia | 42 | 1365
Slovenia | 3 | 574
Spain | 55 | 22581
Sweden | 15 | 3489
Iceland | 1 | 122
Liechtenstein | 0 | 35
Norway | 14 | 1798
Total EU | 1071 | 206588
Total EEA | 1086 | 208543
SLI 31.1.3 (for Measures 31.1 and 31.2)
Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.
Methodology of data measurement:

The metric we have provided shows the percentage of videos removed as a result of a fact-checking assessment, relative to the total number of videos removed for violating our harmful misinformation policy.
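
Expressed as a formula (the notation is ours, for clarity), for each country $c$:

\[
\text{Share}_c = \frac{\text{videos removed in } c \text{ following a fact-checking assessment}}{\text{total videos removed in } c \text{ for violating the harmful misinformation policy}} \times 100\%
\]
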
Country | Videos removed as a result of a fact-checking assessment, as a percentage of all videos removed for violating the harmful misinformation policy
Austria | 0.20%
Belgium | 0.50%
Bulgaria | 3.60%
Croatia | 1.00%
Cyprus | 0.00%
Czech Republic | 1.30%
Denmark | 0.80%
Estonia | 0.60%
Finland | 0.40%
France | 0.40%
Germany | 0.30%
Greece | 0.20%
Hungary | 0.30%
Ireland | 0.00%
Italy | 0.20%
Latvia | 0.00%
Lithuania | 0.00%
Luxembourg | 0.00%
Malta | 0.00%
Netherlands | 0.10%
Poland | 1.00%
Portugal | 0.30%
Romania | 0.90%
Slovakia | 2.80%
Slovenia | 0.00%
Spain | 0.20%
Sweden | 0.40%
Iceland | 0.00%
Liechtenstein | 0.00%
Norway | 0.60%
Total EU | 0.40%
Total EEA | 0.40%