TikTok

Report September 2025

TikTok’s mission is to inspire creativity and bring joy. With a global community of more than a billion users, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H1 2025, more than 97% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025. This includes a range of measures to protect users, detailed on our European Online Safety Hub. Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Our full executive summary can be read in our downloadable report.


Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment:
Measure 2.1, Measure 2.2, Measure 2.3, Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Continued to enforce and improve our five granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2024 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
    • Climate Misinformation
  • Enabled advertisers to exclude videos that do not align with their brand safety requirements from appearing next to their ads, through TikTok's Video Exclusion List solution (see the illustrative sketch after this list).
  • Enabled advertisers to exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).
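
Conceptually, both exclusion tools gate where an advertiser's ads may appear. The sketch below is a minimal illustration of that idea; the class, method names, and IDs are hypothetical assumptions for exposition, not TikTok's actual ad-serving API.

```python
# Minimal illustrative sketch of how exclusion lists gate ad placement.
# All names (class, methods, IDs) are hypothetical; this is not TikTok's
# actual ad-serving system.

from dataclasses import dataclass, field

@dataclass
class AdvertiserBrandSafety:
    video_exclusions: set = field(default_factory=set)    # excluded video IDs
    profile_exclusions: set = field(default_factory=set)  # excluded profile IDs

    def may_serve_next_to_video(self, video_id: str) -> bool:
        # Video Exclusion List: block ad adjacency to excluded videos.
        return video_id not in self.video_exclusions

    def may_serve_in_profile_feed(self, profile_id: str) -> bool:
        # Profile Feed Exclusion List: block serving on excluded profiles.
        return profile_id not in self.profile_exclusions

advertiser = AdvertiserBrandSafety(
    video_exclusions={"vid_123"},
    profile_exclusions={"profile_987"},
)
assert not advertiser.may_serve_next_to_video("vid_123")
assert advertiser.may_serve_in_profile_feed("profile_456")
```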

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic, and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, and Dangerous Conspiracy Theories), which advertisers also need to comply with. In December 2024, we launched a fifth granular policy covering Climate Misinformation.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
We have set out the number of ads removed from our platform for violating our political content policies, as well as our five granular policies on Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, Dangerous Conspiracy Theories, and Climate Misinformation. The Climate Misinformation policy launched in December 2024.

Most ads that violate our newly launched misinformation policies would also have been removed under our existing policies. Where an ad violates both an existing policy and one of these additional misinformation policies, the removal is counted under the existing policy. The second column below therefore shows only the number of ads removed where the sole reason was one of the five granular misinformation policies; it does not include ads already removed under our existing policies or where a misinformation policy was not the driving factor for the removal.

The data below suggests that our existing policies (such as Political Content), given their expansive scope, already cover the majority of harmful misinformation ads.
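
To make this counting rule concrete, the sketch below illustrates the attribution logic. The policy labels and function are assumptions for illustration only, not TikTok's actual enforcement tooling.

```python
# Illustrative sketch of the removal-attribution rule described above.
# Policy labels and the function are assumptions for exposition, not
# TikTok's actual enforcement tooling.

EXISTING_POLICIES = {"political_content"}  # e.g. the Political Content policy
GRANULAR_MISINFO_POLICIES = {
    "medical_misinformation",
    "dangerous_misinformation",
    "synthetic_and_manipulated_media",
    "dangerous_conspiracy_theories",
    "climate_misinformation",
}

def removal_bucket(violated_policies):
    """Attribute one removal to a single reporting bucket; existing policies win."""
    if violated_policies & EXISTING_POLICIES:
        return "existing_policy"      # counted under the older policy
    if violated_policies & GRANULAR_MISINFO_POLICIES:
        return "granular_misinfo"     # sole driver was a misinformation policy
    return None                       # removed, if at all, for other reasons

# An ad violating both Political Content and Medical Misinformation is
# counted under the existing policy, so it does not appear in the second
# column of the table below:
assert removal_bucket({"political_content", "medical_misinformation"}) == "existing_policy"
assert removal_bucket({"climate_misinformation"}) == "granular_misinfo"
```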

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. 

Country | Ad removals under the political content ad policy | Ad removals under the five granular misinformation ad policies
Austria | 1,634 | 11
Belgium | 2,447 | 4
Bulgaria | 880 | 9
Croatia | 705 | 0
Cyprus | 585 | 0
Czech Republic | 859 | 0
Denmark | 796 | 2
Estonia | 307 | 0
Finland | 1,033 | 2
France | 16,026 | 46
Germany | 18,041 | 72
Greece | 2,420 | 20
Hungary | 1,647 | 111
Ireland | 1,263 | 8
Italy | 8,150 | 27
Latvia | 795 | 2
Lithuania | 521 | 4
Luxembourg | 250 | 1
Netherlands | 3,028 | 30
Poland | 5,699 | 19
Portugal | 1,430 | 1
Romania | 13,989 | 23
Slovakia | 500 | 2
Slovenia | 230 | 2
Spain | 6,526 | 54
Sweden | 1,659 | 8
Iceland | 3 | 0
Norway | 1,071 | 3
EU Level | 91,420 | 458
EEA Level | 92,494 | 461
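
As a consistency check, the EU Level row equals the sum of the 26 EU member-state rows, and the EEA Level row adds Iceland and Norway. The short sketch below reproduces the totals from figures transcribed directly from the table above.

```python
# Consistency check for the table above: EU Level should equal the sum of
# the 26 EU member-state rows, and EEA Level should add Iceland and Norway.
# Values are transcribed from the table as
# (political content removals, granular misinformation removals).

EU_ROWS = {
    "Austria": (1_634, 11), "Belgium": (2_447, 4), "Bulgaria": (880, 9),
    "Croatia": (705, 0), "Cyprus": (585, 0), "Czech Republic": (859, 0),
    "Denmark": (796, 2), "Estonia": (307, 0), "Finland": (1_033, 2),
    "France": (16_026, 46), "Germany": (18_041, 72), "Greece": (2_420, 20),
    "Hungary": (1_647, 111), "Ireland": (1_263, 8), "Italy": (8_150, 27),
    "Latvia": (795, 2), "Lithuania": (521, 4), "Luxembourg": (250, 1),
    "Netherlands": (3_028, 30), "Poland": (5_699, 19), "Portugal": (1_430, 1),
    "Romania": (13_989, 23), "Slovakia": (500, 2), "Slovenia": (230, 2),
    "Spain": (6_526, 54), "Sweden": (1_659, 8),
}
EEA_EXTRA = {"Iceland": (3, 0), "Norway": (1_071, 3)}

def totals(rows):
    political = sum(p for p, _ in rows.values())
    misinfo = sum(m for _, m in rows.values())
    return political, misinfo

eu = totals(EU_ROWS)
eea = tuple(a + b for a, b in zip(eu, totals(EEA_EXTRA)))
assert eu == (91_420, 458)    # matches the EU Level row
assert eea == (92_494, 461)   # matches the EEA Level row
```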