TikTok

Report March 2026

TikTok’s mission is to inspire creativity and bring joy. With more than 200 million people across Europe coming to TikTok every month, including 178 million in the EU, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025.

Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Please see the sections below for information about our work under specific commitments, or download the report as a PDF.


Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment:
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
At the end of August 2025, we launched more granular misinformation advertising policies in the EEA, providing clearer categorisation and more targeted, risk-based enforcement:
  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation
  • Election Misinformation
  • Other Misinformation

These new policies supersede and expand upon the previous set of five policies set out in our H1 2025 report:
  • Medical Misinformation
  • Dangerous Misinformation
  • Synthetic and Manipulated Media
  • Dangerous Conspiracy Theories
  • Climate Misinformation

We have enhanced our automated detection models, which are now operational and support enforcement of the new misinformation advertising policies, and we continue to develop them to further strengthen that enforcement.

We provide users in each EU Member State with a simple and intuitive in-app way to report advertisements that breach our misinformation advertising policies.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We continue to focus on improving the accuracy and coverage of our automated misinformation moderation systems for advertising.
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1:
In H2 2025, we iterated our existing misinformation advertising policies and launched more granular policies in the EEA (covering Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation), with which advertisers must comply. These policies provide clearer categorisation of misinformation types and build on the principles and enforcement experience of the five policies set out in the H1 2025 report, enabling more consistent and targeted enforcement in line with evolving risks.

Our advertiser account policies expressly prohibit deceptive behaviours, including prohibiting advertisers from circumventing, evading, or interfering with our advertising systems and processes.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic, and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media, and Dangerous Conspiracy Theories), which advertisers also need to comply with. In December 2024, we launched a fifth granular policy covering Climate Misinformation.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Methodology of data measurement:
We have set out the number of ads removed from our platform for violation of our granular misinformation advertising policies on Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation. We launched these iterated misinformation policies at the end of August 2025. These policies were developed to provide clearer categorisation and more targeted, risk-based enforcement.

The methodology for ad removals data for misinformation advertising policies was revised in this period to reflect refinements in our deduplication logic.

We are pleased to report the number of ads removed for breach of our granular misinformation advertising policies. We have provided the political advertising enforcement metrics in the Elections Crisis Chapter of this Report.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed.
Number of ad removals under the granular misinformation advertising policies
Austria 133
Belgium 101
Bulgaria 16
Croatia 9
Cyprus 3
Czech Republic 22
Denmark 90
Estonia 10
Finland 22
France 138
Germany 656
Greece 17
Hungary 49
Ireland 176
Italy 102
Latvia 21
Lithuania 8
Luxembourg 1
Malta -
Netherlands 46
Poland 77
Portugal 37
Romania 19
Slovakia 11
Slovenia 12
Spain 73
Sweden 195
Liechtenstein -
Norway 165
Total EU 2,044
Total EEA 2,209