TikTok

Report March 2025

TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate from a shared set of facts when it comes to topics that affect people's safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals; this includes making sure our users have a trustworthy experience on TikTok. As part of creating a trustworthy environment, transparency is essential to enable online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our report, which can be downloaded as a PDF.

Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment:
Measure 1.1, Measure 1.2, Measure 1.3, Measure 1.4, Measure 1.5, Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Developed a specific climate misinformation ad policy in order to improve the granularity of our existing ad policies.

  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 

  • Expanded the functionality, including the choices and controls available to advertisers, of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA.

  • Upgraded our IAB Sweden Gold Standard certification to 2.0.
 
  • Continued to engage in the Task-force and its working groups and subgroups, such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
  • first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
  • second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
  • third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
To help keep our platform welcoming and authentic for everyone, we are focused on ensuring it is free from harmful misinformation. 

(I) Our policies and approach

Our Integrity & Authenticity (I&A) policies within our Community Guidelines (CGs) are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All users are required to comply with our CGs, which set out the circumstances in which we will remove, or otherwise limit the availability of, content.

Paid ads are also subject to our ad policies and are reviewed against these policies before being allowed on our platform. Our ad policies specifically prohibit inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. They also prohibit other misleading, inauthentic and deceptive behaviours. Ads deemed in violation of these policies will not be permitted on our platform, and accounts deemed in severe or repeated violation may be suspended or banned.

In 2023, to improve the granularity of our existing ad policies, we launched four granular policies in the EEA. The policies cover:
  • Medical Misinformation
  • Dangerous Misinformation
  • Synthetic and Manipulated Media
  • Dangerous Conspiracy Theories 

We have been working continuously to improve the implementation of these policies and to assess whether there are further focus areas for which we should develop new policies. At the end of 2024, we launched a fifth granular ad policy covering climate misinformation. It prohibits false or misleading claims relating to climate change, such as denying the existence and impacts of climate change, falsely stating that the long-term impacts of climate mitigation strategies are worse than those of climate change itself, or undermining the validity or credibility of data or research that documents well-established scientific consensus.

Our ad policies impose a number of requirements on advertisers regarding the landing page. For example, the landing page must be functional and must contain complete and accurate information, including about the advertiser. Ads risk not being approved if the product or service advertised on the landing page does not match the one included in the ad.

In line with our approach of building a platform that brings people together rather than divides them, we have long prohibited political ads and political branded content. Specifically, we do not allow paid ads (or their landing pages) that promote or oppose a candidate, current leader, political party or group, or content that advocates a stance (for or against) on a local, state, or federal issue of public importance in order to influence a political decision or outcome. Similar rules apply to branded content.

We also classify certain accounts as Government, Politician, and Political Party Accounts (GPPPA) and have introduced restrictions on these at the account level. This means accounts belonging to governments, politicians and political parties automatically have their access to advertising features turned off. We make exceptions for governments in certain circumstances, e.g. to promote public health.

We make various brand safety tools available to advertisers to help ensure that their ads are not placed adjacent to content they do not consider to fit their brand values. While any content that violates our CGs, including our I&A policies, is removed, the brand safety tools are designed to help advertisers further protect their brand. For example, a family-oriented brand may not want to appear next to videos containing news-related content. We have adopted the industry-accepted framework in support of these principles.

(II) Verification in the context of ads

We provide verified badges on some accounts, including those of certain advertisers. Verified badges help users make informed choices about the accounts they choose to follow. They are an easy way for notable figures to let users know they're seeing authentic content, and they help build trust between high-profile accounts and their followers. For individuals, non-profits, institutions, businesses, or official brand pages, this badge provides an important layer of clarity for the TikTok community. We consider a number of factors before granting a verified badge, such as whether the notable account is authentic, unique, and active.

We strengthen our approach to countering influence attempts by:

  • Making state-affiliated media accounts that attempt to reach communities outside their home country with content about current global events and affairs ineligible for recommendation, which means their content won't appear in the For You feed.
  • Prohibiting state-affiliated media accounts, in all markets where our state-controlled media labels are available, from advertising outside of the country with which they are primarily affiliated.
  • Investing in our capabilities to detect state-affiliated media accounts.
  • Working with third-party experts to shape our state-affiliated media policy and our assessment of state-controlled media labels.

SLI 1.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.
Methodology of data measurement: 

We have set out the number of ads removed from our platform for violating our political content ad policy, as well as our more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media, and dangerous conspiracy theories. We launched our granular climate misinformation policy towards the end of the reporting period and look forward to sharing data on it once we have a full reporting period of data.

The majority of ads that violate our granular misinformation ad policies (previously four, now five) would also have been removed under our existing policies. Where an ad violates both an existing policy and one of the more recent granular misinformation policies, the removal is counted under the existing policy. The second column in the table below therefore shows only ads removed solely under one of the four reported granular misinformation policies; it excludes ads already removed under our existing policies or where a misinformation policy was not the driving factor for the removal.
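To make the attribution rule above concrete, the following is a minimal, hypothetical sketch in Python. The policy labels, function name and return values are illustrative assumptions for explanation only; they do not reflect TikTok's actual systems or internal policy identifiers.

    # Illustrative only: hypothetical labels showing how a single ad removal
    # is attributed to one reporting column. Existing policies take
    # precedence; the granular misinformation column counts a removal only
    # when such a policy is the sole reason for removal.
    EXISTING_POLICIES = {"political_content", "inaccurate_misleading_false"}
    GRANULAR_MISINFO_POLICIES = {
        "medical_misinformation",
        "dangerous_misinformation",
        "synthetic_and_manipulated_media",
        "dangerous_conspiracy_theories",
    }

    def attribute_removal(violated_policies: set) -> str:
        """Return the reporting column a single ad removal counts towards."""
        if violated_policies & EXISTING_POLICIES:
            # Counted under the existing policy, even if a granular
            # misinformation policy was also violated.
            return "existing policy column"
        if violated_policies & GRANULAR_MISINFO_POLICIES:
            # A granular misinformation policy was the sole reason.
            return "granular misinformation column"
        return "other"

    # An ad violating both kinds of policy is reported under the existing one:
    # attribute_removal({"political_content", "medical_misinformation"})
    # -> "existing policy column"

This mirrors the counting described above: an ad that violates both an existing policy and a granular misinformation policy contributes only to the first column of the table.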

We have focused on enforcing our political advertising prohibition and on strengthening our internal capability to detect political content on our platform, which included launching specialised training for political content moderators and automoderation strategies. The data below suggests that our existing policies (such as our political content policy and other policy areas such as our inaccurate, misleading, or false content policy) already cover the majority of harmful misinformation ads, due to their broad coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. We note that H2 2024 covered a very busy election cycle in Europe, including in Romania, France and Ireland. 
Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies
Austria | 746 | 3
Belgium | 1152 | 1
Bulgaria | 328 | 7
Croatia | 3 | 0
Cyprus | 128 | 0
Czech Republic | 111 | 0
Denmark | 409 | 0
Estonia | 90 | 0
Finland | 235 | 0
France | 4621 | 7
Germany | 6498 | 63
Greece | 911 | 8
Hungary | 512 | 2
Ireland | 565 | 1
Italy | 2781 | 8
Latvia | 131 | 4
Lithuania | 19 | 0
Luxembourg | 86 | 0
Netherlands | 1179 | 3
Poland | 1118 | 4
Portugal | 438 | 1
Romania | 10698 | 2
Slovakia | 145 | 4
Slovenia | 52 | 0
Spain | 2558 | 17
Sweden | 752 | 0
Norway | 474 | 2
Total EU | 36266 | 135
Total EEA | 36740 | 137