TikTok

Report March 2025

Submitted

TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes.

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In order to improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.

  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 

  • Expanded the functionality (including choice and ability) in the EEA of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter. 

  • Upgraded our IAB Sweden Gold Standard certification to version 2.0.
 
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
  • first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
  • second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
  • third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

To help keep our platform welcoming and authentic for everyone, we are focused on ensuring it is free from harmful misinformation. 

(I) Our policies and approach

Our Integrity & Authenticity (I&A) policies within our Community Guidelines ("CGs") are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All users are required to comply with our CGs, which set out the circumstances in which we will remove, or otherwise limit the availability of, content.

Paid ads are also subject to our ad policies and are reviewed against these policies before being allowed on our platform. Our ad policies specifically prohibit inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. They also prohibit other misleading, inauthentic and deceptive behaviours. Ads deemed in violation of these policies will not be permitted on our platform, and accounts deemed in severe or repeated violation may be suspended or banned.

In 2023, in order to improve our existing ad policies, we launched four granular policies in the EEA. The policies cover:
  • Medical Misinformation
  • Dangerous Misinformation
  • Synthetic and Manipulated Media
  • Dangerous Conspiracy Theories 

We have been working continuously to improve the implementation of these policies and to consider whether there are further focused areas for which we should develop new policies. At the end of 2024, we launched a fifth granular ad policy covering climate misinformation. It prohibits false or misleading claims relating to climate change, such as denying the existence and impacts of climate change, falsely stating that the long-term impacts of climate mitigation strategies are worse than those of climate change itself, or undermining the validity or credibility of data or research that documents well-established scientific consensus.

Our ad policies require advertisers to meet a number of requirements regarding the landing page. For example, the landing page must be functioning and must contain complete and accurate information including about the advertiser. Ads risk not being approved if the product or service advertised on the landing page does not match that included in the ad.

In line with our approach of building a platform that brings people together, not divides them, we have long prohibited political ads and political branded content. Specifically, we do not allow paid ads (nor landing pages) that promote or oppose a candidate, current leader, political party or group, or content that advocates a stance (for or against) on a local, state, or federal issue of public importance in order to influence a political decision or outcome. Similar rules apply in respect of branded content.

We also classify certain accounts as Government, Politician, and Political Party Accounts (GPPPA) and have introduced restrictions on these at an account level. This means accounts belonging to governments, politicians and political parties automatically have their access to advertising features turned off. We make exceptions for governments in certain circumstances, e.g., to promote public health.

We make various brand safety tools available to advertisers to help ensure that their ads are not placed adjacent to content they do not consider to fit with their brand values. While any content that violates our CGs, including our I&A policies, is removed, the brand safety tools are designed to help advertisers further protect their brand. For example, a family-oriented brand may not want to appear next to videos containing news-related content. We have adopted the industry-accepted framework in support of these principles.

(II) Verification in the context of ads

We provide verified badges on some accounts including certain advertisers. Verified badges help users make informed choices about the accounts they choose to follow. It's an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers. For individuals, non-profits, institutions, businesses, or official brand pages, this badge builds an important layer of clarity with the TikTok community. We consider a number of factors before granting a verified badge, such as whether the notable account is authentic, unique, and active.

We strengthen our approach to countering influence attempts by:

  • Making state-affiliated media accounts that attempt to reach communities outside their home country on current global events and affairs ineligible for recommendation, which means their content won't appear in the For You feed.
  • Prohibiting state-affiliated media accounts in all markets where our state-controlled media labels are available from advertising outside of the country with which they are primarily affiliated.
  • Investing in our detection capabilities of state-affiliated media accounts.
  • Working with third party external experts to shape our state-affiliated media policy and assessment of state-controlled media labels.

SLI 1.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

Methodology of data measurement: 

We have set out the number of ads that have been removed from our platform for violation of our political content policies, as well as our more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories. We launched our granular climate misinformation policy towards the end of the reporting period and we look forward to sharing data on it once we have a full reporting period of data.

The majority of ads that violate our granular misinformation ad policies (previously four, now five) would also have been removed under our existing policies. In cases where an ad is deemed violative under other policies as well as under our more recent granular misinformation policies, the removal is counted under the existing policy. Therefore, the second column below shows only the number of ads removed where the sole reason was one of the four reported granular misinformation policies; it does not include ads already removed under our existing policies or where misinformation policies were not the driving factor for the removal.
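To make the counting rule above concrete, the following is a minimal sketch in Python; the policy identifiers and the function are illustrative assumptions, not a description of TikTok's internal systems.

```python
# Illustrative sketch of the removal-attribution rule described above.
# Policy names are assumptions for illustration only.

EXISTING_POLICIES = {"political_content", "inaccurate_misleading_false"}
GRANULAR_MISINFO_POLICIES = {
    "medical_misinformation",
    "dangerous_misinformation",
    "synthetic_and_manipulated_media",
    "dangerous_conspiracy_theories",
}

def removal_column(violated: set[str]) -> str:
    """Attribute a removed ad to a single reporting column."""
    if violated & EXISTING_POLICIES:
        # Counted under the existing policy, even if a granular
        # misinformation policy was also violated.
        return "existing_policy"
    if violated and violated <= GRANULAR_MISINFO_POLICIES:
        # Second column: the sole reason for removal was one of the
        # granular misinformation policies.
        return "granular_misinformation_only"
    return "other_policy"
```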

We have been focused on enforcing our political advertising prohibition and on our internal capability to detect political content on our platform, which has included launching specialised political content moderator training and auto-moderation strategies. The data below suggests that our existing policies (such as our political content policy, and other policy areas such as our inaccurate, misleading, or false content policy) already cover the majority of harmful misinformation ads, due to their expansive coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. We note that H2 2024 covered a very busy election cycle in Europe, including in Romania, France and Ireland. 

| Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies |
| --- | --- | --- |
| Austria | 746 | 3 |
| Belgium | 1152 | 1 |
| Bulgaria | 328 | 7 |
| Croatia | 3 | 0 |
| Cyprus | 128 | 0 |
| Czech Republic | 111 | 0 |
| Denmark | 409 | 0 |
| Estonia | 90 | 0 |
| Finland | 235 | 0 |
| France | 4621 | 7 |
| Germany | 6498 | 63 |
| Greece | 911 | 8 |
| Hungary | 512 | 2 |
| Ireland | 565 | 1 |
| Italy | 2781 | 8 |
| Latvia | 131 | 4 |
| Lithuania | 19 | 0 |
| Luxembourg | 86 | 0 |
| Malta | 0 | 0 |
| Netherlands | 1179 | 3 |
| Poland | 1118 | 4 |
| Portugal | 438 | 1 |
| Romania | 10698 | 2 |
| Slovakia | 145 | 4 |
| Slovenia | 52 | 0 |
| Spain | 2558 | 17 |
| Sweden | 752 | 0 |
| Iceland | 0 | 0 |
| Liechtenstein | 0 | 0 |
| Norway | 474 | 2 |
| Total EU | 36266 | 135 |
| Total EEA | 36740 | 137 |

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

All creators must comply with TikTok’s Community Guidelines, including our I&A policies. Where creators fail to comply with our Community Guidelines, this may result in loss of access to monetisation and / or loss of account access. Users in all EU member states are notified by an in-app notification in their relevant local language where there has been a restriction of their ability to monetise, restriction of their access to a feature, removal or otherwise restriction of access to their content, or a ban of their account. 

Our policies prohibit accounts verified as belonging to a government, politician or political party from accessing monetisation features. They will, for instance, be ineligible for participation in content monetisation programs such as our Creator Rewards Program. Along with our existing ban on political advertising, this means that accounts belonging to politicians, political parties and governments will not be able to give or receive money through TikTok's monetisation features, or spend money promoting their content (although exemptions are made for governments in certain circumstances such as for public health). 

We launched the Creator Code of Conduct in April 2024. It sets out the standards we expect creators involved in TikTok programs, features, events and campaigns to follow on and off platform, in addition to our Community Guidelines and Terms of Service. Being part of these creator programs is an opportunity that comes with additional responsibilities, and the code also helps reassure creators that other participants are meeting these standards. We are actively improving our enforcement guidance and processes for the code, including building on proactive signalling of off-platform activity.

SLI 1.2.1

Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.

Methodology of data measurement:

Our I&A policies within our CGs are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All creators are required to comply with our CGs, which set out the circumstances where we will remove, or otherwise limit the availability of, content. Creators who breach the Community Guidelines or Terms of Service are not eligible to receive rewards. We have set out the number of ads that have been removed from our platform for violation of our political content policies as well as our four more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories in SLI 1.1.1. Further, SLI 1.1.2 aims to provide an estimate of the potential impact on revenue of demonetising disinformation. We are working towards being able to provide more data for this SLI. 

| Country | Number of policy reviews and/or updates | Number of accounts barred from advertising | Number of accounts barred from monetisation | Number of domains barred |
| --- | --- | --- | --- | --- |
| Austria | 0 | 0 | 0 | 0 |
| Belgium | 0 | 0 | 0 | 0 |
| Bulgaria | 0 | 0 | 0 | 0 |
| Croatia | 0 | 0 | 0 | 0 |
| Cyprus | 0 | 0 | 0 | 0 |
| Czech Republic | 0 | 0 | 0 | 0 |
| Denmark | 0 | 0 | 0 | 0 |
| Estonia | 0 | 0 | 0 | 0 |
| Finland | 0 | 0 | 0 | 0 |
| France | 0 | 0 | 0 | 0 |
| Germany | 0 | 0 | 0 | 0 |
| Greece | 0 | 0 | 0 | 0 |
| Hungary | 0 | 0 | 0 | 0 |
| Ireland | 0 | 0 | 0 | 0 |
| Italy | 0 | 0 | 0 | 0 |
| Latvia | 0 | 0 | 0 | 0 |
| Lithuania | 0 | 0 | 0 | 0 |
| Luxembourg | 0 | 0 | 0 | 0 |
| Malta | 0 | 0 | 0 | 0 |
| Netherlands | 0 | 0 | 0 | 0 |
| Poland | 0 | 0 | 0 | 0 |
| Portugal | 0 | 0 | 0 | 0 |
| Romania | 0 | 0 | 0 | 0 |
| Slovakia | 0 | 0 | 0 | 0 |
| Slovenia | 0 | 0 | 0 | 0 |
| Spain | 0 | 0 | 0 | 0 |
| Sweden | 0 | 0 | 0 | 0 |
| Iceland | 0 | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 | 0 |
| Norway | 0 | 0 | 0 | 0 |

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

We partner with industry leaders to provide a range of controls and transparency tools to advertising buyers with regard to the placement of ads:

Controls: We offer pre-campaign solutions to advertisers so they can put additional safeguards in place before their campaign goes live to mitigate the risk of their advertising being displayed adjacent to certain types of user-generated content. These measures are in addition to the CGs, which provide overarching rules around the types of content that can appear on TikTok and are eligible for the For You feed:

  • TikTok Inventory Filter: This is our proprietary system which enables advertisers to choose the profile of content their ads run adjacent to. We have expanded the functionality of the Inventory Filter in various EEA countries; it is now available in 29 EEA jurisdictions and is embedded directly in TikTok Ads Manager, the system through which advertisers purchase ads. More details can be found here. The Inventory Filter is informed by Industry Standards, and its policies include topics which may be susceptible to disinformation.
  • TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks with all levels of risk (available in France and Germany). Some misinformation content may be captured and filtered out by these industry standard categories, such as “Sensitive Social Issues”.

Transparency: We have partnered with third parties to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in the For You feed, against their chosen brand suitability parameters:

  • Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.

  • IAS: Advertisers can measure brand safety, viewability and invalid traffic on the platform with the IAS Signal platform (post campaign is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the GARM Framework. 

  • DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand their suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 EU countries.
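By way of illustration, a pre-campaign inventory filter of the kind described above can be thought of as a simple tier comparison. The sketch below is a hedged approximation; the tier names, risk levels and function are assumptions and do not describe TikTok's actual implementation.

```python
# Illustrative sketch of a pre-campaign inventory filter.
# Tier names, risk levels and logic are assumptions, not TikTok's
# actual implementation.
from enum import IntEnum

class ContentRisk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class InventoryTier(IntEnum):
    LIMITED = 1   # ads run only adjacent to low-risk content
    STANDARD = 2  # low- and medium-risk content
    FULL = 3      # broadest eligible inventory

def eligible_adjacency(content_risk: ContentRisk, tier: InventoryTier) -> bool:
    """An ad may run adjacent to content only if the content's risk
    classification does not exceed the advertiser's chosen tier."""
    return int(content_risk) <= int(tier)

# Example: an advertiser on the LIMITED tier avoids medium-risk content.
assert not eligible_adjacency(ContentRisk.MEDIUM, InventoryTier.LIMITED)
```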

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

When TikTok advertises, we buy advertising space only through ad networks (either directly, or through publishers or agencies) that allow for direct measurement of brand safety and suitability, via tagging, using leading brand safety tools across all digital media channels. This allows us to mitigate the risk of TikTok ads appearing next to sources of disinformation and to control the environment in which our content appears.

We use DoubleVerify to ensure our own ads run on or near suitable content, while running and monitoring brand safety and suitability metrics across other placements and regularly updating the context and content of our blocklists, so that the TikTok brand is protected in any context.

For instance, we monitor the placement of our ads very closely, especially in the context of politically sensitive events such as the war in Ukraine or the Israel / Hamas conflict. In the event of our ads appearing adjacent to or on sources of disinformation, we are able to identify and investigate the content in question and assess risks using DoubleVerify dashboards. Once identified, we adjust any filters or add the publication to our blocklist (which is regularly reviewed and updated) to prevent recurrence.
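The monitor-and-blocklist loop described above can be summarised in a few lines; the sketch below uses illustrative names only and does not reflect DoubleVerify's API or TikTok's internal tooling.

```python
# Hedged sketch of the monitor-and-blocklist workflow described above.
from dataclasses import dataclass

@dataclass
class Placement:
    publisher: str
    flagged_disinformation: bool  # e.g. surfaced via a measurement dashboard

blocklist: set[str] = set()  # regularly reviewed and updated

def review_placements(placements: list[Placement]) -> None:
    """Investigate flagged placements and block the offending sources."""
    for p in placements:
        if p.flagged_disinformation:
            blocklist.add(p.publisher)  # prevents recurrence

review_placements([Placement("example-publisher.test", True)])
assert "example-publisher.test" in blocklist
```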

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
  • first, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA;
  • second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

We have achieved the TAG Brand Safety Certified seal and the TAG Certified Against Fraud seal from the Trustworthy Accountability Group ("TAG") in the EEA and globally. This required appropriate verification by external auditors. Details of our TAG seals can be found by searching for "TikTok" on their public register, which can be found here.

We have been certified by the Interactive Advertising Bureau (“IAB”) for the IAB Ireland Gold Standard 2.1 (listed here) and IAB Sweden Gold Standard 2.0.

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

We have achieved the TAG Brand Safety Certified and TAG Certified Against Fraud seals and the IAB Ireland Gold Standard and IAB Sweden Gold Standard 2.0.

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
  • To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
  • Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
  • Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

We offer a variety of brand safety tools for preventing ads from being placed beside specific types of content. 

We continue to invest in our existing partnerships with leading third party brand safety and suitability providers (including DoubleVerify, Integral Ad Science, and Zefr). 

We evaluate, on an ongoing basis, whether there are potential new partnerships, including with researchers, that may be appropriate for our platform. Furthermore, our advertising policies help to ensure that the categories of content which are most likely to require such checks and integration of information do not make it onto the platform in the first place. 

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

We only purchase ads through ad networks which make robust and reputable brand safety tools available to us. All of our media investment is therefore protected by such tools. 

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

We have partnered with several third parties (IAS, DoubleVerify and Zefr) to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in all feeds.

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

Not applicable as TikTok does not rate sources.

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In order to improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.
  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
  • Expanded the functionality (including choice and ability) in the EEA of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter. 
  • Upgraded our IAB Sweden Gold Standard certification to version 2.0. 
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media and Dangerous Conspiracy Theories) which advertisers also need to comply with. Towards the end of 2024, we launched a fifth granular policy covering climate misinformation.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

Methodology of data measurement:

We have set out the number of ads that have been removed from our platform for violation of our political content policies, as well as our four granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories. We launched our climate misinformation policy towards the end of the reporting period and we look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

The majority of ads that violate our newly launched misinformation policies would also have been removed under our existing policies. In cases where an ad is deemed violative under other policies as well as under these additional misinformation policies, the removal is counted under the older policy. Therefore, the second column below shows only the number of ads removed where the sole reason was one of these four additional misinformation policies; it does not include ads already removed under our existing policies or where misinformation policies were not the driving factor for the removal.

The data below suggests that our existing policies (such as our political content policy) already cover the majority of harmful misinformation ads, due to their expansive coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. 

| Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies |
| --- | --- | --- |
| Austria | 746 | 3 |
| Belgium | 1152 | 1 |
| Bulgaria | 328 | 7 |
| Croatia | 3 | 0 |
| Cyprus | 128 | 0 |
| Czech Republic | 111 | 0 |
| Denmark | 409 | 0 |
| Estonia | 90 | 0 |
| Finland | 235 | 0 |
| France | 4621 | 7 |
| Germany | 6498 | 63 |
| Greece | 911 | 8 |
| Hungary | 512 | 2 |
| Ireland | 565 | 1 |
| Italy | 2781 | 8 |
| Latvia | 131 | 4 |
| Lithuania | 19 | 0 |
| Luxembourg | 86 | 0 |
| Malta | 0 | 0 |
| Netherlands | 1179 | 3 |
| Poland | 1118 | 4 |
| Portugal | 438 | 1 |
| Romania | 10698 | 2 |
| Slovakia | 145 | 4 |
| Slovenia | 52 | 0 |
| Spain | 2558 | 17 |
| Sweden | 752 | 0 |
| Iceland | 0 | 0 |
| Liechtenstein | 0 | 0 |
| Norway | 474 | 2 |
| Total EU | 36266 | 135 |
| Total EEA | 36740 | 137 |

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary. 

TikTok also operates a "recall" process whereby ads already on TikTok go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. In addition, TikTok conducts reviews on random samples of ads to ensure its processes are functioning as expected.
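As a rough illustration of the re-review triggers described above, consider the sketch below; the threshold and sampling values are hypothetical placeholders, as the report does not disclose the actual conditions.

```python
# Hedged sketch of the post-posting "recall" trigger described above.
# Threshold and sampling rate are hypothetical placeholders.
import random

IMPRESSION_RECALL_THRESHOLD = 10_000  # hypothetical value
RANDOM_SAMPLE_RATE = 0.01             # hypothetical value

def needs_re_review(impressions: int, user_reported: bool) -> bool:
    """Decide whether a live ad should go through another review stage."""
    if user_reported:
        return True  # reported ads are reviewed again
    if impressions >= IMPRESSION_RECALL_THRESHOLD:
        return True  # "recall" condition: impression threshold reached
    return random.random() < RANDOM_SAMPLE_RATE  # random-sample review
```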

We work with 14 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary.

TikTok also operates a "recall" process whereby ads already on TikTok go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. In addition, TikTok conducts reviews on random samples of ads to ensure its processes are functioning as expected.

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

In this report, we are pleased to be able to report on the ads removed for breach of our political content policy and our more granular misinformation ad policies, including the number of impressions of those ads. We launched our climate misinformation policy towards the end of the reporting period and we look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

| Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies |
| --- | --- | --- | --- | --- |
| Austria | 746 | 3 | 2,405,688 | 0 |
| Belgium | 1152 | 1 | 414,078 | 16,971 |
| Bulgaria | 328 | 7 | 21,839 | 0 |
| Croatia | 3 | 0 | 69 | 0 |
| Cyprus | 128 | 0 | 10,838 | 0 |
| Czech Republic | 111 | 0 | 187,494 | 0 |
| Denmark | 409 | 0 | 1,333,325 | 12,268 |
| Estonia | 90 | 0 | 14,889 | 0 |
| Finland | 235 | 0 | 7,543,943 | 0 |
| France | 4621 | 7 | 14,427,406 | 510 |
| Germany | 6498 | 63 | 45,161,261 | 0 |
| Greece | 911 | 8 | 512,170 | 12,873 |
| Hungary | 512 | 2 | 3,675,505 | 0 |
| Ireland | 565 | 1 | 1,341,419 | 0 |
| Italy | 2781 | 8 | 6,836,564 | 12,029 |
| Latvia | 131 | 4 | 4,551 | 0 |
| Lithuania | 19 | 0 | 59,348 | 0 |
| Luxembourg | 86 | 0 | 5,472 | 0 |
| Malta | 0 | 0 | 0 | 0 |
| Netherlands | 1179 | 3 | 879,250 | 1,048 |
| Poland | 1118 | 4 | 610,009 | 0 |
| Portugal | 438 | 1 | 409,358 | 0 |
| Romania | 10698 | 2 | 27,208,895 | 0 |
| Slovakia | 145 | 4 | 52,215 | 0 |
| Slovenia | 52 | 0 | 53,989 | 0 |
| Spain | 2558 | 17 | 9,622,981 | 8,551 |
| Sweden | 752 | 0 | 4,565,753 | 0 |
| Iceland | 0 | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 | 0 |
| Norway | 474 | 2 | 120,449 | 1,367 |
| Total EU | 36266 | 135 | 127,358,309 | 64,250 |
| Total EEA | 36740 | 137 | 127,478,758 | 65,617 |

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

We are clear with advertisers that their ads must comply with our strict ad policies (see TikTok Business Help Centre). We explain that all ads are reviewed before being allowed on our platform - usually within 24 hours. Ads already on TikTok may go through an additional stage of review if they are reported, if certain conditions are met (e.g., reaching certain impression thresholds) or because of random sampling conducted at TikTok's own initiative.

Where an advertiser has violated an ad policy, they are informed by way of a notification. This is visible in their TikTok Ads Manager account and/or sent by email (if they have provided a valid email address); where an advertiser has booked their ad through a TikTok representative, the representative will inform the advertiser of any violations. Advertisers are able to make use of functionality to appeal rejections of their ads in certain circumstances. 

As part of our overarching DSA compliance programme, we have improved how we notify advertisers and increased transparency. Notifications of restrictions include the restriction itself, the reason for the restriction, whether we made that decision by automated means, how we came to detect the violation (e.g. as a result of a user report or proactive TikTok initiatives) and what the advertiser's rights of redress are. Advertisers can access online functionality to appeal restrictions on their account or ads. These appeals are then also reviewed against our ad policies, and additional information may be provided to advertisers to help them understand the violation and what to do about it.
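The fields listed above map naturally onto a structured notification; the sketch below is illustrative only, and the field names are assumptions rather than TikTok's actual schema.

```python
# Illustrative sketch of the restriction notification fields described
# above; field names are assumptions, not TikTok's schema.
from dataclasses import dataclass

@dataclass
class RestrictionNotice:
    restriction: str          # the restriction itself, e.g. ad rejected
    reason: str               # which ad policy was violated
    automated_decision: bool  # whether the decision was made by automated means
    detection_source: str     # e.g. "user report" or "proactive review"
    redress_options: str      # the advertiser's rights of appeal
```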

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

We are pleased to be able to share the number of appeals for ads removed under our political content ad policy and our four granular misinformation ad policies, as well as the number of respective overturns. The data shows a reduced number of appeals for ads removed under the political content policy, evidencing our improved moderation and decision-making processes. We launched our climate misinformation policy towards the end of the reporting period and we look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

| Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies |
| --- | --- | --- | --- | --- |
| Austria | 746 | 3 | 2,405,688 | 0 |
| Belgium | 1152 | 1 | 414,078 | 16,971 |
| Bulgaria | 328 | 7 | 21,839 | 0 |
| Croatia | 3 | 0 | 69 | 0 |
| Cyprus | 128 | 0 | 10,838 | 0 |
| Czech Republic | 111 | 0 | 187,494 | 0 |
| Denmark | 409 | 0 | 1,333,325 | 12,268 |
| Estonia | 90 | 0 | 14,889 | 0 |
| Finland | 235 | 0 | 7,543,943 | 0 |
| France | 4621 | 7 | 14,427,406 | 510 |
| Germany | 6498 | 63 | 45,161,261 | 0 |
| Greece | 911 | 8 | 512,170 | 12,873 |
| Hungary | 512 | 2 | 3,675,505 | 0 |
| Ireland | 565 | 1 | 1,341,419 | 0 |
| Italy | 2781 | 8 | 6,836,564 | 12,029 |
| Latvia | 131 | 4 | 4,551 | 0 |
| Lithuania | 19 | 0 | 59,348 | 0 |
| Luxembourg | 86 | 0 | 5,472 | 0 |
| Malta | 0 | 0 | 0 | 0 |
| Netherlands | 1179 | 3 | 879,250 | 1,048 |
| Poland | 1118 | 4 | 610,009 | 0 |
| Portugal | 438 | 1 | 409,358 | 0 |
| Romania | 10698 | 2 | 27,208,895 | 0 |
| Slovakia | 145 | 4 | 52,215 | 0 |
| Slovenia | 52 | 0 | 53,989 | 0 |
| Spain | 2558 | 17 | 9,622,981 | 8,551 |
| Sweden | 752 | 0 | 4,565,753 | 0 |
| Iceland | 0 | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 | 0 |
| Norway | 474 | 2 | 120,449 | 1,367 |
| Total EU | 36266 | 135 | 127,358,309 | 64,250 |
| Total EEA | 36740 | 137 | 127,478,758 | 65,617 |

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage in the Task-force and all its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

As set out later on in this report, we cooperate with a number of third parties to facilitate the flow of information that may be relevant for tackling purveyors of harmful misinformation. This information is shared internally to help ensure consistency of approach across our platform.

We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chaired. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation, and we collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.

We work with 14 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

We work with industry partners, in appropriate fora, to discuss common standards and definitions that support consistency in content categorisation, adjacency and measurement. We work closely with IAB Sweden, IAB Ireland and other organisations such as TAG in the EEA and globally. We are also on the board of the Brand Safety Institute. 

We continue to share relevant insights and metrics within our quarterly transparency reports, which aim to inform industry peers and the research community. We continue to engage in the subgroups set up for insight sharing between signatories and the Commission.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

We continue to work closely with IAB Sweden, IAB Ireland and other organisations such as TAG in the EEA and globally.

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1 Measure 4.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

As we prohibit political advertising, we are continuing to focus on enforcement of this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 4.1

Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

QRE 4.1.1

Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

TikTok is first and foremost an entertainment platform, and we're proud to be a place that brings people together through creative and entertaining content. While sharing political beliefs and engaging in political conversation is allowed as organic content on TikTok, our policies prohibit our community, including politicians and political party accounts, from placing political ads or posting political branded content.

Specifically, our Politics, Culture and Religion policy prohibits ads and landing pages which: 
  • reference, promote, or oppose candidates or nominees for public office, political parties, or elected or appointed government officials;
  • reference an election, including voter registration, voter turnout, and appeals for votes;
  • include advocacy for or against past, current, or proposed referenda, ballot measures, and legislative, judicial, or regulatory outcomes or processes (including those that promote or attack government policies or track records); and
  • reference, promote, or sell merchandise that features prohibited individuals, entities, or content, including campaign slogans, symbols, or logos.
Where accounts are designated as Government, Politician, and Political Party Accounts (“GPPPA”), those accounts are banned from placing ads on TikTok, accessing monetisation features and from campaign fundraising. We may allow some cause-based advertising and public services advertising from government agencies, non-profits and other entities if they meet certain conditions and are working with a TikTok sales representative.

We prohibit political content in branded content, i.e. content which is posted in exchange for payment or any other incentive from a third party.

We have been reviewing our policies to ensure that our prohibition is at least as broad as that defined by Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising. Our prohibition on political advertising is one part of our election integrity efforts, which you can read more about in the elections crisis reports. 

QRE 4.1.2

After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.

Not applicable at this stage. 

Commitment 5

Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.

We signed up to the following measures of this commitment

Measure 5.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 5.1

Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.

QRE 5.1.1

Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.

Not applicable as TikTok does not allow political advertising, as outlined in our Politics, Culture and Religion policy. We do not allow political content in any form of advertising, and this prohibition extends both to government, politician, and political party accounts and to non-political advertisers expressing political views in advertising.

Commitment 6

Relevant Signatories commit to make political or issue ads clearly labelled and distinguishable as paid-for content in a way that allows users to understand that the content displayed contains political or issue advertising.

We signed up to the following measures of this commitment

Measure 6.1 Measure 6.2 Measure 6.3 Measure 6.4 Measure 6.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 6.1

Relevant Signatories will develop a set of common best practices and examples for marks and labels on political or issue ads and integrate those learnings as relevant to their services.

QRE 6.1.1

Relevant Signatories will publicise the best practices and examples developed as part of Measure 6.1 and describe how they relate to their relevant services.

Not applicable as TikTok does not allow political advertising.

Measure 6.2

Relevant Signatories will ensure that relevant information, such as the identity of the sponsor, is included in the label attached to the ad or is otherwise easily accessible to the user from the label.

QRE 6.2.1

Relevant Signatories will publish examples of how sponsor identities and other relevant information are attached to ads or otherwise made easily accessible to users from the label.

Not applicable as TikTok does not allow political advertising.

QRE 6.2.2

Relevant Signatories will publish their labelling designs.

Not applicable as TikTok does not allow political advertising.

Measure 6.3

Relevant Signatories will invest and participate in research to improve users' identification and comprehension of labels, discuss the findings of said research with the Task-force, and will endeavour to integrate the results of such research into their services where relevant.

QRE 6.3.1

Relevant Signatories will publish relevant research into understanding how users identify and comprehend labels on political or issue ads and report on the steps they have taken to ensure that users are consistently able to do so and to improve the labels' potential to attract users' awareness.

Not applicable as TikTok does not allow political advertising.

Measure 6.4

Relevant Signatories will ensure that once a political or issue ad is labelled as such on their platform, the label remains in place when users share that same ad on the same platform, so that they continue to be clearly identified as paid-for political or issue content.

QRE 6.4.1

Relevant Signatories will describe the steps they put in place to ensure that labels remain in place when users share ads.

Not applicable as TikTok does not allow political advertising.

Measure 6.5

Relevant Signatories that provide messaging services will, where possible and when in compliance with local law, use reasonable efforts to work towards improving the visibility of labels applied to political advertising shared over messaging services. To this end they will use reasonable efforts to develop solutions that facilitate users recognising, to the extent possible, paid-for content labelled as such on their online platform when shared over their messaging services, without any weakening of encryption and with due regard to the protection of privacy.

QRE 6.5.1

Relevant Signatories will report on any solutions in place to empower users to recognise paid-for content as outlined in Measure 6.5.

This commitment is not applicable as TikTok is not a messaging app.

Commitment 7

Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.

We signed up to the following measures of this commitment

Measure 7.1 Measure 7.2 Measure 7.3 Measure 7.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 7.1

Relevant Signatories will make sure the sponsors and providers of advertising services acting on behalf of sponsors purchasing political or issue ads have provided the relevant information regarding their identity to verify (and re-verify where appropriate) said identity or the sponsors they are acting on behalf of before allowing placement of such ads.

QRE 7.1.1

Relevant Signatories will report on the tools and processes in place to collect and verify the information outlined in Measure 7.1.1, including information on the timeliness and proportionality of said tools and processes.

Accounts designated as Government, Politician, and Political Party Accounts (“GPPPA”) are banned from placing ads on TikTok (with the exception of certain government agencies that may have a specific reason to advertise, e.g. to promote public health initiatives) and from monetisation features. We publish the details of our GPPPA policy on our website, where we set out who we consider to be a GPPPA and the restrictions that apply to those types of account. We explain in our TikTok Business Help Centre how a government agency should act on our platform and what it can advertise.

In the EU, we apply an internal label to accounts belonging to a government, politician, or political party. Once an account has been labelled in this manner, a number of policies will be applied that help prevent misuse of certain features e.g., access to advertising features and solicitation for campaign fundraising are not allowed.

Measure 7.2

Relevant Signatories will complete verifications processes described in Commitment 7 in a timely and proportionate manner.

QRE 7.2.1

Relevant Signatories will report on the actions taken against actors demonstrably evading the said tools and processes, including any relevant policy updates.

Not applicable as TikTok does not allow political advertising. 

Our Actor Policy aims to protect the integrity and authenticity of our community and prevent actors from evading our tools and processes. If an actor consistently demonstrates behaviour that deceives, misleads, or is inauthentic towards users and/or TikTok, we apply account-level enforcement. This is not exclusive to ads containing political content.

TikTok is dedicated to investigating and disrupting confirmed cases of covert influence operations (CIOs) on the platform. CIOs are organised attempts to manipulate or corrupt public debate while also misleading TikTok’s systems or users about identity, origin, operating location, popularity, or overall purpose. Suspension logic is based on strikes, taking into account ad-level violations and advertiser account behaviours. Confirmed critical policy violations lead to permanent suspension. Further information on our policy can be found in our Business Help Centre article.
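As a rough illustration of strikes-based suspension logic, the Python sketch below accumulates strikes from ad-level violations and suspends an advertiser account when a threshold is reached or a critical violation is confirmed. The thresholds, weights, and categories are invented for this sketch and do not reflect TikTok's actual enforcement configuration.

```python
# Hypothetical strike weights per violation category (illustrative only).
STRIKE_WEIGHTS = {"minor": 1, "serious": 2}
SUSPENSION_THRESHOLD = 6  # placeholder threshold, not a real value

def advertiser_status(violations: list[str]) -> str:
    """Return an account decision from a history of ad-level violations.

    A confirmed critical violation suspends immediately; otherwise strikes
    accumulate until the (hypothetical) threshold is reached.
    """
    strikes = 0
    for violation in violations:
        if violation == "critical":
            return "permanent_suspension"
        strikes += STRIKE_WEIGHTS.get(violation, 0)
    return "suspended" if strikes >= SUSPENSION_THRESHOLD else "active"

print(advertiser_status(["minor", "serious"]))               # active
print(advertiser_status(["serious", "serious", "serious"]))  # suspended
print(advertiser_status(["minor", "critical"]))              # permanent_suspension
```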

QRE 7.2.2

Relevant Signatories will provide information on the timeliness and proportionality of the verification process.

Not applicable as TikTok does not allow political advertising.

Measure 7.3

Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.

QRE 7.3.1

Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.

Not applicable as TikTok does not allow political advertising.

QRE 7.3.2

Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.

Not applicable as TikTok does not allow political advertising.

Measure 7.4

Relevant Signatories commit to request that sponsors, and providers of advertising services acting on behalf of sponsors, declare whether the advertising service they request constitutes political or issue advertising.

QRE 7.4.1

Relevant Signatories will report on research and publish data on the effectiveness of measures they take to verify the identity of political or issue ad sponsors.

Not applicable as TikTok does not allow political advertising.

Commitment 8

Relevant Signatories commit to provide transparency information to users about the political or issue ads they see on their service.

We signed up to the following measures of this commitment

Measure 8.1 Measure 8.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and continue to focus on enforcing this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising, which has entered into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 8.2

Relevant Signatories will provide a direct link from the ad to the ad repository.

QRE 8.2.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard.

Not applicable as TikTok does not allow political advertising. 

Commitment 9

Relevant Signatories commit to provide users with clear, comprehensible, comprehensive information about why they are seeing a political or issue ad.

We signed up to the following measures of this commitment

Measure 9.1 Measure 9.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and continue to focus on enforcing this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising, which has entered into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 9.2

Relevant Signatories will explain in simple, plain language, the rationale and the tools used by the sponsors and providers of advertising services acting on behalf of sponsors (for instance: demographic, geographic, contextual, interest or behaviourally-based) to determine that a political or issue ad is displayed specifically to the user.

QRE 9.2.1

Relevant Signatories will describe the tools and features in place to provide users with the information outlined in Measures 9.1 and 9.2, including relevant examples for each targeting method offered by the service.

Not applicable as TikTok does not allow political advertising.

Commitment 10

Relevant Signatories commit to maintain repositories of political or issue advertising and ensure their currentness, completeness, usability and quality, such that they contain all political and issue advertising served, along with the necessary information to comply with their legal obligations and with transparency commitments under this Code.

We signed up to the following measures of this commitment

Measure 10.1 Measure 10.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and continue to focus on enforcing this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising, which has entered into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 10.2

The information in such ad repositories will be publicly available for at least 5 years.

QRE 10.2.1

Relevant Signatories will detail the availability, features, and updating cadence of their repositories to comply with Measures 10.1 and 10.2. Relevant Signatories will also provide quantitative information on the usage of the repositories, such as monthly usage.

Not applicable as TikTok does not allow political advertising.

In compliance with our obligations pursuant to the Digital Services Act, TikTok maintains a publicly searchable Ad Library that features ads that TikTok has been paid to display to users, including those that are not currently active or have been paused by the advertisers. This includes information on the total number of recipients reached, with aggregate numbers broken down by Member State for the group or groups of recipients that the ad specifically targeted, including for political ads which have been removed. Each ad entry is available for the duration that it is shown on TikTok and for a year afterwards in compliance with the Digital Services Act. 

Article 39(3) of the Digital Services Act requires that, where an ad has been removed for incompatibility with a platform’s terms and conditions, such libraries not include the content of the ad, the identity of the person on whose behalf it was presented, or the identity of who paid for it. As political ads are prohibited on TikTok, in order to comply with its legal obligations TikTok must remove these specific details of any political ads that have been removed from its platform (as such ads breach its terms and conditions). For this reason, TikTok’s Ad Library is required to display different information in respect of political ads compared to platforms that do allow them.

Commitment 11

Relevant Signatories commit to provide application programming interfaces (APIs) or other interfaces enabling users and researchers to perform customised searches within their ad repositories of political or issue advertising and to include a set of minimum functionalities as well as a set of minimum search criteria for the application of APIs or other interfaces.

We signed up to the following measures of this commitment

Measure 11.1 Measure 11.2 Measure 11.3 Measure 11.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Commitment 13

Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.

We signed up to the following measures of this commitment

Measure 13.1 Measure 13.2 Measure 13.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and continue to focus on enforcing this policy in light of Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising, which has entered into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 13.1

Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.

QRE 13.1.1

Through the Task-force, the Relevant Signatories will convene, at least annually, an appropriately resourced discussion around novel risks in political advertising to develop coordinated policy.

Whilst we do not allow political advertising, we remain engaged in discussions held through the Task-force and other fora to ensure our policies and processes remain current and that emerging threats are addressed through our policies and enforcement.

Measure 13.2

TikTok does not allow political advertising, and this prohibition continues to apply during election blackout periods.

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our new AI-generated label for creators to disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
    • Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC.
    • Supporting the coalition’s working groups as a C2PA General Member.
    • Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
    • Publishing a new Transparency Center article Supporting responsible, transparent AI-generated content.
    • Launching a number of media literacy campaigns, building on our new AI-generated content label for creators and our implementation of C2PA Content Credentials, with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. This AIGC Transparency Campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M in Germany and 9.5M in France.
  • Continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • Continued to participate in the working groups on integrity of services and Generative AI.
  • We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s transparency centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report.

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

In addition to safeguarding against harmful misinformation (see QRE 18.2.1), our I&A policies in our CGs also expressly prohibit deceptive behaviours. These policies relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our I&A policies which address Spam and Deceptive Account Behaviours expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.

We also do not allow impersonation including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers

Our I&A policies which address fake engagement do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics), is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.

Account hijacking or impersonation
Again, our policies prohibit impersonation, which refers to accounts that pose as another real person or entity, or present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner.

In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and is not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts in our For You feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our CGs expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our CGs. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

TTPs in this group include: deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation); inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers); the use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and coordinated mass reporting of non-violative opposing content or accounts.

We fight against CIOs, as our policies prohibit attempts to sway public opinion while also misleading our systems or users about an account's identity, origin, approximate location, popularity or overall purpose.

When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform; that is why we take continuous action against these attempts, including banning accounts found to be linked to previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop product and policy solutions as appropriate over the long term. We voluntarily publish all of the CIO networks we identify and remove in a dedicated report within our transparency centre here.

Use “hack and leak” operation (which may or may not include doctored content) 

We have a number of policies that address hack-and-leak related threats, for example:
  • Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities and organisations that may be implicated or exposed by such disclosures;
  • Our CIO policy, which addresses the use of leaked documents to sway public opinion as part of a wider operation;
  • Our Edited Media and AI-Generated Content (AIGC) policy, which captures materials that have been digitally altered without an appropriate disclosure;
  • Our harmful misinformation policies, which combat conspiracy theories related to unfolding events and dangerous misinformation; and
  • Our Trade of Regulated Goods and Services policy, which prohibits the trading of hacked goods.

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)  

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes or use a particular artistic style, such as a painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC that features:
  • Realistic-appearing people under the age of 18
  • The likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle, which we make available to users. We also provide functionality that enables users to report suspected undisclosed branded content; this reminds the user who posted the content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 CG refresh, by expanding the information around our policing of this policy and providing specific examples.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our platform before they are reported to us by users or third parties.

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To this end, we have created and use detection models and rule engines (a simplified sketch follows below) that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (i.e., uncommon behaviour on the platform).

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters or similar inauthentic behaviours.
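To illustrate the kind of logic a rule engine of this sort applies, the following Python sketch flags accounts based on simple registration and behaviour signals. The signal names, thresholds, and rules are invented for illustration only; they are assumptions of this sketch, not a description of TikTok's actual detection models.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical signals; real systems draw on far richer feature sets.
    registrations_from_ip_last_hour: int
    follows_per_minute: float
    profile_completed: bool

def evaluate(signals: AccountSignals) -> str:
    """Apply simple ordered rules and return an enforcement decision.

    Illustrative only: the thresholds below are placeholders.
    """
    # Rule 1: block registration bursts that suggest automated sign-ups.
    if signals.registrations_from_ip_last_hour > 50:
        return "block_registration"
    # Rule 2: remove accounts showing uncommon behaviour (inhuman follow rates).
    if signals.follows_per_minute > 20:
        return "remove_account"
    # Rule 3: route borderline cases (e.g. empty profiles) to human review.
    if not signals.profile_completed and signals.follows_per_minute > 5:
        return "manual_review"
    return "allow"

print(evaluate(AccountSignals(120, 0.1, True)))  # block_registration
print(evaluate(AccountSignals(1, 30.0, True)))   # remove_account
print(evaluate(AccountSignals(1, 6.0, False)))   # manual_review
```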

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the I&A policies in our CGs, including case banks of harmful misinformation claims to support their moderation work, and allow them to route new or evolving content to our fact-checking partners for assessment.

In addition, where content reaches certain levels of popularity in terms of the number of video views, it will be flagged for further review. Such review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our CGs including our I&A policies.

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, resulting in an increase in the number of videos removed for policy violations. The number of views per removed video also decreased over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We have also set up specially trained teams that are focused on investigating and detecting CIOs on our platform. We have built international trust and safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would be violative of platforms' guidelines by itself. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision making, beliefs and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
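As a purely illustrative companion to the linkage criteria above, the following Python sketch groups accounts that share a technical fingerprint (here, a single device identifier) and the same narrative, and surfaces clusters above a size threshold for human investigation. The record fields and the threshold are assumptions of this sketch; real CIO investigations combine many behavioural and technical signals with expert analysis.

```python
from collections import defaultdict

# Hypothetical account records: (account_id, device_fingerprint, narrative_tag).
accounts = [
    ("a1", "dev-42", "topic-x"),
    ("a2", "dev-42", "topic-x"),
    ("a3", "dev-42", "topic-x"),
    ("a4", "dev-99", "topic-y"),
]

def coordinated_clusters(records, min_size=3):
    """Group accounts by shared device fingerprint and return clusters that
    are both large enough and narratively aligned enough to warrant review."""
    by_device = defaultdict(list)
    for account_id, device, narrative in records:
        by_device[device].append((account_id, narrative))
    clusters = []
    for device, members in by_device.items():
        narratives = {narrative for _, narrative in members}
        # Shared infrastructure plus a shared narrative is a linkage signal,
        # not proof; flagged clusters go to human investigators.
        if len(members) >= min_size and len(narratives) == 1:
            clusters.append((device, [account_id for account_id, _ in members]))
    return clusters

print(coordinated_clusters(accounts))  # [('dev-42', ['a1', 'a2', 'a3'])]
```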

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

The implementation of our policies is ensured by different means, including specifically-designed tools (such as toggles to disclose branded content - see QRE 14.1.1) or human investigations to detect deceptive behaviours (for CIO activities - see QRE 14.1.2).

The implementation of these policies is also ensured through enforcement measures applied in all Member States. 

CIO investigations are resource intensive and require in-depth analysis to ensure high confidence in proposed actions. Where our teams have the necessary high degree of confidence that an account is engaged in CIO or is connected to networks we took down in the past as part of a CIO, it is removed from our Platform.

Similarly, where our teams have a high degree of confidence that specific content violates one of our TTP-related policies (see QRE 14.1.1), such content is removed from TikTok.

Lastly, we may reduce the discoverability of some content, including by making videos ineligible for recommendation in the For You feed section of our platform. This is, for example, the case for content that tricks or manipulates users in order to inauthentically increase followers, likes, or views.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

TTP No. 1: Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts) 


Methodology of data measurement:

We have based (i) the number of fake accounts removed and (ii) the number of followers of those fake accounts (identified at the time of removal) on the country in which the fake account was last active.

We have updated our methodology to report the ratio of the monthly average of fake accounts over monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content in relation to overall content on the service.
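For illustration, the ratio described above can be computed as in the short Python sketch below; the input figures are invented placeholders, not reported values.

```python
# Hypothetical inputs for one market (placeholders, not reported figures).
fake_accounts_removed_in_period = 600_000   # removals over a 6-month period
months_in_period = 6
monthly_active_users = 100_000_000          # latest published MAU figure

monthly_average_fake_accounts = fake_accounts_removed_in_period / months_in_period
ratio = monthly_average_fake_accounts / monthly_active_users
print(f"{ratio:.4f}")  # 0.0010, i.e. roughly 0.1% of monthly active users
```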

TTP No. 2: Use of fake / inauthentic reactions (e.g. likes, up votes, comments)


Methodology of data measurement:

We based the number of fake likes that we removed on the country of registration of the user. We also based the number of fake likes prevented on the country of registration of the user.

TTP No. 3: Use of fake followers or subscribers  


Methodology of data measurement:

We based the number of fake followers that we removed on the country of registration of the user. We also based the number of fake followers prevented on the country of registration of the user.

TTP No. 4: Creation of inauthentic pages, groups, chat groups, fora, or domains


TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.

TTP No. 5:  Account hijacking or impersonation  

Methodology of data measurement:

The number of accounts removed under our impersonation policy is based on the approximate location of the users. We have updated our methodology to report the ratio of the monthly average of impersonation accounts banned over monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content in relation to overall content on the service.

TTP No. 6: Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation)


Methodology of data measurement:

The number of new CIO network discoveries found to be targeting EU markets relates to our public disclosures for the period July 1st 2024 to December 31st 2024. We have categorised disrupted CIO networks by the country we assess the network to have targeted. We have included any network which we assess to have targeted one or more European markets, or to have operated from an EU market. We publish all of the CIO networks we identify and remove within our transparency reports here.

CIO networks identified and removed are detailed below, including the assessed geographic location of network operation and the assessed target audience of the network, which we assess via technical and behavioural evidence from proprietary and open sources. The number of followers of CIO networks has been based on the number of accounts that followed any account within a network as of the date of that network’s removal.

Note: TTP No. 6 data cannot be shown on this page due to limitations with the website. We provide a full list of CIOs disrupted originating in Member States in our full report, which can be downloaded from this website.

TTP No. 7: Deploy deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)

We have based the number of videos removed for violations of the Edited Media and AI-Generated Content (AIGC) policy on the country in which the video was posted. The number of views of videos removed for violations of this policy is based on the approximate location of the user.

TTP No. 8: Use “hack and leak” operation (which may or may not include doctored content)

We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6. We have also provided data on violations of our Edited Media and AI-Generated Content (AIGC) policy under TTP No. 7. Our hack and leak policy launched relatively recently, in H1 2024, and we do not yet have meaningful metrics under this policy to report for H2 2024.

TTP No. 9: Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers)


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

TTP No. 10: Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

TTP No. 11: Non-transparent compensated messages or promotions by influencers


Methodology of data measurement:
We are unable to provide this metric due to insufficient data available for the reporting period. 

TTP No. 12: Coordinated mass reporting of non-violative opposing content or accounts


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

Country | TTP 1: Fake accounts removed | TTP 2: Fake likes removed | TTP 3: Fake followers removed | TTP 5: Accounts banned under impersonation policy | TTP 7: Videos removed for violating Edited Media & AIGC policy
Austria | 92511 | 12262551 | 9980544 | 177 | 110859
Belgium | 176327 | 16913076 | 11916866 | 300 | 166222
Bulgaria | 423060 | 6468521 | 4561129 | 175 | 75036
Croatia | 74704 | 1821268 | 1965426 | 77 | 27536
Cyprus | 86741 | 4176517 | 1706405 | 54 | 59263
Czech Republic | 194925 | 3052689 | 4342681 | 134 | 51417
Denmark | 155675 | 4183605 | 3154022 | 115 | 49328
Estonia | 111506 | 687649 | 482641 | 29 | 19687
Finland | 99745 | 3086208 | 3204999 | 92 | 60083
France | 2061174 | 78227394 | 109481878 | 2587 | 1399713
Germany | 1678822 | 131158324 | 125941360 | 2277 | 1380835
Greece | 133443 | 14621872 | 7880295 | 215 | 206528
Hungary | 84057 | 1821268 | 2589692 | 141 | 63319
Ireland | 321237 | 4520433 | 3213842 | 235 | 32936
Italy | 672344 | 60514367 | 35511559 | 805 | 746928
Latvia | 60145 | 1690473 | 732030 | 48 | 99265
Lithuania | 79417 | 1682687 | 2057659 | 76 | 42778
Luxembourg | 73258 | 1920605 | 1574849 | 43 | 40901
Malta | 60192 | 1395676 | 401869 | 0 | 12100
Netherlands | 886619 | 23557961 | 17070055 | 567 | 202203
Poland | 360959 | 8833014 | 10128172 | 1251 | 203835
Portugal | 190906 | 9239486 | 3714261 | 206 | 151389
Romania | 294195 | 11254476 | 14021343 | 1300 | 287851
Slovakia | 131567 | 1208123 | 4288570 | 63 | 21883
Slovenia | 298807 | 727133 | 678185 | 43 | 10131
Spain | 709560 | 38331442 | 31084803 | 709 | 676935
Sweden | 239020 | 15782957 | 12342226 | 284 | 163490
Iceland | 31476 | 230931 | 120003 | 15 | 3353
Liechtenstein | 1369 | 24827 | 893407 | 0 | 357
Norway | 92800 | 5457966 | 3756414 | 178 | 59556
All EU | 9750916 | 459139775 | 424027361 | 12003 | 6362451
All EEA | 9876561 | 464982867 | 428797185 | 12196 | 6425717

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

Please see SLI 14.2.1 for definitions of individual TTPs

Country | TTP 1: Followers of fake accounts identified at time of removal | TTP 2: Fake likes prevented | TTP 3: Fake followers prevented | TTP 7: Views of videos removed under Edited Media & AIGC policy
Austria | 467635 | 39213306 | 25000123 | 216433
Belgium | 544073 | 56682105 | 34550567 | 1119223
Bulgaria | 188995 | 40004761 | 26400841 | 5977
Croatia | 175230 | 17901159 | 18990456 | 58579
Cyprus | 124021 | 6960047 | 18497473 | 19441
Czech Republic | 348626 | 31099711 | 18233387 | 8287531
Denmark | 298306 | 17585666 | 23806634 | 2742457
Estonia | 239039 | 7385026 | 16887949 | 2063380
Finland | 195684 | 19264460 | 20303735 | 464824
France | 20207105 | 336499329 | 127136908 | 312078908
Germany | 20545728 | 357582219 | 138933948 | 23904234
Greece | 1702918 | 84211417 | 38712931 | 145950
Hungary | 184291 | 28069699 | 24773097 | 86870
Ireland | 697840 | 31110363 | 25239860 | 103199
Italy | 5900534 | 606697045 | 158916638 | 1892355
Latvia | 124765 | 11600082 | 17952175 | 4519
Lithuania | 300241 | 11795998 | 18928046 | 25410
Luxembourg | 611602 | 7987636 | 21051498 | 8729
Malta | 226073 | 3466698 | 15758979 | 5811847
Netherlands | 1575641 | 101316771 | 35162609 | 9080526
Poland | 3192516 | 208518568 | 54501610 | 13404186
Portugal | 370719 | 56146620 | 26901973 | 339124
Romania | 4045608 | 83405388 | 44172801 | 623525
Slovakia | 1347301 | 18154505 | 21010637 | 2014
Slovenia | 45359 | 5843233 | 1942793 | 605
Spain | 5351682 | 161280031 | 73920335 | 21882268
Sweden | 528326 | 48240073 | 36451604 | 377862
Iceland | 253997 | 1564206 | 2572695 | 6113
Liechtenstein | 11129 | 70045 | 1045728 | 525
Norway | 151088 | 20708187 | 7242021 | 139984
All EU | 69539858 | 2398021916 | 1084139607 | 404749976
All EEA | 69956072 | 2420364354 | 1095000051 | 404896598

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

Please see SLI 14.2.1 for definitions of individual TTPs

Country | Number of unique videos labelled with the AIGC tag "Creator labelled as AI-generated"
Austria | 110859
Belgium | 166222
Bulgaria | 75036
Croatia | 27536
Cyprus | 59263
Czech Republic | 51417
Denmark | 49328
Estonia | 19687
Finland | 60083
France | 1399713
Germany | 1380835
Greece | 206528
Hungary | 63319
Ireland | 32936
Italy | 746928
Latvia | 99265
Lithuania | 42778
Luxembourg | 40901
Malta | 12100
Netherlands | 202203
Poland | 203835
Portugal | 151389
Romania | 287851
Slovakia | 21883
Slovenia | 10131
Spain | 676935
Sweden | 163490
Iceland | 3353
Liechtenstein | 357
Norway | 59556
Total EU | 6362451
Total EEA | 6425717

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

Please see SLI 14.2.1 for definitions of individual TTPs

Country | TTP 1: Ratio of monthly average fake accounts to monthly active users | TTP 5: Impersonation accounts as a % of monthly active users | TTP 7: Number of unique videos labelled with the AIGC tag "AI-generated"
Austria | — | — | 38531
Belgium | — | — | 75316
Bulgaria | — | — | 78668
Croatia | — | — | 18595
Cyprus | — | — | 3165
Czech Republic | — | — | 89409
Denmark | — | — | 30694
Estonia | — | — | 11220
Finland | — | — | 49106
France | — | — | 432739
Germany | — | — | 502916
Greece | — | — | 7936
Hungary | — | — | 74704
Ireland | — | — | 34736
Italy | — | — | 393642
Latvia | — | — | 18852
Lithuania | — | — | 21581
Luxembourg | — | — | 3319
Malta | — | — | 3444
Netherlands | — | — | 29448
Poland | — | — | 316048
Portugal | — | — | 64975
Romania | — | — | 37467
Slovakia | — | — | 28439
Slovenia | — | — | 6969
Spain | — | — | 493675
Sweden | — | — | 105253
Iceland | — | — | 4720
Liechtenstein | — | — | 61
Norway | — | — | 42172
Total EU | 0.001 | 0.000013 | 2970847
Total EEA | — | — | 3017800

Note: TTP 1 and TTP 5 ratios are reported at the aggregate EU level only; per-country figures relate to TTP 7.

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

We collaborated as part of the Integrity of Services working group to set up the first list of TTPs.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our new AI-generated label for creators to disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
    • Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC.
    • Supporting the coalition’s working groups as a C2PA General Member.
    • Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
    • Publishing a new Transparency Center article Supporting responsible, transparent AI-generated content.
    • Launching a number of media literacy campaigns, building on our new AI-generated content label for creators and our implementation of C2PA Content Credentials, with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. This AIGC Transparency Campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M in Germany and 9.5M in France.
  • Continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report. 

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Our Edited Media and AI-Generated Content (AIGC) policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

While we welcome the creativity that new AI may unlock, in line with our updated policy, users must proactively disclose when their content is AI-generated or manipulated but shows realistic scenes (i.e. fake people, places or events that look like they are real). We launched an AI toggle in September 2023, which allows users to self-disclose AI-generated content when posting. When this has been turned on, a tag “Creator labelled as AI-generated” is displayed to users. Alternatively, this can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’. 

We also automatically label content made with TikTok effects if they use AI. TikTok may automatically apply the “AI-generated” label to content we identify as completely generated or significantly edited with AI. This may happen when a creator uses TikTok AI effects or uploads AI-generated content that has Content Credentials attached, a technology from the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials attach metadata to content that we can use to recognise and label AIGC instantly. Once content is labelled as AI-generated with an auto label, users are unable to remove the label from the post.
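To illustrate how Content Credentials metadata can drive automatic labelling, the following Python sketch inspects an already-parsed C2PA manifest (represented here as a plain dict) for a generative-AI provenance signal. The `digitalSourceType` value follows the IPTC/C2PA convention for AI-generated media, but the manifest handling shown is an assumption of this sketch, not a description of TikTok's implementation.

```python
# IPTC digital source type commonly used in C2PA manifests to mark content
# produced by generative AI (trained algorithmic media).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def should_auto_label(manifest: dict) -> bool:
    """Return True if a parsed C2PA manifest declares the asset AI-generated.

    Sketch only: assumes the manifest has already been read and its
    signature verified by a C2PA library, and walks its actions assertions.
    """
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Example: a manifest whose creation action declares trained algorithmic media.
manifest = {
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{
            "action": "c2pa.created",
            "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
        }]},
    }]
}
print(should_auto_label(manifest))  # True -> apply the "AI-generated" label
```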

We do not allow: 

  • AIGC that shows realistic-appearing people under the age of 18
  • AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:
  • We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
  • We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.

We are also proud to be a launch partner of the Partnership on AI's Responsible Practices for Synthetic Media.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Actively engaged with the Crisis Response working group, sharing insights and learnings about relevant areas including CIOs. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?


We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report. 

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Central to our strategy for identifying and removing CIOs on our platform is input from a range of external sources, from civil society organisations to user reports. This approach helps us, and others, disrupt a network's operations in its early stages. In addition to continuously enhancing our in-house capabilities, we proactively engage in comprehensive reviews of our peers' publicly disclosed findings and swiftly implement necessary actions in alignment with our policies.

To provide more regular and detailed updates about the CIOs we disrupt, we have introduced a dedicated Transparency Report on covert influence operations, which is available in TikTok’s transparency centre. In this report, we have also added new information about operations that we have previously removed and that have attempted to return to our platform with new accounts. The insights and metrics in this report aim to inform industry peers and the research community.

We share relevant insights and metrics within our quarterly transparency reports, which aim to inform industry peers and the research community. We also review relevant insights and metrics from other industry peers to cross-compare for any similar behaviour on TikTok.

We continue to engage in the sub groups set up for insights sharing between signatories and the Commission. 

As we have detailed in other chapters to this report, we have robust monetisation integrity policies in place and have established joint operating procedures between specialist CIO investigations teams and monetisation integrity teams to work on joint investigations of CIOs involving monetised products.

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We publish all of the CIO networks we identify and remove within our transparency reports here. As new deceptive behaviours emerge, we’ll continue to evolve our response, strengthen enforcement capabilities, and publish our findings.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
    • France:  Agence France-Presse (AFP)
    • Portugal: Polígrafo
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!
  • This brings the number of  general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).
  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Expanded our fact-checking coverage to a number of wider-European and EU candidate countries:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia.
    • Kazakhstan: Reuters
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • We ran 14 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
    • 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
      • Austria: Deutsche Presse-Agentur (dpa)
      • Croatia: Faktograf
      • France: Agence France-Presse (AFP)
      • Germany (regional elections): Deutsche Presse-Agentur (dpa)
      • Germany (federal election): Deutsche Presse-Agentur (dpa)
      • Ireland: The Journal
      • Lithuania: N/A
      • Romania: Funky Citizens.
    • 1 in EEA
      • Iceland: N/A
    • 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
      • Bosnia: N/A
      • Bulgaria: N/A
      • Czechia: N/A
      • Georgia: Fact Check Georgia
      • Moldova: StopFals!
  • During the reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 
    • France: Agence France-Presse (AFP)
    • Germany: German Press Agency (dpa)
    • Austria: German Press Agency (dpa)
    • Lithuania: Logically Facts
    • Romania: Funky Citizens
    • Ireland: Logically Facts
    • Croatia: Faktograf
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!
  • Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
    • Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia) 
    • Portugal Wildfires 
    • Spanish floods
    • Mayotte Cyclone
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages (plus Norwegian and Icelandic for EEA users), around elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
  • Actively participated in the UN COP29 climate change summit by:
    • Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
    • Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
    • Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
    • Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate related content between 29th October and 25th November, which were viewed 400k times.
  • Our partnership with Verified for Climate, a joint initiative of the UN and social impact agency Purpose, continued to be our flagship climate initiative; it saw a network of 35 Verified Champions across Brazil, the United Arab Emirates, and Spain work with select TikTok creators to develop educational content tackling climate misinformation and disinformation, and drive climate action within the TikTok community.
  • Partnered with the World Health Organisation (WHO), including a US$3 million donation, to support mental well-being awareness and literacy by creating reliable content and combating misinformation through the Fides network, a diverse community of trusted healthcare professionals and content creators in the United Kingdom, United States, France, Japan, Korea, Indonesia, Mexico, and Brazil.
  • Building on these efforts, we also launched the UK Clinician Creator Network, an initiative bringing together 19 leading NHS-qualified clinicians who are actively sharing their medical expertise on TikTok, engaging a community of over 2.2 million followers.
  • Strengthened our approach to state-affiliated media by:
    • Working with external third-party experts to shape our state-affiliated media policy and our assessment of state-controlled media labels, and continuing to expand the label's use.
    • Continued investment in our detection capabilities for state-affiliated media (SAM) accounts, with a focus on automation and scaled detection. 
  • Building on our AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, in countries including Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content.
    • Our AIGC Transparency Campaign informed by WITNESS has reached 80M users globally, including more than 8.5M and 9.5M in Germany and France respectively.
  • Brought greater transparency about our systems and our integrity and authenticity efforts to our community by sharing regular insights and updates.  In H2 2024, we continued to expand our Transparency Center with resources like our first-ever US Elections Integrity Hub, European Elections Integrity Hub, dedicated Covert Influence Operations Reports, and a new Transparency Center blog.
  • Continued our partnership with Amadeu Antonio Stiftung in Germany on the Demo:create project, an educational initiative supporting young TikTok users to effectively deal with online hate speech, disinformation and misinformation.
  • Continued to invest in training and development for our human moderation teams. 
  • TikTok continues to co-chair the working group on Elections.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

In addition to systematically removing content that violates our I&A policies, we continue to dedicate significant resources to: expanding our in-app measures that show users additional context on certain content; redirecting them to authoritative information; and making these tools available in 23 EU official languages (plus, for EEA users, Norwegian & Icelandic).

We work with external experts to combat harmful misinformation. For example, we work with the World Health Organisation (WHO) on medical information and with our global fact-checking partners, taking their feedback, as well as user feedback, into account to continually identify new topics and consider which tools may be best suited to raising awareness around each topic.

We deploy a combination of in-app user intervention tools on topical issues such as elections, the Israel-Hamas Conflict, Holocaust Education, Mpox and the War in Ukraine.

Video notice tags. A video notice tag is an information bar at the bottom of a video which is automatically applied to videos containing a specific word or hashtag (or set of hashtags). The information bar is clickable and invites users to “Learn more about [the topic]”. Users are directed to an in-app guide or a reliable third-party resource, as appropriate.

Search intervention. If users search for terms associated with a topic, they will be presented with a banner encouraging them to verify the facts and providing a link to a trusted source of information. Search interventions are not deployed for search terms that violate our Community Guidelines, which are actioned according to our policies. 

  • For example, the four new ongoing general media literacy and critical thinking skills campaigns rolled out in France, Georgia, Moldova, and Portugal, are all supported with search guides to direct users to authoritative sources. 
  • Our COP29 global search intervention, which ran from 29th October to 25th November, pointed users to authoritative climate related content, and was viewed 400k times.

Public service announcement (PSA). If users search for a hashtag on the topic, they will be served with a public service announcement reminding them about our Community Guidelines and presenting them with links to a trusted source of information. 

Unverified content label. In addition to the tools mentioned above, we apply warning labels to content related to an emergency or unfolding event that has been assessed by our fact-checking partners but cannot be verified as accurate (i.e., ‘unverified content’), and we prompt people to reconsider sharing such content. This encourages users to consider the reliability of the content. Details of these warning labels are included in our Community Guidelines.

Where users continue to post despite the warning:
  • To limit the spread of potentially misleading information, the video will become ineligible for recommendation in the For You feed.
  • The video's creator is also notified that their video was flagged as unsubstantiated content and is provided additional information about why the warning label has been added to their content. Again, this is to raise the creator’s awareness about the credibility of the content that they have shared. 

State-controlled media label. Our state-affiliated media policy is to label accounts run by entities whose editorial output or decision-making process is subject to control or influence by a government. We apply a prominent label to all content and accounts from state-controlled media. The user is also shown a screen pop-up providing information about what the label means, inviting them to “learn more”, and redirecting them to an in-app page. The measure brings transparency to our community, raises users’ awareness, and encourages users to consider the reliability of the source.  We continue to work with experts to inform our approach and explore how we can continue to expand its use. 

In the EU, Iceland and Liechtenstein, we have also taken steps to restrict access to content from the entities sanctioned by the EU in 2024:  
  • RT - Russia Today UK
  • RT - Russia Today Germany
  • RT - Russia Today France
  • RT- Russia Today Spanish
  • Sputnik
  • Rossiya RTR / RTR Planeta
  • Rossiya 24 / Russia 24
  • TV Centre International
  • NTV/NTV Mir
  • Rossiya 1
  • REN TV
  • Pervyi Kanal / Channel 1
  • RT Arabic
  • Sputnik Arabic
  • RT Balkan
  • Oriental Review
  • Tsargrad
  • New Eastern Outlook
  • Katehon
  • Voice of Europe
  • RIA Novosti
  • Izvestija
  • Rossiiskaja Gazeta

AI-generated content labels. As more creators take advantage of Artificial Intelligence (AI) to enhance their creativity, we want to support transparent and responsible content creation practices. In 2023, TikTok launched an AI-generated content label for creators to disclose content that is completely AI-generated or significantly edited by AI. The launch of this new tool to help creators label their AI-generated content was accompanied by a creator education campaign, a Help Center page, and a Newsroom post. In May 2024, we started using the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enable our systems to instantly recognise and automatically label AIGC. In the interests of transparency, we also renamed TikTok AI effects to explicitly include "AI" in their name and corresponding effects label, and updated our guidelines for Effect House creators to do the same.
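
To make the layered labelling logic concrete, the following minimal Python sketch shows a decision of the kind described above: content is labelled as AI-generated either because the creator disclosed it or because embedded C2PA Content Credentials mark it as such. The field names and the toy in-memory representation are hypothetical illustrations, not TikTok's actual data model or implementation.

    # Illustrative sketch only: hypothetical field names, not TikTok's data model.
    def should_apply_aigc_label(video: dict) -> bool:
        """Label content as AI-generated if the creator disclosed AI use,
        or if embedded C2PA Content Credentials mark it as AI-generated."""
        if video.get("creator_disclosed_aigc"):
            return True  # creator used the self-disclosure label
        manifest = video.get("c2pa_manifest") or {}
        return bool(manifest.get("ai_generated"))  # automatic labelling path

    # Either the creator's disclosure or C2PA metadata triggers the label.
    print(should_apply_aigc_label({"creator_disclosed_aigc": True}))           # True
    print(should_apply_aigc_label({"c2pa_manifest": {"ai_generated": True}}))  # True
    print(should_apply_aigc_label({}))                                         # False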

Dedicated online and in-app information resources. The above mentioned tools provide links to users to accurate and up-to-date information from trusted sources. Depending on the topic, or the relevant EU country, users may be directed to an external authoritative source (e.g., a national government website or an independent national electoral commission), an in-app information centre (e.g., War in Ukraine), or a dedicated page in the TikTok Safety Center or Transparency Center. 

We use our Safety Center to inform our community about our approach to safety, privacy, and security on our platform, including dedicated information relevant to combating harmful misinformation.

Users can learn more about our transparency efforts in our dedicated Transparency Center, available in a number of EU languages, which houses our transparency reports, including the standalone Covert Influence Operations report and the reports we have published under this Code, as well as information on our commitments to maintaining platform integrity (e.g., Protecting the integrity of elections, Combating misinformation, Countering influence operations, Supporting responsible, transparent AI-generated content) and details of Government Removal Requests.

We also use Newsroom posts to keep our community informed about our most recent updates and efforts across News, Product, Community and Safety. Users can select their country, including within the EU, for their preferred language where available, and regionally relevant posts. For example, upon publication of our fourth Code report in September 2024, we provided users with an overview of our continued commitment to Combating Disinformation under the EU Code of Practice. We also updated users about how we are partnering with our industry to advance AI transparency and literacy, and how we protected the integrity of the platform during the Romanian presidential elections.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

Methodology of data measurement:

The number of impressions, clicks and click through rates of video notice tags, search interventions and public service announcements are based on the approximate location of the users that engaged with the tools. The number of impressions of the Safety Center pages is based on the IP location of the users. 
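
For clarity, every click-through rate (CTR) in the table below is simply clicks divided by impressions, expressed as a decimal fraction. A minimal Python sketch (the function name is a hypothetical illustration):

    # Illustrative sketch only: CTR as reported in the table below.
    def click_through_rate(impressions: int, clicks: int) -> float:
        """Click-through rate as a decimal fraction (clicks / impressions)."""
        return clicks / impressions if impressions else 0.0

    # Example using the Austria state-affiliated media (SAM) label row below:
    print(click_through_rate(3_705_075, 5_771))  # ≈ 0.001557593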

Columns: (1) Country; (2) Impressions of state-affiliated media (SAM) label; (3) Clicks on SAM label; (4) Click-through rate (CTR) of SAM label; (5) Impressions of video intervention (Holocaust misinformation/denial); (6) Impressions of video intervention (mpox); (7) Impressions of video intervention (elections); (8) CTR of video intervention (Holocaust misinformation/denial); (9) CTR of video intervention (mpox); (10) CTR of video intervention (elections); (11) Impressions of search interventions (Holocaust misinformation/denial); (12) Impressions of search interventions (mpox); (13) Impressions of search interventions (elections); (14) Impressions of search interventions (climate change); (15) Clicks on search interventions (Holocaust misinformation/denial); (16) Clicks on search interventions (mpox); (17) Clicks on search interventions (elections); (18) Clicks on search interventions (climate change); (19) CTR of search interventions (Holocaust misinformation/denial); (20) CTR of search interventions (mpox); (21) CTR of search interventions (elections); (22) CTR of search interventions (climate change); (23) Impressions of public service announcements (Holocaust misinformation/denial); (24) Impressions of public service announcements (mpox). Each row below lists these values in order.
Austria 3705075 5771 0.001557593 3987721 6065332 78371511 0.00277878 0.005051826 0.001992012 156298 467253 708656 228390 6187 2111 3263 196 0.03958464 0.004517895 0.004604491 0.000858181 16 26
Belgium 3789615 6994 0.00184557 3501679 15383226 0 0.003052536 0.004787292 0 200324 669059 0 207808 10626 2501 0 159 0.053044069 0.003738086 0 0.000765129 41 26
Bulgaria 7727480 9390 0.001215144 697482 5869376 29664185 0.004028778 0.00679033 0.001686141 62548 446240 121672 140769 1701 2724 367 183 0.027195114 0.006104338 0.003016306 0.001300002 17 39
Croatia 1656809 2786 0.001681546 658928 5206361 24640666 0.003373661 0.007821202 0.001692933 100586 561621 546661 159478 2631 3321 1767 128 0.026156722 0.00591324 0.003232351 0.000802619 6 13
Cyprus 752546 1166 0.001549407 311308 1309314 0 0.003819369 0.006010017 0 24629 77553 0 19126 445 452 0 20 0.018068131 0.005828272 0 0.001045697 4 4
Czech Republic 7602192 7762 0.001021021 2888696 6134172 329546 0.004759587 0.009942499 0.001007447 88635 587165 17994 163222 2476 4180 56 172 0.027934789 0.007118953 0.003112148 0.00105378 99 76
Denmark 2074577 4350 0.002096813 1690719 4604268 0 0.004065134 0.008214118 0 59881 435391 0 148528 1572 2571 0 134 0.026252067 0.005905037 0 0.000902187 13 17
Estonia 1391192 2402 0.001726577 406818 2279691 0 0.003987041 0.00802872 0 14970 147804 0 29383 476 926 0 44 0.031796927 0.006265054 0 0.001497465 60 12
Finland 3310339 9274 0.002801526 3314306 8904456 0 0.003627305 0.007459524 0 118664 648096 0 238286 2024 4591 0 213 0.017056563 0.007083827 0 0.000893884 27 30
France 32521568 28995 0.000891562 2293975 123453307 1301158781 0.004792554 0.003822433 0.001599036 1592000 3031084 15712577 652102 111776 5826 7306 446 0.070211055 0.001922085 0.000464978 0.000683942 562 473
Germany 37522365 49125 0.001309219 40208515 51643857 209773848 0.002756879 0.004731521 0.001247319 1383744 3901089 7265486 1761399 66652 15278 13805 1488 0.048167869 0.003916342 0.001900079 0.000844783 344 385
Greece 3107902 6491 0.002088547 2559183 9476029 0 0.003910232 0.006915661 0 492728 1145046 0 250015 2946 6796 0 361 0.005978958 0.005935133 0 0.001443913 14 28
Hungary 41012350 27450 0.000669311 4785260 5483667 0 0.003591236 0.007310437 0 118829 495768 0 270274 4853 3857 0 330 0.040840199 0.007779849 0 0.001220983 14 33
Ireland 3250908 6757 0.002078496 3352323 9413972 278245 0.003129173 0.00624168 0.00127226 112337 714141 1651434 221386 1848 2252 16293 118 0.016450502 0.003153439 0.009865971 0.000533006 20 38
Italy 12463432 16462 0.001320824 1987326 31117509 0 0.004668082 0.005592125 0 877504 2604995 0 1113346 11749 36148 0 818 0.013389113 0.013876418 0 0.000734722 74 63
Latvia 3063840 4027 0.001314364 491701 2821714 0 0.004234281 0.008449829 0 18212 206307 0 37048 668 1375 0 63 0.036679113 0.006664825 0 0.001700497 89 9
Lithuania 3025380 5056 0.001671195 685542 5065658 9102070 0.003958911 0.007301519 0.001649515 37622 484805 41034 112397 820 2767 127 176 0.021795758 0.005707449 0.003094994 0.001565878 41 11
Luxembourg 439714 628 0.001428201 181085 823972 0 0.003633653 0.005151874 0 11059 39807 0 14448 571 180 0 17 0.051632155 0.004521818 0 0.001176633 2 1
Malta 417073 605 0.001450585 206924 645827 0 0.002860954 0.004848048 0 7744 33713 0 9186 178 148 0 5 0.022985537 0.004389998 0 0.000544307 2 1
Netherlands 14557284 23801 0.001634989 12745193 15404762 0 0.002615025 0.007486971 0 490423 1010537 0 406345 6942 4547 0 308 0.014155127 0.004499588 0 0.000757977 80 147
Poland 203836052 63752 0.000312761 27175740 25070807 0 0.002600224 0.008385929 0 1069545 2560968 0 888768 842 19079 0 1136 0.000787251 0.007449917 0 0.001278174 154 172
Portugal 1518762 4985 0.003282279 1990257 6017068 0 0.003294047 0.006165794 0 221507 714886 0 198458 2507 3653 0 162 0.011317927 0.005109906 0 0.000816294 11 12
Romania 36420337 58883 0.001616762 3636828 12931412 1093883826 0.004134097 0.00773334 0.001400099 222020 1339325 21733061 375102 5125 7857 70746 513 0.023083506 0.005866388 0.003255225 0.001367628 21 40
Slovakia 1971329 3352 0.001700376 640942 1798295 0 0.003516699 0.007978669 0 42499 329754 0 93322 1331 2027 0 104 0.031318384 0.006147007 0 0.001114421 16 24
Slovenia 724668 1403 0.001936059 462528 2037178 0 0.003353311 0.006032364 0 30051 164647 0 34445 1520 793 0 26 0.05058068 0.004816365 0 0.000754827 3 5
Spain 6639002 11904 0.001793041 8574010 47595074 0 0.003606247 0.003745682 0 2155115 1775339 0 842382 39383 6112 0 503 0.018274199 0.003442723 0 0.000597116 51 82
Sweden 11757565 12977 0.001103715 5540266 15829378 0 0.004684613 0.00741501 0 175333 1106486 0 486125 4056 6128 0 405 0.023133124 0.005538254 0 0.000833119 87 55
Iceland 291908 589 0.002017759 215411 679537 4620010 0.003574562 0.006076196 0.004255618 5203 22964 78449 4668 147 245 1095 7 0.028252931 0.010668873 0.013958113 0.001499572 4 5
Liechtenstein 50186 48 0.000956442 11568 21397 0 0.004062932 0.006169089 0 478 1406 0 548 25 10 0 1 0.052301255 0.007112376 0 0.001824818 1 0
Norway 4367100 8605 0.001970415 2909307 6765469 0 0.004536476 0.009467784 0 89193 539291 0 223306 2505 3472 0 179 0.028085164 0.006438083 0 0.000801591 27 18
Total EU 446259356 376548 0.000843787 134975255 422385682 2747202678 0.003136642 0.005407563 0.001506023 9884807 25698879 47798575 9101538 291905 148200 113730 8228 0.029530673 0.005766789 0.00237936 0.000904023 1868 1822
Total EEA 450968550 385790 0.00085547 138111541 429852085 2751822688 0.00316689 0.005472562 0.001510639 9979681 26262540 47877024 9330060 294582 151927 114825 8415 0.029518178 0.005784932 0.002398332 0.000901923 1900 1845

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.



QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

In order to raise awareness among our users about specific topics and empower them, we run a variety of on- and off-platform media literacy campaigns. Our approach may differ depending on the topic. We localise certain campaigns (e.g., for elections), meaning we collaborate with national partners to develop an approach that best resonates with the local audience. For other campaigns, such as the War in Ukraine, our emphasis is on scalability and connecting users to accurate and trusted resources.

Below are examples of the campaigns we have most recently run in-app which have leveraged a number of the intervention tools we have outlined in our response to QRE 17.1.1 (e.g. search interventions and video notice tags).

(I) Promoting election integrity. As well as the election integrity pages on TikTok's Safety Center and Transparency Center, and the new dedicated European Elections Integrity Hub, which bring awareness and visibility to how we tackle election misinformation and covert influence operations on our platform, we launched media literacy campaigns in advance of several elections in the EU and wider Europe.

France Legislative Elections 2024: From 17 June 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 French legislative elections. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP).

Germany Regional Elections 2024 (Saxony, Thuringia, Brandenburg): From 8 Aug 2024, we launched an in-app Election Centre to provide users with up-to-date information about the German regional elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).

Austria Federal Election 2024: From 13 Aug 2024, we launched an in-app Election Centre to provide users with up-to-date information about the election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).

Moldova Presidential Election and EU Referendum 2024: From 6 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Moldova presidential election and EU referendum. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation StopFals!

Georgia Parliamentary Election 2024: From 16 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Georgia parliamentary election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Fact Check Georgia.

Bosnia Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bosnian elections, which contained a section about spotting misinformation.

Lithuania Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Lithuanian parliamentary elections, which contained a section about spotting misinformation.

Czechia Regional Elections 2024: From 13 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Czechia regional elections, which contained a section about spotting misinformation.

Bulgaria Parliamentary Election 2024: From 1 Oct 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bulgaria parliamentary election, which contained a section about spotting misinformation. 

Romania Presidential and Parliamentary Election 2024: From 11 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Romanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Funky Citizens. [On 6 Dec 2024, following the Constitutional Court's decision to annul the first round of the presidential election, we updated our in-app Election Centre to guide users on rapidly changing events].

Ireland General Election: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Irish general election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.

Iceland Parliamentary Election 2024: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Iceland parliamentary election, which contained a section about spotting misinformation. 

Croatia Presidential Election 2024: From 6 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Croatia presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Faktograf.

Germany Federal Election 2025: From 16 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).


(II) Election Speaker Series.
To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 

  1. France: Agence France-Presse (AFP)
  2. Germany: German Press Agency (dpa)
  3. Austria: German Press Agency (dpa)
  4. Lithuania: Logically Facts
  5. Romania: Funky Citizens
  6. Ireland: Logically Facts
  7. Croatia: Faktograf
  8. Georgia: Fact Check Georgia
  9. Moldova: StopFals!

(III) Media literacy (General). We rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
  • France:  Agence France-Presse (AFP)
  • Portugal: Polígrafo
  • Georgia: Fact Check Georgia
  • Moldova: StopFals!

This brings the number of  general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).

(IV) Media literacy (War in Ukraine). We continue to serve 17 localised media literacy campaigns specific to the war in Ukraine in: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania, Czechia, Poland, Croatia, Slovenia, Bulgaria, Germany, Austria, Bosnia, Montenegro, and Serbia.

  • Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
  • Partnered with fakenews.pl: Poland.
  • Partnered with Correctiv: Germany, Austria.

Through these media literacy campaigns, users searching for keywords relating to the war in Ukraine on TikTok are directed to tips prepared in partnership with local media literacy bodies and our trusted fact-checking partners, to help them identify misinformation and prevent its spread on the platform. 

(V) Israel-Hamas conflict. To help raise awareness and to protect our users, we have search interventions which are triggered when users search for neutral terms related to this topic (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources, and also direct them to well-being resources.

(VI) Climate literacy. 
  • Our climate change search intervention tool is available in 23 official EU languages (plus Norwegian and Icelandic for EEA users). It redirects users looking for climate change-related content to authoritative information and encourages them to report any potential misinformation they see.
  • In April 2024, in partnership with The Mary Robinson Centre, TikTok launched the TikTok Youth Climate Leaders Alliance, a programme aimed at 18-30-year-olds looking to make significant changes in the face of the climate crisis.
  • Actively participated in the UN COP29 climate change summit by:
    • Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
    • Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
    • Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
    • Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate related content between 29th October and 25th November, which were viewed 400k times.
  • As of August 2024, popular hashtags #ClimateChange, #SustainableLiving, and #ClimateAction have more than 800,000 associated posts on TikTok, combined.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

We are pleased to report metrics on the four new general media literacy and critical thinking skills campaigns in France, Georgia, Moldova, and Portugal as well as the existing permanent campaigns that ran through the reporting period in: Denmark, Finland, Ireland, Italy, Spain, Sweden, and Netherlands.

Columns: (1) Country; (2) Total impressions of the campaign H5 page between 1 July and 31 December 2024; (3) Impressions of search intervention; (4) Clicks on search intervention; (5) Click-through rate of the search intervention. Each row below lists these values in order.
France 72861 229676 1370 0.60%
Portugal 3400 107964 426 0.39%
Denmark 1540 10854 30 0.28%
Netherlands 2492 64241 226 0.35%
Ireland 1320 14282 46 0.32%
Finland 595 3725 25 0.67%
Sweden 1197 13444 64 0.48%
Spain 26213 1253955 3220 0.26%
Italy 1948 41297 181 0.44%
Austria and Germany 33220 15072256 45865 0.30%
Bulgaria 741 309132 1095 0.35%
Croatia 811 449332 1452 0.32%
Czech Republic 1025 954741 1722 0.18%
Slovenia 286 118972 407 0.34%

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

As documented in the TikTok Safety Center Safety Partners page and TikTok’s Advisory Councils, we work with an array of industry experts, non-governmental organisations, and industry associations around the world in our commitment to building a safe platform for our community. These include media literacy bodies, with whom we develop campaigns that educate users and redirect them to authoritative resources, and fact-checking partners. Specific examples of partnerships within the campaigns and projects set out in QRE 17.2.1 are:

(I) Promoting election integrity. We partner with various media organisations and fact-checkers to promote election integrity on TikTok. For more detail about the input our fact-checking partners provide please refer to QRE 30.1.3.
  • We ran 14 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
    • 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
      • Austria: Deutsche Presse-Agentur (dpa)
      • Croatia: Faktograf
      • France: Agence France-Presse (AFP)
      • Germany (regional elections): Deutsche Presse-Agentur (dpa)
      • Germany (federal election): Deutsche Presse-Agentur (dpa)
      • Ireland: The Journal
      • Romania: Funky Citizens
    • 1 in the EEA (Iceland)
    • 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
      • Georgia: Fact Check Georgia
      • Moldova: StopFals!
  • Election speaker series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 
    • France: Agence France-Presse (AFP)
    • Germany: German Press Agency (dpa)
    • Austria: German Press Agency (dpa)
    • Lithuania: Logically Facts
    • Romania: Funky Citizens
    • Ireland: Logically Facts
    • Croatia: Faktograf
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!

(II) War in Ukraine.
We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners Correctiv in Austria and Germany, Fakenews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia and Bulgaria.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models. 
  • Participated as members of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
    • Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
    • Participated in the EC Technical Roundtable on data access in December 2024.
  • Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion about relevant issues and trends, and attending external events to share their expertise and support continued professional learning. For example:
    • In the lead-up to certain elections, we invite suitably qualified external local/regional experts as part of our Election Speaker Series. Sharing their market expertise with our internal teams provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.
    • In June 2024, 12 members of our Trust & Safety team (including leaders of our fact-checking program) attended GlobalFact 11 and participated in an on-the-record mainstage presentation answering questions about our misinformation strategy and partnerships with professional fact-checkers.
  • Continued to participate in, and co-chair, the working group on Elections.
  • In October 2024, we sponsored, attended, and presented at Disinfo24, the annual EU DisinfoLab conference, in Riga.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our policies, products, practices and external partnerships with fact-checkers, media literacy bodies, and researchers.
 
(I) Removal of violating content or accounts. To reduce potential harm, we aim to remove content or accounts that violate our CGs including our I&A policies before they are viewed or shared by other people. We detect and take action on this content by using a combination of automation and human moderation.
  • Automated review. We place considerable emphasis on proactive detection to remove violative content. Content that is uploaded to the platform is typically first reviewed by our automated moderation technology, which looks at a variety of signals across content, including keywords, images, captions, and audio, to identify violating content. We work with various external experts, like our fact-checking partners, to inform our keyword lists. If our automated moderation technology identifies content that is a potential violation, it will either be automatically removed from the platform or flagged for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut. We also carry out targeted sweeps of certain types of violative content, including harmful misinformation, where we have identified specific risks or where our fact-checking partners or other experts have alerted us to specific risks. (A simplified sketch of this triage flow follows this list.)
  • Human moderation. While some misinformation can be enforced against through technology alone (for example, repetitions of previously debunked content), misinformation evolves quickly and is highly nuanced. That’s why we have misinformation moderators with enhanced training and access to tools like our global repository of previously fact-checked claims from our IFCN-accredited fact-checking partners, who help assess the accuracy of content. We also have teams on the ground who partner with experts to prioritise local context and nuance. We may also issue guidance to our moderation teams to help them more easily spot and take swift action on violating content. Human moderation also occurs if a video gains popularity or has been reported. Community members can report violations in-app and on our website. Our fact-checking partners and other stakeholders can also report potentially violating content to us directly.
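
The following minimal Python sketch illustrates the triage flow described in the two items above: the most clear-cut violations are removed automatically, borderline cases are routed to human moderators, and everything else is released (and may still be reviewed later if it gains popularity or is reported). The function name and thresholds are hypothetical illustrations, not our production systems.

    # Illustrative sketch only: hypothetical names and thresholds.
    def triage(violation_score: float,
               clear_cut_threshold: float = 0.98,
               review_threshold: float = 0.80) -> str:
        """Route newly uploaded content based on an automated violation score
        derived from signals such as keywords, images, captions, and audio."""
        if violation_score >= clear_cut_threshold:
            return "auto_remove"   # clearest violations are removed automatically
        if violation_score >= review_threshold:
            return "human_review"  # nuanced cases go to trained moderators
        return "release"           # allowed; may be re-reviewed if popular or reported

    print(triage(0.99))  # auto_remove
    print(triage(0.85))  # human_review
    print(triage(0.10))  # release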

(II) Safety in our recommendations. In addition to removing content that clearly violates our CGs, we have a number of safeguards in place to ensure the For You feed (as the primary access point for discovering original and entertaining content on the platform) has safety built-in.

  1. For content that does not violate our CGs but may negatively impact the authenticity of the platform, we reduce its prominence in the For You feed and/or label it. The types of misinformation we may make ineligible for the For You feed are made clear to users here: general conspiracy theories; unverified information related to an emergency or unfolding event; and potential high-harm misinformation that is undergoing a fact-check. We also label accounts and content of state-affiliated media entities to empower users to consider the sources of information. Our moderators take additional precautions to review videos as they rise in popularity, to reduce the likelihood of inappropriate content entering our recommendation system.
  2. Providing access to authoritative information is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centres with informative resources from authoritative third-parties in response to global or local events, adding public service announcements on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information. 

(III) Safety by Design. Within our Trust and Safety Product and Policy teams, we have subject matter experts dedicated to integrity and authenticity. When we develop a new feature or policy, these teams work closely with external partners to ensure we are building safety into TikTok by design and reflecting industry best practice. For example:

  • We collaborate with Irrational Labs to develop and implement specialised prompts that encourage users to pause and consider before sharing unverified content (as outlined in QRE 21.3.1).
  • Yad Vashem created an enrichment program on the Holocaust for our Trust and Safety team. The five-week program aimed to give our team a deeper understanding of the Holocaust, its lessons, and misinformation related to antisemitism and hatred.
  • We worked with local/regional experts through our Election Speaker Series to ensure their insights and expertise inform our internal teams ahead of particular elections throughout 2024.

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It is central to the TikTok experience and where most of our users spend their time exploring the platform. User interactions act as signals that help the recommender systems predict content they are more likely to be interested in as well as the content they might be less interested in and may prefer to skip. User interactions across TikTok can impact how the system ranks and serves content. 
These are some examples of information that may influence TikTok content in your For You feed:
  • User interactions: Content you like, share, comment on, and watch in full or skip, as well as accounts you follow back.
  • Content information: Sounds, hashtags, number of views, and the country in which the content was published.
  • User information: Device settings, language preference, location, time zone and day, and device type.

For most users, interaction signals, which may include the time spent watching a video, are generally weighted more heavily than other signals.
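
As a purely illustrative sketch of how such weighting can work, the toy Python ranker below combines interaction signals into a single score. The signal names and weights are hypothetical examples, not TikTok's actual model or parameters.

    # Illustrative sketch only: a toy weighted-sum ranker over interaction signals.
    SIGNAL_WEIGHTS = {
        "watched_in_full": 3.0,   # strong interest signal
        "shared": 2.5,
        "liked": 2.0,
        "commented": 2.0,
        "skipped": -2.0,          # negative signal: the user skipped the video
        "not_interested": -5.0,   # explicit feedback strongly down-weights similar content
    }

    def interest_score(signals: dict) -> float:
        """Combine per-video interaction signals into a single ranking score."""
        return sum(SIGNAL_WEIGHTS.get(name, 0.0) * value
                   for name, value in signals.items())

    # A video watched in full and liked outranks one the user skipped.
    print(interest_score({"watched_in_full": 1, "liked": 1}))  # 5.0
    print(interest_score({"skipped": 1}))                      # -2.0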

Aside from the signals users provide by how they interact with content on TikTok, there are additional tools we have built to help them better control what kind of content is recommended to them.

  • Not interested: Users can long-press on the video in their For You feed and select ‘Not interested’ from the pop-up menu. This will let us know they are not interested in this type of content and we will limit how much of that content we recommend in their feed.
  • Video keyword filters: Users can add keywords (words or hashtags) they’d like to filter from their For You feed.
  • For You refresh: To help users discover new content, they can refresh their For You feed, enabling them to explore entirely new sides of TikTok.
We share more information about our recommender systems in our Help Center and Transparency Center and below in our response to QRE 19.1.1.

QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

We take action to prevent and mitigate the spread of inaccurate, misleading, or false information that may cause significant harm to individuals or the public at large. We do this by removing content and accounts that violate our rules, investing in media literacy and connecting our community to authoritative information, and partnering with external experts. Our I&A policies make clear that we do not allow activities that may undermine the integrity of our platform or the authenticity of our users. We remove content or accounts that involve misleading information that causes significant harm or, in certain circumstances, reduce the prominence of content. The types of misinformation we may make ineligible for the For You feed are set out in our Community Guidelines.

  •  Misinformation
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
    • Unverified claims related to an emergency or unfolding event.
    • Potential high-harm misinformation while it is undergoing a fact-checking review.
  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.

To enforce our CGs at scale, we use a combination of automated review and human moderation. While some misinformation can be enforced through technology alone—for example, repetitions of previously debunked content—misinformation evolves quickly and is highly nuanced. Assessing harmful misinformation requires additional context and assessment by our misinformation moderators who have enhanced training, expertise and tools to identify such content, including our global repository of previously fact-checked claims from the IFCN-accredited fact-checking partners and direct access to our fact-checking partners where appropriate.

Our network of independent fact-checking partners does not moderate content directly on TikTok; instead, partners assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. We incorporate fact-checker input into our broader content moderation efforts through:

  • Proactive insight reports that flag new and evolving claims they’re seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
  • A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions. 

Working with our network of independent fact-checking organisations enables TikTok to identify and take action on misinformation and connect our community to authoritative information around important events. This is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centers with resources from authoritative third parties in response to global or local events, adding public service announcements (PSAs) on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information.

We are also committed to civic and election integrity and to mitigating the spread of false or misleading content about an electoral or civic process. We work with national electoral commissions, media literacy bodies and civil society organisations to ensure we are providing our community with accurate, up-to-date information about an election through our in-app election information centers, election guides, search interventions and content labels.


SLI 18.1.1

Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.

Methodology of data measurement:

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop up. This metric is based on the approximate location of the users that engaged with these tools.
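
For clarity, the metric below can be computed as in this minimal Python sketch; the function name and the example event counts are hypothetical illustrations.

    # Illustrative sketch only: share cancel rate as reported in the table below.
    def share_cancel_rate(warnings_shown: int, shares_cancelled: int) -> float:
        """Percentage of users who, after seeing the unverified-content share
        warning pop-up, chose not to share the video."""
        if warnings_shown == 0:
            return 0.0
        return 100.0 * shares_cancelled / warnings_shown

    # Hypothetical counts yielding Austria's reported 31.80%:
    print(f"{share_cancel_rate(10_000, 3_180):.2f}%")  # 31.80%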

Country Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 31.80%
Belgium 33.80%
Bulgaria 34.00%
Croatia 33.70%
Cyprus 32.90%
Czech Republic 29.50%
Denmark 30.20%
Estonia 28.50%
Finland 27.20%
France 37.10%
Germany 30.10%
Greece 32.10%
Hungary 31.40%
Ireland 29.60%
Italy 37.70%
Latvia 30.90%
Lithuania 30.80%
Luxembourg 33.60%
Malta 35.40%
Netherlands 27.80%
Poland 28.90%
Portugal 33.10%
Romania 30.10%
Slovakia 28.90%
Slovenia 33.30%
Spain 34.10%
Sweden 29.40%
Iceland 27.90%
Liechtenstein 19.60%
Norway 25.40%
Total EU 32.20%
Total EEA 32.10%

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

We take action against misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent. We do this by removing content and accounts that violate our rules, by investing in media literacy and connecting our community to authoritative information, and by partnering with experts.

Our Terms of Service and I&A policies under our CGs are the first line of defence in combating harmful misinformation and (as outlined in more detail in QRE 14.1.1) deceptive behaviours on our platform. These rules make clear to our users what content we remove or make ineligible for the For You feed when they pose a risk of harm to our users and our community.

Specifically, our policies do not allow:

  • Misinformation 
    • Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
    • Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
    • Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
    • Conspiracy theories that name and attack individual people.
    • Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.

  • Civic and Election Integrity
    • Election misinformation, including misinformation about:
      • How, when, and where to vote or register to vote;
      • Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
      • Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
      • Final results or outcome of an election.

  • Edited Media and AI-Generated Content (AIGC)
    • Realistic-appearing people under the age of 18.
    • The likeness of adult private figures, if we become aware it was used without their permission.
    • Misleading AIGC or edited media that falsely shows:
      • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
      • A crisis event, such as a conflict or natural disaster.
      • A public figure who is:
        • being degraded or harassed, or engaging in criminal or antisocial behaviour;
        • taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
        • being politically endorsed or condemned by an individual or group.

  • Fake Engagement
    • Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
    • Providing instructions on how to artificially increase engagement on TikTok.

We have made clear to our users here that the following content is ineligible for the For You feed:

  • Misinformation 
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
    • Unverified claims related to an emergency or unfolding event
    • Potential high-harm misinformation while it is undergoing a fact-checking review

  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content

As outlined in QRE 14.1.1, we also remove accounts that seek to mislead people or use TikTok to deceptively sway public opinion. These activities range from inauthentic or fake account creation to more sophisticated efforts to undermine public trust.

We have policy experts within our Trust and Safety team dedicated to the topic of integrity and authenticity. They keep these policies under continual review and collaborate with external partners and experts to understand whether updates or new policies are required, ensuring our rules are informed by a diversity of perspectives, expertise, and lived experiences. In particular, our Safety Advisory Council for Europe brings together independent leaders from academia and civil society who represent a diverse array of backgrounds and perspectives and are experts in free expression, misinformation, and other safety topics. They work collaboratively with us to inform and strengthen our policies, product features, and safety processes.

Enforcing our policies. We remove content – including video, audio, livestream, images, comments, links, or other text – that violates our I&A policies. Individuals are notified of our decisions and can appeal them if they believe no violation has occurred. We also make clear in our CGs that we will temporarily or permanently ban accounts and/or users that are involved in serious or repeated violations, including violations of our I&A policies.

We enforce our CGs, including our I&A policies, through a mix of technology and human moderation. To do this effectively at scale, we continue to invest in our automated review process as well as in people and training. At TikTok we place considerable emphasis on proactive content moderation. This means our teams work to detect and remove harmful material before it is reported to us.

However, misinformation is different from other content issues: context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. So, while we use machine learning models to help detect potential misinformation, our approach today is ultimately to have our moderation teams assess, confirm, and remove misinformation violations. We have misinformation moderators with enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims, which helps misinformation moderators make swift and accurate decisions, and direct access to our fact-checking partners, who help assess the accuracy of new content.

We strive to maintain a balance between freedom of expression and protecting our users and the wider public from harmful content. Our approach to combating harmful misinformation, as stated in our CGs, is to remove content that is both false and capable of causing harm to individuals or the wider public; it does not extend to content that is merely inaccurate and poses no risk of harm. Additionally, in cases where fact-checks are inconclusive, especially during emergency or unfolding events, content may not be removed and may instead become ineligible for recommendation in the For You feed and be labelled with the “unverified content” label to limit the spread of potentially misleading information.
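
The enforcement logic described above can be summarised in a short sketch; the names and categories below are illustrative, not our production tooling:

```python
from enum import Enum

class FactCheck(Enum):
    FALSE = "false"                # partner confirms the claim is false
    INCONCLUSIVE = "inconclusive"  # cannot yet be verified (e.g. unfolding event)
    ACCURATE = "accurate"

def enforcement_action(result: FactCheck, harmful: bool) -> str:
    """Illustrative decision table: remove only content that is both false
    and harmful; label and demote unverified claims; otherwise no action."""
    if result is FactCheck.FALSE and harmful:
        return "remove"  # violates our I&A policies
    if result is FactCheck.INCONCLUSIVE:
        # Not removed: labelled "unverified content" and made ineligible
        # for recommendation in the For You feed.
        return "label_and_make_ineligible_for_fyf"
    return "no_action"  # accurate, or inaccurate but posing no risk of harm

print(enforcement_action(FactCheck.FALSE, harmful=True))  # remove
print(enforcement_action(FactCheck.INCONCLUSIVE, False))  # label_and_make_ineligible_for_fyf
```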

We are pleased to include in this report the number of videos made ineligible for the For You feed under the relevant I&A policies as explained to users here.

Note that, in relation to the metrics shared at SLI 18.2.1 below, fewer than 1 in every 10,000 views in H2 2024 occurred on content identified and removed for violating our policies around harmful misinformation.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

Methodology of data measurement:

The numbers of videos removed for violations of our Misinformation, Civic and Election Integrity, and Edited Media and AIGC policies are based on the country in which the video was posted.

The number of views of videos removed because of violation of each of these policies is based on the approximate location of the user.

We also updated the methodology on the number of videos made ineligible for the For You feed under our Misinformation policy. 

Country Number of videos removed because of violation of Misinformation policy Number of views of videos removed because of violation of Misinformation policy Number of videos removed because of violation of Civic and Election Integrity policy Number of views of videos removed because of violation of Civic and Election Integrity policy Number of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy Number of views of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy Number of videos ineligible for promotion under Misinformation policy
Austria 2888 1313102 472 843182 414 216433 1696
Belgium 3902 2844929 1002 107828 2092 1119223 2688
Bulgaria 1568 5435715 182 110186 227 5977 1600
Croatia 789 973202 64 3753 1361 58579 616
Cyprus 511 1241327 86 1333 948 19441 326
Czech Republic 2720 4705302 275 25952 465 8287531 6470
Denmark 1455 2979180 335 14082 315 2742457 1157
Estonia 319 77555 41 866 208 2063380 453
Finland 984 1784968 199 1944 716 464824 811
France 44354 61693484 4390 8369126 8563 312078908 24035
Germany 50335 162220869 12231 3510858 11199 23904234 30934
Greece 4198 4431258 649 1726365 8742 145950 1735
Hungary 2002 9947587 308 273247 261 86870 957
Ireland 4676 4802257 2051 568596 1063 103199 2154
Italy 21035 39078480 3910 1578217 3574 1892355 19481
Latvia 694 3745925 48 9 129 4519 459
Lithuania 520 1122197 57 26 203 25410 647
Luxembourg 279 162787 66 2180 223 8729 121
Malta 168 5599 70 97 183 5811847 173
Netherlands 5422 2811880 1046 55695 1883 9080526 6189
Poland 13028 59545691 768 3942081 772 13404186 9872
Portugal 2629 31071224 535 28529 1010 339124 1400
Romania 14103 64183832 4276 33123122 937 623525 11739
Slovakia 1365 4714713 41 677 98 2014 1472
Slovenia 574 22494 28 111 66 605 346
Spain 22581 37024505 2126 3554918 4392 21882268 54592
Sweden 3489 9893681 633 6424 762 377862 2423
Iceland 122 153566 26 19 85 6113 77
Liechtenstein 35 0 20 0 48 525 33
Norway 1798 5158745 313 1152478 679 139984 1200
Total EU 206588 517833743 35889 57849404 50806 404749976 184546
Total EEA 208543 523146054 36248 59001901 51618 404896598 185856

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

We regularly consult with third-party experts and researchers in relation to the development of policies and features which are designed to reduce the spread of disinformation. For example, we engaged with experts globally on our Election Misinformation policies, which helped inform updates to our I&A policies.

We are proud of our close work with behavioural psychologists Irrational Labs, which led to the development of the following warning and labelling features (more detail at QRE 21.3.1):
  • specialised prompts for unverified content, which alert viewers to unverified content identified during an emergency or unfolding event; and
  • our state-controlled media label, which brings transparency to our community in relation to state-affiliated media entities and encourages users to consider the reliability of the source.

We are proud to be a signatory to the Partnership on AI's (PAI) Responsible Practices for Synthetic Media. We contributed to developing this code of industry best practices for AI transparency and responsible innovation, balancing creative expression with the risks of emerging AI technology. And, in accordance with our commitments as a launch partner, we worked on a case study outlining how the Practices informed our policy making on synthetic media.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It's central to the TikTok experience and where most of our users spend their time exploring the platform. 

We make clear to users in our Terms of Service and CGs (and also provide more context in our Help Center article and Transparency Center page) that each account holder’s For You feed is based on a personalised recommendation system. The For You feed is curated to each user. Safety is built into our recommendations. As well as removing harmful misinformation content that violates our CGs, we take steps to avoid recommending certain categories of content that may not be appropriate for a broad audience including general conspiracy theories and unverified information related to an emergency or unfolding event. We may also make some of this content harder to find in search. 

Main parameters. The system recommends content by ranking it based on a combination of factors, including:
  • User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts they follow back);
  • Content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and
  • User information (e.g. device settings, language preferences, location, time zone and day, and device types).


The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
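
The effect of "similar interests" can be illustrated with a toy user-based collaborative-filtering example that mirrors the scenario above; this is a sketch of the general technique only, not our production ranking system:

```python
# Toy user-based collaborative filtering, mirroring the example above:
# user A liked videos 1, 2, 3; user B liked videos 1, 2, 3, 4, 5.
def jaccard(a: set, b: set) -> float:
    """Similarity of two users' liked-video sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target: set, others: list[set], top_n: int = 2) -> list[int]:
    """Score videos the target hasn't seen by the similarity of users who liked them."""
    scores: dict[int, float] = {}
    for other in others:
        sim = jaccard(target, other)
        for video in other - target:  # only videos unseen by the target user
            scores[video] = scores.get(video, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

user_a = {1, 2, 3}
user_b = {1, 2, 3, 4, 5}
print(recommend(user_a, [user_b]))  # -> [4, 5]
```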

Users can also access the “Why this video” feature, which allows them to see, for any particular video that appears in their For You feed, the factors that influenced why it was recommended. This feature provides added transparency into how our ranking system works and empowers users to better understand why a particular video has been recommended to them. In essence, it explains to users how their past interactions on the platform have shaped the videos they are recommended. For further information, see our newsroom post.

User preferences. Together with the safeguards we build into our platform by design, we also empower our users to customise their experience to their preferences and comfort. 
These include a number of features to help shape the content they see. For example, in the For You feed:

  • Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
  • Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here). 

Users are able to refresh their For You feed if they feel recommendations are no longer relevant to them or are too similar. When the For You feed is refreshed, users view a number of new videos, which include popular videos (e.g. videos with a high view count or a high like rate). Their interaction with these new videos will inform future recommendations.

As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to recommender systems. Users are able to turn off personalisation so that feeds show non-personalised content; the For You feed, for example, will instead show videos that are popular in their region and internationally. See here.

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

Methodology of data measurement:

The number of users who have filtered hashtags or a keyword to set preferences for For You feed, the number of times users clicked “not interested” in relation to the For You feed, and the number of times users clicked on the For You Feed Refresh are all based on the approximate location of the users that engaged with these tools.

The number of videos tagged with the AIGC label includes both automatic and creator-applied labels.

Country Number of users that filtered hashtags or words Number of users that clicked on "not interested" Number of times users clicked on the For You Feed Refresh Number of videos tagged with AIGC label
Austria 53057 886639 52559 149390
Belgium 67734 1322561 83721 241538
Bulgaria 34081 744333 38568 153704
Croatia 20196 486259 23134 46131
Cyprus 7895 176600 13456 62428
Czech Republic 45392 753417 35791 140826
Denmark 35294 573821 27747 80022
Estonia 11648 151267 11558 30907
Finland 45185 586897 43657 109189
France 332521 7939397 486316 1832452
Germany 503549 7977800 648033 1883751
Greece 52519 1344879 68577 214464
Hungary 46966 1020692 28543 138023
Ireland 54952 801523 52714 67672
Italy 261272 6455485 295958 1140570
Latvia 15527 279241 24888 118117
Lithuania 21247 325564 23209 64359
Luxembourg 4519 76244 5508 44220
Malta 3137 77760 4923 15544
Netherlands 135944 2081920 150651 231651
Poland 196496 3383567 175988 519883
Portugal 57677 1152515 61327 216364
Romania 85551 2629162 165990 325318
Slovakia 18482 347681 13822 50322
Slovenia 9983 177990 19591 17100
Spain 275604 6889325 381588 1170610
Sweden 82868 1371265 111934 268743
Iceland 4720 57250 3175 8073
Liechtenstein 129 3563 291 418
Norway 48188 685406 63483 101728
Total EU 2479296 50013804 3049751 9333298
Total EEA 2532333 50760023 3116700 9443517

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Expanded our fact-checking coverage to a number of wider-European and EU candidate countries:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
    • Kazakhstan: Reuters
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • We ran 14 temporary media literacy election integrity campaigns in advance of elections, most in collaboration with our fact-checking and media literacy partners:
    • 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
      • Austria: Deutsche Presse-Agentur (dpa)
      • Croatia: Faktograf
      • France: Agence France-Presse (AFP)
      • Germany (regional elections): Deutsche Presse-Agentur (dpa)
      • Germany (federal election): Deutsche Presse-Agentur (dpa)
      • Ireland: The Journal
      • Lithuania: N/A
      • Romania: Funky Citizens
    • 1 in EEA
      • Iceland: N/A
    • 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
      • Bosnia: N/A
      • Bulgaria: N/A
      • Czechia: N/A
      • Georgia: Fact Check Georgia
      • Moldova: StopFals!
  • Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
    • Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia) 
    • Portugal Wildfires 
    • Spanish floods
    • Mayotte Cyclone
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
  • We partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
  • Building on our new AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, in countries including Brazil, Germany, France, Mexico and the UK, teaching our community how to spot and label AI-generated content.
    • Our AIGC Transparency Campaign informed by WITNESS has reached 80M users globally, including more than 8.5M and 9.5M in Germany and France respectively. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Measure 21.1

Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.

QRE 21.1.1

Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.

We currently have 14 IFCN accredited fact-checking partners across the EU, EEA, and wider Europe: 

  1. Agence France-Presse (AFP)
  2. dpa Deutsche Presse-Agentur
  3. Demagog
  4. Facta
  5. Fact Check Georgia
  6. Faktograf
  7. Internews Kosova
  8. Lead Stories
  9. Logically Facts
  10. Newtral
  11. Poligrafo
  12. Reuters
  13. Science Feedback
  14. Teyit

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

We ensure that our users benefit from the context and insights provided by the fact checking organisations we partner with in the following ways: 

Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, we have our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims and are able to send content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators take measures to assess and remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.

Unverified content labelling. As mentioned above, we partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility. In these circumstances, the content creator is also notified that their video was flagged as unsubstantiated content and the video will become ineligible for recommendation in the For You feed.

  • In-app tools related to specific topics:
    • Election integrity. We have launched campaigns in advance of several major elections aimed at educating the public about the voting process, which encourage users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of France's legislative elections in June 2024 included a search intervention and an in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 14 temporary media literacy election integrity campaigns in advance of elections.
    • Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information (see our newsroom post here). 
    • COP29: We launched two global features (a video notice tag and a search intervention guide) to point users to authoritative climate-related content between 29th October and 25th November; these were viewed 400k times.
    • Natural disasters: Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
      • Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia)
      • Portugal Wildfires 
      • Spanish floods
      • Mayotte Cyclone
  • User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners. 

SLI 21.1.1

Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.

Methodology of data measurement:

The share of removals under our harmful misinformation policy, share of proactive removals, share of removals before any views, and share of removals within 24h are all relative to the total removals under each policy.

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop up. This metric is based on the approximate location of the users that engaged with these tools.

Country % video removals under Misinformation policy % proactive video removals under Misinformation policy % video removals before any views under Misinformation policy % video removals within 24h under Misinformation policy % video removals under Civic and Election Integrity policy % proactive video removals under Civic and Election Integrity policy % video removals before any views under Civic and Election Integrity policy % video removals within 24h under Civic and Election Integrity policy % video removals under Synthetic Media policy % proactive video removals under Synthetic Media policy % video removals before any views under Synthetic Media policy % video removals within 24h under Synthetic Media policy Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 20.22% 97.92% 80.85% 82.17% 3.30% 96.61% 77.12% 82.84% 2.90% 99.03% 57.25% 47.34% 31.81%
Belgium 14.44% 98.92% 82.47% 89.65% 3.71% 98.60% 89.92% 93.11% 7.74% 97.56% 62.76% 72.66% 33.81%
Bulgaria 30.57% 94.39% 59.44% 82.91% 3.55% 95.05% 90.11% 94.51% 4.42% 99.12% 46.70% 23.79% 33.97%
Croatia 20.93% 98.99% 70.47% 89.48% 1.70% 95.31% 85.94% 87.50% 36.11% 93.17% 15.43% 11.09% 33.66%
Cyprus 18.42% 95.69% 71.62% 82.97% 3.10% 98.84% 83.72% 82.56% 34.17% 93.78% 30.70% 6.86% 32.91%
Czech Republic 25.19% 91.84% 53.20% 90.92% 2.55% 98.18% 94.18% 94.91% 4.31% 97.20% 48.82% 70.75% 29.52%
Denmark 8.25% 96.91% 73.47% 83.09% 1.90% 97.61% 94.63% 96.72% 1.79% 98.10% 48.57% 59.05% 30.20%
Estonia 18.74% 99.37% 75.86% 93.10% 2.41% 97.56% 82.93% 87.80% 12.22% 96.63% 59.13% 74.52% 28.53%
Finland 15.52% 94.11% 69.82% 89.43% 3.14% 97.99% 92.46% 96.98% 11.29% 97.07% 39.94% 55.31% 27.21%
France 22.45% 99.24% 86.89% 95.58% 2.22% 97.95% 90.05% 96.54% 4.33% 96.10% 46.50% 47.45% 37.13%
Germany 21.79% 97.71% 76.06% 90.87% 5.29% 98.11% 85.14% 96.21% 4.85% 97.79% 62.09% 56.74% 30.09%
Greece 17.26% 96.86% 74.92% 92.28% 2.67% 98.77% 96.46% 98.15% 35.94% 89.87% 27.90% 10.17% 32.05%
Hungary 28.88% 90.51% 63.49% 86.26% 4.44% 91.88% 82.47% 95.13% 3.77% 98.47% 55.56% 57.47% 31.38%
Ireland 22.17% 93.76% 61.18% 88.43% 9.73% 86.01% 24.38% 96.34% 5.04% 92.76% 52.30% 60.11% 29.59%
Italy 27.66% 98.27% 72.70% 92.14% 5.14% 98.57% 81.43% 88.77% 4.70% 98.77% 47.26% 44.71% 37.65%
Latvia 26.97% 98.85% 82.42% 94.24% 1.87% 97.92% 93.75% 87.50% 5.01% 99.22% 45.74% 47.29% 30.90%
Lithuania 23.16% 99.23% 87.50% 94.42% 2.54% 100.00% 92.98% 91.23% 9.04% 98.03% 47.78% 48.28% 30.80%
Luxembourg 9.39% 98.92% 88.53% 86.38% 2.22% 96.97% 92.42% 98.48% 7.51% 96.86% 50.67% 41.70% 33.64%
Malta 9.84% 98.21% 89.29% 88.10% 4.10% 100.00% 94.29% 95.71% 10.72% 98.36% 67.21% 79.23% 35.43%
Netherlands 16.62% 99.19% 86.32% 89.45% 3.21% 99.43% 91.01% 94.46% 5.77% 98.67% 60.65% 67.71% 27.79%
Poland 30.42% 94.28% 63.90% 89.56% 1.79% 95.57% 90.89% 93.62% 1.80% 95.85% 56.35% 51.30% 28.88%
Portugal 26.70% 97.64% 84.90% 90.64% 5.43% 99.44% 97.20% 97.76% 10.26% 96.04% 37.82% 31.78% 33.08%
Romania 41.05% 91.73% 62.51% 82.05% 12.45% 78.02% 27.78% 49.79% 2.73% 96.80% 37.89% 24.97% 30.08%
Slovakia 45.65% 89.16% 56.04% 87.47% 1.37% 97.56% 92.68% 97.56% 3.28% 97.96% 38.78% 18.37% 28.89%
Slovenia 22.94% 99.30% 79.09% 95.82% 1.12% 100.00% 89.29% 92.86% 2.64% 100.00% 57.58% 60.61% 33.33%
Spain 28.31% 99.14% 82.55% 90.39% 2.67% 98.54% 69.71% 81.94% 5.51% 97.70% 33.15% 30.76% 34.09%
Sweden 10.90% 97.71% 77.84% 90.43% 1.98% 98.89% 95.10% 98.10% 2.38% 95.28% 48.69% 53.67% 29.44%
Iceland 4.40% 97.54% 90.16% 92.62% 0.94% 100.00% 96.15% 100.00% 3.07% 98.82% 72.94% 75.29% 27.86%
Liechtenstein 3.11% 100.00% 100.00% 91.43% 1.78% 100.00% 100.00% 100.00% 4.26% 97.92% 68.75% 60.42% 19.61%
Norway 18.77% 96.05% 74.03% 89.93% 3.27% 96.49% 89.46% 92.65% 7.09% 93.96% 46.54% 55.67% 25.37%
Total EU 23.14% 97.35% 76.62% 90.87% 4.02% 95.03% 75.26% 88.70% 5.69% 95.81% 45.90% 41.70% 32.24%
Total EEA 23.01% 97.34% 76.61% 90.86% 4.00% 95.05% 75.41% 88.75% 5.69% 95.79% 45.97% 41.96% 32.13%

SLI 21.1.2

When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.

Methodology of data measurement:

The number of videos tagged with the unverified content label is based on the country in which the video was posted.

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop up. This metric is based on the approximate location of the users that engaged with these tools.

Country Number of videos tagged with the unverified content label Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 1875 31.81%
Belgium 2387 33.81%
Bulgaria 2428 33.97%
Croatia 532 33.66%
Cyprus 330 32.91%
Czech Republic 2431 29.52%
Denmark 2438 30.20%
Estonia 190 28.53%
Finland 1768 27.21%
France 24023 37.13%
Germany 28389 30.09%
Greece 3363 32.05%
Hungary 2683 31.38%
Ireland 1591 29.59%
Italy 23139 37.65%
Latvia 415 30.90%
Lithuania 389 30.80%
Luxembourg 135 33.64%
Malta 64 35.43%
Netherlands 4787 27.79%
Poland 12974 28.88%
Portugal 1921 33.08%
Romania 6708 30.08%
Slovakia 1229 28.89%
Slovenia 169 33.33%
Spain 25829 34.09%
Sweden 3207 29.44%
Iceland 49 27.86%
Liechtenstein 0 19.61%
Norway 1516 25.37%
Total EU 155394 32.24%
Total EEA 156959 32.13%

Measure 21.3

Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.

QRE 21.3.1

Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.

As set out within our response to QRE 17.1.1, we apply our unverified content, state-controlled media, and AI-generated content labels to certain content in order to empower our community by providing them with an additional layer of context. We ensure these labels are developed and deployed in line with scientific evidence by partnering with fact-checkers and working with external experts, including scientists, in the following ways:

Unverified content label. In 2021, we partnered with behavioural scientists Irrational Labs on the design and testing of the specialised prompts which encourage users to consider content that has been labelled as unverified before sharing it, as detailed in QRE 17.1.1. On testing the prompts, Irrational Labs found that viewers decreased the rate at which they shared videos by 24%, while likes on such unsubstantiated content also decreased by 7%. Their full report can be found here.

As mentioned above, we partner with a number of IFCN accredited fact-checkers in Europe, who assist with assessing the accuracy of certain content on our platform. Where our fact-checking partners determine that a video is not able to be confirmed or their fact-checks are inconclusive (which is sometimes the case, particularly during unfolding events or emergencies), we may apply our unverified content label to the video.

State-controlled media label. Since January 2023, we have been applying state-controlled media labels to accounts or content where there is evidence of clear editorial control and decision-making by members of the state. To inform our state-affiliated media policy, including the updates set out in this report, and our approach to making such designations, we consult with media experts, political scientists, academics, and representatives from international organisations and civil society across North and South America, Africa, Europe, the Middle East, Asia, and Australia. We continue to work with these experts to inform our global approach and expansion of the policy.

We worked closely with Irrational Labs on the development of the state-affiliated media policy and the ways in which we could present the label to our users. We tested various copy options across English, Spanish, and Arabic via quantitative surveys and qualitative panels, and found that "[country] state-controlled media" was the option most preferred by users while being the most accurate representation of the relevant media entities' relationship to their respective governments.

AI-generated content label. In advance of launching our new AI-generated labels for creators to disclose content that is completely AI-generated or significantly edited by AI, we consulted with our Safety Advisory Councils as well as industry experts, including MIT's Dr. David G. Rand, who studies how viewers perceive different types of AI labels. Dr. Rand's research helped guide the design of our AI-generated labels.

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.1 Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

TikTok did not subscribe to this commitment.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated ‘Report Illegal Content’ channel, enabling users to alert us to content they believe breaches the law, together with an appeals process for users who disagree with the outcome.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

We provide users with simple, intuitive ways to report or flag content in-app for any breach of our Terms of Service or CGs, including for harmful misinformation, in each EU Member State and in an official language of the European Union:

  • By ‘long-pressing’ (i.e., pressing and holding for around 3 seconds) on the video content and selecting the “Report” option. 

  • By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.

The user is then shown categories of reporting reasons from which to select (which align with the harms our CGs seek to address). In 2024, we updated this feature to make the “Misinformation” categories more intuitive and to allow users to report with increased granularity. We have also implemented an additional option that enables users to report illegal content, in line with our requirements under the DSA.

Users do not need to be logged into an account on the platform to report content, and can also report video content via the TikTok website (by clicking on the “Report” button prominently displayed in the upper right-hand corner of each video when hovering over it) or by means of our “Report Inappropriate Content” webform, which is available in our Support Centre.

We are aware that harmful misinformation is not limited to video content and so users can also report a comment, a suggested search, a hashtag, a sound or an account, again specifically for harmful misinformation.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Reporting system

To ensure the integrity of our reporting system, we deploy a combination of automated review and human moderation.

Videos uploaded to TikTok are initially reviewed by our automated moderation technology, which aims to identify content that violates our Community Guidelines. If a potential violation of our CGs is found, the automated review system will either pass it on to our moderation teams for further review or, if there is a high degree of confidence that the content violates our CGs, remove it automatically. Automated removal is only applied when violations are clear-cut, such as where the content contains nudity or pertains to youth safety. We are constantly working to improve the precision of our automated moderation technology so we can more effectively remove violative content at scale, while also reducing the number of incorrect removals.
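
At a high level, this routing can be pictured as a confidence-threshold triage; the sketch below is illustrative only, with made-up thresholds rather than our actual values:

```python
AUTO_REMOVE_THRESHOLD = 0.98  # illustrative: reserved for clear-cut violations
REVIEW_THRESHOLD = 0.50       # illustrative: plausible violations go to humans

def route_upload(violation_score: float, clear_cut_category: bool) -> str:
    """Triage an uploaded video: automated removal only for high-confidence,
    clear-cut violations; otherwise escalate to human moderation teams."""
    if clear_cut_category and violation_score >= AUTO_REMOVE_THRESHOLD:
        return "remove_automatically"
    if violation_score >= REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "allow"

print(route_upload(0.99, clear_cut_category=True))   # remove_automatically
print(route_upload(0.99, clear_cut_category=False))  # queue_for_human_review
```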

To support the fair and consistent review of potentially violative content, where violations are less clear-cut, content will be passed to our human moderation teams for further review. Human moderators can take additional context and nuance into account, which cannot always be picked up by technology, and in the context of harmful misinformation, for example, our moderators have access to a repository of previously fact-checked claims to help make swift and accurate decisions and direct access to our fact-checking partners who help assess the accuracy of new content.

We have sought to make our CGs as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as reviewing moderation cases, flows and appeals, and undertaking Root Cause Analyses).

As part of our requirements under the DSA, we have introduced an additional reporting channel for our community in the European Union to ‘Report Illegal Content,’ which enables users to alert us to content they believe breaches the law. TikTok will review the content against our Community Guidelines and where a violation is detected, the content may be removed globally. If it is not removed, our illegal content moderation team will further review the content to assess whether it is unlawful in the relevant jurisdiction - this assessment is undertaken by human review. If it is, access to that content will be restricted in that country.
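
In outline, this two-step review can be sketched as follows (the helper functions are hypothetical stand-ins for the CG review and the human legal assessment described above):

```python
def violates_community_guidelines(video: dict) -> bool:
    # Hypothetical stand-in for our CG review (automation plus human moderation).
    return video.get("cg_violation", False)

def unlawful_in_jurisdiction(video: dict, country: str) -> bool:
    # Hypothetical stand-in for the human legal assessment described above.
    return country in video.get("unlawful_in", set())

def handle_illegal_content_report(video: dict, country: str) -> str:
    """Illustrative flow for a 'Report Illegal Content' report: a CG
    violation may be removed globally; otherwise the content is assessed
    against the law of the relevant jurisdiction and, if unlawful there,
    access is restricted in that country only."""
    if violates_community_guidelines(video):
        return "removed_globally"
    if unlawful_in_jurisdiction(video, country):
        return f"restricted_in_{country}"
    return "no_action"

print(handle_illegal_content_report({"unlawful_in": {"DE"}}, "DE"))  # restricted_in_DE
```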

Those who report suspected illegal content will be notified of our decision, including if we consider that the content is not illegal. Users who disagree can appeal those decisions using the appeals process.

We also note that, whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content. We are proud that the vast majority of removed content is identified proactively before it is reported to us.

Appeals system

We are transparent with users in relation to appeals. We set out the options that may be available, both to the user who reported the content and to the creator of the affected content, where they disagree with a decision we have taken.


The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our CGs. 

Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts and assess them against our policies. To ensure consistency within this process and its overall integrity, we have sought to make our policies as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as auditing appeals and undertaking Root Cause Analyses).

If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

  • Continued to serve user notifications following action on a user’s account or content, which include a clear explanation of the action taken and a simple way to appeal the decision.
  • Continued to provide additional user transparency around our appeals processes (here)

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Users in all EU Member States are notified by an in-app notification, in their relevant local language, where any of the following actions is taken:
  • removal of, or other restriction of access to, their content;
  • a ban of their account;
  • restriction of their access to a feature (such as LIVE); or
  • restriction of their ability to monetise.

Such notifications are provided in near real time after action has been taken (i.e. generally within several seconds or up to a few minutes at most). 

Where we have taken any of these decisions, an in-app inbox notification sets out the violation deemed to have taken place, along with an option for users to “disagree” and submit an appeal. Users can submit appeals within 180 days of being notified of the decision they want to appeal. Further information, including about how to appeal a report, is set out here.
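
As a minimal sketch, the 180-day appeal window amounts to a simple date check (the dates below are illustrative):

```python
from datetime import date, timedelta

APPEAL_WINDOW = timedelta(days=180)  # users may appeal within 180 days of notification

def can_appeal(notified_on: date, today: date) -> bool:
    """True while the user is still within 180 days of being notified."""
    return (today - notified_on) <= APPEAL_WINDOW

print(can_appeal(date(2025, 1, 2), date(2025, 6, 30)))  # True  (179 days)
print(can_appeal(date(2024, 6, 1), date(2025, 6, 30)))  # False (394 days)
```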

All such appeals raised will be queued for review by our specialised human moderators so as to ensure that context is adequately taken into account in reaching a determination. Users can monitor the status and view the results of their appeal within their in-app inbox. 

As mentioned above, our users have the ability to share feedback with us to the extent that they don't agree with the result of their appeal. They can do so by using the in-app function which allows them to "report a problem". We are continuously taking user feedback into consideration in order to identify areas of improvement within the appeals process.

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

Methodology of data measurement:

The number of appeals/overturns is based on the country in which the video being appealed/overturned was posted. These numbers relate only to our Misinformation, Civic and Election Integrity, and Edited Media and AIGC policies.

Country Number of appeals of videos removed for violation of Misinformation policy Number of overturns following appeals for violation of Misinformation policy Appeal success rate of videos removed for violation of Misinformation policy Number of appeals of videos removed for violation of Civic and Election Integrity policy Number of overturns following appeals for violation of Civic and Election Integrity policy Appeal success rate of videos removed for violation of Civic and Election Integrity policy Number of appeals of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy Number of overturns following appeals for violation of Edited Media and AI-Generated Content (AIGC) policy Appeal success rate of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy
Austria 619 352 56.90% 79 65 82.30% 9 8 88.90%
Belgium 863 673 78.00% 149 123 82.60% 14 12 85.70%
Bulgaria 267 107 40.10% 34 23 67.60% 5 2 40.00%
Croatia 140 84 60.00% 7 7 100.00% 12 8 66.70%
Cyprus 108 56 51.90% 4 2 50.00% 4 3 75.00%
Czech Republic 902 433 48.00% 45 33 73.30% 31 12 38.70%
Denmark 289 215 74.40% 57 50 87.70% 18 16 88.90%
Estonia 140 113 80.70% 3 3 100.00% 18 14 77.80%
Finland 202 156 77.20% 12 9 75.00% 6 5 83.30%
France 7461 6189 83.00% 331 301 90.90% 110 87 79.10%
Germany 13540 7268 53.70% 1302 1053 80.90% 177 121 68.40%
Greece 734 425 57.90% 68 56 82.40% 12 9 75.00%
Hungary 481 314 65.30% 45 32 71.10% 22 15 68.20%
Ireland 1091 845 77.50% 53 48 90.60% 17 15 88.20%
Italy 6074 4174 68.70% 553 491 88.80% 57 48 84.20%
Latvia 110 83 75.50% 5 5 100.00% 7 3 42.90%
Lithuania 105 87 82.90% 13 11 84.60% 0 0 0.00%
Luxembourg 17 16 94.10% 7 4 57.10% 0 0 0.00%
Malta 38 37 97.40% 3 3 100.00% 0 0 0.00%
Netherlands 1207 959 79.50% 123 103 83.70% 19 14 73.70%
Poland 4263 1833 43.00% 177 125 70.60% 35 25 71.40%
Portugal 402 274 68.20% 79 56 70.90% 22 16 72.70%
Romania 2573 1598 62.10% 524 403 76.90% 30 24 80.00%
Slovakia 401 175 43.60% 11 7 63.60% 5 4 80.00%
Slovenia 267 153 57.30% 5 4 80.00% 7 2 28.60%
Spain 4920 3961 80.50% 239 202 84.50% 52 40 76.90%
Sweden 943 544 57.70% 124 100 80.60% 15 7 46.70%
Iceland 20 17 85.00% 2 0 0.00% 0 0 0.00%
Liechtenstein 0 0 0.00% 0 0 0.00% 0 0 0.00%
Norway 437 322 73.70% 44 35 79.50% 14 9 64.30%
Total EU 48157 31124 64.60% 4052 3319 81.90% 704 510 72.40%
Total EEA 48614 31463 64.70% 4098 3354 81.80% 718 519 72.30%

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to refine the new Virtual Compute Environment (VCE), launched May 2024, by:
    • Providing access to public U18 data. 
    • Adding new data points (e.g., Hashtag Info) and endpoints (e.g., Playlist Info). See Changelog
    • Establishing a new due diligence process with an external partner to confirm the eligibility of NGO applicants. 
  • Continued to support independent research through the Research API and improve accessibility by:
    • Adding three new endpoints for TikTok Shop, which launched in Spain and Ireland in December 2024.
    • Making Python and R (programming languages) wrappers available via GitHub.
  • Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
  • Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad was active, the main parameters used for targeting (e.g. age, gender), and the number of people who were served the ad.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.


(I) Research API

To make it easier to independently research our platform and bring transparency to TikTok content, we built a Research API that provides researchers in the US, EEA, UK and Switzerland with access to public data on accounts and content, including comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here. We carefully consider feedback from researchers who have used the API and continue to make improvements, such as additional data fields, streamlining the application process, and enabling collaboration through Lab Access, which allows up to 10 researchers to work together on a shared research project.
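
By way of illustration, the sketch below shows how a researcher granted access might query public video data. It is a minimal sketch rather than official sample code: the endpoint path, query operators, field names, and response shape follow the public TikTok for Developers documentation at the time of writing and should be verified against the current documentation (the Python and R wrappers available via GitHub abstract much of this away).

```python
# Minimal sketch: querying public video data from the Research API.
# The endpoint path, query operators, field names, and response shape are
# assumptions based on the public developer docs and may change.
import requests

ACCESS_TOKEN = "..."  # client access token from the OAuth client-credentials flow

resp = requests.post(
    "https://open.tiktokapis.com/v2/research/video/query/",
    params={"fields": "id,create_time,region_code,like_count,comment_count,share_count"},
    json={
        "query": {
            "and": [
                {"operation": "IN", "field_name": "region_code", "field_values": ["DE", "FR"]},
                {"operation": "EQ", "field_name": "hashtag_name", "field_values": ["election"]},
            ]
        },
        "start_date": "20240701",
        "end_date": "20240731",
        "max_count": 100,  # page size; use the returned cursor to paginate
    },
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for video in resp.json()["data"]["videos"]:
    print(video["id"], video.get("like_count"))
```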

(II) Virtual Compute Environment (VCE)

The VCE allows qualifying non-academic not-for-profit researchers in the EU to access and analyse TikTok's public data, while ensuring robust security and privacy protections. Public data can be accessed and analysed in 2 stages:

  1. Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE will return random sample data based on your query, limited to 5,000 records per day.
  2. Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok will review the results file to make sure the output is aggregated.
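
The sketch below is a purely illustrative, self-contained simulation of this two-stage flow. Every name and the toy dataset in it are invented for the example; it does not reflect the actual interface of TikTok's query SDK.

```python
# Purely illustrative, self-contained simulation of the two-stage VCE flow;
# none of these names belong to TikTok's actual query SDK, and the toy
# dataset stands in for TikTok's public data.
import random

PUBLIC_DATA = [
    {"id": i, "region": random.choice(["DE", "FR", "IT"]), "likes": random.randint(0, 9_999)}
    for i in range(10_000)
]
TEST_STAGE_DAILY_CAP = 5_000   # test stage: random samples, max 5,000 records/day
PAGE_SIZE = 4_000              # execution stage paginates (up to 100,000/page in the real system)

def test_stage(query, n):
    """Stage 1: return a random sample matching the query, subject to the daily cap."""
    matches = [row for row in PUBLIC_DATA if query(row)]
    return random.sample(matches, min(n, TEST_STAGE_DAILY_CAP, len(matches)))

def execution_stage(query, aggregate):
    """Stage 2: run over all public data page by page; only aggregated output is released."""
    result = {"n": 0, "likes": 0}
    page = []
    for row in filter(query, PUBLIC_DATA):
        page.append(row)
        if len(page) == PAGE_SIZE:
            aggregate(page, result)
            page = []
    if page:
        aggregate(page, result)
    return result  # reviewed before release to confirm the output is aggregated

def sum_likes(page, result):
    result["n"] += len(page)
    result["likes"] += sum(row["likes"] for row in page)

german = lambda row: row["region"] == "DE"
print(len(test_stage(german, 100)))        # validate the query logic on a sample
totals = execution_stage(german, sum_likes)
print(totals["likes"] / totals["n"])       # aggregated result only
```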

(III) Commercial Content API  

As required under the DSA, and to enhance transparency on advertisements presented on our platform, we have built a commercial content API that includes ads, ad and advertiser metadata, and targeting information. Researchers and professionals are required to create a TikTok for Developers account and submit an application to access the Commercial Content API which we review to help prevent malicious actors from misusing this data. 
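
For illustration, the sketch below shows the rough shape of an ad query against the Commercial Content API. It is a minimal sketch, not official sample code: the endpoint path, filter schema, and field names reflect the public TikTok for Developers documentation at the time of writing and should be treated as assumptions to verify before use.

```python
# Minimal sketch of an ad search via the Commercial Content API.
# Endpoint path, filter schema, and field names are assumptions drawn from
# the public developer documentation and may change; verify before use.
import requests

ACCESS_TOKEN = "..."  # client access token from the OAuth client-credentials flow

resp = requests.post(
    "https://open.tiktokapis.com/v2/research/adlib/ad/query/",
    params={"fields": "ad.id,ad.first_shown_date,ad.last_shown_date,advertiser.business_name"},
    json={
        "filters": {
            "country_code": "IE",  # ads shown in Ireland
        },
        "search_term": "climate",  # keyword-based search (assumed parameter name)
        "max_count": 20,
    },
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["data"]["ads"]:
    print(item["ad"]["id"], item["advertiser"]["business_name"])
```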

(IV) Commercial Content Library

The Commercial Content Library is a publicly searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that's commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad. 

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

In this H2 2024 report, TikTok has shared more than 3,000 data points across 30 EU/EEA countries, a slight decrease compared to the 3,300 data points in our previous report. The reduction is due to the fact that we are no longer reporting COVID-19-specific metrics. As the pandemic has transitioned from an acute global crisis to a more managed public health issue, the relevance and utility of these metrics have diminished.

We provide researchers with access to data that is publicly available on our platform through our Research Tools, and through our Commercial Content API for commercial content (detailed below).

We also provide ongoing insights into the action we take against content and accounts that violate our CGs, Terms of Service, or Advertising Policies, in our quarterly TikTok Community Guideline Enforcement Reports. The report includes a variety of data visualisations, which are designed with transparency and accessibility in mind, including for people with colour vision deficiency.

We work hard to supplement the comprehensive data in the report and provide new insights. For example, we recently shared data on comment enforcement, including the number of comments we removed and the percentage of published comments we removed.

As part of our continued efforts to make it easy to study the TikTok platform, the report also offers access to aggregated data, including removal data by policy category, for the 50 markets with the highest volumes of removed content.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Research Tools, Commercial Content API, and the Commercial Content Library
During this reporting period we received:
  • 148 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
  • 61 applications to access the TikTok Commercial Content API.

Country Number of applications received for Research API Number of applications accepted for Research API Number of applications rejected for Research API Number of applications received for TikTok Commercial Content Library API Number of applications accepted for TikTok Commercial Content Library API Number of applications rejected for TikTok Commercial Content Library API
Austria 5 3 1 1 1 0
Belgium 0 0 0 3 3 0
Bulgaria 1 0 0 1 1 0
Croatia 2 0 2 0 0 0
Cyprus 0 0 0 0 0 0
Czech Republic 2 1 1 0 0 0
Denmark 4 3 0 0 0 0
Estonia 0 0 0 0 0 0
Finland 1 2 0 3 2 1
France 16 4 6 11 8 3
Germany 50 12 16 14 11 3
Greece 5 1 3 0 0 0
Hungary 1 1 1 2 2 0
Ireland 3 2 4 1 1 0
Italy 13 5 2 2 2 0
Latvia 0 0 0 1 1 0
Lithuania 0 0 0 2 2 0
Luxembourg 0 0 0 0 0 0
Malta 0 0 0 0 0 0
Netherlands 17 7 7 3 2 1
Poland 3 0 1 3 2 1
Portugal 2 2 0 2 2 0
Romania 6 1 1 0 0 0
Slovakia 0 0 0 1 1 0
Slovenia 0 0 0 0 0 0
Spain 11 2 4 6 4 2
Sweden 4 3 1 4 3 1
Iceland 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0
Norway 2 2 0 1 1 0
Total EU 146 49 50 60 48 12
Total EEA 148 51 50 61 49 12

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.


(I) Research API

To make it easier to independently research our platform and bring transparency to TikTok content, we built a Research API that provides researchers in the US, EEA, UK and Switzerland with access to public data on accounts and content, including comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here. We carefully consider feedback from researchers who have used the API and continue to make improvements, such as additional data fields, streamlining the application process, and enabling collaboration through Lab Access, which allows up to 10 researchers to work together on a shared research project.

(II) Virtual Compute Environment (VCE)

The VCE allows qualifying non-academic not-for-profit researchers in the EU to access and analyse TikTok's public data, while ensuring robust security and privacy protections. Public data can be accessed and analysed in 2 stages:

  1. Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE will return random sample data based on your query, limited to 5,000 records per day.
  2. Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok will review the results file to make sure the output is aggregated.

(III) Commercial Content API  

As required under the DSA, and to enhance transparency on advertisements presented on our platform, we have built a commercial content API that includes ads, ad and advertiser metadata, and targeting information. Researchers and professionals are required to create a TikTok for Developers account and submit an application to access the Commercial Content API which we review to help prevent malicious actors from misusing this data. 

(IV) Commercial Content Library

The Commercial Content Library is a publicly searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that's commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad. 

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.


(I) Research API

Through our Research API, academic researchers from non-profit academic institutions in the US and Europe, or not-for-profit research institutions, organisations, associations, or bodies in the EU, can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here.

(II) Virtual Compute Environment (VCE) 

Through our VCE, qualifying non-academic not-for-profit researchers and academic researchers from non-profit academic institutions in the EU can query and analyse TikTok's public data. To protect the security and privacy of our users, the VCE is designed to ensure that TikTok data is processed within confined parameters. TikTok reviews the results only to ensure that no individually identifiable information is extracted from the platform. All aggregated results are shared as a downloadable link sent to the approved primary researcher's email.

(III) Commercial Content API  

Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can request public data about commercial content including ads, ad and advertiser metadata, and targeting information. To date, the Commercial Content API only includes data from EU countries. 
(IV) Commercial Content Library 

TikTok's Commercial Content Library is a repository of ads and other types of commercial content shown to users in the European Economic Area (EEA), Switzerland, and the UK only, but it can be accessed by members of the public located in any country. Each ad and its details remain available in the library for one year after the advertisement was last viewed by any user. Through the Commercial Content Library, the public can access information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that is commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad. 

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.


We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access. In August 2024, we established a new due diligence process with an external vendor to confirm the eligibility of NGO applicants. 

Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data.
Similarly with the Commercial Content API, we provide participants with detailed information on how to query ad data and fetch public advertiser data.

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

Research Tools, Commercial Content API, and the Commercial Content Library
During this reporting period we received:
  • 148 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
  • 61 applications to access the TikTok Commercial Content API.

Country Number of applications received for Research API Number of applications accepted for Research API Number of applications rejected for Research API Number of applications received for TikTok Commercial Content Library API Number of applications accepted for TikTok Commercial Content Library API Number of applications rejected for TikTok Commercial Content Library API
Austria 5 3 1 1 1 0
Belgium 0 0 0 3 3 0
Bulgaria 1 0 0 1 1 0
Croatia 2 0 2 0 0 0
Cyprus 0 0 0 0 0 0
Czech Republic 2 1 1 0 0 0
Denmark 4 3 0 0 0 0
Estonia 0 0 0 0 0 0
Finland 1 2 0 3 2 1
France 16 4 6 11 8 3
Germany 50 12 16 14 11 3
Greece 5 1 3 0 0 0
Hungary 1 1 1 2 2 0
Ireland 3 2 4 1 1 0
Italy 13 5 2 2 2 0
Latvia 0 0 0 1 1 0
Lithuania 0 0 0 2 2 0
Luxembourg 0 0 0 0 0 0
Malta 0 0 0 0 0 0
Netherlands 17 7 7 3 2 1
Poland 3 0 1 3 2 1
Portugal 2 2 0 2 2 0
Romania 6 1 1 0 0 0
Slovakia 0 0 0 1 1 0
Slovenia 0 0 0 0 0 0
Spain 11 2 4 6 4 2
Sweden 4 3 1 4 3 1
Iceland 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0
Norway 2 2 0 1 1 0
Total EU 146 49 50 60 48 12
Total EEA 148 51 50 61 49 12

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

We welcome feedback from researchers on our APIs and have a dedicated support form where researchers can provide feedback about their experience. On foot of recent feedback, we added the new data point Video_label, which returns any labels applied to a video, such as "election labels". Prior to expanding the Research API to Europe, we acted on feedback from US-based researchers by streamlining the application process and enabling greater collaboration through Lab Access. 

Commitment 27

Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.

We signed up to the following measures of this commitment

Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We are a member of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
  • Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
  • Participated in the EC Technical Roundtable on data access in December 2024. The roundtable focused on the technical measures and best practices that could be implemented to facilitate the roll-out of the data access mechanism for vetted researchers.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 27.1

Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.

QRE 27.1.1

Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.

We have engaged with EDMO and actively participated in the working group that was set up in order to implement the Independent Intermediary Body (IIB). 
TikTok was also one of two platforms to complete EDMO’s data access pilot, trialling the process for sharing data with vetted researchers designated under the DSA.

Measure 27.2

Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.

QRE 27.2.1

Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.

We continue to participate in the working group which has been set up to implement the Independent Intermediary Body (IIB). 

Measure 27.3

Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.

QRE 27.3.1

Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.

We have participated in the working group which was set up to implement the Independent Intermediary Body (IIB), and remain ready to engage with vetted researcher access. TikTok also completed the data access pilot with EDMO, trialling the process for sharing data with vetted researchers designated under the DSA.

Measure 27.4

Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.

QRE 27.4.1

Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.

We completed the data access pilot with EDMO, trialling the process for sharing data with vetted researchers designated under the DSA. 

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to refine the new Virtual Compute Environment (VCE), launched May 2024, by:
    • Providing access to public U18 data.
    • Adding new data points (e.g., Hashtag Info) and endpoints (e.g., Playlist Info). See the Changelog.
    • Establishing a new due diligence process with an external partner to confirm the eligibility of NGO applicants. 
  • Continued to support independent research through the Research API and improve accessibility by:
    • Adding three new endpoints for TikTok Shop, which launched in Spain and Ireland in December 2024.
    • Making Python and R (programming languages) wrappers available via GitHub.
  • Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
  • Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.


TikTok is committed to facilitating research and engaging with the research community.

As set out above, TikTok is committed to facilitating research through our Research Tools, Commercial Content APIs and Commercial Content Library, full details of which are available on our TikTok for Developers and Commercial Content Library websites.

We have many teams and individuals across product, policy, data science, outreach and legal working to facilitate research. We believe transparency and accountability are essential to fostering trust with our community. We are committed to transparency in how we operate, moderate and recommend content, empower users, and secure our platform. That's why we opened our global Transparency and Accountability Centers (TACs) for invited guests to see first-hand our work to protect the safety and security of the TikTok platform.

Our TACs are located in Dublin, Los Angeles, Singapore, and Washington, DC. In October 2024, we opened our rehoused Dublin-based TAC in TikTok's new premises. DUBTAC offers an opportunity for academics, businesses, policymakers, politicians, regulators, researchers and many other expert audiences from Europe and around the world to see first-hand how teams at TikTok go about the critically important work of securing our community's safety, data, and privacy. During the reporting period, DUBTAC hosted the following visits: 

  • 22 external tours including 3 NGO/industry bodies, 6 media representatives, and 2 creators. 
  • On 22 & 23 October 2024 respectively, we welcomed the Sub-Saharan Africa (SSA) Safety Advisory Council and the Middle East, North Africa and Turkey (MENAT) Safety Advisory Council members. These visits were attended by TikTok Trust & Safety personnel, with discussions and exchanges of views on a range of topics. 
  • In November 2024, we welcomed the Latin America (LATAM) Safety Advisory Council. 

We work closely with our nine regional Advisory Councils, including our European Safety Advisory Council and US Content Advisory Council, and our global Youth Advisory Council, which bring together a diverse array of independent experts from academia and civil society as well as youth perspectives. Advisory Council members provide subject matter expertise and advice on issues relating to user safety, content policy, and emerging issues that affect TikTok and our community, including in the development of our AI-generated content label and a recent campaign to raise awareness around AI labeling and potentially misleading AIGC. These councils are an important way to bring outside perspectives into our company and onto our platform.

In addition to these efforts, there are a plethora of ways through which we engage with the research community in the course of our work.

Our Outreach & Partnerships Management (OPM) Team is dedicated to establishing partnerships and regularly engaging with civil society stakeholders and external experts, including the academic and research community, to ensure their perspectives inform our policy creation, feature development, risk mitigation, and safety strategies. For example, we engaged with global experts, including numerous academics in Europe, in the development of our state-affiliated media policy, Election Misinformation policies, and AI-generated content labels. OPM also plays an important role in our efforts to counter misinformation by identifying, onboarding and managing new partners to our fact-checking programme. In H2 2024, we expanded fact-checking coverage to a number of wider-European and EU candidate countries:
  • Moldova: AFP/Reuters 
  • Georgia: Fact Check Georgia
  • Albania & Kosovo: Internews Kosova 
  • Serbia: Lead Stories
  • Kazakhstan: Reuters

In the lead-up to certain elections, we invite suitably qualified external local/regional experts to present as part of our Election Speaker Series. Sharing their market expertise with our internal teams provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.

During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 

  1. France: Agence France-Presse (AFP)
  2. Germany: German Press Agency (dpa)
  3. Austria: German Press Agency (dpa)
  4. Lithuania: Logically Facts
  5. Romania: Funky Citizens
  6. Ireland: Logically Facts
  7. Croatia: Faktograf
  8. Georgia: FactCheck Georgia
  9. Moldova: Stop Fals!
TikTok teams and personnel also regularly participate in research-focused events. At the end of June 2024, we sent a 12-strong delegation to GlobalFact11 in Sarajevo, Bosnia and Herzegovina. TikTok was one of three top-tier sponsors of GlobalFact11, the International Fact-Checking Network's largest gathering for professional fact-checkers. In addition to sponsorship, we participated in an on-the-record mainstage presentation answering questions about our misinformation strategy and partnerships with professional fact-checkers. We also met with many existing and potential new partners as well as the EFCSN. In September 2024, we sent a delegation of 16 to the Trust & Safety Research Conference at Stanford University. In October, we sponsored, attended, and presented at Disinfo24, the annual EU DisinfoLab conference, in Riga. And, in December 2024, we hosted a webinar for approximately 20 French NGOs on the Virtual Compute Environment.

As well as providing opportunities to share context about our approach and research interests and to explore collaboration, these events enable us to learn from the important work being done by the research community on various topics, including aspects related to harmful misinformation.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

We have a dedicated TikTok for Developers website which hosts our Research Tools and Commercial Content APIs. 

With the Research API, researchers can access:
  • Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
  • Public content data, such as comments, captions, subtitles, and number of comments, shares and likes that a video receives. 

Through the VCE, qualifying non-academic not-for-profit researchers in the EU can access and analyse TikTok's public data, including public U18 data, in a secure environment that is subject to strict security controls. 

Our commercial content-related APIs include ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches, by advertiser name or keyword, on ads and other commercial content data that is stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. 

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

The data we make available and the application criteria for our Research Tools (Research API and VCE) and Commercial Content API are research-topic agnostic and clearly set out on our dedicated TikTok for Developers website. In August 2024, we established a new due diligence process with an external vendor to confirm the eligibility of NGO applicants. 

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

We are committed to continued engagement with EDMO and the broader research community.

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia.
  • In H2 we also expanded our fact-checking coverage to other wider-European and EU candidate countries with existing fact-checking partners:
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • Continued to expand our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
  • Continued to conduct feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
  • Continued to participate in the working group within the Code framework on the creation of an external fact-checking repository.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.

Within Europe, we work with 14 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, plus Georgian, Russian, Turkish, and Ukrainian. Our partners have teams of fact-checkers who review and verify reported content. Our moderators then use that independent feedback to take action and, where appropriate, remove or make ineligible for recommendation false or misleading content, or label unverified content. 

Our agreements with our partners are standardised, meaning the agreements are based on our template master services agreements and contain common standards and conditions. We reviewed and updated our template standard agreements as part of our annual contract renewal process.

The terms of the agreements describe:
  • The service the fact-checking partner will provide, namely that their team of fact checkers review, assess and rate video content uploaded to their fact-checking queue. 
  • The expected results, e.g. the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories. 
  • An option for our fact-checking partners to provide regular written reports about identified disinformation trends. 
  • An option to receive proactive flagging of potentially harmful misinformation from our partners.
  • The languages in which they will provide fact-checking services.
  • The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
  • All other key terms including the applicable term and fees and payment arrangements.

QRE 30.1.2

Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).

We currently have 14 IFCN-accredited fact-checking partners across the EU, EEA, and wider Europe: 

  1. Agence France-Presse (AFP)
  2. dpa Deutsche Presse-Agentur
  3. Demagog
  4. Facta
  5. Fact Check Georgia
  6. Faktograf
  7. Internews Kosova
  8. Lead Stories
  9. Logically Facts
  10. Newtral
  11. Poligrafo
  12. Reuters
  13. Science Feedback
  14. Teyit

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

We can, and have, put in place temporary agreements with these fact-checking partners to provide additional EU language coverage during high-risk events like elections or an unfolding crisis. For example, we temporarily expanded our fact-checking coverage to Maltese for the EU Parliamentary Election of June 2024.

Outside of our fact-checking program, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we partnered with a number of fact-checking organisations on election-specific media literacy campaigns:
  • Austria: Deutsche Presse-Agentur (dpa)
  • Croatia: Faktograf
  • France: Agence France-Presse (AFP)
  • Georgia: Fact Check Georgia
  • Germany (regional elections): Deutsche Presse-Agentur (dpa)
  • Germany (federal election): Deutsche Presse-Agentur (dpa)
  • Ireland: The Journal
  • Moldova: StopFals!
  • Romania: Funky Citizens

We also rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:

  • France: Agence France-Presse (AFP)
  • Portugal: Polígrafo
  • Georgia: Fact Check Georgia
  • Moldova: StopFals!

Globally, we have 22 IFCN-accredited fact-checking partners. We are continuously working to expand our fact-checking network and we keep users updated here.

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

We have fact-checking coverage in 23 official EEA languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. 

We have fact-checking coverage in a number of other European languages or languages which affect European users, including Georgian, Russian, Turkish, and Ukrainian and we can request additional support in Azeri, Armenian, and Belarusian. 

In terms of global fact-checking initiatives, we currently cover more than 50 languages and assess content in more than 100 countries, thereby improving the overall integrity of the service and benefiting European users. 

In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
  • Fact-checking repository. We have built a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions.
  • Trends reports. Our fact-checking partners can provide us with regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.  
  • Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content that may constitute harmful misinformation on our platform and suggest prominent misinformation that is circulating online that may benefit from verification. 
  • Fact-checking guidelines. We create guidelines and trending topic reminders for our moderators on the basis of previous fact-checking assessments. This ensures our moderation teams leverage the insights from our fact-checking partners and helps our moderators make swift and accurate decisions on flagged content regardless of the language in which the original claim was made.
  • Election Speaker Series. To further promote election integrity, and inform our approach to country-level EU elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations: 
    • France: Agence France-Presse (AFP)
    • Germany: German Press Agency (dpa)
    • Austria: German Press Agency (dpa)
    • Lithuania: Logically Facts
    • Romania: Funky Citizens
    • Ireland: Logically Facts
    • Croatia: Faktograf
    • Georgia: FactCheck Georgia
    • Moldova: Stop Fals!

Members of moderation teams receive specialised training on misinformation and have direct access to these tools and measures, which enables them to more accurately take action on violating content across Europe and globally.
We are continuing to invest in building and improving models which may allow the output of these measures to be used to update the machine learning models we use in proactive detection, learning over time to search for similar content that can be proactively recalled into our moderation system for review. We use a variety of automated tools, including:

  • Computer vision models, which help detect objects so that we can determine whether content likely contains material that violates our policies.
  • Keyword lists and models, which are used to review text and audio content to detect material in violation of our policies. We work with various external experts, including our fact-checking partners, to inform our keyword lists.
  • De-duplication and hashing technologies, which enable us to recognise copies or near copies of content we have previously detected as violating our policies, and to prevent further re-distribution of violative content on our platform (a simplified sketch of this approach follows this list).
  • We launched the ability to read Content Credentials, which attach metadata to content and which we can use to automatically label AI-generated content that originated on other major platforms.
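
As referenced above, the de-duplication step can be illustrated with a generic, self-contained sketch. This is not TikTok's implementation; it simply shows how storing digests of removed content lets a platform recognise byte-for-byte re-uploads (production systems typically add perceptual hashing to also catch near copies).

```python
# Generic illustration of hash-based de-duplication; this is not TikTok's
# actual system. Digests of removed content are stored, and new uploads are
# checked against that set to catch exact re-uploads.
import hashlib

removed_hashes: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Stable digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Record the hash of content removed for violating policy."""
    removed_hashes.add(fingerprint(data))

def is_known_violation(data: bytes) -> bool:
    """True if an upload is a byte-for-byte copy of previously removed content."""
    return fingerprint(data) in removed_hashes

# Usage: register a removed video, then screen a re-upload of the same bytes.
register_removal(b"...video bytes...")
print(is_known_violation(b"...video bytes..."))  # True
```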

Continuing to leverage the fact-checking output in this way enables us to further increase the positive impact of our fact checking programme.

Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

Our agreements with our fact-checking partners are standardised, meaning the agreements are based on our template master services agreements and contain common standards and conditions. These agreements, as with all of our agreements, must meet the ethical and professional standards we set internally, including anti-bribery and corruption provisions. 

Our partners are compensated in a fair, transparent way based on the work done by them using standardised rates. Our fact-checking partners then invoice us on a monthly basis based on work done.

All of our fact-checking partners are independent organisations, certified through the non-partisan IFCN. Our agreements with them explicitly state that the fact-checkers are non-exclusive, independent contractors of TikTok who retain editorial independence in relation to the fact-checking, and that the services shall be performed in a professional manner and in line with the highest standards in the industry. Our processes are also set up to ensure our fact-checking partners' independence. Our partners access flagged content through an exclusive dashboard for their use and provide their assessment of the accuracy of the content by providing a rating. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data or authenticating videos and images.

To facilitate transparency and openness with our fact-checking partners, we regularly meet them and provide data regarding their feedback and also conduct surveys with them.

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

We meet regularly with our fact-checking partners and have an ongoing dialogue with them about how our partnership is working and evolving. We survey our fact-checking partners to encourage feedback about what we are doing well and how we could improve.

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

This provision is not relevant to TikTok, only to fact-checking organisations.

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

Given that our fact-checking partners are all IFCN-accredited, they already engage in some informal cross-border collaboration through that network. 

In addition, we continue to collaborate with our partners to understand how we may be able to facilitate further collaboration through individual feedback sessions with partners. 

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

We are in regular dialogue with EDMO and the EFCSN on these and other issues. We continue to be open to discussing and exploring what further progress can be made on these points.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia.
  • And, in addition, expanded our fact-checking coverage to other wider-European and EU candidate countries with existing fact-checking partners:
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • Continued to expand our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
  • Continued to conduct feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
  • Continued to participate in the working group within the Code framework on the creation of an external fact-checking repository. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report. 

Measure 31.2

Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.

QRE 31.2.1

Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.

We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 14 fact-checking partners in Europe, covering 23 EEA languages. 

While we use machine learning models to help detect potential misinformation, our approach is to have members of our content moderation team, who receive specialised training on misinformation, assess, confirm, and take action on harmful misinformation. This includes direct access to our fact-checking partners who help assess the accuracy of content. Our fact-checking partners are involved in our moderation process in three ways:

(i) a moderator sends a video to fact-checkers, who review it and provide their assessment of the accuracy of the content by assigning a rating. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data, authenticating videos and images, and more.

While content is being fact-checked or when content can't be substantiated through fact-checking, we may reduce the content’s distribution so that fewer people see it. Fact-checkers ultimately do not take action on the content directly. The moderator will instead take into account the fact-checkers’ feedback on the accuracy of the content when deciding whether the content violates our CGs and what action to take.

(ii) fact-checkers contribute to our global database of previously fact-checked claims to help our misinformation moderators make decisions. 

(iii) fact-checkers participate in a proactive detection programme, flagging new and evolving claims they are seeing on our platform. This enables our moderators to quickly assess these claims and remove violations.

In addition, we use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners conclude that a fact-check is inconclusive or content cannot be confirmed (which is especially common during unfolding events or crises), we inform viewers via a banner when we identify a video with unverified content, in an effort to raise users' awareness about the credibility of the content and to reduce sharing. The video may also become ineligible for recommendation into anyone's For You feed to limit the spread of potentially misleading information.

SLI 31.1.1 (for Measures 31.1 and 31.2)

Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.

Methodology of data measurement:

The number of fact-checked videos is based on the number of videos that have been reviewed by one of our fact-checking partners in the relevant territory.

Country Number of fact-checked videos
Austria 64
Belgium 141
Bulgaria 398
Croatia 137
Cyprus 8
Czech Republic 200
Denmark 175
Estonia 84
Finland 61
France 1045
Germany 837
Greece 64
Hungary 144
Ireland 91
Italy 202
Latvia 40
Lithuania 41
Luxembourg 2
Malta 0
Netherlands 52
Poland 622
Portugal 59
Romania 669
Slovakia 138
Slovenia 22
Spain 407
Sweden 158
Iceland 1
Liechtenstein 0
Norway 227
Total EU 5861
Total EEA 6089

SLI 31.1.2 (for Measures 31.1 and 31.2)

An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.

Methodology of data measurement: 

The number of videos removed as a result of a fact-checking assessment, and the number of videos removed because of policy guidelines, known misinformation trends and our knowledge-based repository, are based on the country in which the video was posted. 

These metrics correspond to the number of removals under the misinformation policy, since all of its enforcement is based on the policy guidelines, known misinformation trends and the knowledge-based repository.

Country Number of videos removed as a result of a fact-checking assessment Number of videos removed because of policy guidelines, known misinformation trends and knowledge-based repository
Austria 8 2888
Belgium 26 3902
Bulgaria 62 1568
Croatia 31 789
Cyprus 0 511
Czech Republic 42 2720
Denmark 12 1455
Estonia 2 319
Finland 4 984
France 166 44354
Germany 177 50335
Greece 8 4198
Hungary 21 2002
Ireland 13 4676
Italy 40 21035
Latvia 1 694
Lithuania 0 520
Luxembourg 0 279
Malta 0 168
Netherlands 13 5422
Poland 152 13028
Portugal 10 2629
Romania 168 14103
Slovakia 42 1365
Slovenia 3 574
Spain 55 22581
Sweden 15 3489
Iceland 1 122
Liechtenstein 0 35
Norway 14 1798
Total EU 1071 206588
Total EEA 1086 208543

SLI 31.1.3 (for Measures 31.1 and 31.2)

Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.

Methodology of data measurement:

The metric we have provided shows the percentage of videos removed as a result of a fact-checking assessment, compared to the total number of videos removed for violating our harmful misinformation policy.

Country Videos removed as a result of a fact-checking assessment as a percentage of the total number of videos removed due to violation of our harmful misinformation policy
Austria 0.20%
Belgium 0.50%
Bulgaria 3.60%
Croatia 1.00%
Cyprus 0.00%
Czech Republic 1.30%
Denmark 0.80%
Estonia 0.60%
Finland 0.40%
France 0.40%
Germany 0.30%
Greece 0.20%
Hungary 0.30%
Ireland 0.00%
Italy 0.20%
Latvia 0.00%
Lithuania 0.00%
Luxembourg 0.00%
Malta 0.00%
Netherlands 0.10%
Poland 1.00%
Portugal 0.30%
Romania 0.90%
Slovakia 2.80%
Slovenia 0.00%
Spain 0.20%
Sweden 0.40%
Iceland 0.00%
Liechtenstein 0.00%
Norway 0.60%
Total EU 0.40%
Total EEA 0.40%

Measure 31.3

Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.

QRE 31.3.1

Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.

We are participating in the sub-group created for this purpose. We actively worked with all signatories to define clear deliverables and timelines for the creation of an external fact-checking repository, as contemplated in this measure.  

Measure 31.4

Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.

QRE 31.4.1

Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.

We commit to being an active participant in the discussion about technological solutions to facilitate the efficient use of the common repository across platforms and languages.

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.1 Measure 32.2 Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Continued to explore ways to improve data sharing in connection with our pilot scheme to share enforcement data with our fact-checking partners on the claims on which they have provided feedback.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report. 

Measure 32.2

Relevant Signatories that showcase User Generated Content (UGC) will provide appropriate interfaces, automated wherever possible, for fact-checking organisations to be able to access information on the impact of contents on their platforms and to ensure consistency in the way said Signatories use, credit and provide feedback on the work of fact-checkers.

QRE 32.1.1 (for Measures 32.1 and 32.2)

Relevant Signatories will provide details on the interfaces and other tools put in place to provide fact-checkers with the information referred to in Measures 32.1 and 32.2.

We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 14 fact-checking partners in Europe, covering 23 EEA languages. 
While we use machine learning models to help detect potential misinformation, our approach is to have members of our content moderation team, who receive specialised training on misinformation, assess, confirm, and take action on harmful misinformation. This includes direct access to our fact-checking partners who help assess the accuracy of content. Our fact-checking partners are involved in our moderation process in three ways:

(i) assessing content sent by our moderators: a moderator sends a video to fact-checkers, who review it and rate its accuracy. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data, authenticating videos and images, and more.

While content is being fact-checked or when content can't be substantiated through fact-checking, we may reduce the content’s distribution so that fewer people see it. Fact-checkers ultimately do not take action on the content directly. The moderator will instead take into account the fact-checkers’ feedback on the accuracy of the content when deciding whether the content violates our CGs and what action to take.

(ii) contributing to our global database of previously fact-checked claims to help our misinformation moderators make decisions. 

(iii) participating in a proactive detection programme in which our fact-checkers flag new and evolving claims they are seeing on our platform. This enables our moderators to quickly assess these claims and remove violations.

In addition, we use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners conclude that a fact-check is inconclusive or the content cannot be confirmed (which is especially common during unfolding events or crises), we inform viewers via a banner when we identify a video with unverified content, in an effort to raise users' awareness about the credibility of the content and to reduce sharing. The video may also become ineligible for recommendation into anyone's For You feed, to limit the spread of potentially misleading information.
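Purely as an illustration, the sketch below restates the workflow described above in simplified Python; every class, field and rating name is hypothetical and does not reflect TikTok's actual systems or policy categories:

# Illustrative sketch only: a schematic restatement of the moderation
# workflow described above. All names are hypothetical and simplified;
# real enforcement involves trained human moderators at each step.
from dataclasses import dataclass
from enum import Enum, auto

class Rating(Enum):
    VIOLATING = auto()    # fact-checkers rate the claim false or misleading
    UNVERIFIED = auto()   # fact-check inconclusive / cannot be substantiated
    ACCURATE = auto()

@dataclass
class Video:
    id: str
    reduced_distribution: bool = False
    unverified_banner: bool = False
    eligible_for_for_you: bool = True
    removed: bool = False

def moderate_flagged_video(video: Video, fact_check_rating: Rating) -> Video:
    """Apply the workflow: reduce reach while pending, then act on the rating."""
    # While content is being fact-checked, its distribution may be reduced.
    video.reduced_distribution = True

    if fact_check_rating == Rating.UNVERIFIED:
        # Unverified content: warn viewers via a banner and keep the video
        # out of For You recommendations to limit potential spread.
        video.unverified_banner = True
        video.eligible_for_for_you = False
    elif fact_check_rating == Rating.VIOLATING:
        # The moderator (not the fact-checker) takes the enforcement decision,
        # weighing the rating against the Community Guidelines.
        video.removed = True
    else:
        # Content assessed as accurate: restore normal distribution.
        video.reduced_distribution = False
    return video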

SLI 32.1.1

Relevant Signatories will provide quantitative information on the use of the interfaces and other tools put in place to provide fact-checkers with the information referred to in Measures 32.1 and 32.2 (such as monthly users for instance).

Our fact-checking partners access content which has been flagged for review through a dashboard made available for their exclusive use. The dashboard shows our fact-checkers certain quantitative information about the services they provide, including the number of videos queued for assessment at any one time, as well as the time the review has taken. Fact-checkers can also use the dashboard to see the rating they applied to videos they have previously assessed.

Going forward, we plan to continue to explore ways to further increase the quality of our methods of data sharing with fact-checking partners.

Methodology of data measurement: 

N/A. As mentioned in our response to QRE 32.1.1, the dashboard we currently share with our partners contains only high-level quantitative information about the services they provide, including the number of videos queued for assessment at any one time, as well as the time the review has taken. We are continuing to work with our fact-checking partners to understand what further data it would be helpful for us to share with them.

No country-level data is reported for this SLI; all values in the country breakdown are zero.

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

We continue to participate in the taskforce made up of the relevant signatories' representatives that is being set up for this purpose. Meanwhile, we are also engaging proactively with EDMO on this commitment.

Transparency Centre

Commitment 34

To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.

We signed up to the following measures of this commitment

Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have been an active participant in the working group that successfully launched the common Transparency Centre this year. We have held the position of co-chair of the Transparency working group since September 2023. From January 2024, we supported the transition of the maintenance and development of the website from the former third-party vendor to the Code signatory Vost.eu.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 35

Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.

We signed up to the following measures of this commitment

Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Through our participation in the Transparency Centre working group, we have ensured that the Transparency Centre allows the general public to access general information about the Code as well as the underlying reports, and that the Centre can be navigated both by commitment and by signatory.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 36

Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.

We signed up to the following measures of this commitment

Measure 36.1 Measure 36.2 Measure 36.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 36.3

Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.

QRE 36.1.1

With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The Transparency Centre was successfully launched in February 2023. We continue to upload our reports in accordance with the approved deadlines.

QRE 36.1.2

Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The administration of the Transparency Centre website has been transferred fully to the community of the Code’s signatories, with VOST Europe taking the role of developer.

SLI 36.1.1

Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.

We worked with the vendor to develop relevant metrics for this SLI.

Between 1 July 2024 and 31 December 2024, the common Transparency Centre was visited by 20,255 unique visitors. The Signatories’ reports were downloaded 5,626 times by 1,275 unique visitors. More specifically, TikTok’s previous COPD report was downloaded 302 times by 135 visitors.

No country-level breakdown is reported for this SLI; the visit metrics above are aggregate figures only (all country-level values are zero).

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

We have meaningfully engaged in the Task-force / Plenaries and all working groups.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

We have meaningfully engaged in the Task-force and all of its working groups by attending and participating in meetings and engaging in relevant discussions, in particular regarding the Code conversion process and the development and activation of the RRS for elections.

We will continue to engage in the Task-force and all of its working groups and subgroups.

Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

TikTok has assigned the highest priority level to the Code, which means that we have, and will continue to have, appropriate resources in place to meet our commitments and ensure compliance.

Given the breadth of the Code and the commitments therein, our work spans multiple teams, including Trust and Safety, Legal, Monetisation Integrity, Product and Public Policy. Teams across the globe are deployed to ensure that we meet our commitments and remain compliant, with the notable involvement of our Trust and Safety leadership.

Across the European Union, we have thousands of trust and safety professionals dedicated to keeping our platform safe. We also recognise the importance of local knowledge and expertise as we work to ensure online safety for our users. We take a similar approach to our third-party partnerships.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

We have shared our baseline report with the Commission in accordance with the agreed timeframes.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 40

Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.

We signed up to the following measures of this commitment

Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have reported on the SLIs and QREs relevant to the Commitments we signed up to within this report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Commitment 41

Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.

We signed up to the following measures of this commitment

Measure 41.1 Measure 41.2 Measure 41.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We have been an active participant in the working group dedicated to developing Structural Indicators.
  • In September 2024, we supported the publication of the second analysis of Structural Indicators, which expanded coverage to four markets and increased the sample size.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 42

Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have been an active participant in the Crisis Response working group, whose work resulted in the Rapid Response System being developed and activated for elections. We have also published Crisis Reports specific to the War in Ukraine and the Israel/Hamas conflict, as well as election reports on the French snap election and the Romanian Presidential Election, alongside this report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Commitment 43

Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Taskforce.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Participated in the monitoring and reporting working group.
  • Published transparency report in September 2024.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Commitment 44

Relevant Signatories that are providers of Very Large Online Platforms commit, seeking alignment with the DSA, to be audited at their own expense, for their compliance with the commitments undertaken pursuant to this Code. Audits should be performed by organisations, independent from, and without conflict of interest with, the provider of the Very Large Online Platform concerned. Such organisations shall have proven expertise in the area of disinformation, appropriate technical competence and capabilities and have proven objectivity and professional ethics, based in particular on adherence to auditing standards and guidelines.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Implemented the annual DSA audit programme. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.