TikTok

Report March 2026

Submitted

TikTok allows users to create, share, and watch short-form videos and live content, primarily for entertainment purposes.

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

N/A

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

N/A

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

N/A

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

We partner with industry leaders to provide a range of controls and transparency tools to advertising buyers with regard to the placement of their ads:

Controls: We offer pre-campaign solutions to advertisers so they can put additional safeguards in place before their campaign goes live to mitigate the risk of their advertising being displayed adjacent to certain types of user-generated content. These measures are in addition to the Community Guidelines, which provide overarching rules around the types of content that can appear on TikTok and are eligible for the For You feed:

  • TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 countries in the EEA and is embedded directly in TikTok Ads Manager, the main system through which advertisers purchase ads; we have also expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by industry standards and policies, which include topics that may be susceptible to disinformation. Additionally, it enables advertisers to:
    • Selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
    • Exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
  • TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks with all levels of risk (available in France and Germany). Some misinformation content may be captured and filtered out by these industry standard categories, such as “Sensitive Social Issues”.

Transparency: We have partnered with third parties to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in the For You feed, against their chosen brand suitability parameters:

  • Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with industry standards.
  • IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post campaign is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with industry standards. 
  • DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand its suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 countries in the EEA.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

We partner with industry leaders to provide a range of controls and transparency tools to advertising buyers with regard to the placement of their ads:

Controls: We offer pre-campaign solutions to advertisers so they can put additional safeguards in place before their campaign goes live to mitigate the risk of their advertising being displayed adjacent to certain types of user-generated content. These measures are in addition to the Community Guidelines, which provide overarching rules around the types of content that can appear on TikTok and are eligible for the For You feed:

  • TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 countries in the EEA and is embedded directly in TikTok Ads Manager, the main system through which advertisers purchase ads; we have also expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by industry standards and policies, which include topics that may be susceptible to disinformation. Additionally, it enables advertisers to:
    • Selectively exclude unwanted or misaligned videos that do not align with their brand safety requirements from appearing next to their ads through TikTok's Video Exclusion List solution.
    • Exclude specific profile pages from serving their Profile Feed ads through TikTok's Profile Feed Exclusion List.
  • TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks with all levels of risk (available in France and Germany). Some misinformation content may be captured and filtered out by these industry standard categories, such as “Sensitive Social Issues”.

Transparency: We have partnered with third parties to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in the For You feed, against their chosen brand suitability parameters:

  • Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.
  • IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post campaign is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the Industry Standards. 
  • DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand its suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 countries in the EEA.

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

N/A

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

N/A

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

N/A

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

N/A

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

N/A

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

N/A

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

N/A

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

At the end of August 2025, we launched more granular misinformation advertising policies in the EEA, providing clearer categorisation and more targeted, risk-based enforcement:
  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation
  • Election Misinformation
  • Other Misinformation

These new policies supersede and expand upon the previous set of five policies introduced in H1 2025, which included:
  • Medical Misinformation
  • Dangerous Misinformation
  • Synthetic and Manipulated Media
  • Dangerous Conspiracy Theories
  • Climate Misinformation

We have enhanced our automated detection models, which are now operational and support enforcement of the new misinformation advertising policies, and we continue to develop these models to strengthen that enforcement.

We provided users with a simple and intuitive way to report advertisements in-app for breach of our misinformation advertising policies in each EU Member State.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to focus on improving the accuracy and coverage of our automated misinformation moderation systems for advertising.

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1:
In H2 2025, we iterated on our existing advertising policies for misinformation and launched more granular policies in the EEA (covering Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation), with which advertisers need to comply. These policies provide clearer categorisation of misinformation types and build on the principles and enforcement experience of the five policies set out in the H1 2025 report, enabling more consistent and targeted enforcement in line with evolving risks.

Our advertiser account policies expressly prohibit deceptive behaviours, including prohibiting advertisers from circumventing, evading, or interfering with our advertising systems and processes.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic, and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. To improve our existing ad policies, at the end of August 2025 we launched more granular misinformation advertising policies in the EEA (covering Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation), with which advertisers also need to comply. These policies supersede and expand upon the five granular policies introduced in H1 2025.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

Methodology of data measurement:
We have set out the number of ads that have been removed from our platform for violation of our granular misinformation advertising policies on Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation. We launched these updated misinformation policies at the end of August 2025 to provide clearer categorisation and more targeted, risk-based enforcement.

The methodology for ad-removals data under our misinformation advertising policies was revised in this period to reflect refinements in our deduplication logic.

We are pleased to be able to report on the ads removed for breach of our granular misinformation advertising policies. We have provided the political advertising enforcement metrics in the Elections Crisis Chapter of this Report.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed.

Country Number of ad removals under the granular misinformation ad policies
Austria 133
Belgium 101
Bulgaria 16
Croatia 9
Cyprus 3
Czech Republic 22
Denmark 90
Estonia 10
Finland 22
France 138
Germany 656
Greece 17
Hungary 49
Ireland 176
Italy 102
Latvia 21
Lithuania 8
Luxembourg 1
Malta -
Netherlands 46
Poland 77
Portugal 37
Romania 19
Slovakia 11
Slovenia 12
Spain 73
Sweden 195
Iceland 0
Liechtenstein -
Norway 165
Total EU 2,044
Total EEA 2,209
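For readers who wish to cross-check the table, the EU and EEA totals can be reproduced by summing the per-country figures. The short script below is an illustrative sketch only (it is not part of TikTok's reporting methodology); the figures are copied from the table above, and countries reported as "-" are omitted:

```python
# Per-country ad removals copied from the table above; countries reported
# as "-" (Malta, Liechtenstein) are omitted because no figure was published.
removals_eu = {
    "Austria": 133, "Belgium": 101, "Bulgaria": 16, "Croatia": 9,
    "Cyprus": 3, "Czech Republic": 22, "Denmark": 90, "Estonia": 10,
    "Finland": 22, "France": 138, "Germany": 656, "Greece": 17,
    "Hungary": 49, "Ireland": 176, "Italy": 102, "Latvia": 21,
    "Lithuania": 8, "Luxembourg": 1, "Netherlands": 46, "Poland": 77,
    "Portugal": 37, "Romania": 19, "Slovakia": 11, "Slovenia": 12,
    "Spain": 73, "Sweden": 195,
}
# EEA members that are not EU Member States.
removals_eea_only = {"Iceland": 0, "Norway": 165}

total_eu = sum(removals_eu.values())
total_eea = total_eu + sum(removals_eea_only.values())
print(total_eu, total_eea)  # 2044 2209
```

Summing the 26 published EU figures yields 2,044, and adding Iceland and Norway yields 2,209, matching the "Total EU" and "Total EEA" rows.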

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1
TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements are reviewed against our Advertising Policies through a combination of automated and human moderation.

Our granular misinformation advertising policies launched in H2 2025 currently cover:

  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation 
  • Election Misinformation
  • Other Misinformation

Our advertiser account policies expressly prohibit deceptive behaviours, including prohibiting advertisers from circumventing, evading, or interfering with our advertising systems and processes.

We provide users with a simple and intuitive way to report advertisements in-app for breach of our Advertising Policies including for misinformation in each EU Member State.

There are two main ways to report an advertisement on TikTok, either:
  • By ‘long-pressing’ (i.e., pressing and holding for around 3 seconds) on the advertisement and selecting the “Report” option.
  • By selecting the “Share” button available on the right-hand side of the advertisement and then selecting the “Report” option.

The user is then shown categories of reporting reasons from which to select. In H2 2025, we updated this feature to create the specific “Misinformation” category and allow users to report with increased granularity.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation advertising policies would also have been removed under our broader existing ad policies. Our granular misinformation advertising policies currently cover:

  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation
  • Election Misinformation
  • Other Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary. 

TikTok also operates a "recall" process whereby ads already on TikTok will undergo an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews on random samples of ads to ensure its processes are functioning as expected.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1
TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies through a combination of automated and human moderation.

Our granular misinformation advertising policies launched in H2 2025 currently cover:

  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation
  • Election Misinformation
  • Other Misinformation

Our advertiser account policies expressly prohibit deceptive behaviours, including prohibiting advertisers from circumventing, evading, or interfering with our advertising systems and processes.

We provide users with a simple and intuitive way to report advertisements in-app for breach of our Advertising Policies including for misinformation in each EU Member State.

There are two main ways to report an advertisement on TikTok, either:
  • By ‘long-pressing’ (i.e., pressing and holding for around 3 seconds) on the advertisement and selecting the “Report” option.
  • By selecting the “Share” button available on the right-hand side of the advertisement and then selecting the “Report” option.

The user is then shown categories of reporting reasons from which to select. In H2 2025, we updated this feature to create the specific “Misinformation” category and allow users to report with increased granularity.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation advertising policies would also have been removed under our broader existing ad policies. Our granular misinformation advertising policies currently cover:

  • Health Misinformation
  • Environment/Climate Misinformation
  • Public Safety & Trust Misinformation
  • Election Misinformation
  • Other Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary.

TikTok also operates a "recall" process whereby ads already on TikTok will go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews on random samples of ads to ensure its processes are functioning as expected.

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

We are pleased to be able to report on the ads removed for breach of our granular misinformation advertising policies, including the impressions of those ads, in this report. The methodology for ad-removals data under our misinformation advertising policies was revised in this period to reflect refinements in our deduplication logic. We have provided the political advertising enforcement metrics in the Elections Crisis Chapter of this Report.

Country Number of ad removals under the granular misinformation ad policies Number of impressions for ads removed under the granular misinformation ad policies
Austria 133 14,139
Belgium 101 35,702
Bulgaria 16 1,245
Croatia 9 2,019
Cyprus 3 1,542
Czech Republic 22 16,572
Denmark 90 12,306
Estonia 10 620
Finland 22 11,521
France 138 36,867
Germany 656 402,684
Greece 17 32,304
Hungary 49 189,097
Ireland 176 44,960
Italy 102 65,589
Latvia 21 128,011
Lithuania 8 866
Luxembourg 1 4,632
Malta - -
Netherlands 46 1,282
Poland 77 57,588
Portugal 37 40,976
Romania 19 2,599
Slovakia 11 588
Slovenia 12 872
Spain 73 7,958
Sweden 195 20,877
Iceland 0 0
Liechtenstein - -
Norway 165 23,045
Total EU 2,044 1,133,416
Total EEA 2,209 1,156,461
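The impressions column can be cross-checked the same way. Again, this is only an illustrative sketch with figures copied from the table above (countries reported as "-" omitted), not TikTok's reporting pipeline:

```python
# Impressions for ads removed under the granular misinformation ad policies,
# per country, copied from the table above ("-" entries omitted).
impressions_eu = {
    "Austria": 14_139, "Belgium": 35_702, "Bulgaria": 1_245, "Croatia": 2_019,
    "Cyprus": 1_542, "Czech Republic": 16_572, "Denmark": 12_306,
    "Estonia": 620, "Finland": 11_521, "France": 36_867, "Germany": 402_684,
    "Greece": 32_304, "Hungary": 189_097, "Ireland": 44_960, "Italy": 65_589,
    "Latvia": 128_011, "Lithuania": 866, "Luxembourg": 4_632,
    "Netherlands": 1_282, "Poland": 57_588, "Portugal": 40_976,
    "Romania": 2_599, "Slovakia": 588, "Slovenia": 872, "Spain": 7_958,
    "Sweden": 20_877,
}
# EEA members that are not EU Member States.
impressions_eea_only = {"Iceland": 0, "Norway": 23_045}

eu_impressions = sum(impressions_eu.values())
eea_impressions = eu_impressions + sum(impressions_eea_only.values())
print(eu_impressions, eea_impressions)  # 1133416 1156461
```

The published per-country impressions sum to 1,133,416 for the EU and 1,156,461 for the EEA, matching the totals rows.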

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

We are clear with advertisers that their ads must comply with our strict ad policies (see TikTok Business Help Centre). We explain that all ads are reviewed before they appear on our platform - usually within 24 hours. Ads already on TikTok may go through an additional stage of review if they are reported, if certain conditions are met (e.g., reaching certain impression thresholds), or because of random sampling conducted at TikTok’s own initiative.

Where an advertiser has violated an ad policy, they are informed by way of a notification. This is visible in their TikTok Ads Manager account and/or sent by email (if they have provided a valid email address), or where an advertiser has booked their ad through a TikTok representative, then the representative will inform the advertiser of any violations. Advertisers are able to make use of functionality to appeal rejections of their ads.

Transparency is an important part of our overarching DSA compliance programme. Notifications of restrictions include the restriction itself, the reason for the restriction, whether we made the decision by automated means, how we came to detect the violation (e.g., as a result of a user report or proactive TikTok initiatives), and what the advertiser’s rights of redress are. Advertisers can access online functionality to appeal restrictions on their account or ads. These appeals are then also reviewed against our ad policies, and additional information may be provided to advertisers to help them understand the violation and what to do about it.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

We are clear with advertisers that their ads must comply with our strict ad policies (see TikTok Business Help Centre). We explain that all ads are reviewed before they appear on our platform - usually within 24 hours. Ads already on TikTok may go through an additional stage of review if they are reported, if certain conditions are met (e.g., reaching certain impression thresholds), or because of random sampling conducted at TikTok’s own initiative.

Where an advertiser has violated an ad policy, they are informed by way of a notification. This is visible in their TikTok Ads Manager account and/or sent by email (if they have provided a valid email address), or where an advertiser has booked their ad through a TikTok representative, then the representative will inform the advertiser of any violations. Advertisers are able to make use of functionality to appeal rejections of their ads in certain circumstances. 

As part of our overarching DSA compliance programme, we have improved how we notify advertisers and increased the transparency of those notifications. Notifications of restrictions include the restriction itself, the reason for the restriction, whether we made the decision by automated means, how we came to detect the violation (e.g., as a result of a user report or proactive TikTok initiatives), and what the advertiser’s rights of redress are. Advertisers can access online functionality to appeal restrictions on their account or ads. These appeals are then also reviewed against our ad policies, and additional information may be provided to advertisers to help them understand the violation and what to do about it.

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

We are pleased to be able to share the number of appeals for ads removed under our granular misinformation advertising policies, as well as the number of overturns. The methodology for ad-removals data under our misinformation advertising policies was revised in this period to reflect refinements in our deduplication logic.

Country Number of appeals for ads removed under the granular misinformation ad policies Number of overturns of appeals under the granular misinformation ad policies
Austria 0 0
Belgium 0 0
Bulgaria 0 0
Croatia 0 0
Cyprus 0 0
Czech Republic 0 0
Denmark 0 0
Estonia 0 0
Finland 0 0
France 0 0
Germany 0 0
Greece 0 0
Hungary 0 0
Ireland 0 0
Italy 0 0
Latvia 0 0
Lithuania 0 0
Luxembourg 0 0
Malta 0 0
Netherlands 0 0
Poland 0 0
Portugal 0 0
Romania 0 0
Slovakia 0 0
Slovenia 0 0
Spain 0 0
Sweden 0 0
Iceland 0 0
Liechtenstein 0 0
Norway 0 0

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We have established and strengthened our partnership with third-party fact-checkers to detect harmful misinformation on our platform.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We will continue to enhance our misinformation detection capabilities through two key initiatives:
  • The optimisation of our collaboration framework with third-party fact-checking organisations in relation to advertising (e.g. Science Feedback); and
  • Continuing to enhance detection within the advertising ecosystem through signal-sharing to improve our internal databases.

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

As set out later in this report, we cooperate with a number of third parties to facilitate the flow of information that may be relevant for tackling purveyors of harmful misinformation. This information is shared internally to help ensure consistency of approach across our platform.

In this reporting period, we began partnering with the third-party fact-checking organisation Science Feedback, which verifies claims that are prone to misinformation. Claims and signals verified by Science Feedback are now integrated into our moderation workflows for ads. We also source claims and signals from across the platform to further enhance our moderation.

We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chair. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation. We collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

We work with industry partners to discuss common standards and definitions to support consistency of categorising content, adjacency & measurement relevant topics, in appropriate fora. We work closely with IAB Sweden, IAB Ireland and other organisations such as TAG in the EEA and globally. We are also on the board of the Brand Safety Institute. 

We continue to share relevant insights and metrics within our quarterly transparency reports, which aim to inform industry peers and the research community. We continue to engage in the subgroups set up for insights sharing between signatories and the Commission.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

We continue to work closely with IAB Sweden, IAB Ireland, and other organisations such as TAG in the EEA and globally.

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 4.1

Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Our 2025 Community Guidelines update was launched on August 14, 2025 and went live on September 13, 2025 (due to the 30-day notice period for users). This update ensured that the Community Guidelines remain aligned with our internal policies.
    • Our Harmful Misinformation policies are referenced under the hack and leak section. They have all been refined in H2 2025, and they continue to drive our work in combating harmful misinformation, such as conspiracy theories, claims relating to unfolding events, and other forms of dangerous misinformation.
  • We continue enforcing our AIGC policy against TikTok Shop content.
  • We launched the Evasive Techniques policy, which combats methods designed to evade moderation systems.
  • We continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.

We continue to update and refine our policies around Covert Influence Operations in order to stay agile to changing behaviours and tactics on the platform and to ensure more granular detail is enshrined in our policy rationales.

Please note: Some TTPs cannot be viewed on disinfocode.eu. Please download our full report, which can be found at the top of the page, for complete information on our work relevant to Commitment 14.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1
Our Integrity and Authenticity policies in our Community Guidelines safeguard against harmful misinformation (see QRE 18.2.1) and also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our Integrity and Authenticity policies, which address Spam and Deceptive Account Behaviours, expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes

We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.  

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

Deliberately targeting vulnerable recipients (e.g. location spoofing or obfuscation), inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platforms algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers), use of deceptive practices to deceive/manipulate platform algorithms, and coordinated mass reporting of non-violative opposing content or accounts.

We fight against covert influence operations (CIOs): our policies prohibit attempts to sway public opinion while misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose.


We also do not allow impersonation, including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our Integrity and Authenticity policies, which address fake engagement, do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics) is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora, or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
Again, our policies prohibit impersonation, which refers to accounts that pose as another real person or entity or present as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner. 

In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions, for example) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts on our For You Feed.

When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop appropriate product and policy solutions as appropriate in the long term. We continue to proactively identify and remove CIO networks that pose risks to user safety. We have published details of CIO networks we identified and removed in H2 2025 in a dedicated monthly report within our Transparency Centre here. For advertising-related CIO measures, please refer to Chapter 2.


Use “hack and leak” operation (which may or may not include doctored content) 

We have a number of policies that address hack-and-leak related threats (some examples are below):
  • Our hack-and-leak policy aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
  • Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation.
  • Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure.
  • In H2 2025, we deployed a defined suite of misinformation policies. As stated in our Community Guidelines, our policies do not allow misinformation that could cause significant harm to individuals or society, no matter the intent of the person posting it. This includes hoaxes, misleading AIGC, harmful conspiracy theories, and other false information related to public safety, crises, or major civic events, where such content may lead to violence or cause public panic. In addition, content is ineligible for the For You Feed (FYF) if it contains misinformation that may cause moderate harm to individuals or society. Out of caution, unverified information about crises, major civic events, or content temporarily under review by fact-checkers is also ineligible for the FYF.
  • Misinformation that poses a risk to public safety or incites panic, including falsely presenting past crisis events as recent or claiming that critical resources are unavailable during emergencies
  • Health misinformation that could cause significant harm, such as promoting unproven treatments that may be fatal, discouraging professional care for life-threatening conditions (e.g., vaccine effectiveness), or spreading false information about how such conditions are transmitted.
  • Misinformation that denies the existence of climate change, misrepresents its causes, or contradicts its established environmental impact
  • Conspiracy theories or hoaxes that could cause significant harm, such as those that make a violent call to action or have links to previous violence.


Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)  

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or altering their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC, which features:
  • The likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying or privacy concerns, including those related to personally identifiable information or likeness to private individuals.
  • Misleading AIGC or edited media that falsely show:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

We updated our AIGC Deceptive policy to address how we define critical events and matters of public importance. Our policy now covers a larger surface area of potentially deceptive AIGC content.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service require users posting about a brand or product in return for any payment or other incentive to disclose their content by enabling the Commercial Disclosure Toggle, which we make available for users. We also provide functionality to enable users to report suspected undisclosed branded content, which reminds the user who posted the suspected undisclosed branded content of our requirements and prompts them to turn the Commercial Disclosure Toggle on if required. We made this requirement even more explicit to users in our Commercial Disclosure and Paid Marketing section in the Community Guidelines, which was updated in H2 2025 to provide greater clarity.

TikTok prohibits political advertising, including political branded content. When political branded content is not disclosed as such, and TikTok has high confidence that an individual was paid to post political content, TikTok removes the content as it violates TikTok’s prohibition on paid political branded content. Where TikTok has only medium confidence, the content is restricted from appearing in the For You Feed. Note that TikTok has separate policies applying to paid Advertising. 

With regard to Advertising Policies, please refer to Chapter 2 concerning Misinformation Advertising Policies.

In addition, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2
At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation and deceptive behaviours on our Platform before they are reported to us by users or third parties. 

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To that end, we have built detection models and rule engines that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters or similar inauthentic behaviours.
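As an illustration only, the kind of rule engine described above can be sketched as follows. Every signal name, threshold, and action here is a hypothetical placeholder chosen for exposition, not TikTok's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account signals a rule engine might evaluate."""
    registrations_from_device_last_hour: int = 0  # burst-registration pattern
    actions_per_minute: float = 0.0               # automation-like cadence
    reported_as_fake: int = 0                     # user reports received

def evaluate(signals: AccountSignals) -> str:
    """Return an action: 'block_registration', 'remove', or 'allow'.

    Mirrors the two rule families above: blocking creation on malicious
    patterns, and removing registered accounts on uncommon-behaviour signals.
    Thresholds are illustrative placeholders.
    """
    # Rule 1: prevent creation when a device shows a malicious burst pattern.
    if signals.registrations_from_device_last_hour > 20:
        return "block_registration"
    # Rule 2: remove registered accounts on strong automation or report signals.
    if signals.actions_per_minute > 60 or signals.reported_as_fake >= 5:
        return "remove"
    return "allow"

print(evaluate(AccountSignals(registrations_from_device_last_hour=50)))
# block_registration
```

In practice such rules would sit alongside learned detection models rather than replace them; the sketch only shows the rule-engine half of the combination.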

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the Integrity and Authenticity policies in our Community Guidelines, and allow them to route new or evolving content to our fact-checking partners for assessment.

In addition, where content reaches certain popularity levels in terms of the number of video views, it will be flagged for further review. Such a review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our Community Guidelines including our Integrity and Authenticity policies.

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, increasing the number of videos removed for policy violations. This also decreased the number of views per violating video over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We also have specifically trained teams focused on investigating and detecting CIOs on our platform. We have built international trust and safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations. These teams continuously pursue and analyse on-platform technical signals as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok's systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would be violative of platforms' guidelines by itself. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
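As an illustration only, the conjunction of the three behavioural criteria described above can be sketched as follows. The feature names and the simple boolean logic are assumptions made for exposition, not TikTok's actual detection system:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """Hypothetical features of a group of accounts under investigation."""
    shared_devices: bool       # technical linkage suggesting coordination
    location_spoofing: bool    # misleading systems about where they operate
    fake_personas: bool        # misleading users about who they are
    targets_civic_event: bool  # attempting to shape election/conflict discourse

def is_covert_influence_operation(c: Cluster) -> bool:
    """Illustrative check: all three criteria must hold together --
    coordination, deception, and attempted manipulation of public debate."""
    coordinated = c.shared_devices
    deceptive = c.location_spoofing or c.fake_personas
    manipulative = c.targets_civic_event
    return coordinated and deceptive and manipulative
```

In this sketch, a group authentically campaigning for a social cause (coordinated and topical, but not deceptive) fails the deception test and is not flagged, mirroring the distinction drawn above between malicious inauthentic coordination and healthy collective expression.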



QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Our Integrity & Authenticity policies in our Community Guidelines safeguard against harmful misinformation (see QRE 18.2.1) and also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our Integrity & Authenticity policies, which address Spam and Deceptive Account Behaviours, expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes

We also do not allow impersonation, including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our Integrity & Authenticity policies, which address fake engagement, do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics) is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora, or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
As noted above, our policies prohibit impersonation, which covers accounts that pose as another real person or entity, or present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner. 
In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited, and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts in our For You feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.  

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

  • Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation);
  • Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers);
  • Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and
  • Coordinated mass reporting of non-violative opposing content or accounts.

We fight against CIOs, as our policies prohibit attempts to sway public opinion while misleading our systems or users about the operation's identity, origin, approximate location, popularity, or overall purpose.

When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop appropriate product and policy solutions as appropriate in the long term. We have published details of all the CIO networks we identified and removed in H1 2025 in a dedicated monthly report within our Transparency Centre here.

In H1 2025, under our Deceptive Behaviours policies, we pursued a number of initiatives to further develop and adapt our strategies for combating manipulative behaviours and practices, and we continue to make progress through ongoing policy and product updates.

Use of “hack and leak” operations (which may or may not include doctored content) 

We have a number of policies that address hack-and-leak related threats (some examples are below):
  • Our hack-and-leak policy aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
  • Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation.
  • Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure.
  • Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation.
  • Our Trade of Regulated Goods and Services policy prohibits the trading of hacked goods.

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...) 

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or altering their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC that features:
  • The likeness of young people or realistic-appearing people under the age of 18.
  • The likeness of adult private figures, if we become aware that it was used without their permission.
  • Misleading AIGC or edited media that falsely show:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle we make available to users. We also provide functionality to enable users to report suspected undisclosed branded content; this reminds the user who posted the suspected undisclosed branded content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer in our Commercial Disclosures and Paid Promotion policy in our March 2023 Community Guidelines refresh by expanding the information around our enforcement of this policy and providing specific examples.

We also don't allow paid political advertising. This includes creators being compensated for making branded political content, and the use of other promotional tools on the platform, such as Promote. We prohibit advertising of any kind by political figures and entities, and suspected paid political advertising is ineligible for the For You feed.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our Platform before they are reported to us by users or third parties. 

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To this end, we have built and deployed detection models and rule engines that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters of similar inauthentic behaviour.
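The rule-engine approach described above can be illustrated with a minimal sketch. All signal names, thresholds, and verdicts here are hypothetical, invented for illustration only, and do not reflect TikTok's actual systems:

```python
# Hypothetical sketch of a registration-time rule engine: observed signals
# are mapped to a verdict. All signal names and thresholds are invented.
def evaluate_signup(signals: dict) -> str:
    # Block creation when patterns associated with bulk registration appear.
    if signals.get("accounts_from_device_24h", 0) > 5:
        return "block"  # likely scripted bulk registration
    if signals.get("disposable_email", False) and signals.get("automation_fingerprint", False):
        return "block"
    # Route already-registered accounts showing uncommon behaviour to review.
    if signals.get("follows_per_minute", 0) > 30:
        return "review"
    return "allow"

print(evaluate_signup({"accounts_from_device_24h": 9}))  # block
print(evaluate_signup({"follows_per_minute": 45}))       # review
print(evaluate_signup({}))                               # allow
```

In practice such rules would sit alongside learned detection models; the point of the sketch is only the pattern of combining registration-time blocking with post-registration signal review.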

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the Integrity & Authenticity policies in our Community Guidelines, including providing case banks of harmful misinformation claims to support their moderation work, and allowing them to route new or evolving content to our fact-checking partners for assessment. 

In addition, where content reaches certain popularity levels in terms of the number of video views, it will be flagged for further review. Such a review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our Community Guidelines including our Integrity & Authenticity policies.
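A minimal sketch of this popularity-triggered escalation, with invented thresholds and queue names (not TikTok's actual values):

```python
# Hypothetical view-count tiers that route widely viewed videos to review
# queues; thresholds and queue names are illustrative only.
REVIEW_TIERS = [
    (5_000_000, "escalated_review"),  # very wide dissemination
    (500_000, "standard_review"),     # elevated dissemination
]

def review_queue(view_count: int):
    for threshold, queue in REVIEW_TIERS:
        if view_count >= threshold:
            return queue
    return None  # below all thresholds: no popularity-triggered review

print(review_queue(6_000_000))  # escalated_review
```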

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, increasing the number of videos removed for policy violations. The average number of views per violative video also decreased over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We have also set up specifically-trained teams that are focused on investigating and detecting CIO on our Platform. We've built international trust & safety teams with specialized expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would be violative of platforms' guidelines by itself. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
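The linkage analysis described above can be sketched as follows. The signals, identifiers, and size threshold are hypothetical illustrations, not TikTok's actual criteria; real investigations weigh many behavioural and technical signals together:

```python
# Hypothetical sketch of linkage analysis: accounts sharing a technical
# signal (here, a device identifier) are grouped into candidate clusters
# for human investigation. All identifiers are invented.
from collections import defaultdict

def candidate_clusters(accounts, min_size=3):
    by_device = defaultdict(set)
    for acct in accounts:
        for device in acct["devices"]:
            by_device[device].add(acct["id"])
    # A shared device is a lead, not proof of coordination; only clusters
    # above a size threshold are escalated for manual review.
    return [sorted(ids) for ids in by_device.values() if len(ids) >= min_size]

accounts = [
    {"id": "a1", "devices": ["d1"]},
    {"id": "a2", "devices": ["d1", "d2"]},
    {"id": "a3", "devices": ["d1"]},
    {"id": "a4", "devices": ["d9"]},
]
print(candidate_clusters(accounts))  # [['a1', 'a2', 'a3']]
```

The human-review step matters because, as noted above, authentic communities also coordinate; clustering only surfaces candidates for investigation.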

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

The implementation of our policies is ensured by different means, including specifically-designed tools (such as toggles to disclose branded content - see QRE 14.1.1) or human investigations to detect deceptive behaviours (for CIO activities - see QRE 14.1.2).

The implementation of these policies is also ensured through enforcement measures applied in all Member States. 

CIO investigations are resource-intensive and require in-depth analysis to ensure high confidence in proposed actions. Where our teams have the necessary high degree of confidence that an account is engaged in CIO or is connected to networks we took down in the past as part of a CIO, it is removed from our Platform.

Similarly, where our teams have a high degree of confidence that specific content violates one of our TTPs-related policies (See QRE 14.1.1), such content is removed from TikTok.

Lastly, we may reduce the discoverability of some content, including by making videos ineligible for recommendation in the For You feed section of our platform. This is, for example, the case for content that tricks or manipulates users in order to inauthentically increase followers, likes, or views.

Full metrics from this QRE (and QREs 14.2.2 and 14.2.4) can be found in our full report, linked at the top of this page. 

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

Ratio of monthly average of Fake accounts over monthly active users (EU): 2.46%
Impersonation accounts over monthly active users (EU): 0.005%
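For illustration, a ratio SLI of this kind reduces to simple arithmetic; the figures below are invented placeholders, not TikTok's reported data:

```python
# How a ratio SLI like the one above is computed: the monthly average of
# fake-account removals divided by monthly active users. All figures are
# hypothetical placeholders.
monthly_fake_account_removals = [4_100_000, 3_900_000, 4_000_000]
monthly_active_users = 162_000_000  # hypothetical EU MAU

avg_removals = sum(monthly_fake_account_removals) / len(monthly_fake_account_removals)
ratio = avg_removals / monthly_active_users
print(f"{ratio:.2%}")  # 2.47%
```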

Number of unique videos labelled with the AIGC tag "AI-generated", by country:
Austria: 292,401
Belgium: 461,238
Bulgaria: 832,573
Croatia: 94,124
Cyprus: 100,228
Czech Republic: 455,194
Denmark: 114,169
Estonia: 63,904
Finland: 178,869
France: 2,665,168
Germany: 3,887,735
Greece: 445,046
Hungary: 647,102
Ireland: 131,936
Italy: 2,778,340
Latvia: 150,166
Lithuania: 169,243
Luxembourg: 26,034
Malta: 26,024
Netherlands: 884,486
Poland: 1,455,778
Portugal: 697,724
Romania: 1,826,466
Slovakia: 281,735
Slovenia: 35,187
Spain: 3,289,279
Sweden: 336,203
Iceland: 11,757
Liechtenstein: 530
Norway: 154,897
Total EU: 22,326,352
Total EEA: 22,493,536

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

We collaborated as part of the Integrity of Services working group to set up the first list of TTPs. We continue to provide updates on observed TTPs through our monthly CIO transparency reporting, including observations on novel and emerging tradecraft.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We provided extensive training for moderators and risk containment agents to help them detect and remove deceptive AIGC more quickly. We also conducted a thorough assessment of the effectiveness of our AI policies and provided guidance to reduce systemic error. 
  • We published our Responsible AI Principles.
  • We remained a party, alongside industry partners, to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • We continued to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Our Edited Media and AI-Generated Content (AIGC) policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

While we welcome the creativity that new AI may unlock, in line with our updated policy, users must proactively disclose when their content is AI-generated or manipulated but shows realistic scenes (i.e. fake people, places, or events that look like they are real). We launched an AI toggle in September 2023, which allows users to self-disclose AI-generated content when posting. When this has been turned on, a tag “Creator labelled as AI-generated” is displayed to users. Alternatively, this can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’. 

We also automatically label content made with TikTok effects if those effects use AI. TikTok may automatically apply the "AI-generated" label to content we identify as completely generated or significantly edited with AI. This may happen when a creator uses TikTok AI effects or uploads AI-generated content that has Content Credentials attached, a technology from the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials attach metadata to content that we can use to recognise and label AIGC instantly. Once content is labelled as AI-generated with an auto-label, users are unable to remove the label from the post.
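As a rough sketch of the labelling logic described above (the function, its parameters, and the return shape are hypothetical illustrations, not TikTok's implementation):

```python
# Hypothetical sketch: auto-labels (TikTok AI effects, or C2PA Content
# Credentials metadata) are locked, while the creator's self-disclosure
# toggle produces a removable label. Names are illustrative only.
def aigc_label(used_ai_effect: bool, has_c2pa_credentials: bool,
               creator_toggle_on: bool):
    """Return (label_text, removable_by_creator)."""
    if used_ai_effect or has_c2pa_credentials:
        return "AI-generated", False  # auto-label; creator cannot remove it
    if creator_toggle_on:
        return "Creator labelled as AI-generated", True
    return None, True  # no label applied

print(aigc_label(False, True, False))  # ('AI-generated', False)
```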

We do not allow: 
  • AIGC that shows the likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying or privacy concerns, including those related to personally identifiable information or likeness to private individuals. 
  • AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission.
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1
We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:

  • We have in place internal guidelines on Algorithmic Fairness that are developed with adherence to our commitment to human rights as outlined here: https://www.tiktok.com/transparency/en/upholding-human-rights
  • We have continued to scale our algorithmic fairness compliance review process for new or updated AI systems that meet certain risk-based thresholds.

We are also proud to be a launch partner of the Partnership on AI's Responsible Practices for Synthetic Media.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:
  • We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
  • We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.
We are also proud to be a launch partner of the Partnership on AI's Responsible Practices for Synthetic Media.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage in the subgroups set up for insights sharing between signatories and the Commission. For example, we participated in cross-industry forums such as EU elections roundtables in markets including Czechia, Netherlands, Ireland, and Estonia. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1
Central to our strategy for identifying and removing CIO on our platform is working with our stakeholders, including civil society, and acting on user reports. This approach helps us, and others, disrupt networks' operations in their early stages. In addition to continuously enhancing our in-house capabilities, we proactively review our peers' publicly disclosed findings and swiftly take any necessary action in line with our policies.

To provide more regular and detailed updates about the CIO we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we also have information about operations that we have previously removed and that have attempted to return to our platform with new accounts. The insights and metrics in this report aim to inform industry peers and the research community. 

We share relevant insights and metrics within our transparency reports, which aim to inform industry peers and the research community. We also review relevant insights and metrics from other industry peers to cross-compare for any similar behaviour on TikTok.

We continue to engage in the subgroups set up for insights sharing between signatories and the Commission. For example, we participated in cross-industry forums such as EU elections roundtables in markets including Czechia, Netherlands, Ireland, and Estonia. 

As we have detailed in other chapters to this report, we have robust monetisation integrity policies in place and have established joint operating procedures between specialist CIO investigations teams and monetisation integrity teams to work on joint investigations of CIOs involving monetised products. 

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

N/A

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We publish all of the CIO networks we identify and remove within our transparency reports here. As new deceptive behaviours emerge, we’ll continue to evolve our response, strengthen enforcement capabilities, and publish our findings.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have 14 ongoing media literacy and critical thinking skills campaigns in Europe (12 in the EU/EEA: Denmark, Finland, France, Germany, Ireland, Italy, Romania, Spain, Sweden, the Netherlands, Poland and Portugal; 2 in wider European countries: Georgia and Moldova).

  • We ran 8 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
    • 7 in the EU 
      • Czechia (Parliamentary election): Demagog.cz
      • Portugal (local election): Polígrafo
      • Estonia (local election): Lead Stories
      • Ireland (presidential election): The Journal
      • Netherlands (parliamentary election)
      • Denmark (local and municipal election): Sikker Digital
      • Portugal (presidential election): Polígrafo
    • 1 in Norway (parliamentary election)
  • Following wildfires in Portugal and Spain, we launched an in-app guide to provide users with guidance on interacting with sensitive content during natural disasters. The guide links to TikTok's tragic event support guide and to authoritative third-party resources (PT) (ES) with information about aid and relief support. The intervention is available in all in-app languages.
  • Following protests in France, we launched an in-app guide to provide users with guidance on interacting with sensitive content when events are unfolding rapidly. The guide links to TikTok's Community Guidelines and Well-being Guide.
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around elections, the Israel-Hamas Conflict, Holocaust Education, and the War in Ukraine.
  • Continued to support mental well-being awareness and literacy and to combat misinformation with reliable content through the WHO's Fides network, a diverse community of trusted healthcare professionals and content creators in a number of countries, including France.
  • We launched a $2 Million AI Literacy fund in partnership with more than 20 civil society organisations across 12 markets worldwide. The ad credit fund is designed to support the creation of educational content that will appear in For You feeds. This initiative launched alongside several new company updates to spot, shape and understand AI-generated content.
  • Brought greater transparency about our systems and our integrity and authenticity efforts to our community by sharing regular insights and updates. In H2 2025, we launched a new:
    • Transparency Center Global Elections Hub, including dedicated coverage of elections across Europe, the Middle East, and Africa. The Hub outlines our policies, product features, and moderation practices that help protect platform integrity during elections. Throughout this reporting period, we regularly updated the Hub with information on our safety efforts in markets with active elections, including Croatia, Germany, the Netherlands, Portugal, Poland and Ireland.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

In addition to actioning content that violates our Integrity and Authenticity policies, we continue to dedicate resources to: expanding our in-app measures that show users additional context on certain content (e.g., natural disasters and rapidly unfolding events); redirecting them to authoritative information; and making these tools available in 23 official EU languages (plus, for EEA users, Norwegian and Icelandic).

We work with external experts to combat harmful misinformation. For example, we work with the World Health Organisation (WHO) on medical information, and our global fact-checking partners, taking into account their feedback, as well as user feedback, to continually identify new topics and consider which tools may be best suited for raising awareness around that topic.

We deploy a combination of in-app user intervention tools on topical issues such as elections, the Israel-Hamas Conflict, Holocaust Education, Mpox and the War in Ukraine.

Video notice tags. 

A video notice tag is an information bar at the bottom of a video which is automatically applied to a specific word or hashtag (or set of hashtags). The information bar is clickable and invites users to “Learn more about [the topic]”. Users will be directed to an in-app guide, or reliable third party resource, as appropriate.

Search intervention. 

If users search for terms associated with a topic, they will be presented with a banner encouraging them to verify the facts and providing a link to a trusted source of information. Search interventions are not deployed for search terms that violate our Community Guidelines, which are actioned according to our policies. 


Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

In order to raise awareness among our users about specific topics and empower them, we run a variety of on- and off-platform media literacy campaigns. Our approach may differ depending on the topic. We localise certain campaigns (e.g., for elections), meaning we collaborate with national partners to develop an approach that best resonates with the local audience. For other issues, such as the War in Ukraine, our priority is to connect users to accurate and trusted resources.

Below are examples of the campaigns we have most recently run in-app which have leveraged a number of the intervention tools we have outlined in our response to QRE 17.1.1 (e.g. search interventions and video notice tags).

(I) Promoting election integrity. In addition to the election integrity pages on TikTok's Safety Center and Transparency Center, we maintain the new dedicated Global Elections Hub, which provides an overview of our overall approach to protecting TikTok during elections, including the most relevant policies that we use to protect the platform, our media literacy features, and the continuous updates we make to support our community in real time. Alongside the Hub, we launched media literacy campaigns in advance of several elections in the EU and wider Europe.

  • Czechia Parliamentary Elections 2025: From 4 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Czech parliamentary election. The centre contained a section about spotting misinformation. 
  • Portugal Local Elections 2025: From 16 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Portugal local elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Polígrafo.
  • Estonia Local Elections 2025: From 24 Sept 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Estonia local election. The page contained a section about following our Community Guidelines, with a link to our Estonian fact-checking partner, Lead Stories, for digital literacy resources.
  • Ireland Presidential Election 2025: From 24 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Irish presidential elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.
  • Netherlands Parliamentary Election 2025: From 29 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Dutch parliamentary elections. The centre contained a section about spotting misinformation.
  • Danish Local and Municipal Elections 2025: From 24 Oct 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Danish local and municipal elections. The page contained a section about following our Community Guidelines, with a link to Sikker Digital for digital literacy resources.
  • Portugal Presidential Election 2026: From 9 Dec 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2026 Portugal presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Polígrafo.
  • Norway Parliamentary Elections 2025: From 8 Aug 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Norwegian parliamentary election. The centre contained a section about spotting misinformation.

(II) Media literacy (General). We continue our ongoing general media literacy and critical thinking skills campaigns across 14 countries in the EU and wider Europe (Denmark, Finland, France, Georgia, Germany, Ireland, Italy, Romania, Spain, Sweden, Moldova, the Netherlands, Poland, and Portugal), in collaboration with our fact-checking and media literacy partners.


(III) Media literacy (War in Ukraine). We continue to serve 17 localised media literacy campaigns specific to the war in Ukraine in: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania, Czechia, Poland, Croatia, Slovenia, Bulgaria, Germany, Austria, Bosnia, Montenegro, and Serbia.

  • Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
  • Partnered with fakenews.pl: Poland.
  • Partnered with Correctiv: Germany, Austria.

Through these media literacy campaigns, users searching for keywords relating to the war in Ukraine on TikTok are directed to tips prepared in partnership with local media literacy bodies and our trusted fact-checking partners, to help them identify misinformation and prevent its spread on the platform.

(IV) Israel-Hamas conflict. To help raise awareness and protect our users, we have search interventions which are triggered when users search for neutral terms related to this topic (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources, and also direct them to well-being resources.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.


We are pleased to report metrics on the 14 general media literacy and critical thinking skills campaigns that ran through the reporting period in Germany, Romania, Poland, Denmark, Finland, France, Georgia, Ireland, Italy, Moldova, Portugal, Spain, Sweden, and the Netherlands.

Country (partner) | Total impressions of the H5 page (views generated between 1 July and 31 December 2025) | Impressions of the search intervention | Clicks on the search intervention | Click-through rate of the search intervention
France (in partnership with AFP) | 48,144 | 26,260,992 | 71,577 | 0.27%
Portugal (in partnership with Polígrafo) | 10,369 | 5,426,533 | 22,811 | 0.42%
Denmark (in partnership with Logically Facts) | 4,098 | 202,542 | 881 | 0.43%
The Netherlands (in partnership with Nieuwscheckers) | 34,937 | 2,739,245 | 41,868 | 1.53%
Ireland (in partnership with The Journal) | 905 | 359,461 | 1,883 | 0.52%
Finland (in partnership with Logically Facts) | 1,543 | 186,559 | 2,994 | 1.60%
Sweden (in partnership with Logically Facts) | 2,342 | 413,554 | 4,115 | 1.00%
Spain (in partnership with Maldita) | 21,922 | 17,986,294 | 42,554 | 0.24%
Italy (in partnership with Facta) | 1,433 | 439,721 | 2,290 | 0.52%
Austria (in partnership with Correctiv, joint campaign with Germany) | 4,607 | 1,535,546 | 7,965 | 0.52%
Germany (in partnership with Correctiv, joint campaign with Austria) | 7,790 | 536,473 | 2,533 | 0.47%
Poland | 10,369 | 9,183,221 | 54,480 | 0.59%
Bulgaria | 1,137 | 297,690 | 1,905 | 0.64%
Croatia | 1,256 | 397,876 | 2,240 | 0.56%
Czechia | 2,270 | 962,911 | 3,190 | 0.33%
Slovenia | 535 | 129,253 | 801 | 0.62%
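The click-through-rate column above is simply the clicks column divided by the impressions column, expressed as a percentage. As an illustrative check (not TikTok's reporting code), the France and Finland rows can be reproduced as follows:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Click-through rate as a percentage, rounded to two decimal places."""
    return round(clicks / impressions * 100, 2)

# France: 71,577 clicks on 26,260,992 search intervention impressions
france = click_through_rate(71_577, 26_260_992)   # 0.27, i.e. 0.27%

# Finland: 2,994 clicks on 186,559 search intervention impressions
finland = click_through_rate(2_994, 186_559)      # 1.6, i.e. 1.60%
```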

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

We work with fact-checking partners and media literacy bodies to develop campaigns that educate users and redirect them to authoritative resources. Specific examples of partnerships within the campaigns and projects set out in QRE 17.2.1 are:

(I) Promoting election integrity. 

We partner with various media organisations and fact-checkers to promote election integrity on TikTok. For more detail about the input our fact-checking partners provide please refer to QRE 30.1.3.

During this reporting period, we worked with European fact-checkers and media literacy organisations on 6 temporary media literacy election integrity campaigns, in advance of regional elections, through our in-app Election Centers:

  • Portugal (local election): Polígrafo
  • Estonia (local election): Lead Stories
  • Ireland (presidential election): The Journal
  • Denmark (local and municipal elections): Sikker Digital
  • Portugal (presidential election): Polígrafo
  • Czechia (parliamentary election): Demagog.cz

(II) War in Ukraine.

We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners Correctiv in Austria and Germany, Fakenews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia, and Bulgaria.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models. 
  • Began testing large language models (LLMs) to further support proactive moderation at scale. Because LLMs can comprehend human language and perform highly specific, complex tasks, we are better able to moderate nuanced areas like misinformation by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.
  • TikTok teams and personnel also regularly participate in research-focused events. In October 2025, TikTok co-sponsored the EU DisinfoLab conference in Slovenia. Several TikTok staff attended, and we co-led a session with the Centre for Humanitarian Dialogue on how platforms and conflict mediators can work together to reduce the risks of violence during conflicts.
  • Continued to participate in, and co-chair, the working group on Elections.
  • TikTok gathered its global Safety Advisory Councils in Singapore in October 2025 to consult them on a variety of topics including our approach to media literacy.
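The claim-extraction workflow described in the second bullet (extract candidate misinformation claims, then either assess them directly or route them to fact-checking partners) can be illustrated with a simplified sketch. Everything below is a hypothetical stand-in, not TikTok's actual system: a keyword heuristic takes the place of the LLM, and the marker list and debunked-claim set are invented for illustration only.

```python
# Illustrative sketch only: keyword heuristics stand in for an LLM, and
# all names and data here are hypothetical, not TikTok's actual systems.
CHECKWORTHY_MARKERS = ("studies show", "confirmed that", "officials say")
KNOWN_DEBUNKED = {"studies show the moon landing was staged"}

def extract_claims(transcript: str) -> list[str]:
    """Pull check-worthy factual assertions out of a video transcript."""
    claims = []
    for sentence in transcript.split("."):
        sentence = sentence.strip().lower()
        if sentence and any(m in sentence for m in CHECKWORTHY_MARKERS):
            claims.append(sentence)
    return claims

def route_claims(claims: list[str]) -> dict[str, list[str]]:
    """Previously debunked claims can be assessed directly by moderators;
    novel claims are routed to fact-checking partners for review."""
    queues = {"moderators": [], "fact_checkers": []}
    for claim in claims:
        key = "moderators" if claim in KNOWN_DEBUNKED else "fact_checkers"
        queues[key].append(claim)
    return queues

transcript = ("Studies show the moon landing was staged. "
              "Officials say the reservoir is empty. Nice weather today.")
queues = route_claims(extract_claims(transcript))
# The known claim lands in the moderator queue; the novel one awaits fact-check.
```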

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

N/A

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

N/A

QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

We take action against misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent. We do this by removing content and accounts that violate our rules, by investing in media literacy and connecting our community to authoritative information, and by partnering with experts.

Our Terms of Service and Integrity and Authenticity policies under our Community Guidelines are the first line of defence in combating harmful misinformation and (as outlined in more detail in QRE 14.1.1) deceptive behaviours on our platform. These rules make clear to our users what content we remove or make ineligible for the For You feed when they pose a risk of harm to our users and our community.

Specifically, our policies do not allow:

  • Misinformation 
    • Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
    • Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
    • Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
    • Conspiracy theories that name and attack individual people.
    • Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.
  • Civic and Election Integrity
    • Election misinformation, including:
      • How, when, and where to vote or register to vote;
      • Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
      • Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
      • Final results or outcome of an election.
  • Edited Media and AI-Generated Content (AIGC)
    • The likeness of young people or realistic-appearing people under the age of 18.
    • The likeness of adult private figures, if we become aware it was used without their permission.
    • Misleading AIGC or edited media that falsely shows:
      • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
      • A crisis event, such as a conflict or natural disaster;
      • A public figure who is:
        • being degraded or harassed, or engaging in criminal or antisocial behaviour;
        • taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
        • being politically endorsed or condemned by an individual or group.
  • Fake Engagement
    • Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
    • Providing instructions on how to artificially increase engagement on TikTok.

We have made even clearer to our users here that the following content is ineligible for the For You feed:

  • Misinformation 
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
    • Unverified claims related to an emergency or unfolding event.
    • Potential high-harm misinformation while it is undergoing a fact-checking review.
  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.
  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.

As outlined in QRE 14, we also remove accounts that seek to mislead people or use TikTok to deceptively sway public opinion. These activities range from inauthentic or fake account creation, to more sophisticated efforts to undermine public trust.

We have policy experts within our Trust and Safety team dedicated to the topic of integrity and authenticity. They continually keep these policies under review and collaborate with external partners and experts to understand whether updates or new policies are required and ensure they are informed by a diversity of perspectives, expertise, and lived experiences.

Enforcing our policies. We remove content – including video, audio, livestream, images, comments, links, or other text – that violates our Integrity and Authenticity policies. Individuals are notified of our decisions and can appeal them if they believe no violation has occurred. We also make clear in our Community Guidelines that we will temporarily or permanently ban accounts and/or users that are involved in serious or repeated violations, including violations of our Integrity and Authenticity policies.

We enforce our Community Guidelines policies, including our Integrity and Authenticity policies, through a mix of technology and human moderation. To do this effectively at scale, we continue to invest in our automated review process as well as in people and training. At TikTok we place a considerable emphasis on proactive content moderation. This means our teams work to detect and remove harmful material before it is reported to us.

However, misinformation is different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. While machine learning models form the backbone of our misinformation detection capability, human moderators also play a critical role in reviewing, confirming, and actioning violations. We have qualified moderators who have enhanced training, expertise, and tools to take action on harmful misinformation. This includes access to our fact-checking partners who help assess the accuracy of new content.

We strive to maintain a balance between freedom of expression and protecting our users and the wider public from harmful content. Our approach to combating harmful misinformation, as stated in our Community Guidelines, is to remove content that is both false and can cause harm to individuals or the wider public. This does not include simply inaccurate information which does not pose a risk of harm. Additionally, in cases where fact-checks are inconclusive, especially during emergency or unfolding events, content may not be removed and may instead become ineligible for recommendation in the For You feed and labelled with the “unverified content” label to limit the spread of potentially misleading information. 

We are pleased to include in this report the number of videos made ineligible for the For You feed under the relevant Integrity and Authenticity policies as explained to users here.

Note that, in relation to the metrics we have shared at SLI 18.2.1 below, of all the views from users in the EEA recorded in H2 2025, fewer than 1 in every 10,000 views were of content identified and removed for violating our policies around harmful misinformation.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

Methodology of data measurement:

We have based the following numbers on the country in which the video was posted: videos removed for violating our Misinformation, Civic and Election Integrity, and Edited Media and AI-Generated Content (AIGC) policies.

The number of views of videos removed because of violation of each of these policies is based on the approximate location of the user.

We also updated the methodology for the number of videos made ineligible for the For You feed under our Misinformation policy.

Country | Videos removed (Misinformation policy) | Views of videos removed (Misinformation policy) | Videos made ineligible for the For You feed (Misinformation policy) | Videos removed (Civic and Election Integrity policy) | Views of videos removed (Civic and Election Integrity policy) | Videos removed (Edited Media and AIGC policy) | Views of videos removed (Edited Media and AIGC policy)
Austria | 2,612 | 1,946,472 | 2,871 | 511 | 219,339 | 1,564 | 2,121,335
Belgium | 4,150 | 8,424,034 | 3,069 | 864 | 292,729 | 2,899 | 33,524,248
Bulgaria | 4,828 | 3,601,953 | 9,427 | 402 | 58,515 | 2,181 | 1,380,041
Croatia | 638 | 984,109 | 793 | 63 | 49 | 1,190 | 857,576
Cyprus | 701 | 825,228 | 1,060 | 85 | 9 | 1,214 | 877,703
Czech Republic | 2,855 | 846,267 | 5,263 | 338 | 50,350 | 1,551 | 166,156
Denmark | 2,484 | 1,938,348 | 2,085 | 512 | 20,319 | 1,920 | 2,522,414
Estonia | 527 | 9,792 | 865 | 45 | 60,189 | 1,571 | 202,878
Finland | 1,357 | 8,695,926 | 1,752 | 268 | 162,976 | 921 | 417,085
France | 37,466 | 94,473,247 | 60,520 | 3,650 | 6,727,613 | 28,565 | 145,692,240
Germany | 42,642 | 179,399,985 | 47,221 | 5,287 | 926,431 | 50,378 | 113,670,298
Greece | 4,602 | 1,556,421 | 8,200 | 960 | 48,866 | 2,284 | 929,995
Hungary | 1,490 | 1,328,847 | 2,489 | 876 | 13,680 | 990 | 2,318,366
Ireland | 2,613 | 885,690 | 3,489 | 413 | 8,482 | 1,722 | 1,125,060
Italy | 18,667 | 40,083,897 | 36,707 | 2,726 | 723,486 | 15,434 | 93,101,464
Latvia | 705 | 448,180 | 1,107 | 301 | 318 | 1,519 | 2,530
Lithuania | 1,086 | 59,207 | 1,257 | 61 | 1,190 | 1,727 | 8,952,387
Luxembourg | 349 | 14,382 | 305 | 48 | 10 | 1,620 | 164,719
Malta | 159 | 876,245 | 283 | 26 | 0 | 382 | 2,127
Netherlands | 14,335 | 15,235,784 | 13,311 | 907 | 2,340,996 | 7,974 | 28,839,022
Poland | 14,770 | 22,162,809 | 15,480 | 1,038 | 399,418 | 7,227 | 14,249,635
Portugal | 3,141 | 2,107,021 | 2,561 | 270 | 38,447 | 1,659 | 6,862,825
Romania | 28,743 | 45,185,198 | 32,030 | 4,622 | 5,017,440 | 10,458 | 6,624,761
Slovakia | 1,122 | 858,265 | 1,861 | 64 | 639 | 1,589 | 346,191
Slovenia | 370 | 26,882 | 589 | 34 | 7 | 844 | 3,070,784
Spain | 21,592 | 25,310,043 | 38,779 | 1,483 | 162,917 | 16,129 | 40,748,793
Sweden | 4,159 | 2,192,170 | 4,633 | 761 | 455 | 5,168 | 4,607,858
Iceland | 123 | 6,208 | 188 | 19 | 0 | 175 | 414
Liechtenstein | 143 | 24,362 | 76 | 4 | 0 | 484 | 0
Norway | 1,662 | 2,467,190 | 1,681 | 290 | 1,582 | 1,282 | 2,920,599
Total EU | 218,163 | 459,476,402 | 298,007 | 26,615 | 17,274,870 | 170,680 | 513,378,491
Total EEA | 220,091 | 461,974,162 | 299,952 | 26,928 | 17,276,452 | 172,621 | 516,299,504

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

N/A

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • At TikTok, we strive to bring transparency to how we protect our platform. We continue to increase the reports we voluntarily publish, the depth of data we disclose, and the frequency with which we publish.
  • We also worked to make it easier for people to independently study our data and platform, for example through: 
    • Our Research Tools, which empower over 900 research teams to independently study our platform.
    • Adding additional functionality to the Research API, including a compliance API (launched in June 2025) that improves the data refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to efficiently access data from TikTok's Research API.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It's central to the TikTok experience and where most of our users spend their time exploring the platform. 

We make clear to users in our Terms of Service and Community Guidelines (and also provide more context in our Help Center article, Transparency Center page, and Safety Center guide) that each account holder’s For You feed is based on a personalised recommendation system. The For You feed is curated to each user. Safety is built into our content recommendations. As well as removing harmful misinformation content that violates our Community Guidelines, we take steps to avoid recommending certain categories of content that may not be appropriate for a broad audience, including general conspiracy theories and unverified information related to an emergency or unfolding event. We may also make some of this content harder to find in search. 
Main parameters. The system recommends content by ranking content based on a combination of factors including:

  • User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts that users follow back); 
  • Content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and 
  • User information (e.g. device settings, language preferences, location, time zone and day, and device types). 

The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
Users can also access the “Why this video” feature, which allows them to see, for any particular video that appears in their For You feed, the factors that influenced why it was recommended to them. This feature provides added transparency in relation to how our ranking system works and empowers our users to better understand why a particular video has been recommended. In essence, the feature explains to users how their past interactions on the platform have shaped the videos they are recommended.
User preferences. Together with the safeguards we build into our platform by design, we also empower our users to customise their experience to their preferences and comfort. 
These include a number of features to help shape the content they see. For example, in the For You feed:
  • Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
  • Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here). 
  • Users are able to refresh their For You feed if they feel recommendations are no longer relevant to them or are too similar. When the For You feed is refreshed, users view a number of new videos, including popular videos (e.g. those with a high view count or a high like rate). Their interaction with these new videos will inform future recommendations.
  • Users can also personalise their For You feed through our new Manage Topics feature (launched in June 2025). This allows users to adjust the frequency of content they see related to particular topics. The settings don't eliminate topics entirely but can influence how often they're recommended as people's interests evolve over time. It adds to the many ways people shape their feed every day, including liking or sharing videos, searching for topics, or simply watching videos for longer.
  • As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to personalised recommender systems. Users are able to turn off personalisation so that feeds show non-personalised content; the For You feed will instead show popular videos from their region and internationally. See here.




Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

The number of users who have filtered hashtags or keywords to set preferences for the For You feed, the number of times users clicked “not interested” in relation to the For You feed, and the number of times users clicked on the For You Feed Refresh are all based on the approximate location of the users that engaged with these tools.

The number of videos tagged with the AIGC label includes both automatic and creator-applied labelling.

Country Number of users that filtered hashtags or words Number of users that clicked on "not interested" Number of times users clicked on the For You Feed Refresh Number of Videos tagged with AIGC label
Austria 81,940 1,054,601 56,468 494,206
Belgium 123,591 1,636,355 90,963 678,792
Bulgaria 59,087 1,077,505 45,808 1,044,321
Croatia 30,698 564,614 25,259 131,396
Cyprus 17,932 224,295 15,464 181,397
Czech Republic 63,982 849,428 41,602 538,710
Denmark 49,192 637,478 31,304 191,270
Estonia 18,895 182,527 13,036 83,768
Finland 67,907 680,821 49,088 275,526
France 680,454 9,545,884 490,210 4,724,226
Germany 813,569 9,455,189 584,609 6,351,939
Greece 89,043 1,629,313 81,664 715,077
Hungary 63,484 1,179,383 35,106 776,347
Ireland 83,988 1,014,346 60,748 181,131
Italy 429,072 7,991,598 277,864 3,843,482
Latvia 28,449 361,290 22,190 198,043
Lithuania 34,507 402,385 26,669 220,796
Luxembourg 7,038 97,071 5,142 48,734
Malta 6,444 98,531 7,840 41,910
Netherlands 267,432 2,926,699 204,997 1,387,922
Poland 285,891 4,192,550 179,424 1,772,881
Portugal 96,468 1,293,884 64,318 903,172
Romania 155,037 3,379,837 196,717 2,213,505
Slovakia 28,269 399,464 14,507 311,951
Slovenia 14,149 204,523 11,366 52,786
Spain 502,835 8,571,737 433,076 4,547,581
Sweden 118,791 1,562,220 105,302 560,744
Iceland 6,537 65,577 3,173 17,098
Liechtenstein 229 3,947 364 1,164
Norway 74,159 820,642 47,990 246,312
Total EU 4,218,144 61,213,528 3,170,741 32,471,613
Total EEA 4,299,069 62,103,694 3,222,268 32,736,187

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • We ran 8 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
    • 7 in the EU
      • Czechia (parliamentary election) with Demagog.cz
      • Portugal (local election) with Polígrafo
      • Estonia (local election) with Lead Stories
      • Ireland (presidential election) with The Journal
      • Netherlands (parliamentary election) N/A
      • Denmark (local and municipal election) with Sikker Digital
      • Portugal (presidential election) with Polígrafo
    • 1 in Norway (parliamentary election) N/A
  • Following wildfires in Portugal and Spain, we launched an in-app guide to provide users with guidance on interacting with sensitive content during natural disasters. The guide links to TikTok's tragic event support guide and authoritative third-party resources (PT) (ES) with information about aid and relief support. The intervention is available in all in-app languages.
  • We continued our in-app interventions, including video tags, search interventions, and in-app information centres, available to EEA users in 23 official EU languages plus Norwegian and Icelandic, around the elections, the Israel-Hamas Conflict, Holocaust Education, and the War in Ukraine.
  • We partner with fact-checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
  • We launched a $2 million AI Literacy Fund in partnership with more than 20 civil society organisations across 12 markets worldwide. The ad-credit fund is designed to support the creation of educational content that will appear in For You feeds. This initiative launched alongside several new company updates to help people spot, shape, and understand AI-generated content.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 21.1

Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.

QRE 21.1.1

Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.

We currently have 12 IFCN-accredited fact-checking partners across the EU, EEA, and wider Europe: 

  1. Agence France-Presse (AFP)
  2. dpa Deutsche Presse-Agentur
  3. Demagog
  4. Facta
  5. Fact Check Georgia
  6. Faktograf
  7. Internews Kosova
  8. Lead Stories
  9. Newtral
  10. Polígrafo
  11. Reuters
  12. Teyit

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

We ensure that our users benefit from the context and insights provided by the fact checking organisations we partner with in the following ways: 

  • Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims and are able to provide content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators take measures to assess and remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.
  • Unverified content labelling. As mentioned above, we partner with fact-checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility. In these circumstances, the content creator is also notified that their video was flagged as unsubstantiated content and the video will become ineligible for recommendation in the For You feed.

  • In-app tools related to specific topics:
    • Election integrity. We have launched campaigns in advance of several major elections aimed at educating the public about the voting process which encourage users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of France legislative elections in June 2024 included a search intervention and in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 14 temporary media literacy election integrity campaigns in advance of regional elections. 
    • Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information. 
    • Natural disasters. We launched a new temporary in-app natural disaster media literacy search guide for Cyclone Garance in Réunion between 4 March and 4 April 2025 and continued our temporary search guide for the Mayotte cyclone until 14 February 2025. These search guides link to TikTok's Safety Center tragic events support guide and authoritative third-party information about aid and relief support. 
  • User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners. 

SLI 21.1.1

Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.

Methodology
The share of removals under our harmful misinformation policy, share of proactive removals, share of removals before any views, and share of removals within 24 hours are each relative to the total removals under the relevant policy. 

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop up. This metric is based on the approximate location of the users that engaged with these tools.

Country Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop-up) Share of removals under misinformation policy Share of proactive removals under misinformation policy Share of video removals before any views under misinformation policy Share of video removals within 24h under misinformation policy Share of video removals under Civic and Election Integrity policy Share of proactive video removals under Civic and Election Integrity policy Share of video removals before any views under Civic and Election Integrity policy Share of video removals within 24h under Civic and Election Integrity policy Share of video removals under Edited Media and AI-Generated Content (AIGC) policy Share of proactive video removals under Edited Media and AI-Generated Content (AIGC) policy Share of video removals before any views under Edited Media and AI-Generated Content (AIGC) policy Share of video removals within 24h under Edited Media and AI-Generated Content (AIGC) policy
Austria 30.37% 28.57% 98.77% 85.60% 91.54% 5.59% 99.02% 94.91% 97.06% 17.10% 97.51% 88.55% 86.96%
Belgium 29.07% 32.26% 98.29% 86.10% 90.80% 6.72% 99.65% 95.72% 70.14% 22.54% 98.03% 84.24% 82.03%
Bulgaria 35.32% 52.31% 99.52% 78.27% 92.65% 4.36% 99.00% 95.52% 91.04% 23.63% 99.08% 87.35% 88.77%
Croatia 25.89% 21.13% 97.81% 82.13% 92.16% 2.09% 100.00% 98.41% 100.00% 39.42% 98.40% 95.55% 96.05%
Cyprus 33.62% 25.18% 98.00% 85.02% 92.58% 3.05% 100.00% 96.47% 96.47% 43.61% 97.94% 92.50% 92.50%
Czech Republic 31.01% 41.67% 99.37% 80.18% 94.92% 4.93% 98.52% 92.60% 95.56% 22.64% 99.03% 92.07% 93.55%
Denmark 31.82% 18.65% 98.79% 84.02% 90.10% 3.84% 98.63% 89.06% 96.68% 14.42% 98.65% 93.59% 93.70%
Estonia 30.88% 7.88% 99.24% 85.01% 93.93% 0.67% 97.78% 82.22% 91.11% 23.48% 98.79% 97.90% 97.90%
Finland 29.58% 31.31% 97.20% 82.76% 91.75% 6.18% 97.39% 90.67% 92.91% 21.25% 97.39% 91.10% 93.05%
France 30.16% 31.65% 97.72% 80.63% 88.60% 3.08% 98.82% 94.33% 97.86% 24.13% 96.36% 91.40% 91.17%
Germany 29.86% 29.76% 96.93% 83.42% 90.34% 3.69% 98.85% 95.80% 98.20% 35.15% 97.74% 92.71% 92.71%
Greece 30.64% 35.82% 99.17% 81.49% 95.59% 7.47% 99.90% 95.52% 99.38% 17.78% 98.77% 84.15% 86.73%
Hungary 28.14% 15.44% 98.79% 92.35% 96.31% 9.07% 99.66% 95.55% 99.09% 10.26% 96.97% 92.12% 91.92%
Ireland 33.69% 28.97% 98.97% 89.48% 93.19% 4.58% 98.06% 94.67% 97.34% 19.09% 97.97% 91.29% 90.24%
Italy 32.14% 36.23% 92.03% 84.00% 91.51% 5.29% 76.27% 91.53% 96.92% 29.96% 82.05% 89.87% 89.16%
Latvia 36.08% 19.79% 99.29% 86.10% 95.32% 8.45% 100.00% 28.57% 16.61% 42.64% 98.88% 97.63% 97.43%
Lithuania 31.90% 25.86% 98.99% 59.21% 66.67% 1.45% 96.72% 98.36% 96.72% 41.12% 98.61% 96.41% 96.41%
Luxembourg 32.63% 2.81% 98.85% 86.82% 90.54% 0.39% 100.00% 95.83% 100.00% 13.06% 99.57% 98.46% 98.64%
Malta 28.83% 13.64% 95.60% 89.31% 91.19% 2.23% 100.00% 100.00% 100.00% 32.76% 97.91% 96.34% 95.81%
Netherlands 29.12% 42.73% 98.74% 81.47% 82.21% 2.70% 96.47% 82.25% 84.12% 23.77% 97.35% 88.90% 88.15%
Poland 30.73% 37.40% 98.31% 76.58% 91.66% 2.63% 99.33% 89.31% 94.03% 18.30% 97.45% 93.23% 94.52%
Portugal 29.97% 41.29% 99.49% 87.33% 92.26% 3.55% 99.63% 94.81% 97.41% 21.81% 98.61% 84.33% 85.11%
Romania 28.50% 52.46% 98.90% 78.00% 91.12% 8.44% 98.55% 79.64% 93.70% 19.09% 98.16% 89.30% 84.11%
Slovakia 28.39% 27.99% 98.93% 76.83% 94.74% 1.60% 100.00% 93.75% 96.88% 39.64% 99.50% 97.17% 97.99%
Slovenia 30.61% 7.33% 99.73% 88.92% 96.22% 0.67% 100.00% 94.12% 100.00% 16.71% 98.70% 96.21% 95.73%
Spain 36.23% 35.84% 99.07% 88.68% 92.00% 2.46% 99.39% 88.60% 93.73% 26.77% 99.32% 92.43% 91.18%
Sweden 29.51% 25.98% 98.92% 88.05% 94.45% 4.75% 99.61% 98.29% 99.21% 32.28% 99.28% 93.73% 93.77%
Iceland 33.50% 25.52% 98.37% 86.18% 86.99% 3.94% 100.00% 100.00% 100.00% 36.31% 100.00% 94.29% 93.71%
Liechtenstein 31.25% 4.15% 98.60% 93.71% 95.80% 0.12% 100.00% 100.00% 100.00% 14.06% 99.38% 100.00% 99.79%
Norway 29.55% 27.09% 98.26% 85.98% 92.66% 4.73% 100.00% 97.59% 100.00% 20.89% 98.21% 89.63% 90.41%
Total EU 31.26% 33.30% 97.70% 82.24% 90.35% 4.06% 96.56% 90.25% 94.34% 26.06% 96.42% 91.67% 91.18%
Total EEA 31.23% 33.09% 97.70% 82.28% 90.37% 4.05% 96.60% 90.33% 94.40% 25.95% 96.44% 91.68% 91.21%

SLI 21.1.2

When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.

Methodology:
The number of videos tagged with the unverified content label is based on the country in which the video was posted.

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop-up. This metric is based on the approximate location of the users that engaged with these tools.

Country Number of videos tagged with the unverified content label Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 77 30.37%
Belgium 178 29.07%
Bulgaria 198 35.32%
Croatia 5 25.89%
Cyprus 17 33.62%
Czech Republic 314 31.01%
Denmark 298 31.82%
Estonia 34 30.88%
Finland 57 29.58%
France 2,397 30.16%
Germany 1,945 29.86%
Greece 264 30.64%
Hungary 33 28.14%
Ireland 39 33.69%
Italy 943 32.14%
Latvia 2 36.08%
Lithuania 12 31.90%
Luxembourg 6 32.63%
Malta 0 28.83%
Netherlands 325 29.12%
Poland 425 30.73%
Portugal 138 29.97%
Romania 597 28.50%
Slovakia 133 28.39%
Slovenia 6 30.61%
Spain 549 36.23%
Sweden 123 29.51%
Iceland 0 33.50%
Liechtenstein 0 31.25%
Norway 79 29.55%
Total EU 9,115 31.26%
Total EEA 9,194 31.23%

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated ‘Report Illegal Content’ channel, enabling users to alert us to content they believe breaches the law, together with an appeals process for users who disagree with the outcome. For the advertising-related user reporting flow, please refer to Chapter 2.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

We provide users with simple, intuitive ways to report or flag content in-app for any breach of our Terms of Service or Community Guidelines, including harmful misinformation, in each EU Member State and in an official language of the European Union. Users can report content:
  • By ‘long-pressing’ (i.e., pressing and holding) on the video content and selecting the “Report” option. 
  • By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.
The user is then shown categories of reporting reasons from which to select (which align with the harms our Community Guidelines seek to address). In 2024, we updated this feature to make the “Misinformation” categories more intuitive and allow users to report with increased granularity. 

In line with our DSA requirements, we continued to provide a dedicated reporting channel, and appeals process for our community in the European Union to ‘Report Illegal Content,’ enabling users to alert us to content they believe breaches the law.

People can report TikTok content or accounts without needing to sign in or have an account by accessing the Report function using the “More options (…)” menu on videos or profiles in their browser, or through our “Report Inappropriate content” webform which is available in our Help Centre. Harmful misinformation can be reported across content features such as video, comment, search, hashtag, sound, or account.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Reporting system

To ensure the integrity of our reporting system, we deploy a combination of automated review and human moderation.

Videos uploaded to TikTok are initially reviewed by our automated moderation technology, which aims to identify content that violates our Community Guidelines. If a potential violation of our Community Guidelines is found, the automated review system will either pass it on to our moderation teams for further review or, if there is a high degree of confidence that the content violates our Community Guidelines, remove it automatically. Automated removal is only applied when violations are clear-cut, such as where the content contains nudity or pertains to youth safety. We are constantly working to improve the precision of our automated moderation technology so we can more effectively remove violative content at scale, while also reducing the number of incorrect removals.

To support the fair and consistent review of potentially violative content, where violations are less clear-cut, content will be passed to our human moderation teams for further review. Human moderators can take additional context and nuance into account, which cannot always be picked up by technology, and in the context of harmful misinformation, for example, our moderators have access to a repository of previously fact-checked claims to help make swift and accurate decisions and direct access to our fact-checking partners who help assess the accuracy of new content.

We have sought to make our Community Guidelines as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as review of moderation cases, flows, appeals and undertaking Root Cause Analyses).

As part of our requirements under the DSA, we have introduced an additional reporting channel for our community in the European Union to ‘Report Illegal Content,’ which enables users to alert us to content they believe breaches the law. TikTok will review the content against our Community Guidelines and where a violation is detected, the content may be removed globally. If it is not removed, our illegal content moderation team will further review the content to assess whether it is unlawful in the relevant jurisdiction; this assessment is undertaken by human review. If it is, access to that content will be restricted in that country. Those who report suspected illegal content will be notified of our decision, including if we consider that the content is not illegal. Users who disagree can appeal those decisions using the appeals process.

We also note that whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content.  We are proud that the vast majority of removed content is identified proactively before it is reported to us.


Appeals system

We are transparent with users in relation to appeals.  We set out the options that may be available both to the user who reported the content and the creator of the affected content, where they disagree with the decision we have taken.  

The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our Community Guidelines. 

Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts and assess them against our policies. To ensure consistency within this process and its overall integrity, we have sought to make our policies as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as auditing appeals and undertaking Root Cause Analyses).

If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.



QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Users in all EU member states are notified by an in-app notification in their relevant local language where the following action is taken:
  • removal or otherwise restriction of access to their content;
  • a ban of the account;
  • restriction of their access to a feature (such as LIVE); or
  • restriction of their ability to monetise. 

Such notifications are provided in near real time after action has been taken (i.e. generally within several seconds or up to a few minutes at most). 

Where we have taken any of these decisions, an in-app inbox notification sets out the violation deemed to have taken place, along with an option for users to “disagree” and submit an appeal. Users can submit appeals within 180 days of being notified of the decision they want to appeal. Further information, including how to appeal a decision, is set out here.

All such appeals raised will be queued for review by our specialised human moderators so as to ensure that context is adequately taken into account in reaching a determination. Users can monitor the status and view the results of their appeal within their in-app inbox. 

As mentioned above, users who disagree with the result of their appeal can share feedback with us using the in-app "report a problem" function. We continuously take user feedback into consideration in order to identify areas of improvement within the appeals process.

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

Methodology of data measurement:

The number of appeals/overturns is based on the country in which the appealed/overturned video was posted. These figures relate only to our Misinformation, Civic and Election Integrity, and Edited Media and AI-Generated Content (AIGC) policies.

Country | Misinformation policy: appeals of removed videos, overturns, appeal success rate | Civic and Election Integrity policy: appeals of removed videos, overturns, appeal success rate | Edited Media and AI-Generated Content (AIGC) policy: appeals of removed videos, overturns, appeal success rate
Austria 885 683 77.20% 152 128 84.20% 645 574 89.00%
Belgium 1,047 877 83.80% 171 145 84.80% 590 521 88.30%
Bulgaria 1,375 983 71.50% 71 60 84.50% 333 278 83.50%
Croatia 133 106 79.70% 13 12 92.30% 256 229 89.50%
Cyprus 155 122 78.70% 13 11 84.60% 246 213 86.60%
Czech Republic 1,119 936 83.60% 110 96 87.30% 479 418 87.30%
Denmark 546 441 80.80% 88 69 78.40% 567 540 95.20%
Estonia 181 137 75.70% 23 15 65.20% 362 308 85.10%
Finland 424 346 81.60% 61 47 77.00% 429 388 90.40%
France 7,543 6,307 83.60% 362 309 85.40% 3,308 2,905 87.80%
Germany 14,569 10,635 73.00% 1,475 1,191 80.70% 10,927 9,547 87.40%
Greece 1,206 1,022 84.70% 168 150 89.30% 461 395 85.70%
Hungary 362 285 78.70% 189 163 86.20% 242 205 84.70%
Ireland 856 716 83.60% 85 77 90.60% 416 388 93.30%
Italy 4,912 3,583 72.90% 367 309 84.20% 2,796 2,399 85.80%
Latvia 253 227 89.70% 23 21 91.30% 544 487 89.50%
Lithuania 275 201 73.10% 28 25 89.30% 470 411 87.40%
Luxembourg 60 50 83.30% 7 6 85.70% 86 81 94.20%
Malta 33 25 75.80% 6 6 100.00% 62 54 87.10%
Netherlands 4,467 3,223 72.20% 330 246 74.50% 4,138 3,671 88.70%
Poland 5,049 3,488 69.10% 284 238 83.80% 1,910 1,563 81.80%
Portugal 823 648 78.70% 76 58 76.30% 291 253 86.90%
Romania 8,645 4,885 56.50% 1,219 777 63.70% 2,528 2,208 87.30%
Slovakia 358 287 80.20% 19 15 78.90% 337 290 86.10%
Slovenia 125 104 83.20% 11 5 45.50% 168 148 88.10%
Spain 5,671 4,159 73.30% 374 328 87.70% 3,988 3,608 90.50%
Sweden 1,039 831 80.00% 163 130 79.80% 830 701 84.50%
Iceland 36 33 91.70% 7 6 85.70% 34 33 97.10%
Liechtenstein 4 2 50.00% 0 0 0.00% 3 3 100.00%
Norway 547 429 78.40% 105 87 82.90% 527 477 90.50%
Total EU 62,111 45,307 72.90% 5,888 4,637 78.80% 37,409 32,783 87.60%
Total EEA 62,698 45,771 73.00% 6,000 4,730 78.80% 37,973 33,296 87.70%
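The appeal success rates reported in the table are simply overturns divided by appeals, expressed as a percentage and rounded to one decimal place. A minimal sketch (the helper function is illustrative, not TikTok's code), checked against the Austria misinformation row and the EU misinformation total:

```python
def appeal_success_rate(appeals: int, overturns: int) -> float:
    """Return the overturn rate as a percentage, rounded to one decimal place.

    Returns 0.0 when no appeals were filed (as in the Liechtenstein
    Civic and Election Integrity row).
    """
    if appeals == 0:
        return 0.0
    return round(overturns / appeals * 100, 1)

# Austria, misinformation policy: 683 overturns out of 885 appeals
print(appeal_success_rate(885, 683))      # 77.2, matching 77.20% in the table

# Total EU, misinformation policy: 45,307 overturns out of 62,111 appeals
print(appeal_success_rate(62111, 45307))  # 72.9, matching 72.90% in the table
```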

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Enabled researchers to efficiently identify popular, high‑engagement content through our Research Tools (Research API and VCE) by filtering videos by numbers of views and comments, supporting studies across topics including potential disinformation.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).



QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

Our dedicated TikTok for Developers website hosts our Research Tools and Commercial Content APIs.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

In this H2 2025 report, TikTok has shared more than 2,000 data points across 30 EU/EEA countries.

We provide access to researchers to data that is publicly available on our platform through our Research Tools and Commercial Content API hosted on our dedicated TikTok for Developers website.

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected official, news outlets and government accounts subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

(I) Research API

Through our Research API, academic researchers from non-profit academic institutions in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here.
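In practice, a Research API query is submitted as a structured request against this public data. The sketch below assembles an illustrative payload only; the endpoint URL, field names (region_code, keyword) and operators are assumptions modelled on TikTok's public developer documentation, not details confirmed in this report:

```python
import json

# Illustrative only: the endpoint path, field names and operators below are
# assumptions based on TikTok's public developer documentation.
RESEARCH_API_URL = "https://open.tiktokapis.com/v2/research/video/query/"

def build_video_query(region: str, keyword: str, start: str, end: str) -> dict:
    """Assemble a query payload for public video data (hypothetical schema)."""
    return {
        "query": {
            "and": [
                {"operation": "IN", "field_name": "region_code", "field_values": [region]},
                {"operation": "EQ", "field_name": "keyword", "field_values": [keyword]},
            ]
        },
        "start_date": start,  # YYYYMMDD
        "end_date": end,
        "max_count": 100,
    }

payload = build_video_query("IE", "election", "20250701", "20251231")
print(json.dumps(payload, indent=2))
# In a real application, this payload would be POSTed to RESEARCH_API_URL
# with an OAuth bearer token obtained through the application process.
```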

(II) Virtual Compute Environment (VCE)

Through our VCE, qualifying not-for-profit researchers and academic researchers from non-profit academic institutions in the EU can query and analyse TikTok’s public data. To protect the security and privacy of our users, the VCE is designed to ensure that TikTok data is processed within confined parameters. TikTok only reviews the results to ensure that no identifiable individual information is extracted out of the platform. All aggregated results are shared as a downloadable link sent to the approved primary researcher's email.

(III) Commercial Content API  

Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can request public data about ads and other commercial content, including ad and advertiser metadata and targeting information. To date, the Commercial Content API only includes data from EU countries.

(IV) Commercial Content Library

TikTok's Commercial Content Library (CCL) is a repository of ads and other commercial content posted on TikTok.

There are two main sub-libraries within the CCL:
  • Ad Library: This library features ads that we're paid to display to people, including those that aren't currently active or have been paused by the advertisers.
  • Other commercial content: This library features content that we're not paid to display, including content that promotes a brand, product, or service.

The CCL currently includes information on ads available to users in the European Economic Area (EEA), Switzerland, and the U.K. 

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

(I) Research API

Through our Research API, academic researchers from non-profit academic institutions in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here.

(II) Virtual Compute Environment (VCE) 

Through our VCE, qualifying not-for-profit researchers and academic researchers from non-profit academic institutions in the EU can query and analyse TikTok’s public data. To protect the security and privacy of our users, the VCE is designed to ensure that TikTok data is processed within confined parameters. TikTok only reviews the results to ensure that no identifiable individual information is extracted out of the platform. All aggregated results are shared as a downloadable link sent to the approved primary researcher's email.

(III) Commercial Content API  
Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can request public data about commercial content including ads, ad and advertiser metadata, and targeting information. To date, the Commercial Content API only includes data from EU countries. 

(IV) Commercial Content Library 

TikTok's Commercial Content Library (CCL) is a repository of ads and other commercial content posted on TikTok.

There are two main sub-libraries within the CCL:
  • Ad Library: This library features ads that we're paid to display to people, including those that aren't currently active or have been paused by the advertisers.
  • Other commercial content: This library features content that we're not paid to display, including content that promotes a brand, product, or service.

The CCL currently includes information on ads available to users in the European Economic Area (EEA), Switzerland, and the U.K.

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.


We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access.

Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data.
Similarly with the Commercial Content API, we provide participants with detailed information on how to query ad data and fetch public advertiser data.

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

Research Tools, Commercial Content API, and the Commercial Content Library
During this reporting period we received:
  • 201 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
  • 90 applications to access the TikTok Commercial Content API.

Country | Research Tools: applications received, accepted, rejected | TikTok Commercial Content API: applications received, accepted, rejected
Austria 8 6 2 0 0 0
Belgium 5 3 2 2 2 0
Bulgaria 0 0 0 0 0 0
Croatia 0 0 0 0 0 0
Cyprus 0 0 0 1 1 0
Czech Republic 2 1 0 2 2 0
Denmark 6 6 0 10 10 0
Estonia 0 0 0 1 1 0
Finland 3 3 0 1 1 0
France 25 10 11 17 17 0
Germany 50 34 10 16 16 0
Greece 0 0 0 0 0 0
Hungary 3 3 0 0 0 0
Ireland 6 4 2 2 2 0
Italy 24 18 6 2 2 0
Latvia 1 0 1 2 2 0
Lithuania 1 1 0 0 0 0
Luxembourg 1 0 0 0 0 0
Malta 0 0 0 0 0 0
Netherlands 11 10 1 7 7 0
Poland 5 4 1 9 9 0
Portugal 4 3 0 0 0 0
Romania 5 2 1 2 2 0
Slovakia 0 0 0 1 1 0
Slovenia 2 1 1 1 1 0
Spain 35 14 17 7 7 0
Sweden 10 7 2 4 4 0
Iceland 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0
Norway 2 1 0 3 3 0
Total EU 207 130 57 87 87 0
Total EEA 209 131 57 90 90 0

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

We welcome feedback from researchers on our APIs and have a dedicated support form where researchers can provide feedback about their experience. In response to recent feedback, we have added the option to our Research Tools to filter video results by number of views and number of comments. This feature helps researchers more efficiently identify popular and high-engagement content, with the goal of enabling research across many topics including the potential dissemination of disinformation.
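The effect of the new view/comment filters can be illustrated with a client-side sketch; the record shape (view_count, comment_count) and the thresholds are our assumptions for illustration, not the Research Tools' actual schema:

```python
# Hypothetical video records as they might be returned by a Research Tools
# query; the field names below are assumptions for illustration only.
videos = [
    {"id": "v1", "view_count": 1_200_000, "comment_count": 4_800},
    {"id": "v2", "view_count": 3_500, "comment_count": 12},
    {"id": "v3", "view_count": 890_000, "comment_count": 2_100},
]

def high_engagement(records, min_views=100_000, min_comments=1_000):
    """Keep only popular, high-engagement videos, mirroring the view/comment
    filters described above (the thresholds here are arbitrary)."""
    return [r for r in records
            if r["view_count"] >= min_views and r["comment_count"] >= min_comments]

print([r["id"] for r in high_engagement(videos)])  # ['v1', 'v3']
```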

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Enabled researchers to efficiently identify popular, high‑engagement content through our Research Tools (Research API and VCE) by filtering videos by views and comments, supporting studies across topics including potential disinformation.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

TikTok is committed to facilitating research and engaging with the research community.

As set out in this report, we facilitate research through our Research Tools, Commercial Content APIs and Commercial Content Library, full details of which are available on our TikTok for Developers and Commercial Content Library websites.

TikTok teams and personnel also regularly participate in research-focused events. During this reporting period, we hosted an academic API workshop in our Dublin office (July), conducted a demo of the VCE for a Spanish NGO (September), participated in the DSA Data Access Days hosted by the DSA40 Collaboratory (September), attended the Stanford Trust & Safety Research Conference (September), hosted two webinars for academic and NGO researchers in Europe (October), conducted a small Research Tools briefing for an Italian university lab (November), hosted a webinar about the Research Tools for academic researchers in Romania (November), and presented two Research Tools demos for groups at the University of Texas at Austin (November). 

As well as opportunities to share context about our approach, research interests, and opportunities to collaborate, these events enable us to learn from the important work being done by the research community on various topics, which include aspects related to harmful misinformation.


Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

We have a dedicated TikTok for Developers website which hosts our Research Tools and Commercial Content APIs. 

With the Research API, researchers can access:

  • Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
  • Public content data, such as comments, captions, subtitles, and number of comments, shares and likes that a video receives. 

Through the VCE, qualifying not-for-profit researchers in the EU can access and analyse TikTok's public data, including public U18 data, in a secure environment that is subject to strict security controls. 

Our commercial content-related APIs cover ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches - by advertiser name or keyword - on ads and other commercial content data stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. 
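A customised advertiser-name or keyword search over library-style records can be sketched as follows; the AdRecord fields mirror the metadata categories listed above, but the exact schema and helper are our assumptions for illustration:

```python
from dataclasses import dataclass

# Hypothetical ad-library records; field names echo the metadata categories
# described in the report (creative text, run dates, reach), but the exact
# schema is an assumption for illustration.
@dataclass
class AdRecord:
    advertiser: str
    creative_text: str
    first_shown: str
    last_shown: str
    reach: int

ads = [
    AdRecord("Acme GmbH", "Try our new app today", "2025-07-01", "2025-07-14", 250_000),
    AdRecord("Beta Ltd", "Summer sale on shoes", "2025-08-02", "2025-08-20", 90_000),
]

def search_ads(records, term: str):
    """Match ads by advertiser name or keyword in the creative, the two
    customised search modes described above."""
    t = term.lower()
    return [a for a in records
            if t in a.advertiser.lower() or t in a.creative_text.lower()]

print([a.advertiser for a in search_ads(ads, "acme")])  # ['Acme GmbH']
```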

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

The data we make available and the application criteria for our Research Tools (Research API and VCE) and Commercial Content API are research-topic agnostic and clearly set out on our dedicated TikTok for Developers website. In August 2024, we introduced a due diligence process with an external vendor to confirm the eligibility of NGO applicants. 

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.

Within Europe, we work with 13 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

Our partners have teams of fact-checkers who review and verify reported content. Our Integrity and Authenticity moderators then use that independent feedback to take action: where appropriate, we remove false or misleading content, make it ineligible for recommendation, or label unverified content. 

Our agreements with our partners are standardised, meaning the agreements are based on our template master services agreements and consistent with common standards and conditions. We reviewed and updated our template standard agreements as part of our annual contract renewal process.

The terms of the agreements describe:
  • The service the fact-checking partner will provide, namely, that their team of fact checkers review, assess and rate video content uploaded to their fact-checking queue, and will provide regular pro-active Insights Reports about general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generate particular misinformation or disinformation. 
  • The expected results e.g., the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories. 
  • An option to receive proactive flagging of potentially harmful misinformation from our partners.
  • The languages in which they will provide fact-checking services.
  • The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
  • All other key terms including the applicable term and fees and payment arrangements.

QRE 30.1.2

Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).

We currently have 13 IFCN-accredited fact-checking partners across the EU, EEA, and wider Europe: 

  1. Agence France-Presse (AFP)
  2. Deutsche Presse-Agentur (dpa)
  3. Demagog
  4. Facta
  5. Geofacts
  6. Faktograf
  7. Internews Kosova (Kallxo)
  8. Lead Stories
  9. Newtral
  10. Poligrafo
  11. Reuters
  12. Science Feedback
  13. Teyit

For advertising-related fact-checking partnerships, please refer to Chapter 2.

These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

We can, and have, put in place temporary agreements with these fact-checking partners to provide additional EU language coverage during high risk events like elections or an unfolding crisis.

Outside of our fact-checking program, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 6 temporary media literacy campaigns, in advance of regional elections, through our in-app Election Centers:
  1. Portugal local elections - Polígrafo
  2. Estonia local elections - Lead Stories
  3. Ireland presidential election - The Journal
  4. Portugal presidential election - Polígrafo
  5. Denmark local and municipal elections - Sikker Digital
  6. Czechia parliamentary elections - Demagog.cz

Globally, we have more than 20 IFCN-accredited fact-checking partners and we keep users updated here.

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

We have fact-checking coverage in 23 official EEA languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish. 

We also have fact-checking coverage in a number of other languages used in Europe that affect European users, including Georgian, Russian, Turkish and Ukrainian, and we can request additional support in Azeri, Armenian and Belarusian. 

In terms of global fact-checking initiatives, we currently cover more than 60 languages and 130 markets across the world, thereby improving the overall integrity of the service and benefiting European users. 

In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
  • Insights reports. Our fact-checking partners provide regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.  
  • Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content that may constitute harmful misinformation on our platform, which our moderators assess against our Community Guidelines, and suggest prominent misinformation that is circulating online that may benefit from verification. 
  • Fact-checking guidelines. Where relevant, we create guidelines and trending topic reminders for our moderators which are informed by previous fact checking assessments. This helps our teams leverage the insights from our fact-checking partners and supports swift and accurate decisions on flagged content regardless of the language in which the original claim was made.

Moderation teams working in dedicated misinformation queues receive enhanced training on our misinformation policies and have access to the above-mentioned tools and measures, enabling them to make accurate content decisions across Europe and globally.

We place considerable emphasis on proactive detection, both to remove violative content and to reduce our human safety experts' exposure to potentially distressing content. Before content is posted to our platform, it is reviewed by automated moderation technologies that identify content or behaviour that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for human review. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are more clear-cut.

Some of the methods and technologies that support these efforts include:
  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies on their own. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
  • Similarity-based: "Similarity detection systems" enable us not only to catch identical or highly similar versions of violative content, but also other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • Content Credentials: We launched the ability to read Content Credentials that attach metadata to content, which we can use to automatically label AI-generated content that originated on other major platforms.
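To illustrate the keyword-variation idea described under "Text-based" detection, the following toy sketch normalises common character substitutions (such as leetspeak and separator tricks) before matching against a keyword list. This is a hypothetical example only, not TikTok's actual detection system; the substitution table, function names, and keyword are illustrative.

```python
# Toy sketch of keyword-variant matching via normalisation.
# Hypothetical example only; substitution rules and names are illustrative.

SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}

def normalise(text: str) -> str:
    """Lowercase, map common character substitutions, drop separators."""
    text = text.lower()
    text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in text)
    return "".join(ch for ch in text if ch.isalnum())

def matches_keyword_list(text: str, keywords: set[str]) -> bool:
    """Flag text whose normalised form contains any normalised keyword."""
    flat = normalise(text)
    return any(normalise(kw) in flat for kw in keywords)

# Usage: simple variants of a listed keyword are caught after normalisation.
keywords = {"fakecure"}
print(matches_keyword_list("Try this F4K3-CURE today!", keywords))  # True
```

Production systems use far more sophisticated models, as the list above notes; the sketch only shows why a keyword list combined with normalisation can catch "variations of violative text" that exact string matching would miss.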

Continuing to leverage fact-checking output in this way enables us to further increase the positive impact of our fact-checking programme.


Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

Our agreements with our fact-checking partners are standardised, meaning the agreements are based on our template master services agreements and consistent with common standards and conditions. These agreements, as with all of our agreements, must meet the ethical and professional standards we set internally including containing anti-bribery and corruption provisions. 

Our partners are compensated in a fair, transparent way based on the work done by them using standardised rates. Our fact-checking partners then invoice us on a monthly basis based on work done.

All of our fact-checking partners are independent organisations, which are certified through the non-partisan IFCN. Our agreements with them explicitly state that the fact-checkers are non-exclusive, independent contractors of TikTok who retain editorial independence in relation to the fact-checking, and that the services shall be performed in a professional manner and in line with the highest standards in the industry. Our processes are also set up to ensure our fact-checking partners' independence. Our partners access flagged content through a tool dedicated to their use and provide their assessment of the accuracy of the content by assigning a rating. Fact-checkers do so independently from us, and their review may include calling sources, consulting public data, or authenticating videos and images.

To facilitate transparency and openness with our fact-checking partners, we regularly meet with them, provide data regarding their feedback, and conduct surveys with them.

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

We meet regularly with our fact-checking partners and have an ongoing dialogue with them about how our partnership is working and evolving. We survey our fact-checking partners to encourage feedback about what we are doing well and how we could improve.

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

This provision is not relevant to TikTok, only to fact-checking organisations.

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

Given that our fact-checking partners are all IFCN-accredited, they already engage in some informal cross-border collaboration through that network.

In October 2025, TikTok co-sponsored the EU DisinfoLab conference in Slovenia. Several TikTok staff attended, and we co-led a session with the Centre for Humanitarian Dialogue on how platforms and conflict mediators can work together to reduce the risk of violence during conflicts.

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

We maintain dialogue with EFCSN on these and other issues. We continue to be open to discussing and exploring what further progress can be made on these points.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 and 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 31.1 and 31.2

31.1: Relevant Signatories that showcase User Generated Content (UGC) will integrate, showcase, or otherwise consistently use independent fact-checkers’ work in their platforms’ services, processes, and contents across all Member States and across formats relevant to the service. Relevant Signatories will collaborate with fact-checkers to that end, starting by conducting and documenting research and testing. 31.2: Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels or policy enforcement to help increase the impact of fact-checks on audiences.

Measure 31.1
TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.

QRE 31.1.1 (for Measures 31.1 and 31.2)

Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.

We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 13 fact-checking partners in Europe, covering 23 EEA languages. 

As previously outlined, we place considerable emphasis on proactive detection and automated moderation technology to action violative content. For example, multimodal LLMs can perform complex, highly specific tasks related to visual content. We can use this technology to make misinformation moderation easier by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.

Our Integrity and Authenticity moderators receive specialised training to assess, confirm, and take action on harmful misinformation. This includes direct access to our fact-checking partners, who help assess the accuracy of content. We also use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners conclude that a fact-check is inconclusive or the content cannot be verified (which is especially common during unfolding events or crises), we inform viewers via a banner when we identify a video with unverified content, in an effort to raise users' awareness about the credibility of the content and to reduce sharing. The video may also become ineligible for recommendation into anyone's For You feed, to limit the spread of potentially misleading information.

SLI 31.1.1

Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.

Methodology of data measurement:

The number of fact-checked videos is based on the number of videos that have been reviewed by one of our fact-checking partners in the relevant territory.

Country Number of fact-checked videos (tasks)
Austria 52
Belgium 396
Bulgaria 1,221
Croatia 135
Cyprus 21
Czechia 367
Denmark 239
Estonia 326
Finland 58
France 3,659
Germany 1,203
Greece 97
Hungary 107
Ireland 53
Italy 743
Latvia 223
Lithuania 182
Luxembourg 1
Malta 1
Netherlands 1,603
Poland 806
Portugal 344
Romania 308
Slovakia 283
Slovenia 193
Spain 349
Sweden 102
Iceland 5
Liechtenstein 0
Norway 190
Total EU 13,072
Total EEA 13,267

SLI 31.1.2

An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.

Methodology of data measurement: 

The number of videos removed as a result of a fact-checking assessment, and the number of videos removed on the basis of policy guidelines and known misinformation trends.

These metrics correspond to the number of removals under the misinformation policy, since all of its enforcement is based on the policy guidelines and known misinformation trends.

Country Number of videos removed as a result of a fact-checking assessment Number of videos removed under Misinformation policy
Austria 12 2,612
Belgium 11 4,150
Bulgaria 166 4,828
Croatia 40 638
Cyprus 1 701
Czech Republic 20 2,855
Denmark 12 2,484
Estonia 28 527
Finland 7 1,357
France 273 37,466
Germany 216 42,642
Greece 7 4,602
Hungary 4 1,490
Ireland 3 2,613
Italy 207 18,667
Latvia 0 705
Lithuania 7 1,086
Luxembourg 0 349
Malta 0 159
Netherlands 258 14,335
Poland 128 14,770
Portugal 48 3,141
Romania 65 28,743
Slovakia 18 1,122
Slovenia 5 370
Spain 50 21,592
Sweden 3 4,159
Iceland 1 123
Liechtenstein 0 143
Norway 10 1,662
Total EU 1,589 218,163
Total EEA 1,600 220,091

SLI 31.1.3

Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.

Methodology of data measurement:

The metric we have provided shows the percentage of videos removed as a result of a fact-checking assessment, relative to the total number of videos removed for violating our harmful misinformation policy.

Country Videos removed as a result of a fact-checking assessment as a percentage of the total number of videos removed for violating the harmful misinformation policy
Austria 0.50%
Belgium 0.30%
Bulgaria 3.40%
Croatia 6.30%
Cyprus 0.10%
Czech Republic 0.70%
Denmark 0.50%
Estonia 5.30%
Finland 0.50%
France 0.70%
Germany 0.50%
Greece 0.20%
Hungary 0.30%
Ireland 0.10%
Italy 1.10%
Latvia 0.00%
Lithuania 0.60%
Luxembourg 0.00%
Malta 0.00%
Netherlands 1.80%
Poland 0.90%
Portugal 1.50%
Romania 0.20%
Slovakia 1.60%
Slovenia 1.40%
Spain 0.20%
Sweden 0.10%
Iceland 0.80%
Liechtenstein 0.00%
Norway 0.60%
Total EU 0.70%
Total EEA 0.70%
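As a worked illustration of how the SLI 31.1.3 percentages relate to the SLI 31.1.2 figures, the following sketch reproduces the metric for a few rows. The helper name is hypothetical; the figures are taken from the tables above.

```python
# Sketch reproducing the SLI 31.1.3 contextual metric from the SLI 31.1.2
# figures: fact-check-driven removals as a share of all removals under the
# harmful misinformation policy, rounded to one decimal place.

removals = {  # country: (removals via fact-checking assessment, total policy removals)
    "France": (273, 37_466),
    "Bulgaria": (166, 4_828),
    "Total EU": (1_589, 218_163),
}

def share_pct(fact_check: int, total: int) -> float:
    """Percentage of policy removals attributable to a fact-checking assessment."""
    return round(100 * fact_check / total, 1)

for country, (fc, total) in removals.items():
    print(f"{country}: {share_pct(fc, total):.2f}%")
# France 0.70%, Bulgaria 3.40%, Total EU 0.70% — matching the table above.
```

This confirms that the percentages are a straightforward quotient of the two SLI 31.1.2 columns, rounded to one decimal place.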

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

Our fact-checking partners access content which has been flagged for review through a content review tool made available for their exclusive use. The dashboard shows our fact-checkers certain quantitative information about the services they provide, including the number of videos queued for assessment at any one time, as well as the time the review has taken. Fact-checkers can also use the dashboard to see the rating they applied to videos they have previously assessed.


Transparency Centre

Commitment 34

To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.

We signed up to the following measures of this commitment

Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have been an active participant in the working group that successfully launched the common Transparency Centre in 2023. We held the position of co-chair of the Transparency working group from September 2023 until the position was transferred to VOST, a civil society organisation that is a signatory of the Code. We have since supported VOST's work on the relaunched shared Transparency Centre at disinfocode.eu and continue to upload reports and share feedback as required.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 34.1

Signatories establish and maintain the common Transparency Centre website, which will be operational and available to the public within 6 months from the signature of this Code.

We continue to work with VOST to maintain the Transparency Centre.

Measure 34.2

Signatories provide appropriate funding, for setting up and operating the Transparency Centre website, including its maintenance, daily operation, management, and regular updating. Funding contribution should be commensurate with the nature of the Signatories' activity and shall be sufficient for the website's operations and maintenance and proportional to each Signatories' risk profile and economic capacity.

We continue to fund VOST in line with our standing agreement.

Measure 34.3

Relevant Signatories will contribute to the Transparency Centre's information to the extent that the Code is applicable to their services.

We provided information for H1 2025 to be published on the VOST-maintained Transparency Centre during H2 2025.

Measure 34.4

Signatories will agree on the functioning and financing of the Transparency Centre within the Task-force, to be recorded and reviewed within the Task-Force on an annual basis.

No changes on previous report.

Measure 34.5

The Task-force will regularly discuss the Transparency Centre and assess whether adjustments or actions are necessary. Signatories commit to implement the actions and adjustments decided within the Task-force within a reasonable timeline.

We remain in correspondence with VOST and other signatories regarding the Transparency Centre.

Commitment 35

Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.

We signed up to the following measures of this commitment

Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No, through our participation in the Transparency Centre working group, we have ensured that the Transparency Centre will allow the general public to access general information about the Code as well as the underlying reports (and for the Centre to be navigated both by commitment and by signatory).

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 35.1

Signatories will list in the Transparency Centre, per each Commitment and Measure that they subscribe to, the terms of service and policies that their service applies to implement these Commitments and Measures.

No changes since the last report. We continue to publish a list of our commitments on the VOST-maintained Transparency Centre.

Measure 35.2

Signatories provide information on the implementation and enforcement of their policies per service, including geographical and language coverage.

No changes since the last report (H1 2025).

Measure 35.3

Signatories ensure that the Transparency Centre contains a repository of their reports assessing the implementation of the Code's commitments.

No changes since the last report. We continue to provide updates to the VOST Transparency Centre in order to meet this commitment, as well as making such information available on our own Transparency Centre.

Measure 35.4

In crisis situations, Signatories use the Transparency Centre to publish information regarding the specific mitigation actions taken related to the crisis.

No changes since the previous report. We continue to publish relevant information about our response to crises in the report itself, as well as on the disinfocode.eu Transparency Centre.

Measure 35.5

Signatories ensure that the Transparency Centre is built with state-of-the-art technology, is user-friendly, and that the relevant information is easily searchable (including per Commitment and Measure). Users of the Transparency Centre will be able to easily track changes in Signatories' policies and actions.

No changes since the last report. VOST maintains the Transparency Centre, and we continue to support this work.

Measure 35.6

The Transparency Centre will enable users to easily access and understand the Service Level Indicators and Qualitative Reporting Elements tied to each Commitment and Measure of the Code for each service, including Member State breakdowns, in a standardised and searchable way. The Transparency Centre should also enable users to easily access and understand Structural Indicators for each Signatory.

No changes since the last report. VOST maintains the Transparency Centre, and we continue to support this work.

Commitment 36

Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.

We signed up to the following measures of this commitment

Measure 36.1 Measure 36.2 Measure 36.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 36.1

Signatories provide updates about relevant changes in policies and implementation actions in a timely manner, and in any event no later than 30 days after changes are announced or implemented.

We updated the Transparency Centre to reflect our disclosures in the H1 2025 report shortly after submitting that report to the EC, within the agreed deadline.

Measure 36.2

Signatories will regularly update Service Level Indicators, reporting elements, and Structural Indicators, in parallel with the regular reporting foreseen by the monitoring framework. After the first reporting period, Relevant Signatories are encouraged to also update the Transparency Centre more regularly.

We updated the Transparency Centre to reflect our disclosures in the H1 2025 report shortly after submitting that report to the EC, within the agreed deadline.

Measure 36.3

Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.

No changes. The Transparency Centre continues to function as intended. We provided metrics in the previous report that were supplied by VOST. The metric itself and its wording were agreed with the EC's representatives and adopted by all platforms.

QRE 36.1.1 (for the Commitments 34-36)

With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The Transparency Centre was successfully launched in February 2023. We continue to upload our reports in accordance with the approved deadlines.

QRE 36.1.2 (for the Commitments 34-36)

Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The administration of the Transparency Centre website has been transferred fully to the community of the Code’s signatories, with VOST Europe taking the role of developer.

SLI 36.1.1 (for the Commitments 34-36)

Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.

We worked with the vendor to develop relevant metrics for this SLI.

Platform Metrics
TikTok Between July 1 and December 31, 2025, our signatory profile was visited 1,350 times, and our signatory reports were downloaded 3,456 times. The Transparency Centre webpage overall was visited 30,384 times.

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

We have meaningfully engaged in the Task-force / Plenaries and all working groups.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 37.1

Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.

We have attended all Plenary meetings.

Measure 37.2

Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks: Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises; Cooperate and coordinate their work in special situations like elections or crisis; Agree on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes; Review the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapt them, as needed; Contribute to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation; Refine, test and adjust Structural Indicators and design mechanisms to measure them at Member State level; Agree, publish and update a list of TTPs employed by malicious actors, and set down baseline elements, objectives and benchmarks for Measures to counter them, in line with the Chapter IV of this Code.

We continue to participate in all relevant workstreams of the Task-force.

Measure 37.3

The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.

We continue to participate in all relevant workstreams of the Task-force.

Measure 37.4

Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.

We continue to participate in all relevant workstreams of the Task-force.

Measure 37.5

When needed, and in any event at least once per year the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.

We continue to participate in all relevant workstreams of the Task-force.

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

We continue to engage in the work of the Task-force.

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

We have meaningfully engaged in the Task-force and all of its working groups by attending and participating in meetings and engaging in any relevant discussions, in particular regarding elections and further developing/activating the Rapid Response System (RRS). 

We will continue to engage in the Task-force and all of its working groups and subgroups.

Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

TikTok will continue to have appropriate resources in place to meet our commitments and ensure compliance.

Given the breadth of the Code and the commitments therein, our work spans multiple teams, including Trust and Safety, Legal, Monetisation Integrity, Product and Public Policy. Teams across the globe are deployed to ensure that we meet our commitments and remain compliant, with the notable involvement of our Trust and Safety leadership.

We have dedicated Trust and Safety staff in the European Union. We recognise the importance of local knowledge and expertise as we work to ensure online safety for our users. We take a similar approach to our third party partnerships.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 40

Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.

We signed up to the following measures of this commitment

Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We have reported on the SLIs and QREs relevant to the Commitments we signed up to within this report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 40.1

Relevant Signatories that are Very Large Online Platforms, as defined in the DSA, will report every six months on the implementation of the Commitments and Measures they signed up to under the Code, including on the relevant QREs and SLIs at service and Member State Level.

We publish a report, detailing the implementation of the commitments and measures (including QREs and SLIs) we have signed up to under the Code, every 6 months.

Measure 40.2

Other Signatories will report yearly on the implementation of the Commitments and Measures taken under the present Code, including on the relevant QREs and SLIs, at service and Member State level.

We publish a report, detailing the implementation of the commitments and measures (including QREs and SLIs) we have signed up to under the Code, every 6 months.

Measure 40.3

We publish our reports online. All reports published under the Code can be accessed through our Transparency Center.

Measure 40.4

We continue to work with the Task-force, as applicable.

Measure 40.5

We continue to engage in the work of the Task-force, as applicable.

Measure 40.6

We continue to work and cooperate with the EC, as applicable.

Commitment 41

Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.

We signed up to the following measures of this commitment

Measure 41.1 Measure 41.2 Measure 41.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No, pending further updates from the Commission

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Ireland Election 2025

We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Irish presidential election held on 24 October 2025.

In advance of the election, a dedicated election Task-Force was established to proactively assess potential risks. Through cross-functional consultations, the team identified key threats—including the spread of AI-generated deepfakes and misinformation—and developed response strategies to mitigate them before they could gain traction on the platform.

Throughout the election, we monitored for and actioned inauthentic behavior, and removed content that violated our Community Guidelines.

Czech Parliamentary Election 2025
 
We have comprehensive measures in place to anticipate and address risks associated with electoral processes, including risks associated with election misinformation in the context of the Czech parliamentary election held on 3–4 October 2025. In advance of the election, a core election Task-Force was formed, and consultations between cross-functional teams helped to identify and design response strategies.

TikTok did not observe major threats during the Czech election. Throughout the election, we monitored for and actioned inauthentic behavior and removed content that violated our Community Guidelines.

Netherlands Election 2025

We have comprehensive measures in place to anticipate and address risks associated with electoral processes, including risks associated with election misinformation in the context of the Dutch parliamentary election held on 29 October 2025. In advance of the election, a core election Task-Force was formed, and consultations between cross-functional teams helped to identify and design response strategies.

TikTok did not observe major threats during the Dutch election. Throughout the election, we monitored for and actioned inauthentic behavior and removed content that violated our Community Guidelines.



Mitigations in place

Ireland Election 2025:

Enforcing our policies

(I) Monitoring capabilities

We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election-related keywords and accounts.


(II) Mission Control Centre: internal cross-functional collaboration


As part of our advance preparations ahead of the Irish presidential election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, during, and immediately after the election.


(III) Integrity and Authenticity policies


We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.


(IV) Fact-checking


Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.


Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Reuters serves as the fact-checking partner for Ireland.


(V) Deterring covert influence operations


We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.


(VI) Tackling misleading AI-generated content


Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.


(VII) Government, Politician, and Political Party Accounts (GPPPAs)


Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.


We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.


Directing people to trusted sources


(I) Investing in media literacy

We invest in media literacy campaigns as a counter-misinformation strategy. From 24 Sept to 25 Oct 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Irish presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with The Journal's fact-checking unit.

External engagement at the national and EU levels

(I) Rapid Response System: external collaboration with COCD Signatories

Throughout the election period, our teams maintained communication with cross-functional partners as part of the COCD Rapid Response System (RRS). We received 10 reports via the RRS related to AIGC, misinformation, and impersonation, all of which were rapidly addressed. Actions included account bans and content removals for violations of our Community Guidelines.

(II) Engagement with local experts

To further promote election integrity, and inform our approach to the Irish Election, we organised an Election Speaker Series with local fact-checking partner Reuters who shared their insights and market expertise with our internal teams.

Czech Parliamentary Election 2025:

Enforcing our policies

(I) Monitoring capabilities


We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election keywords and accounts.


(II) Mission Control Centre: internal cross-functional collaboration


As part of our advance preparations ahead of the Czech election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams provided consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.

(III) Integrity and Authenticity policies

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported.

(IV) Fact-checking


Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.


Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Lead Stories serves as the fact-checking partner for Czechia.


(V) Deterring covert influence operations


We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.


(VI) Tackling misleading AI-generated content


Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.


(VII) Government, Politician, and Political Party Accounts (GPPPAs)


Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.


We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.

Directing people to trusted sources

(I) Investing in media literacy

We invest in media literacy campaigns as a counter-misinformation strategy. We engaged with the local fact-checking organisation Demagog.cz to develop, review, and launch two videos as part of a media literacy campaign.


External engagement at the national and EU levels


(I) Rapid Response System: external collaboration with COCD Signatories 


The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 1 RRS report. Throughout the election period, the team maintained consistent prioritisation of RRS requests and ensured timely, accurate support for cross-functional partners.


(II) Engagement with local experts


To further promote election integrity, and inform our approach to the Czech election, we organised an Election Speaker Series with our local fact-checking partner, Lead Stories, who shared their insights and market expertise with our internal teams.

Netherlands Election 2025:

(I) Monitoring capabilities


We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election keywords and accounts.


(II) Mission Control Centre: internal cross-functional collaboration


As part of our advance preparations ahead of the Dutch election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams provided consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.


(III) Integrity and Authenticity policies


We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported.


(IV) Fact-checking


Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.


Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Deutsche Presse-Agentur (dpa) serves as the fact-checking partner for the Netherlands.


(V) Deterring covert influence operations


We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encouraging them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.


(VI) Tackling misleading AI-generated content


Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.


(VII) Government, Politician, and Political Party Accounts (GPPPAs)


Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform - we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.


We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.


Directing people to trusted sources


(I) Investing in media literacy


We invest in media literacy campaigns as a counter-misinformation strategy.


External engagement at the national and EU levels


(I) Rapid Response System: external collaboration with COCD Signatories 


The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received 1 RRS report, concerning content that violated our AIGC policies. Throughout the election period, the team maintained consistent prioritisation of RRS requests and ensured timely, accurate support for cross-functional partners.


(II) Engagement with local experts


To further promote election integrity, and inform our approach to the Dutch election, we organised an Election Speaker Series with our fact-checking partner, dpa, who shared their insights and market expertise with our internal teams.


Policies and Terms and Conditions

Outline any changes to your policies

N/A

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.2.1

Scrutiny of Ad Placements, including prohibition on monetisation and fundraising campaigns for GPPPAs

(Commitment 1 and Measure 1.1) 


Description of intervention - 50.2.2

At the end of August 2025, we implemented specific granular misinformation policies that provide comprehensive coverage to address harmful misinformation in advertising. In particular, election-related misinformation is explicitly addressed within this policy framework under the Election Misinformation Policy.

In addition, we are pleased to be able to report on the advertisements removed for breach of our political advertising policy in H2 2025, including the impressions associated with those advertisements. This information is set out in the “Political Advertising Data H2 2025” section which can be found on the report PDF.

Indication of impact - 50.2.3

By prohibiting political advertising, we help ensure our community can have a creative and authentic TikTok experience, and it is one way that we can reduce the risk of our platform being used to advertise and amplify narratives that may be divisive or false.
  • Number of ads removed under our political advertising policies during the four weeks leading up to and including the days of the Irish presidential election (22 September to 26 October 2025): 2,134
  • Number of ads removed under our political advertising policies during the four weeks leading up to and including the days of the Czech parliamentary election (1 September to 5 October 2025): 1,092
  • Number of ads removed under our political advertising policies during the four weeks leading up to and including the days of the Dutch parliamentary election (29 September to 2 November 2025): 2,113

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.


TikTok did not subscribe to this commitment as outlined in the January 2025 Subscription Document.


Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.4.1

Identifying and removing CIO networks 

(Commitment 14, Measure 14.1)

Description of intervention - 50.4.2

During the Irish, Czech, and Dutch elections, we did not detect any instances of CIO on the platform. We publish details of the CIO networks we identify and remove in our dedicated CIO transparency reports.

Indication of impact - 50.4.3

N/A

Specific Action applied - 50.4.4

Tackling misleading AIGC and edited media 

(Commitment 15, Measures 15.1 and 15.2)

Description of intervention - 50.4.5

Our Edited Media and AI-Generated Content (AIGC) policy makes it clear that we do not want our users to be misled about crisis events. For the purposes of our policy, AIGC refers to content created or modified by AI technology or machine-learning processes. It includes images of real people and may show highly realistic-looking scenes.

We do not allow misleading AIGC or edited media that falsely shows:
  • Content made to seem as if it comes from an authoritative source, such as a reputable news organization, scientific or medical society, or government entity providing critical services;
  • A critical event, such as an election, natural disaster, or a mass casualty incident;
  • Matters of public importance, including debates about significant and challenging policy issues;
  • A public figure who is:
    • being degraded or harassed, or engaging in criminal or anti-social behavior
    • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
    • spreading misinformation about matters of public importance
In addition, all AIGC or edited media, including depictions of public figures, such as politicians, must be clearly labelled as AI-generated, and cannot be used for endorsements.

We have an AI-generated content label for users to easily inform their community when they post AIGC. The label can be applied to any content that has been completely generated or significantly edited by AI, which makes it easier to comply with the obligation to disclose AIGC that shows realistic scenes. Creators can do this through this label or through other types of disclosures, like a sticker, watermark, or caption.

TikTok has invested in labeling technologies and tools, including the implementation of Content Credentials technology from the Coalition for Content Provenance and Authenticity (C2PA), which enables the automatic recognition and labeling of AIGC, including AIGC created on some other platforms. This is complemented by a TikTok-developed tool that allows creators to easily label AI-generated content, already used by 37 million creators. TikTok’s commitment to AIGC transparency ensures a safe environment for users, who can easily identify synthetic content and understand its context.

Indication of impact - 50.4.6

Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy during the Irish presidential election: 36 
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy during the Czech parliamentary election: 17
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy during the Dutch parliamentary election: 324 



Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.5.1

Rolling out Media literacy campaigns (Commitment 17, Measure 17.2) 

Description of intervention - 50.5.2

Irish election:
From 24 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Irish presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.

We directed people to the Election Centre through prompts on videos, LIVEs and searches related to elections.

Czech elections:
From 4 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Czech parliamentary election. The centre contained a section about spotting misinformation.

Dutch election:
From 29 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Dutch parliamentary elections. The centre contained a section about spotting misinformation.

Indication of impact - 50.5.3

The Election Centre launched before the Irish presidential election was visited 111,131 times.
The Election Centre launched before the Czech election was visited 78,337 times.
The Election Centre launched before the Dutch election was visited 337,472 times.

Specific Action applied - 50.5.4

Engagement with local and regional experts (Commitment 17, Measure 17.2)


Description of intervention - 50.5.5

Irish election:
To further promote election integrity, and inform our approach to the Irish presidential election, we organised an Election Speaker Series with Reuters who shared their insights and market expertise with our internal teams.

Czech elections:
To further promote election integrity, and inform our approach to the Czech election, we engaged with our fact-checking partner, LeadStories, to ensure our responsible teams for election integrity on the platform are aware of online trends concerning the elections.
We engaged with national authorities through onboarding, education, regulatory escalations, and proactive outreach. Stakeholders included the CTU, the OSCE, the media regulator, the police, political parties and campaigns, and the Czech government.

This engagement with external regional and local experts, as well as national authorities, allowed us to inform our country-level approach to the Czech election.

Dutch election:
To further promote election integrity, and inform our approach to the Dutch election, we engaged with our fact-checking partner, dpa, to ensure our responsible teams for election integrity on the platform are aware of online trends concerning the elections.

We engaged with national authorities through the following: regulator briefings and roundtables, election questionnaires, ACM & Ministry of Interior TAC tour (Dublin), media briefings, regulator conference panel participation, TikTok 101 workshops for political parties, MP and party meetings, and NGO election integrity discussions. Our stakeholders included: ACM (Dutch DSC), European Commission, OSCE, Ministry of Interior Affairs, and political parties.


Indication of impact - 50.5.6

Irish election:
This engagement with external regional and local experts allowed us to inform our country-level approach to the Irish presidential election.

Czech elections:
This engagement with external regional and local experts, as well as national authorities, allowed us to inform our country-level approach to the Czech election.

Dutch election:

This engagement with external regional and local experts allowed us to inform our country-level approach to the Dutch election.


Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.



Specific Action applied - 50.6.1


Providing access to our Research API (Commitment 26 and Measures 26.1 and 26.2)

Description of intervention - 50.6.2

Through our Research API, academic researchers from non-profit universities in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, and the number of comments, shares, likes, and favourites that a video receives. More information is available here.


Indication of impact - 50.6.3

Number of Research API applications related to the Irish presidential election that have been approved from July-December 2025: 0
Number of Research API applications related to the Czech election that have been approved from July-December 2025: 0 
Number of Research API applications related to the Dutch election that have been approved from July-December 2025: 0 

Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.7.1

Ensuring fact-checking coverage (Commitment 30, Measure 30.1)

Description of intervention - 50.7.2

Reuters serves as the fact-checking partner for Ireland and provided coverage throughout the election period.
Lead Stories serves as the fact-checking partner for Czechia and provided coverage throughout the election period.
dpa serves as the fact-checking partner for the Netherlands and provided coverage throughout the election period. 

Indication of impact - 50.7.3

Please refer to Chapter 7 - Empowering the Fact-Checking Community for metrics.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War of Aggression by Russia on Ukraine

Threats observed or anticipated at time of reporting: [suggested character limit 2000 characters].

Since the start of the war of aggression by Russia on Ukraine in February 2022 (the “War in Ukraine”), we have observed false or unverified claims about specific attacks and events, the development or use of weapons, the involvement of particular countries, and military activities such as troop movements. We have also seen misleadingly repurposed footage, including clips from video games, AI-generated content, or unrelated past events presented as current.
While no specific threats related to the War in Ukraine were identified or anticipated in H2 2025, we remained alert to the spread of harmful misinformation and covert influence operations (CIO), and continue working to prevent such content from being shared.


(I) Spread of harmful misinformation

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our Integrity and Authenticity policies, as well as our products, operational practices, and external partnerships with fact-checkers, media literacy organisations, and researchers.
We support our Integrity and Authenticity moderators with detailed misinformation policy guidance, enhanced training, and direct access to our IFCN-accredited fact-checking partners, who help assess the accuracy of content.
We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the War in Ukraine.

(II) CIOs

TikTok’s integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account’s identity, approximate location, relationships, popularity, or purpose. We have specifically-trained teams on high alert to investigate, disrupt and remove CIO networks from our platform and we provide regular updates in our dedicated CIO transparency reports. For advertising-related CIO measures, please refer to Chapter 2.

Israel-Hamas Conflict:
TikTok acknowledges the significance and sensitivity of the Israel–Hamas conflict (referred to as the “Conflict” in this chapter), which has been ongoing for an extended period. We recognise that it continues to be a challenging and deeply felt issue for many people around the world and on TikTok.
TikTok continues to moderate violative content at scale, while respecting and protecting the fundamental rights and freedoms of European users. We remain committed to supporting freedom of expression, upholding our commitment to human rights, and maintaining the safety and integrity of our platform during the Conflict.

Below, we outline some of the main threats, both observed and considered, in relation to the Conflict and the steps taken to address them during the reporting period.

(I) Spread of harmful misinformation

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our Integrity and Authenticity policies, as well as our products, operational practices, and external partnerships with fact-checkers, media literacy organisations, and researchers. We support our Integrity and Authenticity moderators with detailed misinformation policy guidance, enhanced training, and direct access to our IFCN-accredited fact-checking partners, who help assess the accuracy of content. 

We continue to take swift action against misinformation, conspiracy theories, fake engagement, and fake accounts relating to the Conflict.


TikTok’s integrity and authenticity policies do not allow deceptive behaviour that may cause harm to our community or society at large. This includes coordinated attempts to influence or sway public opinion while also misleading individuals, our community, or our systems about an account’s identity, approximate location, relationships, popularity, or purpose. We have specifically-trained teams on high alert to investigate, disrupt and remove CIO networks from our platform and we provide regular updates in our dedicated CIO transparency reports. For advertising-related CIO measures, please refer to Chapter 2.


Mitigations in place

War of Aggression by Russia on Ukraine
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine. 

(I) Upholding TikTok's Community Guidelines
Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.

Automated Review

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behavior that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.

Some of the methods and technologies that support these efforts include:
  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or modified from, previous violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.
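As a generic illustration of the similarity-based idea, perceptual hashing marks each pixel of a small grayscale thumbnail as above or below its mean brightness, so a re-encoded or lightly edited copy lands within a small Hamming distance of the original. This toy sketch shows the general technique only; it is not TikTok's implementation, and the pixel values are hypothetical.

```python
def average_hash(pixels):
    """Toy perceptual hash: bit i is set if pixel i is brighter than the mean.
    `pixels` is a flat list of grayscale values from a tiny fixed-size thumbnail."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes; small distance = visually similar."""
    return bin(a ^ b).count("1")

original  = [10, 200, 30, 220, 15, 210, 25, 230]  # hypothetical thumbnail
reencoded = [12, 198, 33, 219, 14, 212, 24, 229]  # slightly altered copy
unrelated = [200, 10, 220, 30, 210, 15, 230, 25]  # inverted pattern

# The altered copy hashes identically; the unrelated image does not.
assert hamming(average_hash(original), average_hash(reencoded)) == 0
assert hamming(average_hash(original), average_hash(unrelated)) > 0
```

In practice such hashes let a system catch near-duplicate re-uploads of already-removed content without exact byte matching.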


Scaling human expertise

Human insight plays a crucial role in the content moderation process, from our community or external experts, to our own safety professionals. Our teams of human safety experts speak more than 60 languages and dialects, including Russian and Ukrainian. We strive to promote a caring working environment for all TikTok employees, and especially for trust and safety professionals. We use an evidence-based approach to develop programmes and resources that support their psychological well-being, including for Trust & Safety personnel working on mis & disinformation.

In H2 2025, we removed 1,352 videos that violated our misinformation policies in relation to the War in Ukraine.

(II) Leveraging our Global Fact-Checking Program

We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organizations who support over 60 languages on TikTok, including Russian, Ukrainian, and Belarusian. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that have been reviewed by fact-checkers but cannot be verified (referred to as "unverified content"). Recognising that the situation around the War in Ukraine can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as "unverified" are later verified or clarified with additional context.

(III) Disruption of CIOs

Disrupting CIO networks targeting discourse related to the War in Ukraine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.


(IV) Mitigating the risk of monetisation of harmful misinformation

Political advertising has been prohibited on our platform for many years, but as an additional mitigation against the risk of profiteering from the War in Ukraine, we prohibit Russian-based advertisers from outbound targeting of EU markets. We also suspended TikTok in the Donetsk and Luhansk regions.

(V) Localised media literacy campaigns

Proactive measures aimed at improving our users' digital literacy are vital, and we recognise the importance of increasing the prominence of authoritative information. In close collaboration with our fact-checking partners, we have 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bosnia, Bulgaria, Czechia, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine. Users searching for keywords relating to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners, to help them identify misinformation and prevent its spread on the platform.

(VI) Adding opt-in screens over content that could be shocking or graphic
We recognise that some content that may otherwise break our rules can be in the public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.

(VII) External engagement
We are committed to engaging with experts across the industry and civil society, and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.

Israel-Hamas Conflict:
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the Conflict. 

(I) Upholding TikTok's Community Guidelines

Continuing to enforce our policies against violence, hate, and harmful misinformation by taking action to remove violative content and accounts. For example, we remove content that promotes Hamas, or otherwise supports the attacks or mocks victims affected by the violence. We do not tolerate attempts to incite violence or spread hateful ideologies. We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren't allowed on our platform. We also block hashtags that promote violence or otherwise break our rules. We use a combination of advanced moderation technologies and teams of human safety experts to identify, review, and action content that violates our policies.


Automated Review

We place considerable emphasis on proactive detection to remove violative content and reduce exposure to potentially distressing content for our human moderators. Before content is posted to our platform, it's reviewed by automated moderation technologies which identify content or behavior that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.

If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.

Some of the methods and technologies that support these efforts include:

  • Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
  • Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or modified from, previous violations.
  • Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
  • Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
  • Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
  • LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
  • We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.

Scaling human expertise

Human insight plays a crucial role in the content moderation process, from our community or external experts, to our own safety professionals. TikTok has Arabic and Hebrew speaking content moderators who review content and assist with Conflict-related translations. We continue to focus on moderator care through the provision of internal training and well-being resources for T&S personnel working on mis & disinformation.

In H2 2025, we removed 3,901 videos that violated our misinformation policies in relation to the Conflict.

(II) Leveraging our Global Fact-Checking Program

We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organizations who support over 60 languages on TikTok, including Arabic and Hebrew. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.

To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that have been reviewed by fact-checkers but cannot be verified—referred to as “unverified content.” Recognising that the situation around the Conflict can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.

(III) Disruption of CIOs

Disrupting CIO networks targeting discourse related to Israel and Palestine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.

(IV) Deploying search interventions to raise awareness of potential misinformation

To help raise awareness and to protect our users, we provide in-app search interventions that are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources.

(V) Adding opt-in screens over content that could be shocking or graphic

We recognise that some content that may otherwise break our rules can be of public interest, and we allow this content to remain on the platform for documentary, educational, and counterspeech purposes. As we continue to make public interest exceptions for some content, we provide opt-in screens to help prevent people from unexpectedly viewing shocking or graphic content.
 
(VI) External engagement

We are committed to engaging with experts across the industry and civil society, such as Tech Against Terrorism and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.

Policies and Terms and Conditions

Outline any changes to your policies

Russia-Ukraine:
In a crisis, we keep our policies under review to ensure moderation teams have supplementary guidance.

Israel-Hamas:

During the reporting period, no Conflict-specific policy changes were implemented.


Policy - 51.1.1

Russia-Ukraine:
No update during the reporting period.

Israel-Hamas:
No update during the reporting period.


Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

TikTok did not subscribe to this Chapter as outlined in the January 2025 Subscription Document.


Specific Action applied - 51.3.1



Description of intervention - 51.3.2



Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.4.1

Identifying and removing CIO networks

(Commitment 14, Measure 14.1)

Description of intervention - 51.4.2

Russia-Ukraine and Israel-Hamas:
We combat CIOs because our Integrity and Authenticity policies prohibit attempts to manipulate public opinion while misleading our systems or users about identity, origin, approximate location, popularity, or purpose. Dedicated teams monitor and investigate CIO networks and, in line with these policies, have removed networks targeting discourse related to the War in Ukraine and the Conflict.

We know that CIO will continue to evolve in response to our detection and networks may attempt to reestablish a presence on our platform, which is why we continually seek to strengthen our policies and enforcement actions in order to protect our community against new types of harmful misinformation and inauthentic behaviours. 



Indication of impact - 51.4.3

Russia-Ukraine War:
Between July and December 2025, we took action to remove the following four networks (consisting of 114 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the Russia-Ukraine war while also misleading individuals, our community, or our systems:

1. Network Origin: Ukraine

Description: We assess that this network operated from Ukraine and targeted a Russian audience. The individuals behind this network created inauthentic accounts in order to amplify narratives undermining trust in the Russian government, within the context of the ongoing Russia-Ukraine war. The network attempted to direct users to an off-platform messaging channel.
Accounts Removed: 53
Followers: 114,830

2. Network Origin: Ukraine

Description: We assess that this network operated from Ukraine and targeted a Russian audience. The individuals behind this network created inauthentic accounts in order to amplify narratives of Russian military defeat, within the context of the ongoing Russia-Ukraine war. The network was found to create accounts which it presented as news accounts.
Accounts Removed: 35
Followers: 1,184,387

3. Network Origin: Belarus

Description: We assess that this network operated from Belarus and targeted a Russian-speaking audience within Ukraine. The individuals behind this network created inauthentic accounts, posing as a partisan group, in order to promote narratives of Ukrainian military defeat and political incompetence. The network directed users to an off-platform messaging channel.
Accounts Removed: 16
Followers: 37,680

4. Network Origin: US

Description: We assess that this network operated from the US and targeted a global audience. The individuals behind this network created fake media brands in order to promote Russia and China as leaders on the global stage. The network targeted Latino-Americans, African-Americans, and African audiences, as well as English and Spanish-speaking audiences worldwide.
Accounts Removed: 10
Followers: 57,000
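As a quick consistency check, the per-network account counts reported above sum to the stated total of 114 removed accounts:

```python
# Accounts removed per network, as reported for July-December 2025
accounts_removed = [53, 35, 16, 10]

total = sum(accounts_removed)
print(total)  # 114
```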

We publish details of the CIO networks we identify and remove, including those relating to the War in Ukraine, in our dedicated CIO transparency report.

Israel-Hamas:
Between July-December 2025, we took action to remove the following four networks (consisting of 75 accounts in total) that were found to be related to the Conflict:

1. Network Origin: Iran

Description: We assess that this network operated from Iran and targeted Israeli and Palestinian audiences. The individuals behind this network created inauthentic accounts in order to amplify narratives that divide Israelis and promote Palestinian nationalism. This network was found to use sock puppet accounts and accounts posing as news accounts to deliver language-tailored messaging to the target audiences.
Accounts in network: 22
Followers of network: 15,061

2. Network Origin: Unidentified

Description: We assess that this network targeted an Israeli audience. The individuals behind this network created inauthentic accounts in order to exploit and increase social tension among the Israeli population. The network attempted to direct users to an off-platform communication channel. The network was found to be using location obfuscation services in order to hide their true location.
Accounts in network: 14
Followers of network: 12,685

3. Network Origin: Iran

Description: We assess that this network operated from Iran and primarily targeted Francophone Africa. The individuals behind the network created inauthentic accounts in order to spread pro-Iran, anti-US, and anti-Israel content. The network was found to create accounts which it presented as news accounts.
Accounts in network: 26
Followers of network: 10,994

4. Network Origin: Iran

Description: We assess that this network operated from Iran and targeted Arabic-speaking audiences. The individuals behind this network created inauthentic accounts in order to amplify narratives critical of Israel’s actions and the inaction of Arab countries regarding Palestine. This network was found to create fictitious personas which posed as generic Arab users and masked its operating location through advanced operational security.
Accounts in network: 13
Followers of network: 39

We publish details of the CIO networks we identify and remove, including those relating to the Conflict, in our dedicated CIO transparency report.




Specific Action applied - 51.4.4

Tackling Edited Media and AI-Generated Content (AIGC)


(Commitments 14 and 15, Measures 14.1, 15.1 and 15.2). 

Description of intervention - 51.4.5

Our Edited Media and AI-Generated Content (AIGC) policy makes it clear that we do not want our users to be misled about crisis events. For the purposes of our policy, AIGC refers to content created or modified by AI technology or machine-learning processes. It includes images of real people and may show highly realistic-looking scenes.

We do not allow misleading AIGC or edited media that falsely shows:
  • Content made to seem as if it comes from an authoritative source, such as a reputable news organization, scientific or medical society, or government entity providing critical services;
  • A critical event, such as an election, natural disaster, or a mass casualty incident;
  • Matters of public importance, including debates about significant and challenging policy issues;
  • A public figure who is:
    • being degraded or harassed, or engaging in criminal or anti-social behavior
    • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
    • spreading misinformation about matters of public importance
In addition, all AIGC or edited media, including depictions of public figures such as politicians, must be clearly labelled as AI-generated and cannot be used for endorsements.

We have an AI-generated content label for users to easily inform their community when they post AIGC. The label can be applied to any content that has been completely generated or significantly edited by AI, which makes it easier to comply with the obligation to disclose AIGC that shows realistic scenes. Creators can do this through this label or through other types of disclosures, like a sticker, watermark, or caption.

TikTok has invested in labeling technologies and tools, including the implementation of Content Credentials technology from the Coalition for Content Provenance and Authenticity (C2PA), which enables the automatic recognition and labeling of AIGC, including AIGC created on some other platforms. This is complemented by a TikTok-developed tool that allows creators to easily label AI-generated content, already used by 37 million creators. TikTok's commitment to AIGC transparency ensures a safe environment for users, who can easily identify synthetic content and understand its context.
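Conceptually, Content Credentials bind a cryptographically signed provenance manifest (recording who or what produced the asset, plus a hash of its bytes) to the media file, so a verifier can detect both a tampered manifest and an edited asset. The toy sketch below shows only the shape of that check: real C2PA uses certificate-based COSE signatures and JUMBF embedding, not HMAC, and every name here is hypothetical.

```python
import hashlib
import hmac

# Stand-in for a real signing certificate's key (hypothetical)
SIGNING_KEY = b"stand-in-for-a-real-certificate-key"

def make_manifest(asset_bytes, generator):
    """Toy 'manifest': a provenance claim bound to the asset's hash, then signed."""
    claim = f"generator={generator};sha256={hashlib.sha256(asset_bytes).hexdigest()}"
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify(asset_bytes, manifest):
    """Check the claim's signature, then check the asset still matches the claim."""
    expected = hmac.new(SIGNING_KEY, manifest["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was tampered with
    return hashlib.sha256(asset_bytes).hexdigest() in manifest["claim"]

video = b"...bytes of an AI-generated clip..."
manifest = make_manifest(video, "SomeAIGenerator/1.0")

assert verify(video, manifest)              # untouched asset: credentials check out
assert not verify(video + b"x", manifest)   # edited asset no longer matches the claim
```

A platform reading such a manifest on upload can apply an AI-generated label automatically, which is the role C2PA metadata plays in the labeling pipeline described above.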

Indication of impact - 51.4.6

Our efforts support transparent and responsible content creation practices, both in the context of the War in Ukraine and more broadly on our platform. 

Specific Action applied - 51.4.7


Removing harmful misinformation from our platform
 


(Commitment 14, Measure 14.1)



Description of intervention - 51.4.8

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We take action to remove accounts or content that contain inaccurate, misleading, or false information that may cause significant harm to individuals or society, regardless of intent. In conflict environments, such information may include content that is repurposed from past conflicts, content that makes false and harmful claims about specific events, or incites panic. In certain circumstances, we may reduce the prominence of such content.

Indication of impact - 51.4.9

Russia-Ukraine:
In the context of the crisis, we have proactively removed 1,313 videos in H2 containing harmful misinformation related to the War in Ukraine. We carry out targeted sweeps of certain types of content as well as working closely with our fact-checking partners and responding to emerging trends they identify. 

Relevant metrics:

  • Number of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 1,352
  • Number of videos not recommended because of violation of misinformation policy with a proxy (only focusing on RU/UA) - 1,458
  • Number of proactive removals of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 1,313

Israel-Hamas:
In the context of the crisis, we have proactively removed 3,874 videos in H2 containing harmful misinformation related to the Conflict. We carry out targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists) as well as working closely with our fact-checking partners and responding to emerging trends they identify. 

We have Arabic- and Hebrew-speaking content moderators, as we recognise the importance of language and cultural context in the misinformation moderation process.

Relevant metrics: 
  • Number of videos removed because of violation of misinformation policy with a proxy (IL-Hamas) -  3,901
  • Number of videos not recommended because of violation of misinformation policy with a proxy (IL-Hamas) - 4,941
  • Number of proactive removals of videos removed because of violation of misinformation policy with a proxy (IL/Hamas) - 3,874

Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.5.1

Creating localised media literacy campaigns
(Commitment 17, Measures 17.2 and 17.3)


Description of intervention - 51.5.2

Russia-Ukraine:
We recognise the importance of proactive measures that are aimed at improving our users' digital literacy and increasing the prominence of authoritative information.

We have localised media literacy campaigns related to the crisis to raise awareness amongst our users. We promoted the campaign through a combination of our in-app intervention tools to ensure that authoritative information is promoted to our users. 

Users searching for keywords related to the War in Ukraine are directed to tips, prepared in partnership with our fact-checking partners. These tips help users identify misinformation and prevent its spread on the platform.

Indication of impact - 51.5.3

Russia-Ukraine:
Working with our fact-checking partners, we have 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bosnia, Bulgaria, Czechia, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine. 

Relevant metrics for the media literacy campaigns (EEA total numbers, in countries where campaigns are active):

  • Total Number of impressions of the search intervention - 23,191,195
  • Total Number of clicks on the search intervention - 109,770
  • Click through rate of the search intervention - 0.47%

Specific Action applied - 51.5.4

Deploying search interventions to raise awareness of potential misinformation
(Commitment 21, Measure 21.1) 

Description of intervention - 51.5.5

Israel-Hamas:
To minimise the discoverability of misinformation and help to protect our users, we have launched search interventions which are triggered when users search for neutral terms related to the Conflict (e.g., Israel, Palestine). 

Indication of impact - 51.5.6

Israel-Hamas:
These search interventions remind users to pause and check their sources and also direct them to well-being resources. 

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.6.1


Measures taken to support research into crisis related misinformation and disinformation

(Commitment 26, Measure 26.1 and 26.2)

Description of intervention - 51.6.2

Through our Research API, academic researchers from non-profit universities in the US and Europe can apply to study public data about TikTok content and accounts. This public data includes comments, captions, subtitles, and the number of comments, shares, likes, and favourites that a video receives on our platform. More information is available here.
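As an illustration of the kind of structured query an approved researcher might submit, the sketch below assembles a hypothetical JSON body for a Research API video query. The endpoint URL, field names, and operators are assumptions drawn from publicly documented examples, not a definitive specification; an approved access token and the actual API reference would be required in practice.

```python
import json

# Assumed endpoint for illustration only; consult the official
# Research API documentation for the authoritative URL and schema.
RESEARCH_API_URL = "https://open.tiktokapis.com/v2/research/video/query/"

def build_video_query(keyword: str, region_codes: list[str],
                      start_date: str, end_date: str,
                      max_count: int = 100) -> dict:
    """Assemble a hypothetical JSON body a researcher might POST
    (with an approved access token) to query public video metadata.
    Field names ("operation", "field_name", etc.) are assumptions."""
    return {
        "query": {
            "and": [
                {"operation": "IN", "field_name": "region_code",
                 "field_values": region_codes},
                {"operation": "EQ", "field_name": "keyword",
                 "field_values": [keyword]},
            ]
        },
        "start_date": start_date,  # YYYYMMDD
        "end_date": end_date,
        "max_count": max_count,
    }

payload = build_video_query("media literacy", ["DE", "FR"],
                            "20250701", "20251231")
print(json.dumps(payload, indent=2))
```

The sketch only constructs the request payload; sending it would additionally require authentication headers and error handling, which are omitted here.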

Indication of impact - 51.6.3

Number of Research API applications related to the War in Ukraine that have been approved from July-December 2025: 0

Number of Research API applications related to the Israel-Hamas Conflict that have been approved from July-December 2025: 0

Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.7.1

Applying our unverified content label and making content ineligible for recommendation

(Commitment 31, Measure 31.2)

Description of intervention - 51.7.2

Where our Integrity & Authenticity moderators or fact-checking partners determine that content is not able to be verified at the given time (which is common during an unfolding event), we apply our unverified content label to the content to encourage users to consider the reliability or source of the content. The application of the label will also result in the content becoming ineligible for recommendation in order to limit the spread of potentially misleading information. Our unverified content label is available to users in 23 EU official languages (plus, for EEA users, Norwegian and Icelandic).

Indication of impact - 51.7.3

N/A

Specific Action applied - 51.7.4

Ensuring fact-checking coverage

(Commitment 30, Measure 30.1) 

Description of intervention - 51.7.5

Russia-Ukraine:
Our fact-checking efforts cover Russian, Ukrainian, Belarusian and all major European languages (including 18 official European languages as well as a number of other languages which affect European users).

Israel-Hamas:
As part of our fact-checking programme, TikTok works with more than 20 IFCN-accredited fact-checking organisations that support more than 60 languages, including Hebrew and Arabic, to help assess the accuracy of content in this rapidly changing environment. In the context of the Conflict, our independent fact-checking partners are following our standard practice, whereby they do not moderate content directly on TikTok, but assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. Fact-checker input is then incorporated into our broader content moderation efforts in a number of different ways, as further outlined in the 'indication of impact' section below.

Indication of impact - 51.7.6

Context and fact-checking are critical to consistently and accurately enforce our harmful misinformation policies, which is why we have ensured that, in the context of the crisis, our fact-checking programme covers Russian, Ukrainian and Belarusian. 

More generally, we work with 13 fact-checking partners in Europe, providing fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, and additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.

Relevant metrics:
  • Number of fact-checked videos with a proxy related to the War in Ukraine - 665
  • Number of videos removed as a result of a fact-checking assessment with words related to the War in Ukraine - 78
  • Number of videos not recommended in the For Your Feed as a result of a fact-checking assessment with words related to the War in Ukraine - 147

Israel-Hamas:
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the Conflict, our fact-checking programme covers Arabic and Hebrew. 
As noted above, we also incorporate fact-checker input into our broader content moderation efforts in different ways: 

  • Our fact-checking partners provide proactive insight reports that flag new and evolving claims they are seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
  • Advance warning of emerging misinformation narratives from our fact-checking partners has enabled proactive responses against high-harm trends and has helped to ensure that our Integrity and Authenticity moderators have up-to-date guidance.

Relevant metrics:
  • Number of fact-checking tasks related to IL/Hamas - 879
  • Number of videos removed as a result of a fact-checking assessment with words related to IL/Hamas - 101
  • Number of videos demoted (not recommended) as a result of a fact-checking assessment with words related to IL/Hamas - 199