Report March 2026
TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes.
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
- first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising-supported businesses;
- second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
- third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
Measure 1.2
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.
QRE 1.2.1
Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
- TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 countries in the EEA and is embedded directly in TikTok Ads Manager, the main system through which advertisers purchase ads, and we have expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by industry standards and policies, which include topics that may be susceptible to disinformation. It also enables advertisers to:
- Selectively exclude videos that do not align with their brand safety requirements from appearing next to their ads, through TikTok's Video Exclusion List solution.
- Exclude specific profile pages from serving their Profile Feed ads, through TikTok's Profile Feed Exclusion List.
- TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks with all levels of risk (available in France and Germany). Some misinformation content may be captured and filtered out by these industry standard categories, such as “Sensitive Social Issues”.
- Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with industry standards.
- IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post-campaign measurement is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with industry standards.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
- TikTok Inventory Filter: This is our proprietary system, which enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 countries in the EEA and is embedded directly in TikTok Ads Manager, the main system through which advertisers purchase ads, and we have expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by industry standards and policies, which include topics that may be susceptible to disinformation. It also enables advertisers to:
- Selectively exclude videos that do not align with their brand safety requirements from appearing next to their ads, through TikTok's Video Exclusion List solution.
- Exclude specific profile pages from serving their Profile Feed ads, through TikTok's Profile Feed Exclusion List.
- TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks with all levels of risk (available in France and Germany). Some misinformation content may be captured and filtered out by these industry standard categories, such as “Sensitive Social Issues”.
- Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with industry standards.
- IAS: Advertisers can measure brand safety, viewability, and invalid traffic on the platform with the IAS Signal platform (post-campaign measurement is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with industry standards.
- DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand its suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 EU countries.
Measure 1.4
Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.
QRE 1.4.1
Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities, inclusive of media platforms, ad networks, and ad verification companies, will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- first, confirm the accuracy of first-party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA; and
- second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns, to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Health Misinformation
- Environment/Climate Misinformation
- Public Safety & Trust Misinformation
- Election Misinformation
- Other Misinformation
- Medical Misinformation
- Dangerous Misinformation
- Synthetic and Manipulated Media
- Dangerous Conspiracy Theories
- Climate Misinformation
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
In H2 2025, we iterated on our existing advertising policies for misinformation and launched more granular policies in the EEA (covering Health Misinformation, Environment/Climate Misinformation, Public Safety & Trust Misinformation, Election Misinformation, and Other Misinformation), with which advertisers need to comply. These policies provide clearer categorisation of misinformation types and build on the principles and enforcement experience of the five policies set out in the H1 2025 report, enabling more consistent and targeted enforcement in line with evolving risks.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
| Country | Number of ads removals under the granular misinformation ad policies |
|---|---|
| Austria | 133 |
| Belgium | 101 |
| Bulgaria | 16 |
| Croatia | 9 |
| Cyprus | 3 |
| Czech Republic | 22 |
| Denmark | 90 |
| Estonia | 10 |
| Finland | 22 |
| France | 138 |
| Germany | 656 |
| Greece | 17 |
| Hungary | 49 |
| Ireland | 176 |
| Italy | 102 |
| Latvia | 21 |
| Lithuania | 8 |
| Luxembourg | 1 |
| Malta | - |
| Netherlands | 46 |
| Poland | 77 |
| Portugal | 37 |
| Romania | 19 |
| Slovakia | 11 |
| Slovenia | 12 |
| Spain | 73 |
| Sweden | 195 |
| Iceland | 0 |
| Liechtenstein | - |
| Norway | 165 |
| Total EU | 2,044 |
| Total EEA | 2,209 |
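As a consistency check, the reported totals can be recomputed from the per-country figures; a minimal sketch (country names and counts transcribed from the table above, with "-" read as zero):

```python
# Per-country ad removals under the granular misinformation ad policies
# (EU Member States), as reported in SLI 2.1.1; "-" entries treated as 0.
eu_removals = {
    "Austria": 133, "Belgium": 101, "Bulgaria": 16, "Croatia": 9,
    "Cyprus": 3, "Czech Republic": 22, "Denmark": 90, "Estonia": 10,
    "Finland": 22, "France": 138, "Germany": 656, "Greece": 17,
    "Hungary": 49, "Ireland": 176, "Italy": 102, "Latvia": 21,
    "Lithuania": 8, "Luxembourg": 1, "Malta": 0, "Netherlands": 46,
    "Poland": 77, "Portugal": 37, "Romania": 19, "Slovakia": 11,
    "Slovenia": 12, "Spain": 73, "Sweden": 195,
}
# The EEA total additionally includes the three non-EU EEA countries.
eea_extra = {"Iceland": 0, "Liechtenstein": 0, "Norway": 165}

total_eu = sum(eu_removals.values())
total_eea = total_eu + sum(eea_extra.values())
print(total_eu, total_eea)  # 2044 2209, matching the reported totals
```

The same reconciliation holds for the impressions column in SLI 2.3.1 below.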
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements are reviewed against our Advertising Policies through a combination of automated and human moderation.
Our granular misinformation advertising policies launched in H2 2025 currently cover:
- Health Misinformation
- Environment/Climate Misinformation
- Public Safety & Trust Misinformation
- Election Misinformation
- Other Misinformation
We provide users in each EU Member State with a simple and intuitive way to report advertisements in-app for breaches of our Advertising Policies, including for misinformation. Users can report an advertisement:
- By ‘long-pressing’ (i.e., pressing and holding for about three seconds) on the advertisement and selecting the “Report” option.
- By selecting the “Share” button available on the right-hand side of the advertisement and then selecting the “Report” option.
The user is then shown categories of reporting reasons from which to select. In H2 2025, we updated this feature to create the specific “Misinformation” category and allow users to report with increased granularity.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
The majority of ads that violate our misinformation policies would have been removed under our existing policies, which cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Synthetic and Manipulated Media
- Climate Misinformation
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies through a combination of automated and human moderation.
Our granular misinformation advertising policies launched in H2 2025 currently cover:
- Health Misinformation
- Environment/Climate Misinformation
- Public Safety & Trust Misinformation
- Election Misinformation
- Other Misinformation
Users can report an advertisement in-app:
- By ‘long-pressing’ (i.e., pressing and holding for about three seconds) on the advertisement and selecting the “Report” option.
- By selecting the “Share” button available on the right-hand side of the advertisement and then selecting the “Report” option.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
The majority of ads that violate our misinformation policies would have been removed under our existing policies, which cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Synthetic and Manipulated Media
- Climate Misinformation
After an ad goes live on the platform, users can report any concerns using the “Report” button; the ad will then be reviewed again and appropriate action taken if necessary.
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
| Country | Number of ads removals under the granular misinformation ad policies | Number of impressions for ads removed under the granular misinformation ad policies |
|---|---|---|
| Austria | 133 | 14,139 |
| Belgium | 101 | 35,702 |
| Bulgaria | 16 | 1,245 |
| Croatia | 9 | 2,019 |
| Cyprus | 3 | 1,542 |
| Czech Republic | 22 | 16,572 |
| Denmark | 90 | 12,306 |
| Estonia | 10 | 620 |
| Finland | 22 | 11,521 |
| France | 138 | 36,867 |
| Germany | 656 | 402,684 |
| Greece | 17 | 32,304 |
| Hungary | 49 | 189,097 |
| Ireland | 176 | 44,960 |
| Italy | 102 | 65,589 |
| Latvia | 21 | 128,011 |
| Lithuania | 8 | 866 |
| Luxembourg | 1 | 4,632 |
| Malta | - | - |
| Netherlands | 46 | 1,282 |
| Poland | 77 | 57,588 |
| Portugal | 37 | 40,976 |
| Romania | 19 | 2,599 |
| Slovakia | 11 | 588 |
| Slovenia | 12 | 872 |
| Spain | 73 | 7,958 |
| Sweden | 195 | 20,877 |
| Iceland | 0 | 0 |
| Liechtenstein | - | - |
| Norway | 165 | 23,045 |
| Total EU | 2,044 | 1,133,416 |
| Total EEA | 2,209 | 1,156,461 |
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
| Country | Number of appeals for ads removed under the granular misinformation ad policies | Number of overturns of appeals under the granular misinformation ad policies |
|---|---|---|
| Austria | 0 | 0 |
| Belgium | 0 | 0 |
| Bulgaria | 0 | 0 |
| Croatia | 0 | 0 |
| Cyprus | 0 | 0 |
| Czech Republic | 0 | 0 |
| Denmark | 0 | 0 |
| Estonia | 0 | 0 |
| Finland | 0 | 0 |
| France | 0 | 0 |
| Germany | 0 | 0 |
| Greece | 0 | 0 |
| Hungary | 0 | 0 |
| Ireland | 0 | 0 |
| Italy | 0 | 0 |
| Latvia | 0 | 0 |
| Lithuania | 0 | 0 |
| Luxembourg | 0 | 0 |
| Malta | 0 | 0 |
| Netherlands | 0 | 0 |
| Poland | 0 | 0 |
| Portugal | 0 | 0 |
| Romania | 0 | 0 |
| Slovakia | 0 | 0 |
| Slovenia | 0 | 0 |
| Spain | 0 | 0 |
| Sweden | 0 | 0 |
| Iceland | 0 | 0 |
| Liechtenstein | 0 | 0 |
| Norway | 0 | 0 |
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1 Measure 3.2 Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We have established and strengthened our partnership with third-party fact-checkers to detect harmful misinformation on our platform.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
- The optimisation of our collaboration framework with third-party fact-checking organisations in relation to advertising (e.g. Science Feedback); and
- Continuing to enhance detection within the advertising ecosystem through signal-sharing to improve our internal databases.
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
As set out later in this report, we cooperate with a number of third parties to facilitate the flow of information that may be relevant for tackling purveyors of harmful misinformation. This information is shared internally to help ensure consistency of approach across our platform.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chaired. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation, and we collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
We work with industry partners, in appropriate fora, to discuss common standards and definitions that support consistency in categorising content, adjacency, and measurement-related topics. We work closely with IAB Sweden and other organisations, such as TAG, in the EEA and globally. We also sit on the board of the Brand Safety Institute.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
We continue to work closely with IAB Sweden and other organisations such as TAG in the EEA and globally.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Political Advertising
Commitment 4
Relevant Signatories commit to adopt a common definition of "political and issue advertising".
We signed up to the following measures of this commitment
Measure 4.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 4.1
Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- the creation and use of fake accounts, account takeovers and bot-driven amplification;
- hack-and-leak operations;
- impersonation;
- malicious deep fakes;
- the purchase of fake engagements;
- non-transparent paid messages or promotion by influencers;
- the creation and use of accounts that participate in coordinated inauthentic behaviour; and
- user conduct aimed at artificially amplifying the reach or perceived public support for disinformation.
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Our 2025 Community Guidelines update was launched on August 14, 2025 and went live on September 13, 2025 (due to the 30-day notice period for users). This update ensured that the Community Guidelines remain aligned with our internal policies.
- Our Harmful Misinformation policies are referenced under the hack and leak section. They have all been refined in H2 2025, and they continue to drive our work in combating harmful misinformation, such as conspiracy theories, claims relating to unfolding events, and other forms of dangerous misinformation.
- We continue enforcing our AIGC policy against TikTok Shop content.
- We launched the Evasive Techniques policy, which combats methods designed to evade moderation systems.
- We continued to join industry partners as a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
Please note: Some TTPs cannot be viewed on disinfocode.eu. Please download our full report, which can be found at the top of the page, for complete information on our work relevant to Commitment 14.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
Our Integrity and Authenticity policies in our Community Guidelines, which safeguard against harmful misinformation (see QRE 18.2.1), also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:
TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible:
- Operating large networks of accounts controlled by a single entity, or through automation;
- Bulk distribution of a high volume of spam; and
- Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.
We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.
- Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
- Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform
Use of fake / inauthentic reactions (e.g. likes, up-votes, comments) and use of fake followers or subscribers. Our policies prohibit content that attempts to:
- facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
- provide instructions on how to artificially increase engagement on TikTok.
When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and to develop appropriate product and policy solutions over the long term. We also continue to proactively identify and remove CIO networks that pose risks to user safety. We have published details of the CIO networks we identified and removed in H2 2025 in a dedicated monthly report within our Transparency Centre here. For advertising-related CIO measures, please refer to Chapter 2.
Several of our policies address the use of hacked or leaked materials:
- Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
- Our CIO policy, which addresses the use of leaked documents to sway public opinion as part of a wider operation.
- Our Edited Media and AI-Generated Content (AIGC) policy, which captures materials that have been digitally altered without an appropriate disclosure.
- In H2 2025, we deployed a defined suite of misinformation policies. As stated in our Community Guidelines, we do not allow misinformation that could cause significant harm to individuals or society, regardless of the intent of the person posting it. This includes hoaxes, misleading AIGC, harmful conspiracy theories, and other false information related to public safety, crises, or major civic events, where such content may lead to violence or cause public panic. In addition, content is ineligible for the For You feed (FYF) if it contains misinformation that may cause moderate harm to individuals or society. Out of caution, unverified information about crises or major civic events, as well as content temporarily under review by fact-checkers, is also ineligible for the FYF.
Examples of the misinformation we do not allow include:
- Misinformation that poses a risk to public safety or incites panic, including falsely presenting past crisis events as recent or claiming that critical resources are unavailable during emergencies.
- Health misinformation that could cause significant harm, such as promoting unproven treatments that may be fatal, discouraging professional care for life-threatening conditions (e.g., vaccine effectiveness), or spreading false information about how such conditions are transmitted.
- Misinformation that denies the existence of climate change, misrepresents its causes, or contradicts its established environmental impact.
- Conspiracy theories or hoaxes that could cause significant harm, such as those that make a violent call to action or have links to previous violence.
For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes or use a particular artistic style, such as a painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
In accordance with our policy, we prohibit AIGC that features:
- The likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying, or privacy concerns, including those related to personally identifiable information or likeness to private individuals.
- Misleading AIGC or edited media that falsely show:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
Non-transparent compensated messages or promotions by influencers
Our Terms of Service require users who post about a brand or product in return for any payment or other incentive to disclose this by enabling the Commercial Disclosure Toggle, which we make available to all users. We also provide functionality for users to report suspected undisclosed branded content; such a report reminds the posting user of our requirements and prompts them to turn the Commercial Disclosure Toggle on if required. We made this requirement even more explicit in the Commercial Disclosure and Paid Marketing section of our Community Guidelines, which was updated in H2 2025 to provide greater clarity.
At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation and deceptive behaviours on our Platform before they are reported to us by users or third parties.
At the account level, our automated detection measures help to:
- prevent inauthentic accounts from being created, based on malicious patterns; and
- remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).
When assessing whether accounts form part of a covert influence operation, we look for evidence that:
- They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
- They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
- They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible:
- Operating large networks of accounts controlled by a single entity, or through automation;
- Bulk distribution of a high volume of spam; and
- Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.
Our impersonation policies also prohibit:
- Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
- Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.
Use of fake / inauthentic reactions (e.g., likes, upvotes, comments) and use of fake followers or subscribers. Our policies prohibit attempts to:
- facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
- provide instructions on how to artificially increase engagement on TikTok.
We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.
When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and to develop appropriate product and policy solutions over the long term. We have published details of all the CIO networks we identified and removed in H2 2025 in a dedicated monthly report within our Transparency Centre here.
Several of our policies address the use of hacked or leaked materials:
- Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
- Our CIO policy, which addresses the use of leaked documents to sway public opinion as part of a wider operation.
- Our Edited Media and AI-Generated Content (AIGC) policy, which captures materials that have been digitally altered without an appropriate disclosure.
- Our harmful misinformation policies, which combat conspiracy theories related to unfolding events and dangerous misinformation.
- Our Trade of Regulated Goods and Services policy, which prohibits the trading of hacked goods.
Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)
Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.
For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes or use a particular artistic style, such as a painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
In accordance with our policy, we prohibit AIGC that features:
- The likeness of young people or realistic-appearing people under the age of 18.
- The likeness of adult private figures, if we become aware that it was used without their permission.
- Misleading AIGC or edited media that falsely show:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.
Our Terms of Service and Branded Content Policy require users who post about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle, which we make available to all users. We also provide functionality for users to report suspected undisclosed branded content; such a report reminds the posting user of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 Community Guidelines refresh, expanding the information about how we enforce this policy and providing specific examples.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
- prevent inauthentic accounts from being created based on malicious patterns; and
- remove registered accounts based on certain signals (i.e., uncommon behaviour on the platform).
We have also set up specially trained teams focused on investigating and detecting CIO on our platform. We have built international Trust & Safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.
- They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
- They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
- They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
The implementation of our policies is ensured by different means, including specifically designed tools (such as toggles to disclose branded content; see QRE 14.1.1) and human investigations to detect deceptive behaviours (for CIO activities; see QRE 14.1.2).
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
Full metrics from this QRE (and QREs 14.2.2 and 14.2.4) can be found in our full report, linked at the top of this page.
SLI 14.2.4
Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.
Impersonation accounts as a percentage of monthly active users (EU): 0.005%
| Country | Number of unique videos labelled with the "AI-generated" AIGC tag |
|---|---|
| Austria | 292,401 |
| Belgium | 461,238 |
| Bulgaria | 832,573 |
| Croatia | 94,124 |
| Cyprus | 100,228 |
| Czech Republic | 455,194 |
| Denmark | 114,169 |
| Estonia | 63,904 |
| Finland | 178,869 |
| France | 2,665,168 |
| Germany | 3,887,735 |
| Greece | 445,046 |
| Hungary | 647,102 |
| Ireland | 131,936 |
| Italy | 2,778,340 |
| Latvia | 150,166 |
| Lithuania | 169,243 |
| Luxembourg | 26,034 |
| Malta | 26,024 |
| Netherlands | 884,486 |
| Poland | 1,455,778 |
| Portugal | 697,724 |
| Romania | 1,826,466 |
| Slovakia | 281,735 |
| Slovenia | 35,187 |
| Spain | 3,289,279 |
| Sweden | 336,203 |
| Iceland | 11,757 |
| Liechtenstein | 530 |
| Norway | 154,897 |
| Total EU | 22,326,352 |
| Total EEA | 22,493,536 |
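As a consistency check, the per-country figures above can be summed to confirm the reported totals (a minimal sketch; the values are copied from the table, and the EEA total is assumed to be the EU total plus Iceland, Liechtenstein, and Norway):

```python
# Unique videos labelled "AI-generated", copied from the SLI 14.2.4 table.
eu = {
    "Austria": 292_401, "Belgium": 461_238, "Bulgaria": 832_573,
    "Croatia": 94_124, "Cyprus": 100_228, "Czech Republic": 455_194,
    "Denmark": 114_169, "Estonia": 63_904, "Finland": 178_869,
    "France": 2_665_168, "Germany": 3_887_735, "Greece": 445_046,
    "Hungary": 647_102, "Ireland": 131_936, "Italy": 2_778_340,
    "Latvia": 150_166, "Lithuania": 169_243, "Luxembourg": 26_034,
    "Malta": 26_024, "Netherlands": 884_486, "Poland": 1_455_778,
    "Portugal": 697_724, "Romania": 1_826_466, "Slovakia": 281_735,
    "Slovenia": 35_187, "Spain": 3_289_279, "Sweden": 336_203,
}
# Non-EU EEA countries reported separately in the table.
efta = {"Iceland": 11_757, "Liechtenstein": 530, "Norway": 154_897}

total_eu = sum(eu.values())
total_eea = total_eu + sum(efta.values())
print(total_eu)   # 22326352, matching the "Total EU" row
print(total_eea)  # 22493536, matching the "Total EEA" row
```

Both totals reproduce the table's "Total EU" and "Total EEA" rows exactly.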
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We provided extensive training for moderators and risk containment agents to help them better detect and remove deceptive AIGC more quickly. We also conducted a thorough assessment of the effectiveness of our AI policies and provided guidance to reduce systemic error.
- We published our Responsible AI Principles.
- We continued to participate with industry partners in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
In accordance with our policies, we prohibit:
- AIGC that shows the likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying, or privacy concerns, including those related to personally identifiable information or likeness to private individuals.
- AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
In accordance with our policies, we prohibit:
- AIGC that shows the likeness of young people or realistic-appearing people under the age of 18 that poses a risk of sexualisation, bullying, or privacy concerns, including those related to personally identifiable information or likeness to private individuals.
- AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour.
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
- being politically endorsed or condemned by an individual or group.
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:
- We have in place internal guidelines on Algorithmic Fairness, developed in adherence to our commitment to human rights as outlined here: https://www.tiktok.com/transparency/en/upholding-human-rights
- We have continued to scale our algorithmic fairness compliance review process for new or updated AI systems that meet certain risk-based thresholds.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
- We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
- We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
Central to our strategy for identifying and removing CIO on our platform is working with our stakeholders, including civil society, and drawing on user reports. This approach helps us, and others, to disrupt a network's operations in its early stages. In addition to continuously enhancing our in-house capabilities, we proactively review our peers' publicly disclosed findings and swiftly take any necessary action in alignment with our policies.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
Measure 16.2
Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.
We publish details of the CIO networks we identify and remove within our transparency reports here. As new deceptive behaviours emerge, we’ll continue to evolve our response, strengthen enforcement capabilities, and publish our findings.
QRE 16.2.1
As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We ran 8 temporary media literacy election integrity campaigns in advance of elections in the region, most in collaboration with our fact-checking and media literacy partners:
- 7 in the EU
- Czechia (Parliamentary election): Demagog.cz
- Portugal (local election): Poligrafo
- Estonia (local election): Lead Stories
- Ireland (presidential election): The Journal
- Netherlands (parliamentary election)
- Denmark (local and municipal election): Sikker Digital
- Portugal (presidential election): Polígrafo
- 1 in Norway (parliamentary election)
- Following wildfires in Portugal and Spain, we launched an in-app guide to provide users with guidance on interacting with sensitive content during natural disasters. The guide links to TikTok's tragic event support guide and to authoritative third-party resources (PT) (ES) with information about aid and relief support. The intervention is available in all in-app languages.
- Following protests in France, we launched an in-app guide to provide users with guidance on interacting with sensitive content when events are unfolding rapidly. The guide links to TikTok's Community Guidelines and Well-being Guide.
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around elections, the Israel-Hamas Conflict, Holocaust Education, and the War in Ukraine.
- Continued to support mental well-being awareness and literacy and to combat misinformation with reliable content through the WHO's Fides network, a diverse community of trusted healthcare professionals and content creators in a number of countries, including France.
- We launched a $2 Million AI Literacy fund in partnership with more than 20 civil society organisations across 12 markets worldwide. The ad credit fund is designed to support the creation of educational content that will appear in For You feeds. This initiative launched alongside several new company updates designed to help people spot, shape, and understand AI-generated content.
- Brought greater transparency about our systems and our integrity and authenticity efforts to our community by sharing regular insights and updates. In H2 2025, we launched a new:
- Transparency Center Global Elections Hub, including dedicated coverage of elections across Europe, the Middle East, and Africa. The Hub outlines our policies, product features, and moderation practices that help protect platform integrity during elections. Throughout this reporting period, we regularly updated the Hub with information on our safety efforts in markets with active elections, including Croatia, Germany, Netherlands, Portugal, Poland and Ireland.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
In addition to actioning content that violates our Integrity and Authenticity policies, we continue to dedicate resources to: expanding our in-app measures that show users additional context on certain content (e.g., natural disasters and rapidly unfolding events); redirecting them to authoritative information; and making these tools available in 22 EU official languages (plus, for EEA users, Norwegian and Icelandic).
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
Search intervention.
If users search for terms associated with a topic, they will be presented with a banner encouraging them to verify the facts and providing a link to a trusted source of information. Search interventions are not deployed for search terms that violate our Community Guidelines, which are actioned according to our policies.
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
- Czechia Parliamentary Elections 2025: From 4 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Czech parliamentary election. The centre contained a section about spotting misinformation.
- Portugal Local Elections 2025: From 16 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Portugal local elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Polígrafo.
- Estonia Local Elections 2025: From 24 Sept 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the 2025 Estonian local elections. The page contained a section about following our Community Guidelines, with a link to our Estonian fact-checking partner, Lead Stories, for digital literacy resources.
- Ireland Presidential Election 2025: From 24 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Irish presidential elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.
- Netherlands Parliamentary Election 2025: From 29 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Dutch parliamentary elections. The centre contained a section about spotting misinformation.
- Danish Local and Municipal Elections 2025: From 24 Oct 2025, we launched an in-app Search Guide and Details Page to provide users with up-to-date information about the Danish local and municipal elections. The page contained a section about following our Community Guidelines, with a link to Sikker Digital for digital literacy resources.
- Portugal Presidential Election 2026: From 9 Dec 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2026 Portuguese presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Polígrafo.
- Norway Parliamentary Elections 2025: From 8 Aug 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Norwegian parliamentary election. The centre contained a section about spotting misinformation.
(II) Media literacy (General). We continue our ongoing general media literacy and critical thinking skills campaigns in the EU in collaboration with our fact-checking and media literacy partners, spanning 14 countries (Denmark, Finland, France, Georgia, Germany, Ireland, Italy, Romania, Spain, Sweden, Moldova, Netherlands, Poland, and Portugal).
- Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
- Partnered with fakenews.pl: Poland.
- Partnered with Correctiv: Germany, Austria.
SLI 17.2.1
Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.
We are pleased to report metrics on the 14 general media literacy and critical thinking skills campaigns that ran through the reporting period in Germany, Romania, Poland, Denmark, Finland, France, Georgia, Ireland, Italy, Moldova, Portugal, Spain, Sweden, and the Netherlands.
| Country | Total number of impressions of the H5 Page (Views generated between July 1 and December 31, 2025) | Number of impressions of the search intervention | Number of clicks on the search intervention | Click through rate of the search intervention |
|---|---|---|---|---|
| France (in partnership with AFP) | 48,144 | 26,260,992 | 71,577 | 0.27% |
| Portugal (in partnership with Polígrafo) | 10,369 | 5,426,533 | 22,811 | 0.42% |
| Denmark (in partnership with Logically Facts) | 4,098 | 202,542 | 881 | 0.43% |
| The Netherlands (in partnership with Nieuwscheckers) | 34,937 | 2,739,245 | 41,868 | 1.53% |
| Ireland (in partnership with The Journal) | 905 | 359,461 | 1,883 | 0.52% |
| Finland (in partnership with Logically Facts) | 1,543 | 186,559 | 2,994 | 1.60% |
| Sweden (in partnership with Logically Facts) | 2,342 | 413,554 | 4,115 | 1.00% |
| Spain (in partnership with Maldita) | 21,922 | 17,986,294 | 42,554 | 0.24% |
| Italy (in partnership with Facta) | 1,433 | 439,721 | 2,290 | 0.52% |
| Austria (in partnership with Correctiv, joint campaign with Germany) | 4,607 | 1,535,546 | 7,965 | 0.52% |
| Germany (in partnership with Correctiv, joint campaign with Austria) | 7,790 | 536,473 | 2,533 | 0.47% |
| Poland | 10,369 | 9,183,221 | 54,480 | 0.59% |
| Bulgaria | 1,137 | 297,690 | 1,905 | 0.64% |
| Croatia | 1,256 | 397,876 | 2,240 | 0.56% |
| Czechia | 2,270 | 962,911 | 3,190 | 0.33% |
| Slovenia | 535 | 129,253 | 801 | 0.62% |
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
- Portugal (local election): Polígrafo
- Estonia (local election): Lead Stories
- Ireland (presidential election): The Journal
- Denmark (local and municipal elections): Sikker Digital
- Portugal (presidential election): Polígrafo
- Czechia (parliamentary election): Demagog.cz
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models.
- Began testing large language models (LLMs) to further support proactive moderation at scale. Because LLMs can comprehend human language and perform highly specific, complex tasks, we are better able to moderate nuanced areas like misinformation by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.
- TikTok teams and personnel also regularly participate in research-focused events. In October 2025, TikTok co-sponsored the EU DisinfoLab conference in Slovenia. Several TikTok staff attended, and we co-led a session with the Centre for Humanitarian Dialogue on how platforms and conflict mediators can work together to reduce the risks of violence during conflicts.
- Continued to participate in, and co-chair, the working group on Elections.
- TikTok gathered its global Safety Advisory Councils in Singapore in October 2025 to consult them on a variety of topics including our approach to media literacy.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
- Misinformation
- Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
- Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
- Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
- Conspiracy theories that name and attack individual people.
- Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.
- Civic and Election Integrity
- Election misinformation, including:
- How, when, and where to vote or register to vote;
- Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
- Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
- Final results or outcome of an election.
- Edited Media and AI-Generated Content (AIGC)
- The likeness of young people or realistic-appearing people under the age of 18.
- The likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour;
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
- being politically endorsed or condemned by an individual or group.
- Fake Engagement
- Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
- Providing instructions on how to artificially increase engagement on TikTok.
- Misinformation
- Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
- Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
- Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
- Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
- Unverified claims related to an emergency or unfolding event.
- Potential high-harm misinformation while it is undergoing a fact-checking review.
- Civic and Election Integrity
- Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
- Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.
- Fake Engagement
- Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.
We strive to maintain a balance between freedom of expression and protecting our users and the wider public from harmful content. Our approach to combating harmful misinformation, as stated in our Community Guidelines, is to remove content that is both false and can cause harm to individuals or the wider public. This does not extend to content that is merely inaccurate and poses no risk of harm. Additionally, in cases where fact-checks are inconclusive, especially during emergency or unfolding events, content may not be removed; instead, it may become ineligible for recommendation in the For You feed and be labelled with the “unverified content” label to limit the spread of potentially misleading information.
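The remove-versus-label logic described above can be sketched as a simple decision function. This is purely illustrative: the function, signal names, and structure are our own shorthand for the stated policy, not TikTok's internal implementation.

```python
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()
    LABEL_UNVERIFIED = auto()  # ineligible for the For You feed + "unverified content" banner
    NO_ACTION = auto()

def moderation_action(is_false: bool, can_cause_harm: bool,
                      fact_check_inconclusive: bool) -> Action:
    """Toy sketch of the policy: remove only content that is both false
    and harmful; label and demote content whose fact-check is inconclusive."""
    if fact_check_inconclusive:
        # E.g. during emergencies or unfolding events: demote rather than remove.
        return Action.LABEL_UNVERIFIED
    if is_false and can_cause_harm:
        return Action.REMOVE
    # Merely inaccurate content that poses no risk of harm is not removed.
    return Action.NO_ACTION
```

For example, false but harmless content maps to `NO_ACTION`, while an unverifiable claim during a crisis maps to `LABEL_UNVERIFIED`.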
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
The number of views of videos removed because of violation of each of these policies is based on the approximate location of the user.
We also updated the methodology for counting the number of videos made ineligible for the For You feed under our Misinformation policy.
| Country | Number of videos removed because of violation of Misinformation policy | Number of views of videos removed because of violation of Misinformation policy | Number of videos made ineligible for the For You feed under the Misinformation policy | Number of videos removed because of violation of Civic and Election Integrity policy | Number of views of videos removed because of violation of Civic and Election Integrity policy | Number of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy | Number of views of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy |
|---|---|---|---|---|---|---|---|
| Austria | 2,612 | 1,946,472 | 2,871 | 511 | 219,339 | 1,564 | 2,121,335 |
| Belgium | 4,150 | 8,424,034 | 3,069 | 864 | 292,729 | 2,899 | 33,524,248 |
| Bulgaria | 4,828 | 3,601,953 | 9,427 | 402 | 58,515 | 2,181 | 1,380,041 |
| Croatia | 638 | 984,109 | 793 | 63 | 49 | 1,190 | 857,576 |
| Cyprus | 701 | 825,228 | 1,060 | 85 | 9 | 1,214 | 877,703 |
| Czech Republic | 2,855 | 846,267 | 5,263 | 338 | 50,350 | 1,551 | 166,156 |
| Denmark | 2,484 | 1,938,348 | 2,085 | 512 | 20,319 | 1,920 | 2,522,414 |
| Estonia | 527 | 9,792 | 865 | 45 | 60,189 | 1,571 | 202,878 |
| Finland | 1,357 | 8,695,926 | 1,752 | 268 | 162,976 | 921 | 417,085 |
| France | 37,466 | 94,473,247 | 60,520 | 3,650 | 6,727,613 | 28,565 | 145,692,240 |
| Germany | 42,642 | 179,399,985 | 47,221 | 5,287 | 926,431 | 50,378 | 113,670,298 |
| Greece | 4,602 | 1,556,421 | 8,200 | 960 | 48,866 | 2,284 | 929,995 |
| Hungary | 1,490 | 1,328,847 | 2,489 | 876 | 13,680 | 990 | 2,318,366 |
| Ireland | 2,613 | 885,690 | 3,489 | 413 | 8,482 | 1,722 | 1,125,060 |
| Italy | 18,667 | 40,083,897 | 36,707 | 2,726 | 723,486 | 15,434 | 93,101,464 |
| Latvia | 705 | 448,180 | 1,107 | 301 | 318 | 1,519 | 2,530 |
| Lithuania | 1,086 | 59,207 | 1,257 | 61 | 1,190 | 1,727 | 8,952,387 |
| Luxembourg | 349 | 14,382 | 305 | 48 | 10 | 1,620 | 164,719 |
| Malta | 159 | 876,245 | 283 | 26 | 0 | 382 | 2,127 |
| Netherlands | 14,335 | 15,235,784 | 13,311 | 907 | 2,340,996 | 7,974 | 28,839,022 |
| Poland | 14,770 | 22,162,809 | 15,480 | 1,038 | 399,418 | 7,227 | 14,249,635 |
| Portugal | 3,141 | 2,107,021 | 2,561 | 270 | 38,447 | 1,659 | 6,862,825 |
| Romania | 28,743 | 45,185,198 | 32,030 | 4,622 | 5,017,440 | 10,458 | 6,624,761 |
| Slovakia | 1,122 | 858,265 | 1,861 | 64 | 639 | 1,589 | 346,191 |
| Slovenia | 370 | 26,882 | 589 | 34 | 7 | 844 | 3,070,784 |
| Spain | 21,592 | 25,310,043 | 38,779 | 1,483 | 162,917 | 16,129 | 40,748,793 |
| Sweden | 4,159 | 2,192,170 | 4,633 | 761 | 455 | 5,168 | 4,607,858 |
| Iceland | 123 | 6,208 | 188 | 19 | 0 | 175 | 414 |
| Liechtenstein | 143 | 24,362 | 76 | 4 | 0 | 484 | 0 |
| Norway | 1,662 | 2,467,190 | 1,681 | 290 | 1,582 | 1,282 | 2,920,599 |
| Total EU | 218,163 | 459,476,402 | 298,007 | 26,615 | 17,274,870 | 170,680 | 513,378,491 |
| Total EEA | 220,091 | 461,974,162 | 299,952 | 26,928 | 17,276,452 | 172,621 | 516,299,504 |
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- At TikTok, we strive to bring transparency to how we protect our platform. We continue to increase the reports we voluntarily publish, the depth of data we disclose, and the frequency with which we publish.
- We also worked to make it easier for people to independently study our data and platform. For example through:
- Our Research Tools, which empower over 900 research teams to independently study our platform.
- Added functionality to the Research API, including a compliance API (launched in June 2025) that improves the data-refresh process for researchers, helping to ensure that efforts to comply with our Terms of Service (ToS) do not impede researchers' ability to access data efficiently from TikTok's Research API.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
- User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts of followers that users follow back);
- Content information (e.g. sounds, hashtags, number of views, and the country the content was published); and
- User information (e.g. device settings, language preferences, location, time zone and day, and device types).
- Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
- Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here).
- Users are able to refresh their For You feed if they no longer feel like recommendations are relevant to them or are too similar. When the For You feed is refreshed, users view a number of new videos which include popular videos (e.g., they have a high view count or a high like rate). Their interaction with these new videos will inform future recommendations.
- Users can also personalise their "For You" page through our new Manage Topics feature (June 2025). This allows users to adjust the frequency of content they see related to particular topics. The settings don't eliminate topics entirely but can influence how often they're recommended as people's interests evolve over time. It adds to the many ways people shape their feed every day, including liking or sharing videos, searching for topics, or simply watching videos for longer.
- As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to personalised recommender systems. Users can turn off personalisation so that feeds show non-personalised content; the For You feed will instead show popular videos from their region and internationally. See here.
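As a rough illustration, the three signal groups listed above (user interactions, content information, and user information) can be thought of as inputs to a weighted ranking score. The sketch below is a toy model: all signal names and weights are invented for illustration and do not represent TikTok's actual recommender system.

```python
# Toy ranking sketch. Every signal name and weight here is hypothetical,
# chosen only to show how interaction, content, and user signals could
# combine into a single recommendation score.
def recommendation_score(video: dict, user: dict) -> float:
    w_interaction, w_content, w_user = 0.6, 0.3, 0.1  # invented weights

    # Interaction signals: positive engagement minus explicit negative
    # feedback such as "not interested" on similar content.
    interaction = (user["liked_similar"] + user["watched_full_similar"]
                   - user["not_interested_similar"])

    # Content signals: e.g. a normalised view count and hashtag overlap.
    content = video["view_count_norm"] + video["hashtag_overlap"]

    # User/device signals: e.g. language match between video and settings.
    user_ctx = 1.0 if video["language"] == user["language"] else 0.5

    return w_interaction * interaction + w_content * content + w_user * user_ctx
```

In this sketch, a "not interested" signal lowers the score, mirroring how the controls described above reduce the prominence of similar content in recommendations.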
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
The number of videos tagged with the AIGC label includes both automatic labelling and creator-applied labels.
| Country | Number of users that filtered hashtags or words | Number of users that clicked on "not interested" | Number of times users clicked on the For You feed refresh | Number of videos tagged with the AIGC label |
|---|---|---|---|---|
| Austria | 81,940 | 1,054,601 | 56,468 | 494,206 |
| Belgium | 123,591 | 1,636,355 | 90,963 | 678,792 |
| Bulgaria | 59,087 | 1,077,505 | 45,808 | 1,044,321 |
| Croatia | 30,698 | 564,614 | 25,259 | 131,396 |
| Cyprus | 17,932 | 224,295 | 15,464 | 181,397 |
| Czech Republic | 63,982 | 849,428 | 41,602 | 538,710 |
| Denmark | 49,192 | 637,478 | 31,304 | 191,270 |
| Estonia | 18,895 | 182,527 | 13,036 | 83,768 |
| Finland | 67,907 | 680,821 | 49,088 | 275,526 |
| France | 680,454 | 9,545,884 | 490,210 | 4,724,226 |
| Germany | 813,569 | 9,455,189 | 584,609 | 6,351,939 |
| Greece | 89,043 | 1,629,313 | 81,664 | 715,077 |
| Hungary | 63,484 | 1,179,383 | 35,106 | 776,347 |
| Ireland | 83,988 | 1,014,346 | 60,748 | 181,131 |
| Italy | 429,072 | 7,991,598 | 277,864 | 3,843,482 |
| Latvia | 28,449 | 361,290 | 22,190 | 198,043 |
| Lithuania | 34,507 | 402,385 | 26,669 | 220,796 |
| Luxembourg | 7,038 | 97,071 | 5,142 | 48,734 |
| Malta | 6,444 | 98,531 | 7,840 | 41,910 |
| Netherlands | 267,432 | 2,926,699 | 204,997 | 1,387,922 |
| Poland | 285,891 | 4,192,550 | 179,424 | 1,772,881 |
| Portugal | 96,468 | 1,293,884 | 64,318 | 903,172 |
| Romania | 155,037 | 3,379,837 | 196,717 | 2,213,505 |
| Slovakia | 28,269 | 399,464 | 14,507 | 311,951 |
| Slovenia | 14,149 | 204,523 | 11,366 | 52,786 |
| Spain | 502,835 | 8,571,737 | 433,076 | 4,547,581 |
| Sweden | 118,791 | 1,562,220 | 105,302 | 560,744 |
| Iceland | 6,537 | 65,577 | 3,173 | 17,098 |
| Liechtenstein | 229 | 3,947 | 364 | 1,164 |
| Norway | 74,159 | 820,642 | 47,990 | 246,312 |
| Total EU | 4,218,144 | 61,213,528 | 3,170,741 | 32,471,613 |
| Total EEA | 4,299,069 | 62,103,694 | 3,222,268 | 32,736,187 |
Commitment 21
Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.
We signed up to the following measures of this commitment
Measure 21.1 Measure 21.2 Measure 21.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We ran 8 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 7 in the EU
- Czechia (parliamentary election) with Demagog.cz
- Portugal (local election) with Polígrafo
- Estonia (local election) with Lead Stories
- Ireland (presidential election) with The Journal
- Netherlands (parliamentary election) N/A
- Denmark (local and municipal election) with Sikker Digital
- Portugal (presidential election) with Polígrafo
- 1 in Norway (parliamentary election) N/A
- Following wildfires in Portugal and Spain, we launched an in-app guide to provide users with guidance on interacting with sensitive content during natural disasters. The guide links to TikTok's tragic event support guide and to authoritative third-party resources (PT)(ES) with information about aid and relief support. The intervention is available in all in-app languages.
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Holocaust Education, and the War in Ukraine.
- We partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
- We launched a $2 million AI Literacy fund in partnership with more than 20 civil society organisations across 12 markets worldwide. The ad credit fund is designed to support the creation of educational content that will appear in For You feeds. This initiative launched alongside several new company updates to help people spot, shape, and understand AI-generated content.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 21.1
Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.
QRE 21.1.1
Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.
- Agence France-Presse (AFP)
- dpa Deutsche Presse-Agentur
- Demagog
- Facta
- Fact Check Georgia
- Faktograf
- Internews Kosova
- Lead Stories
- Newtral
- Polígrafo
- Reuters
- Teyit
- Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, we have our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims and can provide content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.
- Unverified content labelling. As mentioned above, we partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility. In these circumstances, the content creator is also notified that their video was flagged as unsubstantiated content and the video will become ineligible for recommendation in the For You feed.
- In-app tools related to specific topics:
- Election integrity. We have launched campaigns in advance of several major elections aimed at educating the public about the voting process and encouraging users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of the French legislative elections in June 2024 included a search intervention and an in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 14 temporary media literacy election integrity campaigns in advance of regional elections.
- Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information.
- Natural disasters: We launched a new temporary in-app natural disaster media literacy search guide for Cyclone Garance in Réunion between 4 March and 4 April 2025, and continued our temporary search guide for the Mayotte cyclone until 14 Feb 2025. These search guides link to TikTok's Safety Center tragic events support guide and to authoritative third-party information about aid and relief support.
- User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners.
SLI 21.1.1
Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.
The share of removals under our harmful misinformation policy, the share of proactive removals, the share of removals before any views, and the share of removals within 24 hours are each calculated relative to the total removals under the respective policy.
| Country | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) | Share of removals under misinformation policy | Share of proactive removals under misinformation policy | Share of video removals before any views under misinformation policy | Share of video removals within 24h under misinformation policy | Share of video removals under Civic and Election Integrity policy | Share of proactive video removals under Civic and Election Integrity policy | Share of video removals before any views under Civic and Election Integrity policy | Share of video removals within 24h under Civic and Election Integrity policy | Share of video removals under Edited Media and AI-Generated Content (AIGC) policy | Share of proactive video removals under Edited Media and AI-Generated Content (AIGC) policy | Share of video removals before any views under Edited Media and AI-Generated Content (AIGC) policy | Share of video removals within 24h under Edited Media and AI-Generated Content (AIGC) policy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | 30.37% | 28.57% | 98.77% | 85.60% | 91.54% | 5.59% | 99.02% | 94.91% | 97.06% | 17.10% | 97.51% | 88.55% | 86.96% |
| Belgium | 29.07% | 32.26% | 98.29% | 86.10% | 90.80% | 6.72% | 99.65% | 95.72% | 70.14% | 22.54% | 98.03% | 84.24% | 82.03% |
| Bulgaria | 35.32% | 52.31% | 99.52% | 78.27% | 92.65% | 4.36% | 99.00% | 95.52% | 91.04% | 23.63% | 99.08% | 87.35% | 88.77% |
| Croatia | 25.89% | 21.13% | 97.81% | 82.13% | 92.16% | 2.09% | 100.00% | 98.41% | 100.00% | 39.42% | 98.40% | 95.55% | 96.05% |
| Cyprus | 33.62% | 25.18% | 98.00% | 85.02% | 92.58% | 3.05% | 100.00% | 96.47% | 96.47% | 43.61% | 97.94% | 92.50% | 92.50% |
| Czech Republic | 31.01% | 41.67% | 99.37% | 80.18% | 94.92% | 4.93% | 98.52% | 92.60% | 95.56% | 22.64% | 99.03% | 92.07% | 93.55% |
| Denmark | 31.82% | 18.65% | 98.79% | 84.02% | 90.10% | 3.84% | 98.63% | 89.06% | 96.68% | 14.42% | 98.65% | 93.59% | 93.70% |
| Estonia | 30.88% | 7.88% | 99.24% | 85.01% | 93.93% | 0.67% | 97.78% | 82.22% | 91.11% | 23.48% | 98.79% | 97.90% | 97.90% |
| Finland | 29.58% | 31.31% | 97.20% | 82.76% | 91.75% | 6.18% | 97.39% | 90.67% | 92.91% | 21.25% | 97.39% | 91.10% | 93.05% |
| France | 30.16% | 31.65% | 97.72% | 80.63% | 88.60% | 3.08% | 98.82% | 94.33% | 97.86% | 24.13% | 96.36% | 91.40% | 91.17% |
| Germany | 29.86% | 29.76% | 96.93% | 83.42% | 90.34% | 3.69% | 98.85% | 95.80% | 98.20% | 35.15% | 97.74% | 92.71% | 92.71% |
| Greece | 30.64% | 35.82% | 99.17% | 81.49% | 95.59% | 7.47% | 99.90% | 95.52% | 99.38% | 17.78% | 98.77% | 84.15% | 86.73% |
| Hungary | 28.14% | 15.44% | 98.79% | 92.35% | 96.31% | 9.07% | 99.66% | 95.55% | 99.09% | 10.26% | 96.97% | 92.12% | 91.92% |
| Ireland | 33.69% | 28.97% | 98.97% | 89.48% | 93.19% | 4.58% | 98.06% | 94.67% | 97.34% | 19.09% | 97.97% | 91.29% | 90.24% |
| Italy | 32.14% | 36.23% | 92.03% | 84.00% | 91.51% | 5.29% | 76.27% | 91.53% | 96.92% | 29.96% | 82.05% | 89.87% | 89.16% |
| Latvia | 36.08% | 19.79% | 99.29% | 86.10% | 95.32% | 8.45% | 100.00% | 28.57% | 16.61% | 42.64% | 98.88% | 97.63% | 97.43% |
| Lithuania | 31.90% | 25.86% | 98.99% | 59.21% | 66.67% | 1.45% | 96.72% | 98.36% | 96.72% | 41.12% | 98.61% | 96.41% | 96.41% |
| Luxembourg | 32.63% | 2.81% | 98.85% | 86.82% | 90.54% | 0.39% | 100.00% | 95.83% | 100.00% | 13.06% | 99.57% | 98.46% | 98.64% |
| Malta | 28.83% | 13.64% | 95.60% | 89.31% | 91.19% | 2.23% | 100.00% | 100.00% | 100.00% | 32.76% | 97.91% | 96.34% | 95.81% |
| Netherlands | 29.12% | 42.73% | 98.74% | 81.47% | 82.21% | 2.70% | 96.47% | 82.25% | 84.12% | 23.77% | 97.35% | 88.90% | 88.15% |
| Poland | 30.73% | 37.40% | 98.31% | 76.58% | 91.66% | 2.63% | 99.33% | 89.31% | 94.03% | 18.30% | 97.45% | 93.23% | 94.52% |
| Portugal | 29.97% | 41.29% | 99.49% | 87.33% | 92.26% | 3.55% | 99.63% | 94.81% | 97.41% | 21.81% | 98.61% | 84.33% | 85.11% |
| Romania | 28.50% | 52.46% | 98.90% | 78.00% | 91.12% | 8.44% | 98.55% | 79.64% | 93.70% | 19.09% | 98.16% | 89.30% | 84.11% |
| Slovakia | 28.39% | 27.99% | 98.93% | 76.83% | 94.74% | 1.60% | 100.00% | 93.75% | 96.88% | 39.64% | 99.50% | 97.17% | 97.99% |
| Slovenia | 30.61% | 7.33% | 99.73% | 88.92% | 96.22% | 0.67% | 100.00% | 94.12% | 100.00% | 16.71% | 98.70% | 96.21% | 95.73% |
| Spain | 36.23% | 35.84% | 99.07% | 88.68% | 92.00% | 2.46% | 99.39% | 88.60% | 93.73% | 26.77% | 99.32% | 92.43% | 91.18% |
| Sweden | 29.51% | 25.98% | 98.92% | 88.05% | 94.45% | 4.75% | 99.61% | 98.29% | 99.21% | 32.28% | 99.28% | 93.73% | 93.77% |
| Iceland | 33.50% | 25.52% | 98.37% | 86.18% | 86.99% | 3.94% | 100.00% | 100.00% | 100.00% | 36.31% | 100.00% | 94.29% | 93.71% |
| Liechtenstein | 31.25% | 4.15% | 98.60% | 93.71% | 95.80% | 0.12% | 100.00% | 100.00% | 100.00% | 14.06% | 99.38% | 100.00% | 99.79% |
| Norway | 29.55% | 27.09% | 98.26% | 85.98% | 92.66% | 4.73% | 100.00% | 97.59% | 100.00% | 20.89% | 98.21% | 89.63% | 90.41% |
| Total EU | 31.26% | 33.30% | 97.70% | 82.24% | 90.35% | 4.06% | 96.56% | 90.25% | 94.34% | 26.06% | 96.42% | 91.67% | 91.18% |
| Total EEA | 31.23% | 33.09% | 97.70% | 82.28% | 90.37% | 4.05% | 96.60% | 90.33% | 94.40% | 25.95% | 96.44% | 91.68% | 91.21% |
SLI 21.1.2
When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.
The number of videos tagged with the unverified content label is based on the country in which the video was posted.
| Country | Number of videos tagged with the unverified content label | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) |
|---|---|---|
| Austria | 77 | 30.37% |
| Belgium | 178 | 29.07% |
| Bulgaria | 198 | 35.32% |
| Croatia | 5 | 25.89% |
| Cyprus | 17 | 33.62% |
| Czech Republic | 314 | 31.01% |
| Denmark | 298 | 31.82% |
| Estonia | 34 | 30.88% |
| Finland | 57 | 29.58% |
| France | 2,397 | 30.16% |
| Germany | 1,945 | 29.86% |
| Greece | 264 | 30.64% |
| Hungary | 33 | 28.14% |
| Ireland | 39 | 33.69% |
| Italy | 943 | 32.14% |
| Latvia | 2 | 36.08% |
| Lithuania | 12 | 31.90% |
| Luxembourg | 6 | 32.63% |
| Malta | 0 | 28.83% |
| Netherlands | 325 | 29.12% |
| Poland | 425 | 30.73% |
| Portugal | 138 | 29.97% |
| Romania | 597 | 28.50% |
| Slovakia | 133 | 28.39% |
| Slovenia | 6 | 30.61% |
| Spain | 549 | 36.23% |
| Sweden | 123 | 29.51% |
| Iceland | 0 | 33.50% |
| Liechtenstein | 0 | 31.25% |
| Norway | 79 | 29.55% |
| Total EU | 9,115 | 31.26% |
| Total EEA | 9,194 | 31.23% |
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In line with our DSA obligations, we continued to provide our European Union community with a dedicated 'Report Illegal Content' reporting channel, enabling users to alert us to content they believe breaches the law, together with an appeals process for users who disagree with the outcome. For the advertising-related user reporting flow, please refer to Chapter 2.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
Users can report content directly in the app in two ways:
- By ‘long-pressing’ (e.g., clicking for 3 seconds) on the video content and selecting the “Report” option.
- By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.
People can report TikTok content or accounts without needing to sign in or have an account by accessing the Report function using the “More options (…)” menu on videos or profiles in their browser, or through our “Report Inappropriate content” webform which is available in our Help Centre. Harmful misinformation can be reported across content features such as video, comment, search, hashtag, sound, or account.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
We have sought to make our Community Guidelines as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as review of moderation cases, flows, appeals and undertaking Root Cause Analyses).
We also note that whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content. We are proud that the vast majority of removed content is identified proactively before it is reported to us.
Appeals system.
We are transparent with users in relation to appeals. We set out the options that may be available both to the user who reported the content and the creator of the affected content, where they disagree with the decision we have taken.
The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our Community Guidelines.
Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts and assess them against our policies. To ensure consistency within this process and its overall integrity, we have sought to make our policies as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as auditing appeals and undertaking Root Cause Analyses).
If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.
Commitment 24
Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.
We signed up to the following measures of this commitment
Measure 24.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 24.1
Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.
QRE 24.1.1
Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.
We notify users when we take enforcement actions against their content or account, including:
- removal or other restriction of access to their content;
- a ban of the account;
- restriction of their access to a feature (such as LIVE); or
- restriction of their ability to monetise.
Such notifications are provided in near real time after action has been taken (i.e., generally within seconds, or a few minutes at most).
All such appeals raised will be queued for review by our specialised human moderators so as to ensure that context is adequately taken into account in reaching a determination. Users can monitor the status and view the results of their appeal within their in-app inbox.
SLI 24.1.1
Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.
| Country | Number of appeals of videos removed for violation of misinformation policy | Number of overturns of appeals for violation of misinformation policy | Appeal success rate of videos removed for violation of misinformation policy | Number of appeals of videos removed for violation of Civic and Election Integrity policy | Number of overturns of appeals for violation of Civic and Election Integrity policy | Appeal success rate of videos removed for violation of Civic and Election Integrity policy | Number of appeals of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy | Number of overturns of appeals for violation of Edited Media and AI-Generated Content (AIGC) policy | Appeal success rate of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy |
|---|---|---|---|---|---|---|---|---|---|
| Austria | 885 | 683 | 77.20% | 152 | 128 | 84.20% | 645 | 574 | 89.00% |
| Belgium | 1,047 | 877 | 83.80% | 171 | 145 | 84.80% | 590 | 521 | 88.30% |
| Bulgaria | 1,375 | 983 | 71.50% | 71 | 60 | 84.50% | 333 | 278 | 83.50% |
| Croatia | 133 | 106 | 79.70% | 13 | 12 | 92.30% | 256 | 229 | 89.50% |
| Cyprus | 155 | 122 | 78.70% | 13 | 11 | 84.60% | 246 | 213 | 86.60% |
| Czech Republic | 1,119 | 936 | 83.60% | 110 | 96 | 87.30% | 479 | 418 | 87.30% |
| Denmark | 546 | 441 | 80.80% | 88 | 69 | 78.40% | 567 | 540 | 95.20% |
| Estonia | 181 | 137 | 75.70% | 23 | 15 | 65.20% | 362 | 308 | 85.10% |
| Finland | 424 | 346 | 81.60% | 61 | 47 | 77.00% | 429 | 388 | 90.40% |
| France | 7,543 | 6,307 | 83.60% | 362 | 309 | 85.40% | 3,308 | 2,905 | 87.80% |
| Germany | 14,569 | 10,635 | 73.00% | 1,475 | 1,191 | 80.70% | 10,927 | 9,547 | 87.40% |
| Greece | 1,206 | 1,022 | 84.70% | 168 | 150 | 89.30% | 461 | 395 | 85.70% |
| Hungary | 362 | 285 | 78.70% | 189 | 163 | 86.20% | 242 | 205 | 84.70% |
| Ireland | 856 | 716 | 83.60% | 85 | 77 | 90.60% | 416 | 388 | 93.30% |
| Italy | 4,912 | 3,583 | 72.90% | 367 | 309 | 84.20% | 2,796 | 2,399 | 85.80% |
| Latvia | 253 | 227 | 89.70% | 23 | 21 | 91.30% | 544 | 487 | 89.50% |
| Lithuania | 275 | 201 | 73.10% | 28 | 25 | 89.30% | 470 | 411 | 87.40% |
| Luxembourg | 60 | 50 | 83.30% | 7 | 6 | 85.70% | 86 | 81 | 94.20% |
| Malta | 33 | 25 | 75.80% | 6 | 6 | 100.00% | 62 | 54 | 87.10% |
| Netherlands | 4,467 | 3,223 | 72.20% | 330 | 246 | 74.50% | 4,138 | 3,671 | 88.70% |
| Poland | 5,049 | 3,488 | 69.10% | 284 | 238 | 83.80% | 1,910 | 1,563 | 81.80% |
| Portugal | 823 | 648 | 78.70% | 76 | 58 | 76.30% | 291 | 253 | 86.90% |
| Romania | 8,645 | 4,885 | 56.50% | 1,219 | 777 | 63.70% | 2,528 | 2,208 | 87.30% |
| Slovakia | 358 | 287 | 80.20% | 19 | 15 | 78.90% | 337 | 290 | 86.10% |
| Slovenia | 125 | 104 | 83.20% | 11 | 5 | 45.50% | 168 | 148 | 88.10% |
| Spain | 5,671 | 4,159 | 73.30% | 374 | 328 | 87.70% | 3,988 | 3,608 | 90.50% |
| Sweden | 1,039 | 831 | 80.00% | 163 | 130 | 79.80% | 830 | 701 | 84.50% |
| Iceland | 36 | 33 | 91.70% | 7 | 6 | 85.70% | 34 | 33 | 97.10% |
| Liechtenstein | 4 | 2 | 50.00% | 0 | 0 | 0.00% | 3 | 3 | 100.00% |
| Norway | 547 | 429 | 78.40% | 105 | 87 | 82.90% | 527 | 477 | 90.50% |
| Total EU | 62,111 | 45,307 | 72.90% | 5,888 | 4,637 | 78.80% | 37,409 | 32,783 | 87.60% |
| Total EEA | 62,698 | 45,771 | 73.00% | 6,000 | 4,730 | 78.80% | 37,973 | 33,296 | 87.70% |
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.2 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Enabled researchers to efficiently identify popular, high-engagement content through our Research Tools (Research API and VCE) by filtering videos by number of views and comments, supporting studies across topics including potential disinformation.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
We provide researchers with access to data that is publicly available on our platform through our Research Tools and Commercial Content API, hosted on our dedicated TikTok for Developers website.
Measure 26.2
Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected official, news outlets and government accounts subject to an application process which is not overly cumbersome.
QRE 26.2.1
Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.
Through our Commercial Content API, qualifying researchers and professionals, who can be located in any country, can run searches on ads and other commercial content and request public data, including ads, ad and advertiser metadata, and targeting information. To date, the Commercial Content API only includes data from EU countries.
- Ad Library: This library features ads that we're paid to display to people, including those that aren't currently active or have been paused by the advertisers.
- Other commercial content: This library features content that we're not paid to display, including content that promotes a brand, product, or service.
The Commercial Content Library (CCL) currently includes information on ads available to users in the European Economic Area (EEA), Switzerland, and the U.K.
QRE 26.2.2
Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.
- Ad Library: This library features ads that we're paid to display to people, including those that aren't currently active or have been paused by the advertisers.
- Other commercial content: This library features content that we're not paid to display, including content that promotes a brand, product, or service.
QRE 26.2.3
Relevant Signatories will describe the application process in place in order to gain access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.
We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access.
Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data.
SLI 26.2.1
Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).
During this reporting period we received:
- 201 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
- 90 applications to access the TikTok Commercial Content API.
| Country | Number of applications received for Research Tools | Number of applications accepted for Research Tools | Number of applications rejected for Research Tools | Number of applications received for TikTok Commercial Content API | Number of applications accepted for TikTok Commercial Content API | Number of applications rejected for TikTok Commercial Content API |
|---|---|---|---|---|---|---|
| Austria | 8 | 6 | 2 | 0 | 0 | 0 |
| Belgium | 5 | 3 | 2 | 2 | 2 | 0 |
| Bulgaria | 0 | 0 | 0 | 0 | 0 | 0 |
| Croatia | 0 | 0 | 0 | 0 | 0 | 0 |
| Cyprus | 0 | 0 | 0 | 1 | 1 | 0 |
| Czech Republic | 2 | 1 | 0 | 2 | 2 | 0 |
| Denmark | 6 | 6 | 0 | 10 | 10 | 0 |
| Estonia | 0 | 0 | 0 | 1 | 1 | 0 |
| Finland | 3 | 3 | 0 | 1 | 1 | 0 |
| France | 25 | 10 | 11 | 17 | 17 | 0 |
| Germany | 50 | 34 | 10 | 16 | 16 | 0 |
| Greece | 0 | 0 | 0 | 0 | 0 | 0 |
| Hungary | 3 | 3 | 0 | 0 | 0 | 0 |
| Ireland | 6 | 4 | 2 | 2 | 2 | 0 |
| Italy | 24 | 18 | 6 | 2 | 2 | 0 |
| Latvia | 1 | 0 | 1 | 2 | 2 | 0 |
| Lithuania | 1 | 1 | 0 | 0 | 0 | 0 |
| Luxembourg | 1 | 0 | 0 | 0 | 0 | 0 |
| Malta | 0 | 0 | 0 | 0 | 0 | 0 |
| Netherlands | 11 | 10 | 1 | 7 | 7 | 0 |
| Poland | 5 | 4 | 1 | 9 | 9 | 0 |
| Portugal | 4 | 3 | 0 | 0 | 0 | 0 |
| Romania | 5 | 2 | 1 | 2 | 2 | 0 |
| Slovakia | 0 | 0 | 0 | 1 | 1 | 0 |
| Slovenia | 2 | 1 | 1 | 1 | 1 | 0 |
| Spain | 35 | 14 | 17 | 7 | 7 | 0 |
| Sweden | 10 | 7 | 2 | 4 | 4 | 0 |
| Iceland | 0 | 0 | 0 | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 |
| Norway | 2 | 1 | 0 | 3 | 3 | 0 |
| Total EU | 207 | 130 | 57 | 87 | 87 | 0 |
| Total EEA | 209 | 131 | 57 | 90 | 90 | 0 |
Measure 26.3
Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.
QRE 26.3.1
Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.
Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
As well as providing opportunities to share context about our approach and research interests, and to explore collaboration, these events enable us to learn from the important work being done by the research community on various topics, including aspects related to harmful misinformation.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
- Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
- Public content data, such as comments, captions, subtitles, and number of comments, shares and likes that a video receives.
Our commercial content-related APIs include ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches, by advertiser name or keyword, on ads and other commercial content data stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad ran, the main parameters used for targeting (e.g., age, gender), the number of people who were served the ad, and more.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
Empowering fact-checkers
Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.
- The service the fact-checking partner will provide, namely, that their team of fact-checkers will review, assess and rate video content uploaded to their fact-checking queue, and will provide regular proactive Insights Reports about general misinformation trends observed on our platform and across the industry, including new or changing industry or market trends, events, or topics that generate particular misinformation or disinformation.
- The expected results e.g., the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories.
- An option to receive proactive flagging of potentially harmful misinformation from our partners.
- The languages in which they will provide fact-checking services.
- The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
- All other key terms, including the applicable term, fees, and payment arrangements.
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
- Agence France-Presse (AFP)
- Deutsche Presse-Agentur (dpa)
- Demagog
- Facta
- Geofacts
- Faktograf
- Internews Kosova (Kallxo)
- Lead Stories
- Newtral
- Poligrafo
- Reuters
- Science Feedback
- Teyit
For advertising-related fact-checking partnerships, please refer to Chapter 2.
These partners provide fact-checking coverage in 23 official EEA languages, including at least one official language of each EU Member State, as well as additional languages including Georgian, Russian, Turkish, Ukrainian, Albanian and Serbian.
We can, and do, put in place temporary agreements with these fact-checking partners to provide additional EU language coverage during high-risk events like elections or an unfolding crisis.
Outside of our fact-checking program, we also collaborate with fact-checking organisations to develop a variety of media literacy campaigns. For example, during this reporting period, we worked with European fact-checkers on 6 temporary media literacy campaigns, in advance of regional elections, through our in-app Election Centers:
- Portugal Local Elections - Polígrafo
- Estonia Local Elections - Lead Stories
- Ireland Presidential Election - The Journal
- Portugal Presidential Election - Polígrafo
- Denmark Local and Municipal Elections - Sikker Digital
- Czechia Parliamentary Election - Demagog.cz
Globally, we have more than 20 IFCN-accredited fact-checking partners and we keep users updated here.
QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
- Insights reports. Our fact-checking partners provide regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.
- Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content on our platform that may constitute harmful misinformation, which our moderators then assess against our Community Guidelines, and to suggest prominent misinformation circulating online that may benefit from verification.
- Fact-checking guidelines. Where relevant, we create guidelines and trending topic reminders for our moderators which are informed by previous fact checking assessments. This helps our teams leverage the insights from our fact-checking partners and supports swift and accurate decisions on flagged content regardless of the language in which the original claim was made.
If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for human review. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are more clear-cut.
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audio that is similar to, or a modified version of, previously identified violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
- Content Credentials: We launched the ability to read Content Credentials that attach metadata to content, which we can use to automatically label AI-generated content that originated on other major platforms.
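As a purely illustrative sketch of the "text-based" detection described above, keyword-list matching can be made robust to simple character substitutions by normalising both the text and the keywords before comparison. The substitution map, function names and examples below are invented for this illustration and do not describe TikTok's actual systems.

```python
import re

# Toy sketch: match a keyword list against text after normalising
# common character substitutions ("leetspeak") and repeated letters.
# All names and rules here are illustrative only.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo character substitutions, collapse repeated letters."""
    text = text.lower().translate(SUBSTITUTIONS)
    # "baaanned" and "banned" both normalise to "baned", so they match.
    return re.sub(r"(.)\1+", r"\1", text)

def find_keyword_hits(text: str, keywords: list[str]) -> list[str]:
    """Return the keywords whose normalised form appears in the text."""
    norm = normalize(text)
    return [kw for kw in keywords if normalize(kw) in norm]
```

A real pipeline would layer such lists with the context-aware models mentioned above, since some terms only violate policy in context.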
Measure 30.2
Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.
QRE 30.2.1
Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.
QRE 30.2.2
Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.
QRE 30.2.3
European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.
Measure 30.3
Relevant Signatories will contribute to cross-border cooperation between fact-checkers.
QRE 30.3.1
Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.
Measure 30.4
To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.
QRE 30.4.1
Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.
Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measure 31.1 and 31.2 Measure 31.3 Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 31.1 and 31.2
31.1: Relevant Signatories that showcase User Generated Content (UGC) will integrate, showcase, or otherwise consistently use independent fact-checkers’ work in their platforms’ services, processes, and contents across all Member States and across formats relevant to the service. Relevant Signatories will collaborate with fact-checkers to that end, starting by conducting and documenting research and testing. 31.2: Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels or policy enforcement to help increase the impact of fact-checks on audiences.
TikTok did not subscribe to this measure as outlined in the January 2025 Subscription Document.
QRE 31.1.1 (for Measures 31.1 and 31.2)
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
SLI 31.1.1
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
| Country | Number of fact-checked videos (tasks) |
|---|---|
| Austria | 52 |
| Belgium | 396 |
| Bulgaria | 1,221 |
| Croatia | 135 |
| Cyprus | 21 |
| Czechia | 367 |
| Denmark | 239 |
| Estonia | 326 |
| Finland | 58 |
| France | 3,659 |
| Germany | 1,203 |
| Greece | 97 |
| Hungary | 107 |
| Ireland | 53 |
| Italy | 743 |
| Latvia | 223 |
| Lithuania | 182 |
| Luxembourg | 1 |
| Malta | 1 |
| Netherlands | 1,603 |
| Poland | 806 |
| Portugal | 344 |
| Romania | 308 |
| Slovakia | 283 |
| Slovenia | 193 |
| Spain | 349 |
| Sweden | 102 |
| Iceland | 5 |
| Liechtenstein | 0 |
| Norway | 190 |
| Total EU | 13,072 |
| Total EEA | 13,267 |
SLI 31.1.2
An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.
| Country | Number of videos removed as a result of a fact-checking assessment | Number of videos removed under Misinformation policy |
|---|---|---|
| Austria | 12 | 2,612 |
| Belgium | 11 | 4,150 |
| Bulgaria | 166 | 4,828 |
| Croatia | 40 | 638 |
| Cyprus | 1 | 701 |
| Czech Republic | 20 | 2,855 |
| Denmark | 12 | 2,484 |
| Estonia | 28 | 527 |
| Finland | 7 | 1,357 |
| France | 273 | 37,466 |
| Germany | 216 | 42,642 |
| Greece | 7 | 4,602 |
| Hungary | 4 | 1,490 |
| Ireland | 3 | 2,613 |
| Italy | 207 | 18,667 |
| Latvia | 0 | 705 |
| Lithuania | 7 | 1,086 |
| Luxembourg | 0 | 349 |
| Malta | 0 | 159 |
| Netherlands | 258 | 14,335 |
| Poland | 128 | 14,770 |
| Portugal | 48 | 3,141 |
| Romania | 65 | 28,743 |
| Slovakia | 18 | 1,122 |
| Slovenia | 5 | 370 |
| Spain | 50 | 21,592 |
| Sweden | 3 | 4,159 |
| Iceland | 1 | 123 |
| Liechtenstein | 0 | 143 |
| Norway | 10 | 1,662 |
| Total EU | 1,589 | 218,163 |
| Total EEA | 1,600 | 220,091 |
SLI 31.1.3
Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.
The metric we have provided shows the percentage of videos removed as a result of a fact-checking assessment, relative to the total number of videos removed for violating our harmful misinformation policy.
| Country | Videos removed as a result of a fact-checking assessment, as a percentage of all videos removed under the harmful misinformation policy |
|---|---|
| Austria | 0.50% |
| Belgium | 0.30% |
| Bulgaria | 3.40% |
| Croatia | 6.30% |
| Cyprus | 0.10% |
| Czech Republic | 0.70% |
| Denmark | 0.50% |
| Estonia | 5.30% |
| Finland | 0.50% |
| France | 0.70% |
| Germany | 0.50% |
| Greece | 0.20% |
| Hungary | 0.30% |
| Ireland | 0.10% |
| Italy | 1.10% |
| Latvia | 0.00% |
| Lithuania | 0.60% |
| Luxembourg | 0.00% |
| Malta | 0.00% |
| Netherlands | 1.80% |
| Poland | 0.90% |
| Portugal | 1.50% |
| Romania | 0.20% |
| Slovakia | 1.60% |
| Slovenia | 1.40% |
| Spain | 0.20% |
| Sweden | 0.10% |
| Iceland | 0.80% |
| Liechtenstein | 0.00% |
| Norway | 0.60% |
| Total EU | 0.70% |
| Total EEA | 0.70% |
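The percentages above can be reproduced from the two preceding tables. As a quick sanity check (a sketch, using the Austria and Bulgaria rows of SLI 31.1.2; the function name is invented for this example):

```python
# Recompute the SLI 31.1.3 ratio from the SLI 31.1.2 figures.
# The reported values appear to be rounded to one decimal place.
def fact_check_share(fact_check_removals: int, policy_removals: int) -> float:
    """Fact-check-driven removals as a % of all misinformation removals."""
    return round(fact_check_removals / policy_removals * 100, 1)

austria = fact_check_share(12, 2_612)    # table reports 0.50%
bulgaria = fact_check_share(166, 4_828)  # table reports 3.40%
```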
Commitment 32
Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.
We signed up to the following measures of this commitment
Measure 32.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 32.3
Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.
QRE 32.3.1
Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.
Transparency Centre
Commitment 34
To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.
We signed up to the following measures of this commitment
Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 34.1
Signatories establish and maintain the common Transparency Centre website, which will be operational and available to the public within 6 months from the signature of this Code.
Measure 34.2
Signatories provide appropriate funding, for setting up and operating the Transparency Centre website, including its maintenance, daily operation, management, and regular updating. Funding contribution should be commensurate with the nature of the Signatories' activity and shall be sufficient for the website's operations and maintenance and proportional to each Signatories' risk profile and economic capacity.
Measure 34.3
Relevant Signatories will contribute to the Transparency Centre's information to the extent that the Code is applicable to their services.
Measure 34.4
Signatories will agree on the functioning and financing of the Transparency Centre within the Task-force, to be recorded and reviewed within the Task-Force on an annual basis.
Measure 34.5
The Task-force will regularly discuss the Transparency Centre and assess whether adjustments or actions are necessary. Signatories commit to implement the actions and adjustments decided within the Task-force within a reasonable timeline.
Commitment 35
Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.
We signed up to the following measures of this commitment
Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 35.1
Signatories will list in the Transparency Centre, per each Commitment and Measure that they subscribe to, the terms of service and policies that their service applies to implement these Commitments and Measures.
Measure 35.2
Signatories provide information on the implementation and enforcement of their policies per service, including geographical and language coverage.
Measure 35.3
Signatories ensure that the Transparency Centre contains a repository of their reports assessing the implementation of the Code's commitments.
Measure 35.4
In crisis situations, Signatories use the Transparency Centre to publish information regarding the specific mitigation actions taken related to the crisis.
Measure 35.5
Signatories ensure that the Transparency Centre is built with state-of-the-art technology, is user-friendly, and that the relevant information is easily searchable (including per Commitment and Measure). Users of the Transparency Centre will be able to easily track changes in Signatories' policies and actions.
Measure 35.6
The Transparency Centre will enable users to easily access and understand the Service Level Indicators and Qualitative Reporting Elements tied to each Commitment and Measure of the Code for each service, including Member State breakdowns, in a standardised and searchable way. The Transparency Centre should also enable users to easily access and understand Structural Indicators for each Signatory.
Commitment 36
Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.
We signed up to the following measures of this commitment
Measure 36.1 Measure 36.2 Measure 36.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 36.1
Signatories provide updates about relevant changes in policies and implementation actions in a timely manner, and in any event no later than 30 days after changes are announced or implemented.
Measure 36.2
Signatories will regularly update Service Level Indicators, reporting elements, and Structural Indicators, in parallel with the regular reporting foreseen by the monitoring framework. After the first reporting period, Relevant Signatories are encouraged to also update the Transparency Centre more regularly.
Measure 36.3
Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.
QRE 36.1.1 (for the Commitments 34-36)
With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
QRE 36.1.2 (for the Commitments 34-36)
Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
SLI 36.1.1 (for the Commitments 34-36)
Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.
| Platform | Metrics |
|---|---|
| TikTok | Between July 1 and December 31, 2025, our signatory profile was visited 1,350 times, our signatory reports were downloaded 3,456 times, and the Transparency Centre webpage overall was visited 30,384 times. |
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 37.1
Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.
Measure 37.2
Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks:
- Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises;
- Cooperate and coordinate their work in special situations like elections or crisis;
- Agree on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes;
- Review the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapt them, as needed;
- Contribute to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation;
- Refine, test and adjust Structural Indicators and design mechanisms to measure them at Member State level;
- Agree, publish and update a list of TTPs employed by malicious actors, and set down baseline elements, objectives and benchmarks for Measures to counter them, in line with the Chapter IV of this Code.
Measure 37.3
The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.
Measure 37.4
Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.
Measure 37.5
When needed, and in any event at least once per year the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
We will continue to engage in the Task-force and all of its working groups and subgroups.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
We have dedicated Trust and Safety staff in the European Union. We recognise the importance of local knowledge and expertise as we work to ensure online safety for our users. We take a similar approach to our third party partnerships.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 40
Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.
We signed up to the following measures of this commitment
Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 40.1
Relevant Signatories that are Very Large Online Platforms, as defined in the DSA, will report every six-months on the implementation of the Commitments and Measures they signed up to under the Code, including on the relevant QREs and SLIs at service and Member State Level.
Measure 40.2
Other Signatories will report yearly on the implementation of the Commitments and Measures taken under the present Code, including on the relevant QREs and SLIs, at service and Member State level.
Measure 40.3
Measure 40.4
Measure 40.5
Measure 40.6
Commitment 41
Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.
We signed up to the following measures of this commitment
Measure 41.1 Measure 41.2 Measure 41.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
We have comprehensive measures in place to anticipate and address the risks associated with electoral processes, including the risks associated with election misinformation in the context of the Irish presidential election held on 24 October 2025.
In advance of the election, a dedicated election Task-Force was established to proactively assess potential risks. Through cross-functional consultations, the team identified key threats—including the spread of AI-generated deepfakes and misinformation—and developed response strategies to mitigate them before they could gain traction on the platform.
We have comprehensive measures in place to anticipate and address risks associated with electoral processes, including risks associated with election misinformation in the context of the Czech parliamentary election held on 3 and 4 October 2025. In advance of the election, a core election Task-Force was formed, and consultations between cross-functional teams helped to identify and design response strategies.
TikTok did not observe major threats during the Czech election. Throughout the election period, we monitored for and actioned inauthentic behaviour and removed content that violated our Community Guidelines.
We have comprehensive measures in place to anticipate and address risks associated with electoral processes, including risks associated with election misinformation in the context of the Dutch parliamentary election held on 29 October 2025. In advance of the election, a core election Task-Force was formed, and consultations between cross-functional teams helped to identify and design response strategies.
TikTok did not observe major threats during the Dutch election. Throughout the election, we monitored for and actioned inauthentic behavior and removed content that violated our Community Guidelines.
Mitigations in place
Enforcing our policies
(I) Monitoring capabilities
We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election keywords and accounts.
(II) Mission Control Centre: internal cross-functional collaboration
As part of our advance preparations ahead of the Irish presidential election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams were able to provide consistent and dedicated coverage of potential election-related issues in the run-up to, during, and immediately after the election.
(III) Integrity and Authenticity policies
We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H2 2025, more than 98% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.
(IV) Fact-checking
Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.
Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Reuters serves as the fact-checking partner for Ireland.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform: we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.
We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.
Directing people to trusted sources
(I) Investing in media literacy
External engagement at the national and EU levels
To further promote election integrity, and inform our approach to the Irish presidential election, we organised an Election Speaker Series with Reuters, our fact-checking partner for Ireland, who shared their insights and market expertise with our internal teams.
Czech Parliamentary Election:
Enforcing our policies
(I) Monitoring capabilities
We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election keywords and accounts.
(II) Mission Control Centre: internal cross-functional collaboration
As part of our advance preparations, ahead of the Czech election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams provided consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.
(III) Integrity and Authenticity policies
We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported.
(IV) Fact-checking
Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.
Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Lead Stories serves as the fact-checking partner for Czechia.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encourage them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform: we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.
We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.
(I) Investing in media literacy
External engagement at the national and EU levels
(I) Rapid Response System: external collaboration with COCD Signatories
The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received one RRS request. Throughout the election period, the team maintained consistent prioritisation of RRS requests and ensured timely, accurate support for cross-functional partners.
(II) Engagement with local experts
To further promote election integrity, and inform our approach to the Czech election, we organised an Election Speaker Series with our local fact-checking partner, Lead Stories, who shared their insights and market expertise with our internal teams.
Dutch Parliamentary Election:
Enforcing our policies
(I) Monitoring capabilities
We have dedicated Trust and Safety professionals working to keep our platform safe. As they usually do, our teams worked alongside technology to ensure that we were consistently enforcing our rules to detect and remove misinformation, covert influence operations, and other content and behaviour that can increase during an election period. In advance of the election, we had proactive data monitoring, trend detection, and regular monitoring of election keywords and accounts.
(II) Mission Control Centre: internal cross-functional collaboration
As part of our advance preparations, ahead of the Dutch election, we established a dedicated Mission Control Centre (MCC) bringing together employees from multiple specialist teams within our safety department. Through the MCC, our teams provided consistent and dedicated coverage of potential election-related issues in the run-up to, and during, the election.
(III) Integrity and Authenticity policies
We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported.
(IV) Fact-checking
Our global fact-checking programme is a critical part of our layered approach to detecting harmful misinformation in the context of elections. The core objective of the fact-checking program is to leverage the expertise of external fact-checking organisations to help assess the accuracy of potentially harmful claims that are difficult to verify.
Within Europe, we partner with 13 fact-checking organisations who provide fact-checking coverage in 25 languages (22 official EU languages plus Russian, Ukrainian and Turkish). Deutsche Presse-Agentur (dpa) serves as the fact-checking partner for the Netherlands.
(V) Deterring covert influence operations
We prohibit covert influence operations and remain constantly vigilant against attempts to use deceptive behaviours and manipulate our platform. We proactively seek and continuously investigate leads for potential influence operations. We're also working with government authorities and encouraging them to share any intelligence so that we can work together to ensure election integrity. More detail on our policy against covert influence operations is published on our website.
(VI) Tackling misleading AI-generated content
Creators are required to label any realistic AI-generated content (AIGC) and we have an AI-generated content label to help people do this. TikTok has an ‘Edited Media and AI-Generated Content (AIGC)’ policy, which prohibits AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed.
(VII) Government, Politician, and Political Party Accounts (GPPPAs)
Many political leaders, ministers, and political parties have a presence on TikTok. These politicians and parties play an important role on our platform: we believe that verified accounts belonging to politicians and institutions provide the electorate with another route to access their representatives, and additional trusted voices in the shared fight against misinformation.
We strongly recommend GPPPAs have their accounts verified by TikTok. Verified badges help users make informed choices about the accounts they choose to follow. It is also an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers.
Directing people to trusted sources
(I) Investing in media literacy
We invest in media literacy campaigns as a counter-misinformation strategy.
External engagement at the national and EU levels
(I) Rapid Response System: external collaboration with COCD Signatories
The COCD Rapid Response System (RRS) was utilised to exchange information among civil society organisations, fact-checkers, and online platforms. TikTok received one RRS request, with the content violating our AIGC policies. Throughout the election period, the team maintained consistent prioritisation of RRS requests and ensured timely, accurate support for cross-functional partners.
(II) Engagement with local experts
To further promote election integrity, and inform our approach to the Dutch election, we organised an Election Speaker Series with our fact-checking partner, dpa, who shared their insights and market expertise with our internal teams.
Policies and Terms and Conditions
Outline any changes to your policies
Scrutiny of Ads Placements
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.2.1
Description of intervention - 50.2.2
Indication of impact - 50.2.3
- Number of ads removed for violating our political advertising policies during the 4 weeks leading up to and including the days of the Irish presidential election (22 September to 26 October 2025): 2,134
- Number of ads removed for violating our political advertising policies during the 4 weeks leading up to and including the days of the Czech parliamentary election (1 September to 5 October 2025): 1,092
- Number of ads removed for violating our political advertising policies during the 4 weeks leading up to and including the days of the Dutch parliamentary election (29 September to 2 November 2025): 2,113
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.4.1
(Commitment 14, Measure 14.1)
Description of intervention - 50.4.2
Indication of impact - 50.4.3
Specific Action applied - 50.4.4
Description of intervention - 50.4.5
- Content made to seem as if it comes from an authoritative source, such as a reputable news organization, scientific or medical society, or government entity providing critical services;
- A critical event, such as an election, natural disaster, or a mass casualty incident;
- Matters of public importance, including debates about significant and challenging policy issues;
- A public figure who is:
- being degraded or harassed, or engaging in criminal or anti-social behavior
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- spreading misinformation about matters of public importance
Indication of impact - 50.4.6
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy during the Czech parliamentary election: 17
Number of videos removed for violating our Edited Media and AI-Generated Content (AIGC) policy during the Dutch parliamentary election: 324
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.5.1
Description of intervention - 50.5.2
From 24 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Irish presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.
Czech elections:
Dutch election:
From 29 Sept 2025, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 Dutch parliamentary elections. The centre contained a section about spotting misinformation.
Indication of impact - 50.5.3
The Election Centre launched before the Czech election was visited 78,337 times.
Specific Action applied - 50.5.4
Description of intervention - 50.5.5
To further promote election integrity, and inform our approach to the Irish presidential election, we organised an Election Speaker Series with Reuters who shared their insights and market expertise with our internal teams.
Czech elections:
To further promote election integrity, and inform our approach to the Czech election, we engaged with our fact-checking partner, Lead Stories, to ensure our teams responsible for election integrity on the platform are aware of online trends concerning the elections.
This engagement with external regional and local experts, as well as national authorities, allowed us to inform our country-level approach to the Czech election.
Dutch election:
To further promote election integrity, and inform our approach to the Dutch election, we engaged with our fact-checking partner, dpa, to ensure our teams responsible for election integrity on the platform are aware of online trends concerning the elections.
Indication of impact - 50.5.6
This engagement with external regional and local experts allowed us to inform our country-level approach to the Irish presidential election.
Czech elections:
This engagement with external regional and local experts, as well as national authorities, allowed us to inform our country-level approach to the Czech election.
This engagement with external regional and local experts allowed us to inform our country-level approach to the Dutch election.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.6.1
Description of intervention - 50.6.2
Indication of impact - 50.6.3
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 50.7.1
Description of intervention - 50.7.2
Lead Stories serves as the fact-checking partner for Czechia and provided coverage throughout the election period.
Indication of impact - 50.7.3
Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Israel-Hamas Conflict:
TikTok acknowledges the significance and sensitivity of the Israel–Hamas conflict (referred to as the “Conflict” in this chapter), which has been ongoing for an extended period. We recognise that it continues to be a challenging and deeply felt issue for many people around the world and on TikTok.
Mitigations in place
War in Ukraine:
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the War in Ukraine.
Automated Review
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audios that are similar or modified to previous violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
- We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.
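As a loose illustration of how the keyword-based and similarity-based layers above can route content to review, the sketch below uses an entirely hypothetical keyword list and an exact-hash "bank". Production systems rely on trained classifiers and perceptual or embedding similarity rather than exact hashes; nothing here reflects TikTok's actual implementation.

```python
import hashlib

# Illustrative placeholder keyword list (hypothetical terms).
VIOLATIVE_KEYWORDS = {"fakecure", "votebyphone"}

# "Bank" of fingerprints of previously actioned content (hypothetical).
KNOWN_VIOLATION_HASHES = {
    hashlib.sha256(b"previously removed caption").hexdigest(),
}

def text_flags(caption: str) -> bool:
    """Text-based layer: flag captions containing listed keywords."""
    tokens = caption.lower().split()
    return any(tok in VIOLATIVE_KEYWORDS for tok in tokens)

def similarity_flags(caption: str) -> bool:
    """Similarity layer: exact-duplicate match against a hash bank.
    (Real systems use perceptual/embedding similarity, not exact hashes.)"""
    return hashlib.sha256(caption.encode()).hexdigest() in KNOWN_VIOLATION_HASHES

def needs_review(caption: str) -> bool:
    # Either layer is sufficient to route the item for further review.
    return text_flags(caption) or similarity_flags(caption)
```

In a real pipeline each layer would emit a confidence score rather than a boolean, and items would be auto-actioned or queued for human moderation depending on thresholds.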
Scaling human expertise
In H2 2025, we removed 1,352 videos related to the War in Ukraine that violated our misinformation policies.
(VII) External engagement
Israel-Hamas Conflict:
We aim to ensure that TikTok is a source of reliable and safe information and recognise the heightened risk and impact of misleading information during a time of crisis such as the Conflict.
Automated Review
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines, such as weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our policies, supported by a dedicated audio bank and "classifiers" that help us detect audios that are similar or modified to previous violations.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. Artificial Intelligence (AI) that can interpret the context surrounding content helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLMs: We use multimodal LLMs to help moderate content faster and more consistently at scale, from taking automated action on activity like fake engagement, to empowering teams with better moderation tools and risk insights.
- We work with external groups, for example Tech Against Terrorism in the context of violent extremist content, who help us to more quickly detect and remove violative content that has already been identified off the platform.
Human insight plays a crucial role in the content moderation process, from our community or external experts, to our own safety professionals. TikTok has Arabic and Hebrew speaking content moderators who review content and assist with Conflict-related translations. We continue to focus on moderator care through the provision of internal training and well-being resources for T&S personnel working on mis & disinformation.
In H2 2025, we removed 3,901 videos related to the Conflict that violated our misinformation policies.
(II) Leveraging our Global Fact-Checking Program
We use a layered approach to detect harmful misinformation that violates our Community Guidelines, with our Global Fact-Checking Program playing a key role. We assess the accuracy of harmful or hard-to-verify claims by partnering with more than 20 IFCN-accredited fact-checking organizations who support over 60 languages on TikTok, including Arabic and Hebrew. We also collaborate with certain fact-checking partners to receive advance warning of emerging misinformation narratives. This helps facilitate proactive responses against high-harm trends and ensures that our Integrity and Authenticity moderators have up-to-date guidance.
To limit the spread of potentially misleading information, we apply warning labels and prompt users to reconsider sharing content about unfolding or emergency events that have been reviewed by fact-checkers but cannot be verified—referred to as “unverified content.” Recognising that the situation around the Conflict can change rapidly, we have put in place a process allowing our fact-checking partners to quickly update us if claims previously marked as “unverified” are later verified or clarified with additional context.
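The "unverified content" process described above can be thought of as a status-driven treatment map: a claim's fact-check status determines whether a warning label and share prompt are applied, and partners can later upgrade an assessment. The statuses and treatment flags below are illustrative assumptions for exposition, not TikTok's internal schema.

```python
from enum import Enum

class ClaimStatus(Enum):
    UNVERIFIED = "unverified"  # reviewed by fact-checkers but not confirmable
    VERIFIED = "verified"      # later confirmed or clarified with context
    FALSE = "false"            # assessed as false

def treatment(status: ClaimStatus) -> dict:
    """Map a fact-check status to an illustrative platform treatment."""
    if status is ClaimStatus.UNVERIFIED:
        # Warning label plus a prompt to reconsider sharing.
        return {"warning_label": True, "share_prompt": True, "remove": False}
    if status is ClaimStatus.FALSE:
        # Violating misinformation is removed under the Community Guidelines.
        return {"warning_label": False, "share_prompt": False, "remove": True}
    # Verified or clarified claims: interim treatments are lifted.
    return {"warning_label": False, "share_prompt": False, "remove": False}

# Partners can quickly update a previously "unverified" assessment,
# at which point the interim treatments no longer apply.
before = treatment(ClaimStatus.UNVERIFIED)
after = treatment(ClaimStatus.VERIFIED)
```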
(III) Disruption of CIOs
Disrupting CIO networks targeting discourse related to Israel and Palestine remains a priority. Between July and December 2025, we took action to remove a total of four such networks.
(IV) Deploying search interventions to raise awareness of potential misinformation
To help raise awareness and to protect our users, we provide in-app search interventions that are triggered when users search for non-violating terms related to the Conflict (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources.
We are committed to engaging with experts across the industry and civil society, such as Tech Against Terrorism and cooperating with law enforcement agencies globally in line with our Law Enforcement Guidelines, to further safeguard and secure our platform during times of conflict.
Policies and Terms and Conditions
Outline any changes to your policies
In a crisis, we keep our policies under review and ensure moderation teams have supplementary guidance.
Israel-Hamas:
During the reporting period, no Conflict-specific policy changes were implemented.
Policy - 51.1.1
No update during the reporting period.
Israel-Hamas:
No update during the reporting period.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.3.1
Description of intervention - 51.3.2
Integrity of Services
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.4.1
Description of intervention - 51.4.2
We combat CIO because our Integrity and Authenticity policies prohibit attempts to manipulate public opinion while misleading our systems or users about identity, origin, approximate location, popularity, or purpose. Dedicated teams monitor and investigate CIO networks and have removed networks targeting discourse related to the War in Ukraine in line with these policies.
Indication of impact - 51.4.3
Between July and December 2025, we took action to remove the following four networks (consisting of 114 accounts in total) that were found to be involved in coordinated attempts to influence public opinion about the Russia-Ukraine war while also misleading individuals, our community, or our systems:
1. Network Origin: Ukraine
3. Network Origin: Belarus
4. Network Origin: US
Israel-Hamas:
Between July and December 2025, we took action to remove the following four networks (consisting of 75 accounts in total) that were found to be related to the Conflict:
1. Network Origin: Iran
2. Network Origin: Unidentified
- Followers of network: 12,685
3. Network Origin: Iran
4. Network Origin: Iran
Specific Action applied - 51.4.4
Description of intervention - 51.4.5
- Content made to seem as if it comes from an authoritative source, such as a reputable news organization, scientific or medical society, or government entity providing critical services;
- A critical event, such as an election, natural disaster, or a mass casualty incident;
- Matters of public importance, including debates about significant and challenging policy issues;
- A public figure who is:
- being degraded or harassed, or engaging in criminal or anti-social behavior
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- spreading misinformation about matters of public importance
We have an AI-generated content label for users to easily inform their community when they post AIGC. The label can be applied to any content that has been completely generated or significantly edited by AI, which makes it easier to comply with the obligation to disclose AIGC that shows realistic scenes. Creators can do this through this label or through other types of disclosures, like a sticker, watermark, or caption.
TikTok has invested in labeling technologies and tools, including the implementation of Content Credentials technology from the Coalition for Content Provenance and Authenticity (C2PA), which enables the automatic recognition and labeling of AIGC, including AIGC created on some other platforms. This is complemented by a TikTok-developed tool that allows creators to easily label AI-generated content, already used by 37 million creators. TikTok’s commitment to AIGC transparency ensures a safe environment for users, who can easily identify synthetic content and understand its context.
Indication of impact - 51.4.6
Specific Action applied - 51.4.7
Removing harmful misinformation from our platform
(Commitment 14, Measure 14.1)
Description of intervention - 51.4.8
Indication of impact - 51.4.9
In the context of the crisis, we proactively removed 1,313 videos in H2 2025 containing harmful misinformation related to the War in Ukraine. We carry out targeted sweeps of certain types of content, as well as working closely with our fact-checking partners and responding to emerging trends they identify.
- Number of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 1,352
- Number of videos not recommended because of violation of misinformation policy with a proxy (only focusing on RU/UA) - 1,458
- Number of proactive removals of videos removed because of violation of misinformation policy with a proxy related to the War in Ukraine - 1,313
Israel-Hamas:
In the context of the crisis, we proactively removed 3,874 videos in H2 2025 containing harmful misinformation related to the Conflict. We carry out targeted sweeps of certain types of content (e.g. hashtags/sensitive keyword lists), as well as working closely with our fact-checking partners and responding to emerging trends they identify.
- Number of videos removed because of violation of misinformation policy with a proxy (IL-Hamas) - 3,901
- Number of videos not recommended because of violation of misinformation policy with a proxy (IL-Hamas) - 4,941
- Number of proactive removals of videos removed because of violation of misinformation policy with a proxy (IL/Hamas) - 3,874
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.5.1
Description of intervention - 51.5.2
We recognise the importance of proactive measures that are aimed at improving our users' digital literacy and increasing the prominence of authoritative information.
Indication of impact - 51.5.3
Working with our fact-checking partners, we have 17 localised media literacy campaigns addressing disinformation related to the War in Ukraine in Austria, Bosnia, Bulgaria, Czechia, Croatia, Estonia, Germany, Hungary, Latvia, Lithuania, Montenegro, Poland, Romania, Serbia, Slovakia, Slovenia, and Ukraine.
Relevant metrics for the media literacy campaigns (EEA total numbers, in countries where campaigns are active):
- Total Number of impressions of the search intervention - 23,191,195
- Total Number of clicks on the search intervention - 109,770
- Click through rate of the search intervention - 0.47%
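The reported click-through rate follows directly from the two totals above:

```python
impressions = 23_191_195  # total impressions of the search intervention
clicks = 109_770          # total clicks on the search intervention

# Click-through rate = clicks / impressions, expressed as a percentage.
ctr = clicks / impressions * 100
print(f"{ctr:.2f}%")  # 0.47%
```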
Specific Action applied - 51.5.4
Description of intervention - 51.5.5
To minimise the discoverability of misinformation and help to protect our users, we have launched search interventions which are triggered when users search for neutral terms related to the Conflict (e.g., Israel, Palestine).
Indication of impact - 51.5.6
These search interventions remind users to pause and check their sources and also direct them to well-being resources.
Empowering the Research Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.6.1
Measures taken to support research into crisis-related misinformation and disinformation
Description of intervention - 51.6.2
Indication of impact - 51.6.3
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Specific Action applied - 51.7.1
Description of intervention - 51.7.2
Indication of impact - 51.7.3
Specific Action applied - 51.7.4
Description of intervention - 51.7.5
Our fact-checking efforts cover Russian, Ukrainian, Belarusian and all major European languages (including 18 official European languages, as well as a number of other languages relevant to European users).
Israel-Hamas:
As part of our fact-checking programme, TikTok works with more than 20 IFCN-accredited fact-checking organisations that support more than 60 languages, including Hebrew and Arabic, to help assess the accuracy of content in this rapidly changing environment. In the context of the Conflict, our independent fact-checking partners are following our standard practice, whereby they do not moderate content directly on TikTok, but assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. Fact-checker input is then incorporated into our broader content moderation efforts in a number of different ways, as further outlined in the ‘indication of impact’ section below.
Indication of impact - 51.7.6
Relevant metrics:
- Number of fact-checked videos with a proxy related to the War in Ukraine - 665
- Number of videos removed as a result of a fact-checking assessment with words related to the War in Ukraine - 78
- Number of videos not recommended in the For Your Feed as a result of a fact-checking assessment with words related to the War in Ukraine - 147
Israel-Hamas
We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we have ensured that, in the context of the Conflict, our fact-checking programme covers Arabic and Hebrew. Our fact-checking partners support this work through:
- Proactive insight reports that flag new and evolving claims they are seeing across the internet. These help us detect harmful misinformation and anticipate misinformation trends on our platform.
- Advance warning of emerging misinformation narratives, which has facilitated proactive responses against high-harm trends and has helped to ensure that our Integrity and Authenticity moderators have up-to-date guidance.
- Number of fact-checked tasks related to IL/Hamas - 879
- Number of videos removed as a result of a fact-checking assessment with words related to IL/Hamas - 101
- Number of videos demoted (not recommended) as a result of a fact-checking assessment with words related to IL/Hamas - 199