
Report March 2025
TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes.
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In order to improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.
- We continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, these policies cover:
  - Medical Misinformation
  - Dangerous Misinformation
  - Synthetic and Manipulated Media
  - Dangerous Conspiracy Theories
- We expanded the functionality (including the choices and abilities available to advertisers) of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA.
- We upgraded our IAB Sweden Gold Standard certification to version 2.0.
- We continue to engage in the Task-force and its working groups and subgroups, such as the working subgroup on Elections (Crisis Response).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
Our four granular harmful misinformation ad policies in the EEA cover:
- Medical Misinformation
- Dangerous Misinformation
- Synthetic and Manipulated Media
- Dangerous Conspiracy Theories
In line with our approach of building a platform that brings people together, not divides them, we have long prohibited political ads and political branded content. Specifically, we do not allow paid ads (or their landing pages) that promote or oppose a candidate, current leader, political party or group, or content that advocates a stance (for or against) on a local, state, or federal issue of public importance in order to influence a political decision or outcome. Similar rules apply in respect of branded content.

We also classify certain accounts as Government, Politician, and Political Party Accounts (GPPPA) and have introduced restrictions on these at an account level. This means accounts belonging to governments, politicians and political parties automatically have their access to advertising features turned off. We make exceptions for governments in certain circumstances, e.g., to promote public health.

We make various brand safety tools available to advertisers to help ensure that their ads are not placed adjacent to content they do not consider to fit their brand values. While any content that violates our Community Guidelines (CGs), including our Integrity & Authenticity (I&A) policies, is removed, the brand safety tools are designed to help advertisers further protect their brand. For example, a family-oriented brand may not want to appear next to videos containing news-related content. We have adopted the industry-accepted framework in support of these principles.
Our approach to state-affiliated media includes:
- Making state-affiliated media accounts that attempt to reach communities outside their home country on current global events and affairs ineligible for recommendation, which means their content won't appear in the For You feed.
- Prohibiting state-affiliated media accounts in all markets where our state-controlled media labels are available from advertising outside of the country with which they are primarily affiliated.
- Investing in our detection capabilities of state-affiliated media accounts.
- Working with third party external experts to shape our state-affiliated media policy and assessment of state-controlled media labels.
SLI 1.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.
We have focused on enforcing our political advertising prohibition, as well as on our internal capability to detect political content on our platform, which has included launching specialised political content moderator training and auto-moderation strategies. The data below suggest that our existing policies (such as our political content policy and other policy areas such as our inaccurate, misleading, or false content policy) already cover the majority of harmful misinformation ads, owing to their expansive coverage.
Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies |
---|---|---|
Austria | 746 | 3 |
Belgium | 1152 | 1 |
Bulgaria | 328 | 7 |
Croatia | 3 | 0 |
Cyprus | 128 | 0 |
Czech Republic | 111 | 0 |
Denmark | 409 | 0 |
Estonia | 90 | 0 |
Finland | 235 | 0 |
France | 4621 | 7 |
Germany | 6498 | 63 |
Greece | 911 | 8 |
Hungary | 512 | 2 |
Ireland | 565 | 1 |
Italy | 2781 | 8 |
Latvia | 131 | 4 |
Lithuania | 19 | 0 |
Luxembourg | 86 | 0 |
Malta | 0 | 0 |
Netherlands | 1179 | 3 |
Poland | 1118 | 4 |
Portugal | 438 | 1 |
Romania | 10698 | 2 |
Slovakia | 145 | 4 |
Slovenia | 52 | 0 |
Spain | 2558 | 17 |
Sweden | 752 | 0 |
Iceland | 0 | 0 |
Liechtenstein | 0 | 0 |
Norway | 474 | 2 |
Total EU | 36266 | 135 |
Total EEA | 36740 | 137 |
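The reported totals can be reproduced directly from the per-country rows. A minimal sketch in Python (figures transcribed from the table above; Iceland, Liechtenstein and Norway are the EEA-only rows):

```python
# Sanity-check the SLI 1.1.1 aggregates: the 27 EU rows should sum to the
# "Total EU" line, and adding the EEA-only rows should give "Total EEA".
eu_political = [746, 1152, 328, 3, 128, 111, 409, 90, 235, 4621, 6498, 911,
                512, 565, 2781, 131, 19, 86, 0, 1179, 1118, 438, 10698, 145,
                52, 2558, 752]
eu_misinfo = [3, 1, 7, 0, 0, 0, 0, 0, 0, 7, 63, 8, 2, 1, 8, 4, 0, 0, 0, 3,
              4, 1, 2, 4, 0, 17, 0]
eea_only_political = [0, 0, 474]  # Iceland, Liechtenstein, Norway
eea_only_misinfo = [0, 0, 2]

assert sum(eu_political) == 36266 and sum(eu_misinfo) == 135
assert sum(eu_political) + sum(eea_only_political) == 36740
assert sum(eu_misinfo) + sum(eea_only_misinfo) == 137
print("SLI 1.1.1 totals are internally consistent")
```

The same check applied to the impression figures in SLI 2.3.1 below confirms those totals as well.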
Measure 1.2
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.
QRE 1.2.1
Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.
We launched the Creator Code of Conduct in April 2024. It sets out the standards we expect creators involved in TikTok programs, features, events and campaigns to follow on and off platform, in addition to our Community Guidelines and Terms of Service. Being part of these creator programs is an opportunity that comes with additional responsibilities, and the code also gives creators reassurance that other participants are being held to the same standards. We are actively improving our enforcement guidance and processes for the code, including building on proactive signals of off-platform activity.
SLI 1.2.1
Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.
Our I&A policies within our CGs are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All creators are required to comply with our CGs, which set out the circumstances where we will remove, or otherwise limit the availability of, content. Creators who breach the Community Guidelines or Terms of Service are not eligible to receive rewards. We have set out the number of ads that have been removed from our platform for violation of our political content policies as well as our four more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories in SLI 1.1.1. Further, SLI 1.1.2 aims to provide an estimate of the potential impact on revenue of demonetising disinformation. We are working towards being able to provide more data for this SLI.
Country | Number of policy reviews | Number of policy updates | Number of accounts barred from advertising or monetisation | Number of domains barred from advertising or monetisation |
---|---|---|---|---|
Austria | 0 | 0 | 0 | 0 |
Belgium | 0 | 0 | 0 | 0 |
Bulgaria | 0 | 0 | 0 | 0 |
Croatia | 0 | 0 | 0 | 0 |
Cyprus | 0 | 0 | 0 | 0 |
Czech Republic | 0 | 0 | 0 | 0 |
Denmark | 0 | 0 | 0 | 0 |
Estonia | 0 | 0 | 0 | 0 |
Finland | 0 | 0 | 0 | 0 |
France | 0 | 0 | 0 | 0 |
Germany | 0 | 0 | 0 | 0 |
Greece | 0 | 0 | 0 | 0 |
Hungary | 0 | 0 | 0 | 0 |
Ireland | 0 | 0 | 0 | 0 |
Italy | 0 | 0 | 0 | 0 |
Latvia | 0 | 0 | 0 | 0 |
Lithuania | 0 | 0 | 0 | 0 |
Luxembourg | 0 | 0 | 0 | 0 |
Malta | 0 | 0 | 0 | 0 |
Netherlands | 0 | 0 | 0 | 0 |
Poland | 0 | 0 | 0 | 0 |
Portugal | 0 | 0 | 0 | 0 |
Romania | 0 | 0 | 0 | 0 |
Slovakia | 0 | 0 | 0 | 0 |
Slovenia | 0 | 0 | 0 | 0 |
Spain | 0 | 0 | 0 | 0 |
Sweden | 0 | 0 | 0 | 0 |
Iceland | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 |
Norway | 0 | 0 | 0 | 0 |
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
- TikTok Inventory Filter: This is our proprietary system that enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 jurisdictions in the EEA and is embedded directly in TikTok Ads Manager, the system through which advertisers purchase ads; we have also expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by Industry Standards, and its policies include topics which may be susceptible to disinformation.
- TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks at all risk levels (available in France and Germany). Some misinformation content may be captured and filtered out by these industry-standard categories, such as “Sensitive Social Issues”.
- Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.
- IAS: Advertisers can measure brand safety, viewability and invalid traffic on the platform with the IAS Signal platform (post-campaign measurement is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the GARM Framework.
- DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand their suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 EU countries.
Measure 1.4
Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.
QRE 1.4.1
Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- first, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA; and
- second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
We have been certified by the Interactive Advertising Bureau (“IAB”) for the IAB Ireland Gold Standard 2.1 (listed here) and IAB Sweden Gold Standard 2.0.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In order to improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.
- We continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, these policies cover:
  - Medical Misinformation
  - Dangerous Misinformation
  - Synthetic and Manipulated Media
  - Dangerous Conspiracy Theories
- We expanded the functionality (including the choices and abilities available to advertisers) of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA.
- We upgraded our IAB Sweden Gold Standard certification to version 2.0.
- We continue to engage in the Task-force and its working groups and subgroups, such as the working subgroup on Elections (Crisis Response).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies |
---|---|---|
Austria | 746 | 3 |
Belgium | 1152 | 1 |
Bulgaria | 328 | 7 |
Croatia | 3 | 0 |
Cyprus | 128 | 0 |
Czech Republic | 111 | 0 |
Denmark | 409 | 0 |
Estonia | 90 | 0 |
Finland | 235 | 0 |
France | 4621 | 7 |
Germany | 6498 | 63 |
Greece | 911 | 8 |
Hungary | 512 | 2 |
Ireland | 565 | 1 |
Italy | 2781 | 8 |
Latvia | 131 | 4 |
Lithuania | 19 | 0 |
Luxembourg | 86 | 0 |
Malta | 0 | 0 |
Netherlands | 1179 | 3 |
Poland | 1118 | 4 |
Portugal | 438 | 1 |
Romania | 10698 | 2 |
Slovakia | 145 | 4 |
Slovenia | 52 | 0 |
Spain | 2558 | 17 |
Sweden | 752 | 0 |
Iceland | 0 | 0 |
Liechtenstein | 0 | 0 |
Norway | 474 | 2 |
Total EU | 36266 | 135 |
Total EEA | 36740 | 137 |
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
The majority of ads that violate our misinformation policies would, in any event, have been removed under our existing policies. Our granular advertising policies currently cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Manipulated Media
- Climate Misinformation
We work with 14 fact-checking partners who provide fact-checking coverage in 23 languages across the EEA, including at least one official language of every EU Member State, as well as Georgian, Russian, Turkish, and Ukrainian.
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
The majority of ads that violate our misinformation policies would, in any event, have been removed under our existing policies. Our granular advertising policies currently cover:
- Dangerous Misinformation
- Dangerous Conspiracy Theories
- Medical Misinformation
- Manipulated Media
- Climate Misinformation
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies |
---|---|---|---|---|
Austria | 746 | 3 | 2,405,688 | 0 |
Belgium | 1152 | 1 | 414,078 | 16,971 |
Bulgaria | 328 | 7 | 21,839 | 0 |
Croatia | 3 | 0 | 69 | 0 |
Cyprus | 128 | 0 | 10,838 | 0 |
Czech Republic | 111 | 0 | 187,494 | 0 |
Denmark | 409 | 0 | 1,333,325 | 12,268 |
Estonia | 90 | 0 | 14,889 | 0 |
Finland | 235 | 0 | 7,543,943 | 0 |
France | 4621 | 7 | 14,427,406 | 510 |
Germany | 6498 | 63 | 45,161,261 | 0 |
Greece | 911 | 8 | 512,170 | 12,873 |
Hungary | 512 | 2 | 3,675,505 | 0 |
Ireland | 565 | 1 | 1,341,419 | 0 |
Italy | 2781 | 8 | 6,836,564 | 12,029 |
Latvia | 131 | 4 | 4,551 | 0 |
Lithuania | 19 | 0 | 59,348 | 0 |
Luxembourg | 86 | 0 | 5,472 | 0 |
Malta | 0 | 0 | 0 | 0 |
Netherlands | 1179 | 3 | 879,250 | 1,048 |
Poland | 1118 | 4 | 610,009 | 0 |
Portugal | 438 | 1 | 409,358 | 0 |
Romania | 10698 | 2 | 27,208,895 | 0 |
Slovakia | 145 | 4 | 52,215 | 0 |
Slovenia | 52 | 0 | 53,989 | 0 |
Spain | 2558 | 17 | 9,622,981 | 8,551 |
Sweden | 752 | 0 | 4,565,753 | 0 |
Iceland | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 |
Norway | 474 | 2 | 120,449 | 1,367 |
Total EU | 36266 | 135 | 127,358,309 | 64,250 |
Total EEA | 36740 | 137 | 127,478,758 | 65,617 |
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies |
---|---|---|---|---|
Austria | 746 | 3 | 2,405,688 | 0 |
Belgium | 1152 | 1 | 414,078 | 16,971 |
Bulgaria | 328 | 7 | 21,839 | 0 |
Croatia | 3 | 0 | 69 | 0 |
Cyprus | 128 | 0 | 10,838 | 0 |
Czech Republic | 111 | 0 | 187,494 | 0 |
Denmark | 409 | 0 | 1,333,325 | 12,268 |
Estonia | 90 | 0 | 14,889 | 0 |
Finland | 235 | 0 | 7,543,943 | 0 |
France | 4621 | 7 | 14,427,406 | 510 |
Germany | 6498 | 63 | 45,161,261 | 0 |
Greece | 911 | 8 | 512,170 | 12,873 |
Hungary | 512 | 2 | 3,675,505 | 0 |
Ireland | 565 | 1 | 1,341,419 | 0 |
Italy | 2781 | 8 | 6,836,564 | 12,029 |
Latvia | 131 | 4 | 4,551 | 0 |
Lithuania | 19 | 0 | 59,348 | 0 |
Luxembourg | 86 | 0 | 5,472 | 0 |
Malta | 0 | 0 | 0 | 0 |
Netherlands | 1179 | 3 | 879,250 | 1,048 |
Poland | 1118 | 4 | 610,009 | 0 |
Portugal | 438 | 1 | 409,358 | 0 |
Romania | 10698 | 2 | 27,208,895 | 0 |
Slovakia | 145 | 4 | 52,215 | 0 |
Slovenia | 52 | 0 | 53,989 | 0 |
Spain | 2558 | 17 | 9,622,981 | 8,551 |
Sweden | 752 | 0 | 4,565,753 | 0 |
Iceland | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 |
Norway | 474 | 2 | 120,449 | 1,367 |
Total EU | 36266 | 135 | 127,358,309 | 64,250 |
Total EEA | 36740 | 137 | 127,478,758 | 65,617 |
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1 Measure 3.2 Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
We work with 14 fact-checking partners who provide fact-checking coverage in 23 languages across the EEA, including at least one official language of every EU Member State, as well as Georgian, Russian, Turkish, and Ukrainian.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Political Advertising
Commitment 4
Relevant Signatories commit to adopt a common definition of "political and issue advertising".
We signed up to the following measures of this commitment
Measure 4.1 Measure 4.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 4.1
Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.
QRE 4.1.1
Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.
Our political advertising policies prohibit ads (and landing pages) that:
- reference, promote, or oppose candidates or nominees for public office, political parties, or elected or appointed government officials;
- reference an election, including voter registration, voter turnout, and appeals for votes;
- include advocacy for or against past, current, or proposed referenda, ballot measures, and legislative, judicial, or regulatory outcomes or processes (including those that promote or attack government policies or track records); and
- reference, promote, or sell merchandise that features prohibited individuals, entities, or content, including campaign slogans, symbols, or logos.
We prohibit political content in branded content, i.e., content which is posted in exchange for payment or any other incentive by a third party.

We have been reviewing our policies to ensure that our prohibition is at least as broad as that defined by Regulation (EU) 2024/900 on the Transparency and Targeting of Political Advertising. Our prohibition on political advertising is one part of our election integrity efforts, which you can read more about in the elections crisis reports.
QRE 4.1.2
After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.
Commitment 5
Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.
We signed up to the following measures of this commitment
Measure 5.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 5.1
Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.
QRE 5.1.1
Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.
Commitment 6
Relevant Signatories commit to make political or issue ads clearly labelled and distinguishable as paid-for content in a way that allows users to understand that the content displayed contains political or issue advertising.
We signed up to the following measures of this commitment
Measure 6.1 Measure 6.2 Measure 6.3 Measure 6.4 Measure 6.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 6.1
Relevant Signatories will develop a set of common best practices and examples for marks and labels on political or issue ads and integrate those learnings as relevant to their services.
QRE 6.1.1
Relevant Signatories will publicise the best practices and examples developed as part of Measure 6.1 and describe how they relate to their relevant services.
Measure 6.2
Relevant Signatories will ensure that relevant information, such as the identity of the sponsor, is included in the label attached to the ad or is otherwise easily accessible to the user from the label.
QRE 6.2.1
Relevant Signatories will publish examples of how sponsor identities and other relevant information are attached to ads or otherwise made easily accessible to users from the label.
QRE 6.2.2
Relevant Signatories will publish their labelling designs.
Measure 6.3
Relevant Signatories will invest and participate in research to improve users' identification and comprehension of labels, discuss the findings of said research with the Task-force, and will endeavour to integrate the results of such research into their services where relevant.
QRE 6.3.1
Relevant Signatories will publish relevant research into understanding how users identify and comprehend labels on political or issue ads and report on the steps they have taken to ensure that users are consistently able to do so and to improve the labels' potential to attract users' awareness.
Measure 6.4
Relevant Signatories will ensure that once a political or issue ad is labelled as such on their platform, the label remains in place when users share that same ad on the same platform, so that they continue to be clearly identified as paid-for political or issue content.
QRE 6.4.1
Relevant Signatories will describe the steps they put in place to ensure that labels remain in place when users share ads.
Measure 6.5
Relevant Signatories that provide messaging services will, where possible and when in compliance with local law, use reasonable efforts to work towards improving the visibility of labels applied to political advertising shared over messaging services. To this end they will use reasonable efforts to develop solutions that facilitate users recognising, to the extent possible, paid-for content labelled as such on their online platform when shared over their messaging services, without any weakening of encryption and with due regard to the protection of privacy.
QRE 6.5.1
Relevant Signatories will report on any solutions in place to empower users to recognise paid-for content as outlined in Measure 6.5.
Commitment 7
Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.
We signed up to the following measures of this commitment
Measure 7.1 Measure 7.2 Measure 7.3 Measure 7.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 7.1
Relevant Signatories will make sure the sponsors and providers of advertising services acting on behalf of sponsors purchasing political or issue ads have provided the relevant information regarding their identity to verify (and re-verify where appropriate) said identity or the sponsors they are acting on behalf of before allowing placement of such ads.
QRE 7.1.1
Relevant Signatories will report on the tools and processes in place to collect and verify the information outlined in Measure 7.1, including information on the timeliness and proportionality of said tools and processes.
In the EU, we apply an internal label to accounts belonging to a government, politician, or political party. Once an account has been labelled in this manner, a number of policies are applied that help prevent misuse of certain features; e.g., access to advertising features and solicitation of campaign fundraising are not allowed.
Measure 7.2
Relevant Signatories will complete verifications processes described in Commitment 7 in a timely and proportionate manner.
QRE 7.2.1
Relevant Signatories will report on the actions taken against actors demonstrably evading the said tools and processes, including any relevant policy updates.
TikTok is dedicated to investigating and disrupting confirmed cases of covert influence operations (CIO) on the platform. CIOs are organised attempts to manipulate or corrupt public debate while misleading TikTok's systems or users about identity, origin, operating location, popularity, or overall purpose. Our suspension logic is strike-based, taking into account ad-level violations and advertiser account behaviour; confirmed critical policy violations lead to permanent suspension. Further information on our policy can be found in our Business Help Centre Article.
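As an illustration of the strike-based approach described above, a minimal sketch follows; the data model and the threshold value are hypothetical assumptions for exposition, not TikTok's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class AdvertiserAccount:
    strikes: int = 0
    suspended: bool = False

def apply_violation(account: AdvertiserAccount, critical: bool,
                    strike_threshold: int = 3) -> None:
    """Hypothetical strike accrual: critical violations suspend outright;
    other ad-level violations accumulate strikes until an account-level
    threshold (illustrative value) triggers suspension."""
    if critical:
        account.suspended = True  # confirmed critical violation: permanent suspension
        return
    account.strikes += 1
    if account.strikes >= strike_threshold:
        account.suspended = True  # repeated lesser violations exhaust the strikes
```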
QRE 7.2.2
Relevant Signatories will provide information on the timeliness and proportionality of the verification process.
Measure 7.3
Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.
QRE 7.3.1
Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.
QRE 7.3.2
Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.
Measure 7.4
Relevant Signatories commit to request that sponsors, and providers of advertising services acting on behalf of sponsors, declare whether the advertising service they request constitutes political or issue advertising.
QRE 7.4.1
Relevant Signatories will report on research and publish data on the effectiveness of measures they take to verify the identity of political or issue ad sponsors.
Commitment 8
Relevant Signatories commit to provide transparency information to users about the political or issue ads they see on their service.
We signed up to the following measures of this commitment
Measure 8.1 Measure 8.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 8.2
Relevant Signatories will provide a direct link from the ad to the ad repository.
QRE 8.2.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard.
Commitment 9
Relevant Signatories commit to provide users with clear, comprehensible, comprehensive information about why they are seeing a political or issue ad.
We signed up to the following measures of this commitment
Measure 9.1 Measure 9.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 9.2
Relevant Signatories will explain in simple, plain language, the rationale and the tools used by the sponsors and providers of advertising services acting on behalf of sponsors (for instance: demographic, geographic, contextual, interest or behaviourally-based) to determine that a political or issue ad is displayed specifically to the user.
QRE 9.2.1
Relevant Signatories will describe the tools and features in place to provide users with the information outlined in Measures 9.1 and 9.2, including relevant examples for each targeting method offered by the service.
Commitment 10
Relevant Signatories commit to maintain repositories of political or issue advertising and ensure their currentness, completeness, usability and quality, such that they contain all political and issue advertising served, along with the necessary information to comply with their legal obligations and with transparency commitments under this Code.
We signed up to the following measures of this commitment
Measure 10.1 Measure 10.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 10.2
The information in such ad repositories will be publicly available for at least 5 years.
QRE 10.2.1
Relevant Signatories will detail the availability, features, and updating cadence of their repositories to comply with Measures 10.1 and 10.2. Relevant Signatories will also provide quantitative information on the usage of the repositories, such as monthly usage.
Commitment 11
Relevant Signatories commit to provide application programming interfaces (APIs) or other interfaces enabling users and researchers to perform customised searches within their ad repositories of political or issue advertising and to include a set of minimum functionalities as well as a set of minimum search criteria for the application of APIs or other interfaces.
We signed up to the following measures of this commitment
Measure 11.1 Measure 11.2 Measure 11.3 Measure 11.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 13
Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.
We signed up to the following measures of this commitment
Measure 13.1 Measure 13.2 Measure 13.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 13.1
Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.
QRE 13.1.1
Through the Task-force, the Relevant Signatories will convene, at least annually, an appropriately resourced discussion around novel risks in political advertising to develop coordinated policy.
Measure 13.2
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification;
- Hack-and-leak operations;
- Impersonation;
- Malicious deep fakes;
- The purchase of fake engagements;
- Non-transparent paid messages or promotion by influencers;
- The creation and use of accounts that participate in coordinated inauthentic behaviour;
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Building on our new AI-generated content label, which lets creators disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
  - Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognise and automatically label AIGC.
  - Supporting the coalition's working groups as a C2PA General Member.
  - Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
  - Publishing a new Transparency Center article, Supporting responsible, transparent AI-generated content.
- Building on our new AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, that teach our community how to spot and label AI-generated content, including in Brazil, Germany, France, Mexico and the UK. This AIGC transparency campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M in Germany and 9.5M in France.
- We continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- Continued to participate in the working groups on integrity of services and Generative AI.
- We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s transparency centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
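To illustrate how C2PA Content Credentials enable automatic recognition of AIGC: in JPEG files, a C2PA manifest is serialised as a JUMBF box carried in APP11 marker segments, so its presence can be detected by walking the file's segment structure. The sketch below is illustrative only and not TikTok's implementation; the byte-pattern heuristic is an assumption, and a production system would use a full C2PA SDK to parse and cryptographically verify the manifest:

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically detect an embedded C2PA manifest in a JPEG.

    C2PA serialises manifests as JUMBF boxes inside APP11 (0xFFEB)
    marker segments; we walk the JPEG segments and look for JUMBF /
    C2PA byte signatures in APP11 payloads. This detects presence
    only; it does not validate the manifest's signature.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost segment sync; stop scanning
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):         # EOI, or SOS (entropy-coded data follows)
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                    # APP11 segment carrying a JUMBF box
        i += 2 + seg_len
    return False
```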
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Our policies address TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible, including:
- Operating large networks of accounts controlled by a single entity, or through automation;
- Bulk distribution of a high volume of spam;
- Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes;
- Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
- Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.
Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our I&A policies addressing fake engagement do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok's recommendation system. We do not allow our users to:
- facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
- provide instructions on how to artificially increase engagement on TikTok.
We also have a number of policies that address account hijacking. Our privacy and security policies under our CGs expressly prohibit users from providing their account credentials to others or enabling others to conduct activities that breach our CGs. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they are concerned that their account may have been hacked.
When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok's systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform; that is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop product and policy solutions as appropriate in the long term. We voluntarily publish all of the CIO networks we identify and remove in a dedicated report within our transparency centre here.
Relevant policies include:
- Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities and organisations that may be implicated or exposed by such disclosures;
- Our CIO policy, which addresses the use of leaked documents to sway public opinion as part of a wider operation;
- Our Edited Media and AI-Generated Content (AIGC) policy, which captures materials that have been digitally altered without an appropriate disclosure;
- Our harmful misinformation policies, which combat conspiracy theories related to unfolding events and dangerous misinformation; and
- Our Trade of Regulated Goods and Services policy, which prohibits the trading of hacked goods.
Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.
Specifically, we do not allow:
- Realistic-appearing people under the age of 18;
- The likeness of adult private figures, if we become aware it was used without their permission; or
- Misleading AIGC or edited media that falsely shows:
  - Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
  - A crisis event, such as a conflict or natural disaster; or
  - A public figure who is:
    - being degraded or harassed, or engaging in criminal or antisocial behaviour;
    - taking a position on a political issue, commercial product, or a matter of public importance (such as an election); or
    - being politically endorsed or condemned by an individual or group.
As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.
Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle that we make available to users. We also provide functionality enabling users to report suspected undisclosed branded content; such a report reminds the user who posted the content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 CG refresh, by adding more information about our policing of this policy and providing specific examples.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
We have proactive measures in place to:
- prevent inauthentic accounts from being created, based on malicious patterns; and
- remove registered accounts based on certain signals (i.e., uncommon behaviour on the platform).
We have also set up specially trained teams focused on investigating and detecting CIO on our platform. We have built international Trust & Safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science that work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok's systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.
- They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
- They are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
- They are attempting to manipulate or corrupt public debate to impact the decision making, beliefs and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
TTP No. 1: Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)
Methodology of data measurement:
We have based the number of: (i) fake accounts removed; and (ii) followers of the fake accounts (identified at the time of removal of the fake account), on the country the fake account was last active in.
We have updated our methodology to report the ratio of the monthly average of fake accounts removed to monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content relative to overall content on the service.
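To make the ratio methodology concrete, here is a minimal sketch; all figures below are hypothetical placeholders, not reported data.

```python
# Hypothetical monthly counts of fake accounts removed over a six-month
# reporting period; illustrative placeholders only, not reported data.
monthly_fake_accounts_removed = [145_000, 152_000, 148_000,
                                 155_000, 150_000, 150_000]
monthly_active_users = 150_000_000  # latest published MAU figure (hypothetical)

monthly_average = sum(monthly_fake_accounts_removed) / len(monthly_fake_accounts_removed)
ratio = monthly_average / monthly_active_users
print(f"Ratio of fake accounts to MAU: {ratio:.4%}")  # 0.1000% with these inputs
```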
TTP No. 2: Use of fake / inauthentic reactions (e.g. likes, up votes, comments)
Methodology of data measurement:
We based the number of fake likes that we removed on the country of registration of the user. We also based the number of fake likes prevented on the country of registration of the user.
TTP No. 3: Use of fake followers or subscribers
Methodology of data measurement:
We based the number of fake followers that we removed on the country of registration of the user. We also based the number of fake followers prevented on the country of registration of the user.
TTP No. 4: Creation of inauthentic pages, groups, chat groups, fora, or domains
TTP No. 5: Account hijacking or impersonation
Methodology of data measurement:
The number of accounts removed under our impersonation policy is based on the approximate location of the users. We have updated our methodology to report the ratio of the monthly average of impersonation accounts banned to monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content relative to overall content on the service.
TTP No. 6: Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation)
Methodology of data measurement:
The number of new CIO network discoveries found to be targeting EU markets relates to our public disclosures for the period July 1st 2024 to December 31st 2024. We have categorised disrupted CIO networks by the country we assess the network targeted. We have included any network which we assess to have targeted one or more European markets, or to have operated from an EU market. We publish all of the CIO networks we identify and remove in our transparency reports here.
CIO networks identified and removed are detailed below, including the assessed geographic location of network operation and the assessed target audience of the network, which we assess via technical and behavioural evidence from proprietary and open sources. The number of followers of CIO networks has been based on the number of accounts that followed any account within a network as of the date of that network’s removal.
Note: TTP No. 6 data cannot be shown on this page due to limitations with the website. We provide a full list of CIOs disrupted originating in Member States in our full report, which can be downloaded from this website.
TTP No. 7: Deploy deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)
TTP No. 8: Use “hack and leak” operation (which may or may not include doctored content)
TTP No. 9: Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platforms algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers)
TTP No. 10: Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers
TTP No. 11: Non-transparent compensated messages or promotions by influencers
TTP No. 12: Coordinated mass reporting of non-violative opposing content or accounts
Country | TTP 1 (Number of Fake Accounts Removed) | TTP 2 (Number of Fake Likes Removed) | TTP 3 (Number of Fake Followers Removed) | TTP 5 (Number of Accounts Banned Under Impersonation Policy) | TTP 7 (Number of videos removed for violation of Edited Media & AIGC Policy) |
---|---|---|---|---|---|
Austria | 92511 | 12262551 | 9980544 | 177 | 110859 |
Belgium | 176327 | 16913076 | 11916866 | 300 | 166222 |
Bulgaria | 423060 | 6468521 | 4561129 | 175 | 75036 |
Croatia | 74704 | 1821268 | 1965426 | 77 | 27536 |
Cyprus | 86741 | 4176517 | 1706405 | 54 | 59263 |
Czech Republic | 194925 | 3052689 | 4342681 | 134 | 51417 |
Denmark | 155675 | 4183605 | 3154022 | 115 | 49328 |
Estonia | 111506 | 687649 | 482641 | 29 | 19687 |
Finland | 99745 | 3086208 | 3204999 | 92 | 60083 |
France | 2061174 | 78227394 | 109481878 | 2587 | 1399713 |
Germany | 1678822 | 131158324 | 125941360 | 2277 | 1380835 |
Greece | 133443 | 14621872 | 7880295 | 215 | 206528 |
Hungary | 84057 | 1821268 | 2589692 | 141 | 63319 |
Ireland | 321237 | 4520433 | 3213842 | 235 | 32936 |
Italy | 672344 | 60514367 | 35511559 | 805 | 746928 |
Latvia | 60145 | 1690473 | 732030 | 48 | 99265 |
Lithuania | 79417 | 1682687 | 2057659 | 76 | 42778 |
Luxembourg | 73258 | 1920605 | 1574849 | 43 | 40901 |
Malta | 60192 | 1395676 | 401869 | 0 | 12100 |
Netherlands | 886619 | 23557961 | 17070055 | 567 | 202203 |
Poland | 360959 | 8833014 | 10128172 | 1251 | 203835 |
Portugal | 190906 | 9239486 | 3714261 | 206 | 151389 |
Romania | 294195 | 11254476 | 14021343 | 1300 | 287851 |
Slovakia | 131567 | 1208123 | 4288570 | 63 | 21883 |
Slovenia | 298807 | 727133 | 678185 | 43 | 10131 |
Spain | 709560 | 38331442 | 31084803 | 709 | 676935 |
Sweden | 239020 | 15782957 | 12342226 | 284 | 163490 |
Iceland | 31476 | 230931 | 120003 | 15 | 3353 |
Liechtenstein | 1369 | 24827 | 893407 | 0 | 357 |
Norway | 92800 | 5457966 | 3756414 | 178 | 59556 |
All EU | 9750916 | 459139775 | 424027361 | 12003 | 6362451 |
All EEA | 9876561 | 464982867 | 428797185 | 12196 | 6425717 |
SLI 14.2.2
Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.
Country | TTP 1 (Number of followers of fake accounts identified at the time of removal) | TTP 2 (Number of fake likes prevented) | TTP 3 (Number of Fake Followers Prevented) | TTP 7 (Number of views of videos removed because of Edited Media and AI-Generated Content (AIGC) policy)
---|---|---|---|---|
Austria | 467635 | 39213306 | 25000123 | 216433 | |
Belgium | 544073 | 56682105 | 34550567 | 1119223 | |
Bulgaria | 188995 | 40004761 | 26400841 | 5977 | |
Croatia | 175230 | 17901159 | 18990456 | 58579 | |
Cyprus | 124021 | 6960047 | 18497473 | 19441 | |
Czech Republic | 348626 | 31099711 | 18233387 | 8287531 | |
Denmark | 298306 | 17585666 | 23806634 | 2742457 | |
Estonia | 239039 | 7385026 | 16887949 | 2063380 | |
Finland | 195684 | 19264460 | 20303735 | 464824 | |
France | 20207105 | 336499329 | 127136908 | 312078908 | |
Germany | 20545728 | 357582219 | 138933948 | 23904234 | |
Greece | 1702918 | 84211417 | 38712931 | 145950 | |
Hungary | 184291 | 28069699 | 24773097 | 86870 | |
Ireland | 697840 | 31110363 | 25239860 | 103199 | |
Italy | 5900534 | 606697045 | 158916638 | 1892355 | |
Latvia | 124765 | 11600082 | 17952175 | 4519 | |
Lithuania | 300241 | 11795998 | 18928046 | 25410 | |
Luxembourg | 611602 | 7987636 | 21051498 | 8729 | |
Malta | 226073 | 3466698 | 15758979 | 5811847 | |
Netherlands | 1575641 | 101316771 | 35162609 | 9080526 | |
Poland | 3192516 | 208518568 | 54501610 | 13404186 | |
Portugal | 370719 | 56146620 | 26901973 | 339124 | |
Romania | 4045608 | 83405388 | 44172801 | 623525 | |
Slovakia | 1347301 | 18154505 | 21010637 | 2014 | |
Slovenia | 45359 | 5843233 | 1942793 | 605 | |
Spain | 5351682 | 161280031 | 73920335 | 21882268 | |
Sweden | 528326 | 48240073 | 36451604 | 377862 | |
Iceland | 253997 | 1564206 | 2572695 | 6113 | |
Liechtenstein | 11129 | 70045 | 1045728 | 525 | |
Norway | 151088 | 20708187 | 7242021 | 139984 | |
All EU | 69539858 | 2398021916 | 1084139607 | 404749976 | |
All EEA | 69956072 | 2420364354 | 1095000051 | 404896598 |
SLI 14.2.3
Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).
Country | Number of unique videos labelled with AIGC tag of "Creator labeled as AI-generated" |
---|---|
Austria | 110859 |
Belgium | 166222 |
Bulgaria | 75036 |
Croatia | 27536 |
Cyprus | 59263 |
Czech Republic | 51417 |
Denmark | 49328 |
Estonia | 19687 |
Finland | 60083 |
France | 1399713 |
Germany | 1380835 |
Greece | 206528 |
Hungary | 63319 |
Ireland | 32936 |
Italy | 746928 |
Latvia | 99265 |
Lithuania | 42778 |
Luxembourg | 40901 |
Malta | 12100 |
Netherlands | 202203 |
Poland | 203835 |
Portugal | 151389 |
Romania | 287851 |
Slovakia | 21883 |
Slovenia | 10131 |
Spain | 676935 |
Sweden | 163490 |
Iceland | 3353 |
Liechtenstein | 357 |
Norway | 59556 |
Total EU | 6362451 |
Total EEA | 6425717 |
SLI 14.2.4
Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.
Country | TTP 1 (Ratio of monthly average of Fake accounts over monthly active users) | TTP 5 (Impersonation accounts as a % of monthly active users) | TTP 7 (Number of unique videos labelled with AIGC tag of "AI-generated") |
---|---|---|---|
Austria | | | 38531 |
Belgium | | | 75316 |
Bulgaria | | | 78668 |
Croatia | | | 18595 |
Cyprus | | | 3165 |
Czech Republic | | | 89409 |
Denmark | | | 30694 |
Estonia | | | 11220 |
Finland | | | 49106 |
France | | | 432739 |
Germany | | | 502916 |
Greece | | | 7936 |
Hungary | | | 74704 |
Ireland | | | 34736 |
Italy | | | 393642 |
Latvia | | | 18852 |
Lithuania | | | 21581 |
Luxembourg | | | 3319 |
Malta | | | 3444 |
Netherlands | | | 29448 |
Poland | | | 316048 |
Portugal | | | 64975 |
Romania | | | 37467 |
Slovakia | | | 28439 |
Slovenia | | | 6969 |
Spain | | | 493675 |
Sweden | | | 105253 |
Iceland | | | 4720 |
Liechtenstein | | | 61 |
Norway | | | 42172 |
Total EU | 0.001 | 0.000013 | 2970847 |
Total EEA | | | 3017800 |
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Building on our new AI-generated content label for creators to disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
- Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC.
- Supporting the coalition’s working groups as a C2PA General Member.
- Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
- Publishing a new Transparency Center article, ‘Supporting responsible, transparent AI-generated content’.
- Building on our new AI-generated content label for creators, and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. This AIGC Transparency Campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M users in Germany and 9.5M in France.
- Continued to work alongside industry partners as a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
- We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
Under our ‘Edited Media and AI-Generated Content (AIGC)’ policy, we do not allow:
- AIGC that shows realistic-appearing people under the age of 18
- AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
- A crisis event, such as a conflict or natural disaster
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- being politically endorsed or condemned by an individual or group
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
- We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
- We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report.
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
Measure 16.2
Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.
QRE 16.2.1
As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
- France: Agence France-Presse (AFP)
- Portugal: Polígrafo
- Georgia: Fact Check Georgia
- Moldova: StopFals!
- This brings the number of general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).
- Onboarded two new fact-checking partners in wider Europe:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia
- Expanded our fact-checking coverage to a number of wider-European and EU candidate countries:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia.
- Kazakhstan: Reuters
- Moldova: AFP/Reuters
- Serbia: Lead Stories
- We ran 14 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
- Austria: Deutsche Presse-Agentur (dpa)
- Croatia: Faktograf
- France: Agence France-Presse (AFP)
- Germany (regional elections): Deutsche Presse-Agentur (dpa)
- Germany (federal election): Deutsche Presse-Agentur (dpa)
- Ireland: The Journal
- Lithuania: N/A
- Romania: Funky Citizens.
- 1 in EEA
- Iceland: N/A
- 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
- Bosnia: N/A
- Bulgaria: N/A
- Czechia: N/A
- Georgia: Fact Check Georgia
- Moldova: StopFals!
- During the reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova.
- France: Agence France-Presse (AFP)
- Germany: German Press Agency (dpa)
- Austria: German Press Agency (dpa)
- Lithuania: Logically Facts
- Romania: Funky Citizens
- Ireland: Logically Facts
- Croatia: Faktograf
- Georgia: FactCheck Georgia
- Moldova: Stop Fals!
- Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
- Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia)
- Portugal Wildfires
- Spanish Floods
- Mayotte Cyclone
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages plus Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
- Actively participated in the UN COP29 climate change summit by:
- Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
- Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
- Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
- Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate related content between 29th October and 25th November, which were viewed 400k times.
- Our partnership with Verified for Climate, a joint initiative of the UN and the social impact agency Purpose, continued to be our flagship climate initiative. It saw a network of 35 Verified Champions across Brazil, the United Arab Emirates, and Spain work with select TikTok creators to develop educational content tackling climate misinformation and disinformation, and to drive climate action within the TikTok community.
- Partnered with the World Health Organisation (WHO), including a US$3 million donation, to support mental well-being awareness and literacy by creating reliable content and combating misinformation through the Fides network, a diverse community of trusted healthcare professionals and content creators in the United Kingdom, United States, France, Japan, Korea, Indonesia, Mexico, and Brazil.
- Building on these efforts, we also launched the UK Clinician Creator Network, an initiative bringing together 19 leading NHS-qualified clinicians who are actively sharing their medical expertise on TikTok, engaging a community of over 2.2 million followers.
- Strengthened our approach to state-affiliated media by:
- Working with third-party external experts to shape our state-affiliated media policy and our assessment of state-controlled media labels, and continuing to expand the labels' use.
- Continuing to invest in our detection capabilities for state-affiliated media (SAM) accounts, with a focus on automation and scaled detection.
- Building on our AI-generated content label for creators, and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content.
- Our AIGC Transparency Campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M users in Germany and 9.5M in France.
- Brought greater transparency about our systems and our integrity and authenticity efforts to our community by sharing regular insights and updates. In H2 2024, we continued to expand our Transparency Center with resources like our first-ever US Elections Integrity Hub, European Elections Integrity Hub, dedicated Covert Influence Operations Reports, and a new Transparency Center blog.
- Continued our partnership with Amadeu Antonio Stiftung in Germany on the Demo:create project, an educational initiative supporting young TikTok users to effectively deal with online hate speech, disinformation and misinformation.
- Continued to invest in training and development for our human moderation teams.
- TikTok continues to co-chair the working group on Elections.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
- For example, the four new ongoing general media literacy and critical thinking skills campaigns rolled out in France, Georgia, Moldova, and Portugal are all supported with search guides to direct users to authoritative sources.
- Our COP29 global search intervention, which ran from 29th October to 25th November, pointed users to authoritative climate-related content and was viewed 400k times.
Where users continue to post despite the warning:
- To limit the spread of potentially misleading information, the video will become ineligible for recommendation in the For You feed.
- The video's creator is also notified that their video was flagged as unsubstantiated content and is provided additional information about why the warning label has been added to their content. Again, this is to raise the creator’s awareness about the credibility of the content that they have shared.
State-controlled media label. Our state-affiliated media policy is to label accounts run by entities whose editorial output or decision-making process is subject to control or influence by a government. We apply a prominent label to all content and accounts from state-controlled media. The user is also shown a screen pop-up providing information about what the label means, inviting them to “learn more”, and redirecting them to an in-app page. The measure brings transparency to our community, raises users’ awareness, and encourages users to consider the reliability of the source. We continue to work with experts to inform our approach and explore how we can continue to expand its use.
In the EU, Iceland and Liechtenstein, we have also taken steps to restrict access to content from the entities sanctioned by the EU in 2024:
- RT - Russia Today UK
- RT - Russia Today Germany
- RT - Russia Today France
- RT - Russia Today Spanish
- Sputnik
- Rossiya RTR / RTR Planeta
- Rossiya 24 / Russia 24
- TV Centre International
- NTV/NTV Mir
- Rossiya 1
- REN TV
- Pervyi Kanal / Channel 1
- RT Arabic
- Sputnik Arabic
- RT Balkan
- Oriental Review
- Tsargrad
- New Eastern Outlook
- Katehon
- Voice of Europe
- RIA Novosti
- Izvestija
- Rossiiskaja Gazeta
AI-generated content labels. As more creators take advantage of Artificial Intelligence (AI) to enhance their creativity, we want to support transparent and responsible content creation practices. In 2023, TikTok launched an AI-generated content label for creators to disclose content that is completely AI-generated or significantly edited by AI. The launch of this new tool to help creators label their AI-generated content was accompanied by a creator education campaign, a Help Center page, and a Newsroom post. In May 2024, we started using the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC. In the interests of transparency, we also renamed TikTok AI effects to explicitly include "AI" in their name and corresponding effects label, and updated our guidelines for Effect House creators to do the same.
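The sketch below illustrates, in principle, how C2PA-based auto-labelling can work: if an upload carries Content Credentials whose actions declare the IPTC ‘trained algorithmic media’ digital source type, a platform can apply an AIGC label automatically. This is a hypothetical sketch, not TikTok's actual pipeline; `manifest` is assumed to be a C2PA manifest already parsed into a dict by a C2PA SDK.

```python
# Illustrative sketch of C2PA-based auto-labelling; not TikTok's actual pipeline.
# `manifest` is assumed to be the upload's active C2PA manifest, parsed into a
# dict by a C2PA SDK, or None when no Content Credentials are attached.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def should_auto_label(manifest: dict | None) -> bool:
    """Return True when provenance metadata declares the media was generated
    by a trained algorithm, so an AIGC label can be applied automatically."""
    if manifest is None:
        return False  # no Content Credentials: rely on creator self-disclosure
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False
```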
- Our Safety Center provides dedicated resources on topics including Harmful Misinformation, Online challenges, Covid-19, Election integrity, Scams, and how to safely share content about tragic events on TikTok.
- Our safety partners page provides details of some of our work with global experts, non-governmental organisations, and industry associations to help build a safe platform for our community.
We also use Newsroom posts to keep our community informed about our most recent updates and efforts across News, Product, Community, and Safety. Users can select their country, including in the EU, to see regionally relevant posts in their preferred language where available. For example, upon publication of our fourth Code report in September 2024, we provided users with an overview of our continued commitment to Combating Disinformation under the EU Code of Practice. We also updated users about how we are partnering with our industry to advance AI transparency and literacy, and how we protected the integrity of the platform during the Romanian presidential elections.
SLI 17.1.1
Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.
The numbers of impressions, clicks and click-through rates of video notice tags, search interventions and public service announcements are based on the approximate location of the users who engaged with the tools. The number of impressions of the Safety Center pages is based on the IP location of the users.
Country | Number of impressions of state-affiliated media label (SAM) | Number of clicks on state-affiliated media label (SAM) | Clicks through rate of state-affiliated media label (SAM) | Number of impressions of topic covered by video Intervention (Holocaust Misinformation/Denial) | Number of impressions of topic covered by video Intervention (mpox) | Number of impressions of topic covered by video Intervention (Elections) | CTR of video Intervention (Holocaust Misinformation/Denial) | CTR of video Intervention (mpox) | CTR by video Intervention (Elections) | Number of impressions for search interventions (Holocaust Misinformation/Denial) | Number of impressions for search interventions (mpox) | Number of impressions for search interventions (Elections) | Number of impressions for search interventions (Climate change) | Number of clicks on search interventions (Holocaust Misinformation/Denial) | Number of clicks on search interventions (mpox) | Number of clicks on search interventions (Elections) | Number of clicks on search interventions (Climate change) | CTR of search interventions (Holocaust Misinformation/Denial) | CTR of search interventions (mpox) | CTR of search interventions (Elections) | CTR of search interventions (Climate change) | Number of impressions of public service announcements (Holocaust Misinformation/Denial) | Number of impressions of public service announcements (mpox) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Austria | 3705075 | 5771 | 0.001557593 | 3987721 | 6065332 | 78371511 | 0.00277878 | 0.005051826 | 0.001992012 | 156298 | 467253 | 708656 | 228390 | 6187 | 2111 | 3263 | 196 | 0.03958464 | 0.004517895 | 0.004604491 | 0.000858181 | 16 | 26 |
Belgium | 3789615 | 6994 | 0.00184557 | 3501679 | 15383226 | 0 | 0.003052536 | 0.004787292 | 0 | 200324 | 669059 | 0 | 207808 | 10626 | 2501 | 0 | 159 | 0.053044069 | 0.003738086 | 0 | 0.000765129 | 41 | 26 |
Bulgaria | 7727480 | 9390 | 0.001215144 | 697482 | 5869376 | 29664185 | 0.004028778 | 0.00679033 | 0.001686141 | 62548 | 446240 | 121672 | 140769 | 1701 | 2724 | 367 | 183 | 0.027195114 | 0.006104338 | 0.003016306 | 0.001300002 | 17 | 39 |
Croatia | 1656809 | 2786 | 0.001681546 | 658928 | 5206361 | 24640666 | 0.003373661 | 0.007821202 | 0.001692933 | 100586 | 561621 | 546661 | 159478 | 2631 | 3321 | 1767 | 128 | 0.026156722 | 0.00591324 | 0.003232351 | 0.000802619 | 6 | 13 |
Cyprus | 752546 | 1166 | 0.001549407 | 311308 | 1309314 | 0 | 0.003819369 | 0.006010017 | 0 | 24629 | 77553 | 0 | 19126 | 445 | 452 | 0 | 20 | 0.018068131 | 0.005828272 | 0 | 0.001045697 | 4 | 4 |
Czech Republic | 7602192 | 7762 | 0.001021021 | 2888696 | 6134172 | 329546 | 0.004759587 | 0.009942499 | 0.001007447 | 88635 | 587165 | 17994 | 163222 | 2476 | 4180 | 56 | 172 | 0.027934789 | 0.007118953 | 0.003112148 | 0.00105378 | 99 | 76 |
Denmark | 2074577 | 4350 | 0.002096813 | 1690719 | 4604268 | 0 | 0.004065134 | 0.008214118 | 0 | 59881 | 435391 | 0 | 148528 | 1572 | 2571 | 0 | 134 | 0.026252067 | 0.005905037 | 0 | 0.000902187 | 13 | 17 |
Estonia | 1391192 | 2402 | 0.001726577 | 406818 | 2279691 | 0 | 0.003987041 | 0.00802872 | 0 | 14970 | 147804 | 0 | 29383 | 476 | 926 | 0 | 44 | 0.031796927 | 0.006265054 | 0 | 0.001497465 | 60 | 12 |
Finland | 3310339 | 9274 | 0.002801526 | 3314306 | 8904456 | 0 | 0.003627305 | 0.007459524 | 0 | 118664 | 648096 | 0 | 238286 | 2024 | 4591 | 0 | 213 | 0.017056563 | 0.007083827 | 0 | 0.000893884 | 27 | 30 |
France | 32521568 | 28995 | 0.000891562 | 2293975 | 123453307 | 1301158781 | 0.004792554 | 0.003822433 | 0.001599036 | 1592000 | 3031084 | 15712577 | 652102 | 111776 | 5826 | 7306 | 446 | 0.070211055 | 0.001922085 | 0.000464978 | 0.000683942 | 562 | 473 |
Germany | 37522365 | 49125 | 0.001309219 | 40208515 | 51643857 | 209773848 | 0.002756879 | 0.004731521 | 0.001247319 | 1383744 | 3901089 | 7265486 | 1761399 | 66652 | 15278 | 13805 | 1488 | 0.048167869 | 0.003916342 | 0.001900079 | 0.000844783 | 344 | 385 |
Greece | 3107902 | 6491 | 0.002088547 | 2559183 | 9476029 | 0 | 0.003910232 | 0.006915661 | 0 | 492728 | 1145046 | 0 | 250015 | 2946 | 6796 | 0 | 361 | 0.005978958 | 0.005935133 | 0 | 0.001443913 | 14 | 28 |
Hungary | 41012350 | 27450 | 0.000669311 | 4785260 | 5483667 | 0 | 0.003591236 | 0.007310437 | 0 | 118829 | 495768 | 0 | 270274 | 4853 | 3857 | 0 | 330 | 0.040840199 | 0.007779849 | 0 | 0.001220983 | 14 | 33 |
Ireland | 3250908 | 6757 | 0.002078496 | 3352323 | 9413972 | 278245 | 0.003129173 | 0.00624168 | 0.00127226 | 112337 | 714141 | 1651434 | 221386 | 1848 | 2252 | 16293 | 118 | 0.016450502 | 0.003153439 | 0.009865971 | 0.000533006 | 20 | 38 |
Italy | 12463432 | 16462 | 0.001320824 | 1987326 | 31117509 | 0 | 0.004668082 | 0.005592125 | 0 | 877504 | 2604995 | 0 | 1113346 | 11749 | 36148 | 0 | 818 | 0.013389113 | 0.013876418 | 0 | 0.000734722 | 74 | 63 |
Latvia | 3063840 | 4027 | 0.001314364 | 491701 | 2821714 | 0 | 0.004234281 | 0.008449829 | 0 | 18212 | 206307 | 0 | 37048 | 668 | 1375 | 0 | 63 | 0.036679113 | 0.006664825 | 0 | 0.001700497 | 89 | 9 |
Lithuania | 3025380 | 5056 | 0.001671195 | 685542 | 5065658 | 9102070 | 0.003958911 | 0.007301519 | 0.001649515 | 37622 | 484805 | 41034 | 112397 | 820 | 2767 | 127 | 176 | 0.021795758 | 0.005707449 | 0.003094994 | 0.001565878 | 41 | 11 |
Luxembourg | 439714 | 628 | 0.001428201 | 181085 | 823972 | 0 | 0.003633653 | 0.005151874 | 0 | 11059 | 39807 | 0 | 14448 | 571 | 180 | 0 | 17 | 0.051632155 | 0.004521818 | 0 | 0.001176633 | 2 | 1 |
Malta | 417073 | 605 | 0.001450585 | 206924 | 645827 | 0 | 0.002860954 | 0.004848048 | 0 | 7744 | 33713 | 0 | 9186 | 178 | 148 | 0 | 5 | 0.022985537 | 0.004389998 | 0 | 0.000544307 | 2 | 1 |
Netherlands | 14557284 | 23801 | 0.001634989 | 12745193 | 15404762 | 0 | 0.002615025 | 0.007486971 | 0 | 490423 | 1010537 | 0 | 406345 | 6942 | 4547 | 0 | 308 | 0.014155127 | 0.004499588 | 0 | 0.000757977 | 80 | 147 |
Poland | 203836052 | 63752 | 0.000312761 | 27175740 | 25070807 | 0 | 0.002600224 | 0.008385929 | 0 | 1069545 | 2560968 | 0 | 888768 | 842 | 19079 | 0 | 1136 | 0.000787251 | 0.007449917 | 0 | 0.001278174 | 154 | 172 |
Portugal | 1518762 | 4985 | 0.003282279 | 1990257 | 6017068 | 0 | 0.003294047 | 0.006165794 | 0 | 221507 | 714886 | 0 | 198458 | 2507 | 3653 | 0 | 162 | 0.011317927 | 0.005109906 | 0 | 0.000816294 | 11 | 12 |
Romania | 36420337 | 58883 | 0.001616762 | 3636828 | 12931412 | 1093883826 | 0.004134097 | 0.00773334 | 0.001400099 | 222020 | 1339325 | 21733061 | 375102 | 5125 | 7857 | 70746 | 513 | 0.023083506 | 0.005866388 | 0.003255225 | 0.001367628 | 21 | 40 |
Slovakia | 1971329 | 3352 | 0.001700376 | 640942 | 1798295 | 0 | 0.003516699 | 0.007978669 | 0 | 42499 | 329754 | 0 | 93322 | 1331 | 2027 | 0 | 104 | 0.031318384 | 0.006147007 | 0 | 0.001114421 | 16 | 24 |
Slovenia | 724668 | 1403 | 0.001936059 | 462528 | 2037178 | 0 | 0.003353311 | 0.006032364 | 0 | 30051 | 164647 | 0 | 34445 | 1520 | 793 | 0 | 26 | 0.05058068 | 0.004816365 | 0 | 0.000754827 | 3 | 5 |
Spain | 6639002 | 11904 | 0.001793041 | 8574010 | 47595074 | 0 | 0.003606247 | 0.003745682 | 0 | 2155115 | 1775339 | 0 | 842382 | 39383 | 6112 | 0 | 503 | 0.018274199 | 0.003442723 | 0 | 0.000597116 | 51 | 82 |
Sweden | 11757565 | 12977 | 0.001103715 | 5540266 | 15829378 | 0 | 0.004684613 | 0.00741501 | 0 | 175333 | 1106486 | 0 | 486125 | 4056 | 6128 | 0 | 405 | 0.023133124 | 0.005538254 | 0 | 0.000833119 | 87 | 55 |
Iceland | 291908 | 589 | 0.002017759 | 215411 | 679537 | 4620010 | 0.003574562 | 0.006076196 | 0.004255618 | 5203 | 22964 | 78449 | 4668 | 147 | 245 | 1095 | 7 | 0.028252931 | 0.010668873 | 0.013958113 | 0.001499572 | 4 | 5 |
Liechtenstein | 50186 | 48 | 0.000956442 | 11568 | 21397 | 0 | 0.004062932 | 0.006169089 | 0 | 478 | 1406 | 0 | 548 | 25 | 10 | 0 | 1 | 0.052301255 | 0.007112376 | 0 | 0.001824818 | 1 | 0 |
Norway | 4367100 | 8605 | 0.001970415 | 2909307 | 6765469 | 0 | 0.004536476 | 0.009467784 | 0 | 89193 | 539291 | 0 | 223306 | 2505 | 3472 | 0 | 179 | 0.028085164 | 0.006438083 | 0 | 0.000801591 | 27 | 18 |
Total EU | 446259356 | 376548 | 0.000843787 | 134975255 | 422385682 | 2747202678 | 0.003136642 | 0.005407563 | 0.001506023 | 9884807 | 25698879 | 47798575 | 9101538 | 291905 | 148200 | 113730 | 8228 | 0.029530673 | 0.005766789 | 0.00237936 | 0.000904023 | 1868 | 1822 |
Total EEA | 450968550 | 385790 | 0.00085547 | 138111541 | 429852085 | 2751822688 | 0.00316689 | 0.005472562 | 0.001510639 | 9979681 | 26262540 | 47877024 | 9330060 | 294582 | 151927 | 114825 | 8415 | 0.029518178 | 0.005784932 | 0.002398332 | 0.000901923 | 1900 | 1845 |
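The click-through rates (CTR) in this table are simply clicks divided by impressions. As a quick worked check, recomputing the Austria state-affiliated media label CTR from the first two columns:

```python
sam_impressions = 3_705_075  # Austria: impressions of the SAM label (from table)
sam_clicks = 5_771           # Austria: clicks on the SAM label (from table)

ctr = sam_clicks / sam_impressions
print(f"{ctr:.9f}")  # 0.001557593, matching the reported CTR for Austria
```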
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
We are pleased to report metrics on the four new general media literacy and critical thinking skills campaigns in France, Georgia, Moldova, and Portugal, as well as the existing permanent campaigns that ran through the reporting period in Denmark, Finland, Ireland, Italy, Spain, Sweden, and the Netherlands.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
Germany Regional Elections 2024 (Saxony, Thuringia, Brandenburg): From 8 Aug 2024, we launched an in-app Election Centre to provide users with up-to-date information about the German regional elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).
Moldova Presidential Election and EU Referendum 2024: From 6 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Moldova presidential election and EU referendum. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation StopFals!
Georgia Parliamentary Election 2024: From 16 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Georgia parliamentary election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Fact Check Georgia.
Bosnia Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bosnian regional elections, which contained a section about spotting misinformation.
Lithuania Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Lithuanian parliamentary elections, which contained a section about spotting misinformation.
Czechia Regional Elections 2024: From 13 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Czechia regional elections, which contained a section about spotting misinformation.
Bulgaria Parliamentary Election 2024: From 1 Oct 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bulgaria parliamentary election, which contained a section about spotting misinformation.
Romania Presidential and Parliamentary Election 2024: From 11 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Romanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Funky Citizens. [On 6 Dec 2024, following the Constitutional Court's decision to annul the first round of the presidential election, we updated our in-app Election Centre to guide users on rapidly changing events].
Ireland General Election: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Irish general election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.
Iceland Parliamentary Election 2024: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Iceland parliamentary election, which contained a section about spotting misinformation.
Croatia Presidential Election 2024: From 6 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Croatia presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Faktograf.
Germany Federal Election 2025: From 16 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).
(II) Election Speaker Series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova.
- France: Agence France-Presse (AFP)
- Germany: German Press Agency (dpa)
- Austria: German Press Agency (dpa)
- Lithuania: Logically Facts
- Romania: Funky Citizens
- Ireland: Logically Facts
- Croatia: Faktograf
- Georgia: FactCheck Georgia
- Moldova: Stop Fals!
We also rolled out four new ongoing general media literacy and critical thinking skills campaigns in collaboration with our fact-checking and media literacy partners:
- France: Agence France-Presse (AFP)
- Portugal: Polígrafo
- Georgia: Fact Check Georgia
- Moldova: StopFals!
This brings the number of general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).
We continue to run media literacy campaigns about the war in Ukraine with our partners:
- Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania
- fakenews.pl: Poland
- Correctiv: Germany, Austria
(VI) Climate literacy.
- Our climate change search intervention tool is available in 23 official EU languages (plus Norwegian and Icelandic for EEA users). It redirects users looking for climate change-related content to authoritative information and encourages them to report any potential misinformation they see.
- In April 2024, in partnership with The Mary Robinson Centre, TikTok launched the TikTok Youth Climate Leaders Alliance, a programme aimed at 18-30-year-olds looking to make significant changes in the face of the climate crisis.
- Actively participated in the UN COP29 climate change summit by:
- Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
- Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
- Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
- Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate related content between 29th October and 25th November, which were viewed 400k times.
- As of August 2024, popular hashtags #ClimateChange, #SustainableLiving, and #ClimateAction have more than 800,000 associated posts on TikTok, combined.
SLI 17.2.1
Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.
Country | Total number of impressions of H5 Page between July 1 and December 31 2024 | Number of impressions of search intervention | Number of clicks on search intervention | Click through rate of the search intervention |
---|---|---|---|---|
France | 72861 | 229676 | 1370 | 0.60% |
Portugal | 3400 | 107964 | 426 | 0.39% |
Denmark | 1540 | 10854 | 30 | 0.28% |
Netherlands | 2492 | 64241 | 226 | 0.35% |
Ireland | 1320 | 14282 | 46 | 0.32% |
Finland | 595 | 3725 | 25 | 0.67% |
Sweden | 1197 | 13444 | 64 | 0.48% |
Spain | 26213 | 1253955 | 3220 | 0.26% |
Italy | 1948 | 41297 | 181 | 0.44% |
Austria and Germany | 33220 | 15072256 | 45865 | 0.30% |
Bulgaria | 741 | 309132 | 1095 | 0.35% |
Croatia | 811 | 449332 | 1452 | 0.32% |
Czech Republic | 1025 | 954741 | 1722 | 0.18% |
Slovenia | 286 | 118972 | 407 | 0.34% |
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
- We ran 14 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
- Austria: Deutsche Presse-Agentur (dpa)
- Croatia: Faktograf
- France: Agence France-Presse (AFP)
- Germany (regional elections): Deutsche Presse-Agentur (dpa)
- Germany (federal election): Deutsche Presse-Agentur (dpa)
- Ireland: The Journal
- Romania: Funky Citizens
- 1 in the EEA (Iceland)
- 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
- Georgia: Fact Check Georgia
- Moldova: StopFals!
- Election speaker series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova.
- France: Agence France-Presse (AFP)
- Germany: German Press Agency (dpa)
- Austria: German Press Agency (dpa)
- Lithuania: Logically Facts
- Romania: Funky Citizens
- Ireland: Logically Facts
- Croatia: Faktograf
- Georgia: FactCheck Georgia
- Moldova: Stop Fals!
We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners Correctiv in Austria and Germany, fakenews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia, and Bulgaria.
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Onboarded two new fact-checking partners in wider Europe:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia
- Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models.
- Participated as members of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
- Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
- Participated in the EC Technical Roundtable on data access in December 2024.
- Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, and attendance at external events to share expertise and support continued professional learning. For example:
- In the lead-up to certain elections, we invite suitably qualified external local/regional experts, as part of our Election Speaker Series. Sharing their market expertise with our internal teams provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.
- In June 2024, 12 members of our Trust & Safety team (including leaders of our fact-checking program) attended GlobalFact11 and participated in an on-the-record mainstage presentation, answering questions about our misinformation strategy and partnerships with professional fact-checkers.
- Continued to participate in, and co-chair, the working group on Elections.
- In October, we sponsored, attended, and presented at Disinfo24, the annual EU DisinfoLab conference, in Riga.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
- Automated Review. We place considerable emphasis on proactive detection to remove violative content. Content uploaded to the platform is typically first reviewed by our automated moderation technology, which looks at a variety of signals across content, including keywords, images, captions, and audio, to identify violating content. We work with various external experts, like our fact-checking partners, to inform our keyword lists. If our automated moderation technology identifies content that is a potential violation, it will either be automatically removed from the platform or flagged for further review by our human moderation teams; a simplified sketch of this triage flow appears after this list. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied only when violations are the most clear-cut. We also carry out targeted sweeps of certain types of violative content, including harmful misinformation, where we have identified specific risks or where our fact-checking partners or other experts have alerted us to specific risks.
- Human Moderation. While some misinformation can be enforced against through technology alone (for example, repetitions of previously debunked content), misinformation evolves quickly and is highly nuanced. That is why we have misinformation moderators with enhanced training and access to tools like our global repository of previously fact-checked claims from our IFCN-accredited fact-checking partners, who help assess the accuracy of content. We also have teams on the ground who partner with experts to prioritise local context and nuance. We may also issue guidance to our moderation teams to help them more easily spot and take swift action on violating content. Human moderation also occurs if a video gains popularity or has been reported. Community members can report violations in-app and on our website. Our fact-checking partners and other stakeholders can also report potentially violating content to us directly.
- For content that does not violate our CGs but may negatively impact the authenticity of the platform, we reduce its prominence in the For You feed and/or label it. The types of misinformation we may make ineligible for the For You feed are made clear to users here: general conspiracy theories, unverified information related to an emergency or unfolding event, and potential high-harm misinformation that is undergoing a fact-check. We also label accounts and content of state-affiliated media entities to empower users to consider the sources of information. Our moderators take additional precautions to review videos as they rise in popularity, to reduce the likelihood of inappropriate content entering our recommendation system.
- Providing access to authoritative information is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centres with informative resources from authoritative third-parties in response to global or local events, adding public service announcements on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information.
- We collaborate with Irrational Labs to develop and implement specialised prompts that encourage users to consider unverified content before sharing it (as outlined in QRE 21.3.1).
- Yad Vashem created an enrichment program on the Holocaust for our Trust and Safety team. The five-week program aimed to give our team a deeper understanding of the Holocaust, its lessons, and misinformation related to antisemitism and hatred.
- We worked with local/regional experts through our Election Speaker Series to ensure their insights and expertise informed our internal teams ahead of particular elections throughout 2024.
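To make the automated-review flow above concrete, here is a minimal sketch of the triage logic, assuming purely hypothetical signal names and thresholds; nothing below reflects TikTok's actual systems.

```python
# Hypothetical triage for newly uploaded content: clear-cut violations are
# removed automatically; nuanced cases go to human moderators. All signal
# names and thresholds below are illustrative, not TikTok's actual values.
from dataclasses import dataclass

@dataclass
class ModerationSignals:
    keyword_match: bool        # hit on an expert-informed keyword list
    model_confidence: float    # violation probability from automated review (0..1)
    previously_debunked: bool  # matches a claim already fact-checked as false

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: only the most clear-cut cases

def triage(signals: ModerationSignals) -> str:
    """Return the next moderation action for a piece of flagged content."""
    if signals.previously_debunked or signals.model_confidence >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"       # clear-cut violation
    if signals.keyword_match or signals.model_confidence >= 0.5:
        return "human_review"      # nuanced case for misinformation moderators
    return "no_action"             # nothing actionable detected

print(triage(ModerationSignals(keyword_match=True,
                               model_confidence=0.62,
                               previously_debunked=False)))  # -> human_review
```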
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
- User interactions: Content you like, share, comment on, and watch in full or skip, as well as accounts of followers that you follow back.
- Content information: Sounds, hashtags, number of views, and the country in which the content was published.
- User information: Device settings, language preference, location, time zone and day, and device type.
- Not interested: Users can long-press on the video in their For You feed and select ‘Not interested’ from the pop-up menu. This will let us know they are not interested in this type of content and we will limit how much of that content we recommend in their feed.
- Video keyword filters: They can add keywords – both words or hashtags – they’d like to filter from their For You feed.
- For You refresh: To help users discover new content, they can refresh their For You feed and explore entirely new sides of TikTok. (A simplified sketch of these controls follows this list.)
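A rough sketch of how the first two of these controls could act on a scored candidate feed follows; the data shapes and the down-weighting factor are illustrative assumptions, not TikTok's implementation.

```python
# Illustrative only: apply a user's keyword filters and "Not interested"
# feedback to a scored candidate feed. The data shapes and the down-weighting
# factor are assumptions, not TikTok's implementation.
def apply_feed_controls(candidates, filtered_keywords, not_interested_topics):
    """candidates: list of dicts with 'caption', 'topic' and 'score' keys."""
    result = []
    for video in candidates:
        caption = video["caption"].lower()
        if any(kw.lower() in caption for kw in filtered_keywords):
            continue                 # keyword filter: drop the video entirely
        score = video["score"]
        if video["topic"] in not_interested_topics:
            score *= 0.2             # "Not interested": limit, rather than block
        result.append({**video, "score": score})
    return sorted(result, key=lambda v: v["score"], reverse=True)

feed = apply_feed_controls(
    [{"caption": "Try this #cleanse", "topic": "wellness", "score": 0.9},
     {"caption": "Street food tour", "topic": "food", "score": 0.7}],
    filtered_keywords=["#cleanse"],
    not_interested_topics={"gaming"},
)
print([v["caption"] for v in feed])  # -> ['Street food tour']
```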
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
The categories of content we may make ineligible for the For You feed include:
- Misinformation
- Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
- Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
- Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
- Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
- Unverified claims related to an emergency or unfolding event.
- Potential high-harm misinformation while it is undergoing a fact-checking review.
- Civic and Election Integrity
- Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
- Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.
- Fake Engagement
- Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.
Our fact-checking partners support this work by providing:
- Proactive insight reports that flag new and evolving claims they’re seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
- A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions.
We are also committed to civic and election integrity and mitigating the spread of false or misleading content about an electoral or civic process. We work with national electoral commissions, media literacy bodies and civil society organisations to ensure we are providing our community with accurate up-to-date information about an election through our in-app election information centers, election guides, search interventions and content labels.
SLI 18.1.1
Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.
The share cancel rate (%) following the unverified content label share warning pop-up is the percentage of users who do not share a video after seeing the pop-up. This metric is based on the approximate location of the users who engaged with these tools.
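To illustrate the arithmetic behind this metric, the short example below uses invented counts that happen to reproduce a figure like Austria's 31.80%.

```python
# Share cancel rate = users who abandon a share after seeing the warning
# pop-up, divided by all users shown the pop-up. Counts are invented purely
# to illustrate the arithmetic.
popup_impressions = 10_000  # users shown the unverified-content share warning
shares_cancelled = 3_180    # of those, users who chose not to share

share_cancel_rate = shares_cancelled / popup_impressions
print(f"{share_cancel_rate:.2%}")  # -> 31.80%
```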
Country | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) |
---|---|
Austria | 31.80% |
Belgium | 33.80% |
Bulgaria | 34.00% |
Croatia | 33.70% |
Cyprus | 32.90% |
Czech Republic | 29.50% |
Denmark | 30.20% |
Estonia | 28.50% |
Finland | 27.20% |
France | 37.10% |
Germany | 30.10% |
Greece | 32.10% |
Hungary | 31.40% |
Ireland | 29.60% |
Italy | 37.70% |
Latvia | 30.90% |
Lithuania | 30.80% |
Luxembourg | 33.60% |
Malta | 35.40% |
Netherlands | 27.80% |
Poland | 28.90% |
Portugal | 33.10% |
Romania | 30.10% |
Slovakia | 28.90% |
Slovenia | 33.30% |
Spain | 34.10% |
Sweden | 29.40% |
Iceland | 27.90% |
Liechtenstein | 19.60% |
Norway | 25.40% |
Total EU | 32.20% |
Total EEA | 32.10% |
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
Under our Integrity & Authenticity policies, we remove content including:
- Misinformation
- Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
- Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
- Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
- Conspiracy theories that name and attack individual people.
- Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.
- Civic and Election Integrity
- Election misinformation, including:
- How, when, and where to vote or register to vote;
- Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
- Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
- Final results or outcome of an election.
- Edited Media and AI-Generated Content (AIGC)
- Realistic-appearing people under the age of 18.
- The likeness of adult private figures, if we become aware it was used without their permission.
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
- A crisis event, such as a conflict or natural disaster.
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour;
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
- being politically endorsed or condemned by an individual or group.
- Fake Engagement
- Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
- Providing instructions on how to artificially increase engagement on TikTok.
We also make the following categories of content ineligible for the For You feed:
- Misinformation
- Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
- Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
- Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
- Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
- Unverified claims related to an emergency or unfolding event
- Potential high-harm misinformation while it is undergoing a fact-checking review
- Civic and Election Integrity
- Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
- Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill
- Fake Engagement
- Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content
However, misinformation is different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. So while we use machine learning models to help detect potential misinformation, ultimately our approach today is to have our moderation teams assess, confirm, and remove misinformation violations. We have misinformation moderators who have enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims, which helps misinformation moderators make swift and accurate decisions, and direct access to our fact-checking partners, who help assess the accuracy of new content.
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
The number of views of videos removed for violating each of these policies is based on the approximate location of the user.
We also updated the methodology used to calculate the number of videos made ineligible for the For You feed under our Misinformation policy.
Country | Number of videos removed because of violation of Misinformation policy | Number of views of videos removed because of violation of Misinformation policy | Number of videos removed because of violation of Civic and Election Integrity policy | Number of views of videos removed because of violation of Civic and Election Integrity policy | Number of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy | Number of views of videos removed because of violation of Edited Media and AI-Generated Content (AIGC) policy | Number of videos ineligible for promotion under Misinformation policy |
---|---|---|---|---|---|---|---|
Austria | 2888 | 1313102 | 472 | 843182 | 414 | 216433 | 1696 |
Belgium | 3902 | 2844929 | 1002 | 107828 | 2092 | 1119223 | 2688 |
Bulgaria | 1568 | 5435715 | 182 | 110186 | 227 | 5977 | 1600 |
Croatia | 789 | 973202 | 64 | 3753 | 1361 | 58579 | 616 |
Cyprus | 511 | 1241327 | 86 | 1333 | 948 | 19441 | 326 |
Czech Republic | 2720 | 4705302 | 275 | 25952 | 465 | 8287531 | 6470 |
Denmark | 1455 | 2979180 | 335 | 14082 | 315 | 2742457 | 1157 |
Estonia | 319 | 77555 | 41 | 866 | 208 | 2063380 | 453 |
Finland | 984 | 1784968 | 199 | 1944 | 716 | 464824 | 811 |
France | 44354 | 61693484 | 4390 | 8369126 | 8563 | 312078908 | 24035 |
Germany | 50335 | 162220869 | 12231 | 3510858 | 11199 | 23904234 | 30934 |
Greece | 4198 | 4431258 | 649 | 1726365 | 8742 | 145950 | 1735 |
Hungary | 2002 | 9947587 | 308 | 273247 | 261 | 86870 | 957 |
Ireland | 4676 | 4802257 | 2051 | 568596 | 1063 | 103199 | 2154 |
Italy | 21035 | 39078480 | 3910 | 1578217 | 3574 | 1892355 | 19481 |
Latvia | 694 | 3745925 | 48 | 9 | 129 | 4519 | 459 |
Lithuania | 520 | 1122197 | 57 | 26 | 203 | 25410 | 647 |
Luxembourg | 279 | 162787 | 66 | 2180 | 223 | 8729 | 121 |
Malta | 168 | 5599 | 70 | 97 | 183 | 5811847 | 173 |
Netherlands | 5422 | 2811880 | 1046 | 55695 | 1883 | 9080526 | 6189 |
Poland | 13028 | 59545691 | 768 | 3942081 | 772 | 13404186 | 9872 |
Portugal | 2629 | 31071224 | 535 | 28529 | 1010 | 339124 | 1400 |
Romania | 14103 | 64183832 | 4276 | 33123122 | 937 | 623525 | 11739 |
Slovakia | 1365 | 4714713 | 41 | 677 | 98 | 2014 | 1472 |
Slovenia | 574 | 22494 | 28 | 111 | 66 | 605 | 346 |
Spain | 22581 | 37024505 | 2126 | 3554918 | 4392 | 21882268 | 54592 |
Sweden | 3489 | 9893681 | 633 | 6424 | 762 | 377862 | 2423 |
Iceland | 122 | 153566 | 26 | 19 | 85 | 6113 | 77 |
Liechtenstein | 35 | 0 | 20 | 0 | 48 | 525 | 33 |
Norway | 1798 | 5158745 | 313 | 1152478 | 679 | 139984 | 1200 |
Total EU | 206588 | 517833743 | 35889 | 57849404 | 50806 | 404749976 | 184546 |
Total EEA | 208543 | 523146054 | 36248 | 59001901 | 51618 | 404896598 | 185856 |
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
Features informed by this research include:
- specialised prompts for unverified content, which alert viewers to unverified content identified during an emergency or unfolding event; and
- our state-controlled media label, which brings transparency to our community in relation to state-affiliated media entities and encourages users to consider the reliability of the source.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- At TikTok, we strive to bring more transparency to how we protect our platform. We continue to increase the reports we voluntarily publish, the depth of data we disclose, and the frequency with which we publish.
- In December 2024, we published our newest collection of transparency reports, including our: Community Guidelines Enforcement Report (July-September 2024); Government Removal Requests Report; Law Enforcement Information Requests Report, IP Removal Requests Report; and most recent Covert Influence Operations Reports, where we shared information about the influence networks we disrupted in October and November 2024.
- We also worked to make it easier for people to independently study our data and platform. For example, through:
- our Research Tools which empower over 500 research teams to independently study our platform.
- the downloadable data file in the Community Guidelines Enforcement Report offering access to aggregated data, including removal data by policy category, for the 50 markets with the highest volumes of removed content.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
- User interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts of followers that users follow back);
- Content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and
- User information (e.g. device settings, language preferences, location, time zone and day, and device types).
The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
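The two mechanisms in this paragraph, weighted interaction signals and predictions borrowed from users with similar tastes, can be sketched roughly as follows. The signal weights, similarity threshold, and function names are illustrative assumptions, not TikTok's actual model.

```python
# Toy sketch of the two mechanisms described above. The signal weights,
# similarity threshold, and function names are illustrative assumptions.
SIGNAL_WEIGHTS = {"watched_in_full": 3.0, "shared": 2.5, "liked": 2.0,
                  "commented": 1.5, "skipped": -2.0}

def interest_score(interactions: dict) -> float:
    """Weighted sum over one video's interaction signals (0/1 occurrences)."""
    return sum(SIGNAL_WEIGHTS[name] * seen for name, seen in interactions.items())

def recommend_from_similar_users(user_likes: set, other_users_likes: list) -> set:
    """Suggest videos liked by users whose like-history overlaps this user's."""
    suggestions = set()
    for other in other_users_likes:
        overlap = len(user_likes & other)
        if overlap >= 0.6 * len(user_likes):   # assumed similarity threshold
            suggestions |= other - user_likes  # their likes this user hasn't seen
    return suggestions

# The worked example from the paragraph: likes {1, 2, 3} vs {1, 2, 3, 4, 5}.
print(recommend_from_similar_users({1, 2, 3}, [{1, 2, 3, 4, 5}]))  # -> {4, 5}
```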
We also provide users with controls to shape their recommendations:
- Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
- Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here).
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
The number of users who have filtered hashtags or a keyword to set preferences for For You feed, the number of times users clicked “not interested” in relation to the For You feed, and the number of times users clicked on the For You Feed Refresh are all based on the approximate location of the users that engaged with these tools.
The number for videos tagged with AIGC label includes both automatic and creator-generated labeling.
Country | Number of users that filtered hashtags or words | Number of users that clicked on "not interested" | Number of times users clicked on the For You Feed Refresh | Number of videos tagged with AIGC label |
---|---|---|---|---|
Austria | 53057 | 886639 | 52559 | 149390 |
Belgium | 67734 | 1322561 | 83721 | 241538 |
Bulgaria | 34081 | 744333 | 38568 | 153704 |
Croatia | 20196 | 486259 | 23134 | 46131 |
Cyprus | 7895 | 176600 | 13456 | 62428 |
Czech Republic | 45392 | 753417 | 35791 | 140826 |
Denmark | 35294 | 573821 | 27747 | 80022 |
Estonia | 11648 | 151267 | 11558 | 30907 |
Finland | 45185 | 586897 | 43657 | 109189 |
France | 332521 | 7939397 | 486316 | 1832452 |
Germany | 503549 | 7977800 | 648033 | 1883751 |
Greece | 52519 | 1344879 | 68577 | 214464 |
Hungary | 46966 | 1020692 | 28543 | 138023 |
Ireland | 54952 | 801523 | 52714 | 67672 |
Italy | 261272 | 6455485 | 295958 | 1140570 |
Latvia | 15527 | 279241 | 24888 | 118117 |
Lithuania | 21247 | 325564 | 23209 | 64359 |
Luxembourg | 4519 | 76244 | 5508 | 44220 |
Malta | 3137 | 77760 | 4923 | 15544 |
Netherlands | 135944 | 2081920 | 150651 | 231651 |
Poland | 196496 | 3383567 | 175988 | 519883 |
Portugal | 57677 | 1152515 | 61327 | 216364 |
Romania | 85551 | 2629162 | 165990 | 325318 |
Slovakia | 18482 | 347681 | 13822 | 50322 |
Slovenia | 9983 | 177990 | 19591 | 17100 |
Spain | 275604 | 6889325 | 381588 | 1170610 |
Sweden | 82868 | 1371265 | 111934 | 268743 |
Iceland | 4720 | 57250 | 3175 | 8073 |
Liechtenstein | 129 | 3563 | 291 | 418 |
Norway | 48188 | 685406 | 63483 | 101728 |
Total EU | 2479296 | 50013804 | 3049751 | 9333298 |
Total EEA | 2532333 | 50760023 | 3116700 | 9443517 |
Commitment 21
Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.
We signed up to the following measures of this commitment
Measure 21.1 Measure 21.2 Measure 21.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Onboarded two new fact-checking partners in wider Europe:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia
- Expanded our fact-checking coverage to a number of wider-European and EU candidate countries:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia
- Kazakhstan: Reuters
- Moldova: AFP/Reuters
- Serbia: Lead Stories
- We ran 14 temporary media literacy election integrity campaigns in advance of regional elections, most in collaboration with our fact-checking and media literacy partners:
- 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
- Austria: Deutsche Presse-Agentur (dpa)
- Croatia: Faktograf
- France: Agence France-Presse (AFP)
- Germany (regional elections): Deutsche Presse-Agentur (dpa)
- Germany (federal election): Deutsche Presse-Agentur (dpa)
- Ireland: The Journal
- Lithuania: N/A
- Romania: Funky Citizens
- 1 in EEA
- Iceland: N/A
- 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
- Bosnia: N/A
- Bulgaria: N/A
- Czechia: N/A
- Georgia: Fact Check Georgia
- Moldova: StopFals!
- Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
- Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia)
- Portugal Wildfires
- Spanish floods
- Mayotte Cyclone
- Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages and Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
- We partner with fact checkers to assess the accuracy of content. Sometimes, our fact-checking partners determine that content cannot be confirmed or checks are inconclusive (especially during unfolding events). Where our fact-checking partners provide us with a rating that demonstrates the claim cannot yet be verified, we may use our unverified content label to inform viewers via a banner that a video contains unverified content, in an effort to raise user awareness about content credibility.
- Building on our new AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns that teach our community how to spot and label AI-generated content, developed with guidance from expert organisations like MediaWise and WITNESS and rolled out in markets including Brazil, Germany, France, Mexico and the UK.
- Our AIGC Transparency Campaign informed by WITNESS has reached 80M users globally, including more than 8.5M and 9.5M in Germany and France respectively.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 21.1
Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.
QRE 21.1.1
Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.
- Agence France-Presse (AFP)
- dpa Deutsche Presse-Agentur
- Demagog
- Facta
- Fact Check Georgia
- Faktograf
- Internews Kosova
- Lead Stories
- Logically Facts
- Newtral
- Poligrafo
- Reuters
- Science Feedback
- Teyit
Enforcement of misinformation policies. Our fact-checking partners play a critical role in helping us enforce our misinformation policies, which aim to promote a trustworthy and authentic experience for our users. We consider context and fact-checking to be key to consistently and accurately enforcing these policies, so, while we use machine learning models to help detect potential misinformation, we have our misinformation moderators assess, confirm, and take action on harmful misinformation. As part of this process, our moderators can access a repository of previously fact-checked claims, and they are able to provide content to our expert fact-checking partners for further evaluation. Where fact-checking partners advise that content is false, our moderators take measures to assess and remove it from our platform. Our response to QRE 31.1.1 provides further insight into the way in which fact-checking partners are involved in this process.
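As a highly simplified illustration of the repository lookup step in this workflow, the sketch below matches an incoming claim against previously fact-checked claims; the normalisation rule, verdicts, and routing are hypothetical.

```python
# Highly simplified sketch of the repository lookup step: match an incoming
# claim against previously fact-checked claims before escalating to partners.
# The normalisation rule, verdicts, and routing are hypothetical.
import re

def normalise(text: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

FACT_CHECK_REPOSITORY = {  # illustrative entries only
    normalise("Ballots are still being counted in several regions"): "unverified",
    normalise("Drinking bleach cures the flu"): "false",
}

def route_claim(claim: str) -> str:
    verdict = FACT_CHECK_REPOSITORY.get(normalise(claim))
    if verdict == "false":
        return "remove"                   # previously debunked: act swiftly
    if verdict == "unverified":
        return "apply_unverified_label"   # fact-check was inconclusive
    return "escalate_to_fact_checkers"    # new claim: send for evaluation

print(route_claim("Drinking bleach cures the flu!"))  # -> remove
```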
- In-app tools related to specific topics:
- Election integrity. We have launched campaigns in advance of several major elections aimed at educating the public about the voting process, which encourage users to fact-check information with our fact-checking partners. For example, the election integrity campaign we rolled out in advance of the French legislative elections in June 2024 included a search intervention and an in-app Election Centre. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP). In total, during the reporting period, we ran 14 temporary media literacy election integrity campaigns in advance of regional elections.
- Climate Change. We launched a search intervention which redirects users seeking out climate change-related content to authoritative information. We worked with the UN to provide the authoritative information (see our newsroom post here).
- COP29: We launched two global features (a video notice tag and a search intervention guide) to point users to authoritative climate-related content between 29 October and 25 November; these were viewed 400k times.
- Natural disasters: Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
- Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia)
- Portugal Wildfires
- Spanish floods
- Mayotte Cyclone
- User awareness of our fact-checking partnerships and labels. We have created pages on our Safety Center & Transparency Center to raise users’ awareness about our fact-checking program and labels and to support the work of our fact-checking partners.
SLI 21.1.1
Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.
Country | % video removals under Misinformation policy | % proactive video removals under Misinformation policy | % video removals before any views under Misinformation policy | % video removals within 24h under Misinformation policy | % video removals under Civic and Election Integrity policy | % proactive video removals under Civic and Election Integrity policy | % video removals before any views under Civic and Election Integrity policy | % video removals within 24h under Civic and Election Integrity policy | % video removals under Synthetic Media policy | % proactive video removals under Synthetic Media policy | % video removals before any views under Synthetic Media policy | % video removals within 24h under Synthetic Media policy | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Austria | 20.22% | 97.92% | 80.85% | 82.17% | 3.30% | 96.61% | 77.12% | 82.84% | 2.90% | 99.03% | 57.25% | 47.34% | 31.81% |
Belgium | 14.44% | 98.92% | 82.47% | 89.65% | 3.71% | 98.60% | 89.92% | 93.11% | 7.74% | 97.56% | 62.76% | 72.66% | 33.81% |
Bulgaria | 30.57% | 94.39% | 59.44% | 82.91% | 3.55% | 95.05% | 90.11% | 94.51% | 4.42% | 99.12% | 46.70% | 23.79% | 33.97% |
Croatia | 20.93% | 98.99% | 70.47% | 89.48% | 1.70% | 95.31% | 85.94% | 87.50% | 36.11% | 93.17% | 15.43% | 11.09% | 33.66% |
Cyprus | 18.42% | 95.69% | 71.62% | 82.97% | 3.10% | 98.84% | 83.72% | 82.56% | 34.17% | 93.78% | 30.70% | 6.86% | 32.91% |
Czech Republic | 25.19% | 91.84% | 53.20% | 90.92% | 2.55% | 98.18% | 94.18% | 94.91% | 4.31% | 97.20% | 48.82% | 70.75% | 29.52% |
Denmark | 8.25% | 96.91% | 73.47% | 83.09% | 1.90% | 97.61% | 94.63% | 96.72% | 1.79% | 98.10% | 48.57% | 59.05% | 30.20% |
Estonia | 18.74% | 99.37% | 75.86% | 93.10% | 2.41% | 97.56% | 82.93% | 87.80% | 12.22% | 96.63% | 59.13% | 74.52% | 28.53% |
Finland | 15.52% | 94.11% | 69.82% | 89.43% | 3.14% | 97.99% | 92.46% | 96.98% | 11.29% | 97.07% | 39.94% | 55.31% | 27.21% |
France | 22.45% | 99.24% | 86.89% | 95.58% | 2.22% | 97.95% | 90.05% | 96.54% | 4.33% | 96.10% | 46.50% | 47.45% | 37.13% |
Germany | 21.79% | 97.71% | 76.06% | 90.87% | 5.29% | 98.11% | 85.14% | 96.21% | 4.85% | 97.79% | 62.09% | 56.74% | 30.09% |
Greece | 17.26% | 96.86% | 74.92% | 92.28% | 2.67% | 98.77% | 96.46% | 98.15% | 35.94% | 89.87% | 27.90% | 10.17% | 32.05% |
Hungary | 28.88% | 90.51% | 63.49% | 86.26% | 4.44% | 91.88% | 82.47% | 95.13% | 3.77% | 98.47% | 55.56% | 57.47% | 31.38% |
Ireland | 22.17% | 93.76% | 61.18% | 88.43% | 9.73% | 86.01% | 24.38% | 96.34% | 5.04% | 92.76% | 52.30% | 60.11% | 29.59% |
Italy | 27.66% | 98.27% | 72.70% | 92.14% | 5.14% | 98.57% | 81.43% | 88.77% | 4.70% | 98.77% | 47.26% | 44.71% | 37.65% |
Latvia | 26.97% | 98.85% | 82.42% | 94.24% | 1.87% | 97.92% | 93.75% | 87.50% | 5.01% | 99.22% | 45.74% | 47.29% | 30.90% |
Lithuania | 23.16% | 99.23% | 87.50% | 94.42% | 2.54% | 100.00% | 92.98% | 91.23% | 9.04% | 98.03% | 47.78% | 48.28% | 30.80% |
Luxembourg | 9.39% | 98.92% | 88.53% | 86.38% | 2.22% | 96.97% | 92.42% | 98.48% | 7.51% | 96.86% | 50.67% | 41.70% | 33.64% |
Malta | 9.84% | 98.21% | 89.29% | 88.10% | 4.10% | 100.00% | 94.29% | 95.71% | 10.72% | 98.36% | 67.21% | 79.23% | 35.43% |
Netherlands | 16.62% | 99.19% | 86.32% | 89.45% | 3.21% | 99.43% | 91.01% | 94.46% | 5.77% | 98.67% | 60.65% | 67.71% | 27.79% |
Poland | 30.42% | 94.28% | 63.90% | 89.56% | 1.79% | 95.57% | 90.89% | 93.62% | 1.80% | 95.85% | 56.35% | 51.30% | 28.88% |
Portugal | 26.70% | 97.64% | 84.90% | 90.64% | 5.43% | 99.44% | 97.20% | 97.76% | 10.26% | 96.04% | 37.82% | 31.78% | 33.08% |
Romania | 41.05% | 91.73% | 62.51% | 82.05% | 12.45% | 78.02% | 27.78% | 49.79% | 2.73% | 96.80% | 37.89% | 24.97% | 30.08% |
Slovakia | 45.65% | 89.16% | 56.04% | 87.47% | 1.37% | 97.56% | 92.68% | 97.56% | 3.28% | 97.96% | 38.78% | 18.37% | 28.89% |
Slovenia | 22.94% | 99.30% | 79.09% | 95.82% | 1.12% | 100.00% | 89.29% | 92.86% | 2.64% | 100.00% | 57.58% | 60.61% | 33.33% |
Spain | 28.31% | 99.14% | 82.55% | 90.39% | 2.67% | 98.54% | 69.71% | 81.94% | 5.51% | 97.70% | 33.15% | 30.76% | 34.09% |
Sweden | 10.90% | 97.71% | 77.84% | 90.43% | 1.98% | 98.89% | 95.10% | 98.10% | 2.38% | 95.28% | 48.69% | 53.67% | 29.44% |
Iceland | 4.40% | 97.54% | 90.16% | 92.62% | 0.94% | 100.00% | 96.15% | 100.00% | 3.07% | 98.82% | 72.94% | 75.29% | 27.86% |
Liechtenstein | 3.11% | 100.00% | 100.00% | 91.43% | 1.78% | 100.00% | 100.00% | 100.00% | 4.26% | 97.92% | 68.75% | 60.42% | 19.61% |
Norway | 18.77% | 96.05% | 74.03% | 89.93% | 3.27% | 96.49% | 89.46% | 92.65% | 7.09% | 93.96% | 46.54% | 55.67% | 25.37% |
Total EU | 23.14% | 97.35% | 76.62% | 90.87% | 4.02% | 95.03% | 75.26% | 88.70% | 5.69% | 95.81% | 45.90% | 41.70% | 32.24% |
Total EEA | 23.01% | 97.34% | 76.61% | 90.86% | 4.00% | 95.05% | 75.41% | 88.75% | 5.69% | 95.79% | 45.97% | 41.96% | 32.13% |
SLI 21.1.2
When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.
Country | Number of videos tagged with the unverified content label | Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up) |
---|---|---|
Austria | 1875 | 31.81% |
Belgium | 2387 | 33.81% |
Bulgaria | 2428 | 33.97% |
Croatia | 532 | 33.66% |
Cyprus | 330 | 32.91% |
Czech Republic | 2431 | 29.52% |
Denmark | 2438 | 30.20% |
Estonia | 190 | 28.53% |
Finland | 1768 | 27.21% |
France | 24023 | 37.13% |
Germany | 28389 | 30.09% |
Greece | 3363 | 32.05% |
Hungary | 2683 | 31.38% |
Ireland | 1591 | 29.59% |
Italy | 23139 | 37.65% |
Latvia | 415 | 30.90% |
Lithuania | 389 | 30.80% |
Luxembourg | 135 | 33.64% |
Malta | 64 | 35.43% |
Netherlands | 4787 | 27.79% |
Poland | 12974 | 28.88% |
Portugal | 1921 | 33.08% |
Romania | 6708 | 30.08% |
Slovakia | 1229 | 28.89% |
Slovenia | 169 | 33.33% |
Spain | 25829 | 34.09% |
Sweden | 3207 | 29.44% |
Iceland | 49 | 27.86% |
Liechtenstein | 0 | 19.61% |
Norway | 1516 | 25.37% |
Total EU | 155394 | 32.24% |
Total EEA | 156959 | 32.13% |
Measure 21.3
Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.
QRE 21.3.1
Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.
Unverified content label. In 2021, we partnered with behavioural scientists Irrational Labs on the design and testing of the specialised prompts which encourage users to consider content that has been labelled as unverified before sharing it, as detailed in QRE 17.1.1. On testing the prompts, Irrational Labs found that viewers decreased the rate at which they shared videos by 24%, while likes on such unsubstantiated content also decreased by 7%. Their full report can be found here.
As mentioned above, we partner with a number of IFCN accredited fact-checkers in Europe, who assist with assessing the accuracy of certain content on our platform. Where our fact-checking partners determine that a video is not able to be confirmed or their fact-checks are inconclusive (which is sometimes the case, particularly during unfolding events or emergencies), we may apply our unverified content label to the video.
State-controlled media label. Since January 2023, we have been applying state-controlled media labels to accounts or content where there is evidence of clear editorial control and decision-making by members of the state. To inform our state-affiliated media policy, including the updates set out in this report, and our approach to making such designations, we consult with media experts, political scientists, academics, and representatives from international organisations and civil society across North and South America, Africa, Europe, the Middle East, Asia, and Australia. We continue to work with these experts to inform our global approach and expansion of the policy.
Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.1 Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In line with our DSA requirements, we continued to provide our community in the European Union with a dedicated ‘Report Illegal Content’ channel, enabling users to alert us to content they believe breaches the law, together with an appeals process for users who disagree with the outcome.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
- By ‘long-pressing’ (i.e., pressing and holding for around three seconds) on the video content and selecting the “Report” option.
- By selecting the “Share” button available on the right-hand side of the video content and then selecting the “Report” option.
Users do not need to be logged into an account on the platform to report content, and can also report video content via the TikTok website (by clicking on the “Report” button, which is prominently displayed in the upper right-hand corner of each video when hovering over it) or by means of our “Report Inappropriate content” webform, which is available in our Support Centre.
We are aware that harmful misinformation is not limited to video content and so users can also report a comment, a suggested search, a hashtag, a sound or an account, again specifically for harmful misinformation.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
We have sought to make our CGs as clear and comprehensive as possible and have put in place robust Quality Assurance processes (including steps such as review of moderation cases, flows, appeals and undertaking Root Cause Analyses).
Those who report suspected illegal content will be notified of our decision, including if we consider that the content is not illegal. Users who disagree can appeal those decisions using the appeals process.
We also note that whilst user reports are important, at TikTok we place considerable emphasis on proactive detection to remove violative content. We are proud that the vast majority of removed content is identified proactively before it is reported to us.
We are transparent with users in relation to appeals. We set out the options that may be available both to the user who reported the content and the creator of the affected content, where they disagree with the decision we have taken.
The integrity of our appeals systems is reinforced by the involvement of our trained human moderators, who can take context and nuance into consideration when deciding whether content is illegal or violates our CGs.
Our moderators review all appeals raised in relation to removed videos, removed comments, and banned accounts, and assess them against our policies. The consistency and overall integrity of this process are supported by the Quality Assurance measures described above, including auditing appeals and undertaking Root Cause Analyses.
If users who have submitted an appeal are still not satisfied with our decision, they can share feedback with us via the webform on TikTok.com. We continuously take user feedback into consideration to identify areas of improvement, including within the appeals process. Users may also have other legal rights in relation to decisions we make, as set out further here.
Commitment 24
Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.
We signed up to the following measures of this commitment
Measure 24.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to serve user notifications following action on a user’s account or content, which includes a clear explanation about the action taken and a simple way to appeal the decision taken.
- Continued to provide additional user transparency around our appeals processes (here)
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 24.1
Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.
QRE 24.1.1
Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.
We notify users when we take an enforcement action against their account or content, including:
- removal of, or restriction of access to, their content;
- a ban of the account;
- restriction of their access to a feature (such as LIVE); or
- restriction of their ability to monetise.
SLI 24.1.1
Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.
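The appeal success rate columns in the table below are simply the number of overturns divided by the number of appeals; for instance, Austria's misinformation row works out as follows.

```python
# Appeal success rate = overturned appeals / total appeals. The figures below
# are taken from the Austria / Misinformation row of the table that follows.
appeals, overturns = 619, 352
print(f"{overturns / appeals:.1%}")  # -> 56.9%
```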
Country | Number of appeals of videos removed for violation of Misinformation policy | Number of overturns following appeals for violation of Misinformation policy | Appeal success rate of videos removed for violation of Misinformation policy | Number of appeals of videos removed for violation of Civic and Election Integrity policy | Number of overturns following appeals for violation of Civic and Election Integrity policy | Appeal success rate of videos removed for violation of Civic and Election Integrity policy | Number of appeals of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy | Number of overturns following appeals under Edited Media and AI-Generated Content (AIGC) policy | Appeal success rate of videos removed for violation of Edited Media and AI-Generated Content (AIGC) policy |
---|---|---|---|---|---|---|---|---|---|
Austria | 619 | 352 | 56.90% | 79 | 65 | 82.30% | 9 | 8 | 88.90% |
Belgium | 863 | 673 | 78.00% | 149 | 123 | 82.60% | 14 | 12 | 85.70% |
Bulgaria | 267 | 107 | 40.10% | 34 | 23 | 67.60% | 5 | 2 | 40.00% |
Croatia | 140 | 84 | 60.00% | 7 | 7 | 100.00% | 12 | 8 | 66.70% |
Cyprus | 108 | 56 | 51.90% | 4 | 2 | 50.00% | 4 | 3 | 75.00% |
Czech Republic | 902 | 433 | 48.00% | 45 | 33 | 73.30% | 31 | 12 | 38.70% |
Denmark | 289 | 215 | 74.40% | 57 | 50 | 87.70% | 18 | 16 | 88.90% |
Estonia | 140 | 113 | 80.70% | 3 | 3 | 100.00% | 18 | 14 | 77.80% |
Finland | 202 | 156 | 77.20% | 12 | 9 | 75.00% | 6 | 5 | 83.30% |
France | 7461 | 6189 | 83.00% | 331 | 301 | 90.90% | 110 | 87 | 79.10% |
Germany | 13540 | 7268 | 53.70% | 1302 | 1053 | 80.90% | 177 | 121 | 68.40% |
Greece | 734 | 425 | 57.90% | 68 | 56 | 82.40% | 12 | 9 | 75.00% |
Hungary | 481 | 314 | 65.30% | 45 | 32 | 71.10% | 22 | 15 | 68.20% |
Ireland | 1091 | 845 | 77.50% | 53 | 48 | 90.60% | 17 | 15 | 88.20% |
Italy | 6074 | 4174 | 68.70% | 553 | 491 | 88.80% | 57 | 48 | 84.20% |
Latvia | 110 | 83 | 75.50% | 5 | 5 | 100.00% | 7 | 3 | 42.90% |
Lithuania | 105 | 87 | 82.90% | 13 | 11 | 84.60% | 0 | 0 | 0.00% |
Luxembourg | 17 | 16 | 94.10% | 7 | 4 | 57.10% | 0 | 0 | 0.00% |
Malta | 38 | 37 | 97.40% | 3 | 3 | 100.00% | 0 | 0 | 0.00% |
Netherlands | 1207 | 959 | 79.50% | 123 | 103 | 83.70% | 19 | 14 | 73.70% |
Poland | 4263 | 1833 | 43.00% | 177 | 125 | 70.60% | 35 | 25 | 71.40% |
Portugal | 402 | 274 | 68.20% | 79 | 56 | 70.90% | 22 | 16 | 72.70% |
Romania | 2573 | 1598 | 62.10% | 524 | 403 | 76.90% | 30 | 24 | 80.00% |
Slovakia | 401 | 175 | 43.60% | 11 | 7 | 63.60% | 5 | 4 | 80.00% |
Slovenia | 267 | 153 | 57.30% | 5 | 4 | 80.00% | 7 | 2 | 28.60% |
Spain | 4920 | 3961 | 80.50% | 239 | 202 | 84.50% | 52 | 40 | 76.90% |
Sweden | 943 | 544 | 57.70% | 124 | 100 | 80.60% | 15 | 7 | 46.70% |
Iceland | 20 | 17 | 85.00% | 2 | 0 | 0.00% | 0 | 0 | 0.00% |
Liechtenstein | 0 | 0 | 0.00% | 0 | 0 | 0.00% | 0 | 0 | 0.00% |
Norway | 437 | 322 | 73.70% | 44 | 35 | 79.50% | 14 | 9 | 64.30% |
Total EU | 48157 | 31124 | 64.60% | 4052 | 3319 | 81.90% | 704 | 510 | 72.40% |
Total EEA | 48614 | 31463 | 64.70% | 4098 | 3354 | 81.80% | 718 | 519 | 72.30% |
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.2 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to refine the new Virtual Compute Environment (VCE), launched May 2024, by:
- Providing access to public U18 data.
- Adding new data points (e.g., Hashtag Info) and endpoints (e.g., Playlist Info). See Changelog.
- Establishing a new due diligence process with an external partner to confirm the eligibility of NGO applicants.
- Continued to support independent research through the Research API and improve accessibility by:
- Adding three new endpoints for TikTok Shop, which launched in Spain and Ireland in December 2024.
- Making Python and R (programming languages) wrappers available via GitHub.
- Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
- Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, the dates the ad was active, the main parameters used for targeting (e.g. age, gender), and the number of people who were served the ad.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
(I) Research API
To make it easier to independently research our platform and bring transparency to TikTok content, we built a Research API that provides researchers in the US, EEA, UK and Switzerland, with access to public data on accounts and content, including comments, captions, subtitles, number of comments, shares, likes, followers and following lists, and favourites that a video receives on our platform. More information is available here. We carefully consider feedback from researchers who have used the API and continue to make improvements such as additional data fields, streamlining the application process, and enabling collaboration through Lab Access, which allows up to 10 researchers to work together on a shared research project.
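As an illustration of programmatic access, the sketch below queries the Research API's video query endpoint. The URL, field names, and request shape follow TikTok's public developer documentation but should be treated as assumptions and verified against the TikTok for Developers site before use.

```python
# A sketch of a Research API video query. The endpoint, fields, and body shape
# follow TikTok's public developer documentation, but treat them as assumptions
# and verify them on the TikTok for Developers site before use. An approved
# researcher access token is required.
import requests

ACCESS_TOKEN = "YOUR_RESEARCHER_ACCESS_TOKEN"  # issued after application approval

resp = requests.post(
    "https://open.tiktokapis.com/v2/research/video/query/",
    params={"fields": "id,create_time,region_code,view_count,hashtag_names"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "query": {"and": [{"operation": "IN", "field_name": "region_code",
                           "field_values": ["DE", "FR"]}]},
        "start_date": "20250101",
        "end_date": "20250131",
        "max_count": 100,  # responses are paginated via a cursor field
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["videos"][:1])
```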
(II) Virtual Compute Environment (VCE)
- Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE will return random sample data based on your query, limited to 5,000 records per day.
- Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok will review the results file to make sure the output is aggregated.
The Commercial Content Library is a publicly searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that's commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
We provide researchers with access to data that is publicly available on our platform through our Research Tools and, for commercial content, through our Commercial Content API (detailed below).
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
During this reporting period we received:
- 148 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
- 61 applications to access the TikTok Commercial Content API.
Country | Number of applications received for Research API | Number of applications accepted for Research API | Number of applications rejected for Research API | Number of applications received for TikTok Commercial Content Library API | Number of applications accepted for TikTok Commercial Content Library API | Number of applications rejected for TikTok Commercial Content Library API |
---|---|---|---|---|---|---|
Austria | 5 | 3 | 1 | 1 | 1 | 0 |
Belgium | 0 | 0 | 0 | 3 | 3 | 0 |
Bulgaria | 1 | 0 | 0 | 1 | 1 | 0 |
Croatia | 2 | 0 | 2 | 0 | 0 | 0 |
Cyprus | 0 | 0 | 0 | 0 | 0 | 0 |
Czech Republic | 2 | 1 | 1 | 0 | 0 | 0 |
Denmark | 4 | 3 | 0 | 0 | 0 | 0 |
Estonia | 0 | 0 | 0 | 0 | 0 | 0 |
Finland | 1 | 2 | 0 | 3 | 2 | 1 |
France | 16 | 4 | 6 | 11 | 8 | 3 |
Germany | 50 | 12 | 16 | 14 | 11 | 3 |
Greece | 5 | 1 | 3 | 0 | 0 | 0 |
Hungary | 1 | 1 | 1 | 2 | 2 | 0 |
Ireland | 3 | 2 | 4 | 1 | 1 | 0 |
Italy | 13 | 5 | 2 | 2 | 2 | 0 |
Latvia | 0 | 0 | 0 | 1 | 1 | 0 |
Lithuania | 0 | 0 | 0 | 2 | 2 | 0 |
Luxembourg | 0 | 0 | 0 | 0 | 0 | 0 |
Malta | 0 | 0 | 0 | 0 | 0 | 0 |
Netherlands | 17 | 7 | 7 | 3 | 2 | 1 |
Poland | 3 | 0 | 1 | 3 | 2 | 1 |
Portugal | 2 | 2 | 0 | 2 | 2 | 0 |
Romania | 6 | 1 | 1 | 0 | 0 | 0 |
Slovakia | 0 | 0 | 0 | 1 | 1 | 0 |
Slovenia | 0 | 0 | 0 | 0 | 0 | 0 |
Spain | 11 | 2 | 4 | 6 | 4 | 2 |
Sweden | 4 | 3 | 1 | 4 | 3 | 1 |
Iceland | 0 | 0 | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 |
Norway | 2 | 2 | 0 | 1 | 1 | 0 |
Total EU | 146 | 49 | 50 | 60 | 48 | 12 |
Total EEA | 148 | 51 | 50 | 61 | 49 | 12 |
Measure 26.2
Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.
QRE 26.2.1
Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.
(I) Research API
To make it easier to independently research our platform and bring transparency to TikTok content, we built a Research API that provides researchers in the US, EEA, UK and Switzerland with access to public data on accounts and content, including comments, captions, subtitles, and the number of comments, shares, likes, and favourites that a video receives, as well as followers and following lists. More information is available here. We carefully consider feedback from researchers who have used the API and continue to make improvements, such as additional data fields, streamlining the application process, and enabling collaboration through Lab Access, which allows up to 10 researchers to work together on a shared research project.
Our Virtual Compute Environment (VCE), which complements the Research API as part of our Research Tools, operates in two stages:
- Test Stage: Query the data using TikTok's query software development kit (SDK). The VCE returns random sample data based on your query, limited to 5,000 records per day.
- Execution Stage: Submit a script to execute against all public data. TikTok provides a powerful search capability that allows data to be paginated in increments of up to 100,000 records. TikTok reviews the results file to make sure the output is aggregated.
(IV) Commercial Content Library (described under QRE 26.2.2 below)
QRE 26.2.2
Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.
(IV) Commercial Content Library
TikTok's Commercial Content Library is a repository of ads and other types of commercial content shown to users in the European Economic Area (EEA), Switzerland, and the UK only, but it can be accessed by members of the public located in any country. Each ad and its details remain available in the library for one year after the ad was last viewed by any user. Through the Commercial Content Library, the public can access information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more. It also includes information about content that is commercial in nature and tagged with either a paid partnership label or promotional label, such as content that promotes a brand, product or service, but is not a paid ad.
QRE 26.2.3
Relevant Signatories will describe the application process in place in order to gain access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.
We make detailed information available to applicants about our Research Tools (Research API and VCE) and Commercial Content API, through our dedicated TikTok for Developers website, including on what data is made available and how to apply for access. In August 2024, we established a new due diligence process with an external vendor to confirm the eligibility of NGO applicants.
Once an application has been approved for access to our Research Tools, we provide step-by-step instructions for researchers on how to access research data, how to comply with the security steps, and how to run queries on the data.
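As an illustration of the first of those steps, the sketch below obtains a client-credentials access token of the kind used to call our research APIs. The endpoint and parameter names follow the public TikTok for Developers documentation and should be treated as assumptions.

```python
# Illustrative sketch: obtain a client-credentials access token.
# Endpoint and parameter names are assumptions from public documentation.
import requests

resp = requests.post(
    "https://open.tiktokapis.com/v2/oauth/token/",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "client_key": "YOUR_CLIENT_KEY",        # issued once an application is approved
        "client_secret": "YOUR_CLIENT_SECRET",
        "grant_type": "client_credentials",
    },
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]  # short-lived; refresh by repeating the request
```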
SLI 26.2.1
Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).
During this reporting period we received:
- 148 applications to access TikTok’s Research Tools (Research API and VCE) from researchers in the EU and EEA.
- 61 applications to access the TikTok Commercial Content API.
Country | Number of applications received for Research API | Number of applications accepted for Research API | Number of applications rejected for Research API | Number of applications received for TikTok Commercial Content Library API | Number of applications accepted for TikTok Commercial Content Library API | Number of applications rejected for TikTok Commercial Content Library API |
---|---|---|---|---|---|---|
Austria | 5 | 3 | 1 | 1 | 1 | 0 |
Belgium | 0 | 0 | 0 | 3 | 3 | 0 |
Bulgaria | 1 | 0 | 0 | 1 | 1 | 0 |
Croatia | 2 | 0 | 2 | 0 | 0 | 0 |
Cyprus | 0 | 0 | 0 | 0 | 0 | 0 |
Czech Republic | 2 | 1 | 1 | 0 | 0 | 0 |
Denmark | 4 | 3 | 0 | 0 | 0 | 0 |
Estonia | 0 | 0 | 0 | 0 | 0 | 0 |
Finland | 1 | 2 | 0 | 3 | 2 | 1 |
France | 16 | 4 | 6 | 11 | 8 | 3 |
Germany | 50 | 12 | 16 | 14 | 11 | 3 |
Greece | 5 | 1 | 3 | 0 | 0 | 0 |
Hungary | 1 | 1 | 1 | 2 | 2 | 0 |
Ireland | 3 | 2 | 4 | 1 | 1 | 0 |
Italy | 13 | 5 | 2 | 2 | 2 | 0 |
Latvia | 0 | 0 | 0 | 1 | 1 | 0 |
Lithuania | 0 | 0 | 0 | 2 | 2 | 0 |
Luxembourg | 0 | 0 | 0 | 0 | 0 | 0 |
Malta | 0 | 0 | 0 | 0 | 0 | 0 |
Netherlands | 17 | 7 | 7 | 3 | 2 | 1 |
Poland | 3 | 0 | 1 | 3 | 2 | 1 |
Portugal | 2 | 2 | 0 | 2 | 2 | 0 |
Romania | 6 | 1 | 1 | 0 | 0 | 0 |
Slovakia | 0 | 0 | 0 | 1 | 1 | 0 |
Slovenia | 0 | 0 | 0 | 0 | 0 | 0 |
Spain | 11 | 2 | 4 | 6 | 4 | 2 |
Sweden | 4 | 3 | 1 | 4 | 3 | 1 |
Iceland | 0 | 0 | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 |
Norway | 2 | 2 | 0 | 1 | 1 | 0 |
Total EU | 146 | 49 | 50 | 60 | 48 | 12 |
Total EEA | 148 | 51 | 50 | 61 | 49 | 12 |
Measure 26.3
Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.
QRE 26.3.1
Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.
Commitment 27
Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.
We signed up to the following measures of this commitment
Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We are a member of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
- Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
- Participated in the EC Technical Roundtable on data access in December 2024. The roundtable focused on the technical measures and best practices that could be implemented to facilitate the roll-out of the data access mechanism for vetted researchers.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 27.1
Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.
QRE 27.1.1
Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.
Measure 27.2
Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.
QRE 27.2.1
Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.
Measure 27.3
Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.
QRE 27.3.1
Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.
Measure 27.4
Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.
QRE 27.4.1
Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.
Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Continued to refine the new Virtual Compute Environment (VCE), launched May 2024, by:
- Providing access to public U18 data.
- Adding new data points (e.g., Hashtag Info) and endpoints (e.g., Playlist Info). See Changelog.
- Establishing a new due diligence process with an external partner to confirm the eligibility of NGO applicants.
- Continued to support independent research through the Research API and improve accessibility by:
- Adding three new endpoints for TikTok Shop, which launched in Spain and Ireland in December 2024.
- Making Python and R wrappers (client libraries in those programming languages) available via GitHub; see the pagination sketch after this list.
- Continued to make the Commercial Content API available in Europe to bring transparency to paid advertising, advertisers and other commercial content on TikTok.
- Continued to offer our Commercial Content Library, a publicly searchable EU ads database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more.
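For illustration, the sketch below shows how a researcher might page through Research API results. The `cursor`, `search_id`, and `has_more` fields follow the pagination scheme described in the public documentation and should be treated as assumptions, not as the wrappers' actual interface.

```python
# Illustrative sketch: cursor-based pagination over Research API results.
# Field names (cursor, search_id, has_more) are assumptions based on the
# public documentation's description of pagination.
import requests

def iter_videos(token: str, query_body: dict):
    """Yield videos page by page until the API reports no more results."""
    url = "https://open.tiktokapis.com/v2/research/video/query/"
    params = {"fields": "id,create_time,region_code"}
    headers = {"Authorization": f"Bearer {token}"}
    body = dict(query_body, max_count=100)
    while True:
        resp = requests.post(url, params=params, headers=headers, json=body, timeout=30)
        resp.raise_for_status()
        data = resp.json()["data"]
        yield from data.get("videos", [])
        if not data.get("has_more"):
            return
        body["cursor"] = data["cursor"]        # resume after the last record
        body["search_id"] = data["search_id"]  # keep paging the same result set
```

A generator like this can feed results directly into an analysis pipeline (e.g. a pandas DataFrame) without holding every page in memory.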
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
TikTok is committed to facilitating research and engaging with the research community.
As set out above, TikTok is committed to facilitating research through our Research Tools, Commercial Content APIs and Commercial Content Library, full details of which are available on our TikTok for Developers and Commercial Content Library websites.
We have many teams and individuals across product, policy, data science, outreach and legal working to facilitate research. We believe transparency and accountability are essential to fostering trust with our community. We are committed to transparency in how we operate, moderate and recommend content, empower users, and secure our platform. That's why we opened our global Transparency and Accountability Centers (TACs) for invited guests to see first-hand our work to protect the safety and security of the TikTok platform.
Our TACs are located in Dublin, Los Angeles, Singapore, and Washington, DC. In October 2024, we opened our rehoused Dublin-based TAC (DUBTAC) in TikTok's new premises. DUBTAC offers an opportunity for academics, businesses, policymakers, politicians, regulators, researchers and many other expert audiences from Europe and around the world to see first-hand how teams at TikTok go about the critically important work of securing our community's safety, data, and privacy. During the reporting period, DUBTAC hosted the following visits:
- 22 external tours, including 3 for NGO/industry bodies, 6 for media representatives, and 2 for creators.
- On 22 and 23 October 2024 respectively, we welcomed members of the Sub-Saharan Africa (SSA) Safety Advisory Council and the Middle East, North Africa and Turkey (MENAT) Safety Advisory Council. These visits were attended by TikTok Trust & Safety personnel, with discussions and exchanges of views on a range of topics.
- In November 2024, we welcomed the Latin America (LATAM) Safety Advisory Council.
We work closely with our nine regional Advisory Councils, including our European Safety Advisory Council and US Content Advisory Council, and our global Youth Advisory Council, which bring together a diverse array of independent experts from academia and civil society as well as youth perspectives. Advisory Council members provide subject matter expertise and advice on issues relating to user safety, content policy, and emerging issues that affect TikTok and our community, including in the development of our AI-generated content label and a recent campaign to raise awareness around AI labeling and potentially misleading AIGC. These councils are an important way to bring outside perspectives into our company and onto our platform.
In addition to these efforts, we engage with the research community in many other ways in the course of our work.
Our Outreach & Partnerships Management (OPM) Team is dedicated to establishing partnerships and regularly engaging with civil society stakeholders and external experts, including the academic and research community, to ensure their perspectives inform our policy creation, feature development, risk mitigation, and safety strategies. For example, we engaged with global experts, including numerous academics in Europe, in the development of our state-affiliated media policy, Election Misinformation policies, and AI-generated content labels. OPM also plays an important role in our efforts to counter misinformation by identifying, onboarding and managing new partners to our fact-checking programme. In H2 2024, we expanded fact-checking coverage to a number of wider-European and EU candidate countries:
- Moldova: AFP/Reuters
- Georgia: Fact Check Georgia
- Albania & Kosovo: Internews Kosova
- Serbia: Lead Stories
- Kazakhstan: Reuters
In the lead-up to certain elections, we invite suitably qualified external local/regional experts to present to our internal teams as part of our Election Speaker Series. Sharing their market expertise provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.
During this reporting period, we ran 9 Election Speaker Series sessions: 7 in EU Member States and 2 covering Georgia and Moldova.
- France: Agence France-Presse (AFP)
- Germany: German Press Agency (dpa)
- Austria: German Press Agency (dpa)
- Lithuania: Logically Facts
- Romania: Funky Citizens
- Ireland: Logically Facts
- Croatia: Faktograf
- Georgia: FactCheck Georgia
- Moldova: Stop Fals!
As well as giving us opportunities to share context about our approach and research interests and to explore collaboration, these events enable us to learn from the important work being done by the research community on various topics, including aspects related to harmful misinformation.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
Through our Research Tools, European researchers can currently access:
- Public account data, such as user profiles, followers and following lists, liked videos, pinned videos and reposted videos.
- Public content data, such as comments, captions, subtitles, and number of comments, shares and likes that a video receives.
Our commercial content-related APIs include ads, ad and advertiser metadata, and targeting information. These APIs allow the public and researchers to perform customised searches, by advertiser name or keyword, on ads and other commercial content data stored in the Commercial Content Library repository. The Library is a searchable database with information about paid ads and ad metadata, such as the advertising creative, dates the ad ran, main parameters used for targeting (e.g. age, gender), number of people who were served the ad, and more.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
Measure 28.4
As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.
QRE 28.4.1
Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.
Empowering fact-checkers
Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Onboarded two new fact-checking partners in wider Europe:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia.
- In H2 we also expanded our fact-checking coverage to other wider-European and EU candidate countries with existing fact-checking partners:
- Moldova: AFP/Reuters
- Serbia: Lead Stories
- Continued to expand our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
- Continued to conduct feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
- Continued to participate in the working group within the Code framework on the creation of an external fact-checking repository.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking, as depends on the agreements and to be further discussed within the Task-force); as well as relevant common standards and conditions for these agreements.
Our agreements with our fact-checking partners typically set out:
- The service the fact-checking partner will provide, namely that their team of fact-checkers review, assess and rate video content uploaded to their fact-checking queue.
- The expected results e.g., the fact-checkers advise on whether the content may be or contain misinformation and rate it using our classification categories.
- An option to agree that our fact-checker partners provide regular written reports about disinformation trends identified.
- An option to receive proactive flagging of potentially harmful misinformation from our partners.
- The languages in which they will provide fact-checking services.
- The ability to request temporary coverage regarding additional languages or support on ad hoc additional projects.
- All other key terms including the applicable term and fees and payment arrangements.
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
We have agreements with the following fact-checking organisations:
- Agence France-Presse (AFP)
- dpa Deutsche Presse-Agentur
- Demagog
- Facta
- Fact Check Georgia
- Faktograf
- Internews Kosova
- Lead Stories
- Logically Facts
- Newtral
- Poligrafo
- Reuters
- Science Feedback
- Teyit
During the reporting period, we worked with the following partners in connection with specific elections:
- Austria: Deutsche Presse-Agentur (dpa)
- Croatia: Faktograf
- France: Agence France-Presse (AFP)
- Georgia: Fact Check Georgia
- Germany (regional elections): Deutsche Presse-Agentur (dpa)
- Germany (federal election): Deutsche Presse-Agentur (dpa)
- Ireland: The Journal
- Moldova: StopFals!
- Romania: Funky Citizens
We also rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
- France: Agence France-Presse (AFP)
- Portugal: Polígrafo
- Georgia: Fact Check Georgia
- Moldova: StopFals!
QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
In order to effectively scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
- Fact-checking repository. We have built a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions.
- Trends reports. Our fact-checking partners can provide us with regular reports identifying general misinformation trends observed on our platform and across the industry generally, including new/changing industry or market trends, events or topics that generated particular misinformation or disinformation.
- Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content that may constitute harmful misinformation on our platform and suggest prominent misinformation that is circulating online that may benefit from verification.
- Fact-checking guidelines. We create guidelines and trending topic reminders for our moderators on the basis of previous fact-checking assessments. This ensures our moderation teams leverage the insights from our fact-checking partners and helps our moderators make swift and accurate decisions on flagged content regardless of the language in which the original claim was made.
- Election Speaker Series. To further promote election integrity, and inform our approach to country-level EU elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series heard presentations from the following organisations:
- France: Agence France-Presse (AFP)
- Germany: German Press Agency (dpa)
- Austria: German Press Agency (dpa)
- Lithuania: Logically Facts
- Romania: Funky Citizens
- Ireland: Logically Facts
- Croatia: Faktograf
- Georgia: FactCheck Georgia
- Moldova: Stop Fals!
We also use automated detection technologies to support enforcement at scale, including:
- Computer Vision models, which help detect objects so we can determine whether content likely contains material that violates our policies.
- Keyword lists and models are used to review text and audio content to detect material in violation of our policies. We work with various external experts, including our fact-checking partners, to inform our keyword lists.
- Where we have previously detected content that violates our policies, we use de-duplication and hashing technologies to recognise copies or near copies and prevent further re-distribution of violative content on our platform (a simplified sketch follows this list).
- We launched the ability to read Content Credentials that attach metadata to content, which we can use to automatically label AI-generated content that originated on other major platforms.
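As a simplified illustration of the de-duplication idea only (not a description of TikTok's actual systems, which rely on more sophisticated techniques such as perceptual hashing to catch near copies), a minimal exact-hash sketch:

```python
# Toy illustration of hash-based de-duplication, not TikTok's actual
# implementation: exact SHA-256 matching against digests of previously
# removed files. Real systems typically use perceptual hashing so that
# near copies (re-encodes, crops, overlays) also match.
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

known_violative: set[str] = set()  # digests of content already removed

def is_known_copy(path: str) -> bool:
    """True only if this exact file was previously identified as violative."""
    return file_digest(path) in known_violative
```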
Measure 30.2
Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.
QRE 30.2.1
Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.
QRE 30.2.2
Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.
QRE 30.2.3
European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.
Measure 30.3
Relevant Signatories will contribute to cross-border cooperation between fact-checkers.
QRE 30.3.1
Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.
We continue to collaborate with our partners to understand how we may be able to facilitate further cross-border cooperation, including through individual feedback sessions with partners.
Measure 30.4
To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.
QRE 30.4.1
Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.
Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Onboarded two new fact-checking partners in wider Europe:
- Albania & Kosovo: Internews Kosova
- Georgia: Fact Check Georgia.
- And, in addition, expanded our fact-checking coverage to other wider-European and EU candidate countries with existing fact-checking partners:
- Moldova: AFP/Reuters
- Serbia: Lead Stories
- Continued to expand our fact-checking repository to ensure our teams and systems leverage the full scope of insights our fact-checking partners submitted to TikTok (regardless of the original language of the relevant content).
- Continued to conduct feedback sessions with our partners to further enhance the efficiency of the fact-checking program.
- Continued to participate in the working group within the Code framework on the creation of an external fact-checking repository.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 31.2
Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.
QRE 31.2.1
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
SLI 31.1.1 (for Measures 31.1 and 31.2)
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
Country | Number of fact-checked videos |
---|---|
Austria | 64 |
Belgium | 141 |
Bulgaria | 398 |
Croatia | 137 |
Cyprus | 8 |
Czech Republic | 200 |
Denmark | 175 |
Estonia | 84 |
Finland | 61 |
France | 1045 |
Germany | 837 |
Greece | 64 |
Hungary | 144 |
Ireland | 91 |
Italy | 202 |
Latvia | 40 |
Lithuania | 41 |
Luxembourg | 2 |
Malta | 0 |
Netherlands | 52 |
Poland | 622 |
Portugal | 59 |
Romania | 669 |
Slovakia | 138 |
Slovenia | 22 |
Spain | 407 |
Sweden | 158 |
Iceland | 1 |
Liechtenstein | 0 |
Norway | 227 |
Total EU | 5861 |
Total EEA | 6089 |
SLI 31.1.2 (for Measures 31.1 and 31.2)
An estimation, through meaningful metrics, of the impact of actions taken such as, for instance, the number of pieces of content labelled on the basis of fact-check articles, or the impact of said measures on user interactions with information fact-checked as false or misleading.
Country | Number of videos removed as a result of a fact-checking assessment | Number of videos removed because of policy guidelines, known misinformation trends and knowledge-based repository |
---|---|---|
Austria | 8 | 2888 |
Belgium | 26 | 3902 |
Bulgaria | 62 | 1568 |
Croatia | 31 | 789 |
Cyprus | 0 | 511 |
Czech Republic | 42 | 2720 |
Denmark | 12 | 1455 |
Estonia | 2 | 319 |
Finland | 4 | 984 |
France | 166 | 44354 |
Germany | 177 | 50335 |
Greece | 8 | 4198 |
Hungary | 21 | 2002 |
Ireland | 13 | 4676 |
Italy | 40 | 21035 |
Latvia | 1 | 694 |
Lithuania | 0 | 520 |
Luxembourg | 0 | 279 |
Malta | 0 | 168 |
Netherlands | 13 | 5422 |
Poland | 152 | 13028 |
Portugal | 10 | 2629 |
Romania | 168 | 14103 |
Slovakia | 42 | 1365 |
Slovenia | 3 | 574 |
Spain | 55 | 22581 |
Sweden | 15 | 3489 |
Iceland | 1 | 122 |
Liechtenstein | 0 | 35 |
Norway | 14 | 1798 |
Total EU | 1071 | 206588 |
Total EEA | 1086 | 208543 |
SLI 31.1.3 (for Measures 31.1 and 31.2)
Signatories recognise the importance of providing context to SLIs 31.1.1 and 31.1.2 in ways that empower researchers, fact-checkers, the Commission, ERGA, and the public to understand and assess the impact of the actions taken to comply with Commitment 31. To that end, relevant Signatories commit to include baseline quantitative information that will help contextualise these SLIs. Relevant Signatories will present and discuss within the Permanent Task-force the type of baseline quantitative information they consider using for contextualisation ahead of their baseline reports.
The metric below shows videos removed as a result of a fact-checking assessment as a percentage of the total number of videos removed for violating our harmful misinformation policy.
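For illustration, the snippet below shows how such a percentage is computed. The report does not disclose the total number of removals under the harmful misinformation policy (which appears to exceed the sum of the two columns in SLI 31.1.2), so the denominator used here is a hypothetical value back-solved from the reported EU figure of 0.40%.

```python
# Worked illustration of the SLI 31.1.3 metric. The numerator is the
# reported EU total from SLI 31.1.2; the denominator is hypothetical
# (one value consistent with the reported, rounded 0.40%).
fact_check_removals = 1_071       # EU total, SLI 31.1.2
total_policy_removals = 267_750   # hypothetical policy-wide total

share = fact_check_removals / total_policy_removals
print(f"{share:.2%}")             # -> 0.40%
```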
Country | Videos removed as a result of a fact checking assessment as a percentage of total number of videos removed due to violation of harmful misinformation policy |
---|---|
Austria | 0.20% |
Belgium | 0.50% |
Bulgaria | 3.60% |
Croatia | 1.00% |
Cyprus | 0.00% |
Czech Republic | 1.30% |
Denmark | 0.80% |
Estonia | 0.60% |
Finland | 0.40% |
France | 0.40% |
Germany | 0.30% |
Greece | 0.20% |
Hungary | 0.30% |
Ireland | 0.00% |
Italy | 0.20% |
Latvia | 0.00% |
Lithuania | 0.00% |
Luxembourg | 0.00% |
Malta | 0.00% |
Netherlands | 0.10% |
Poland | 1.00% |
Portugal | 0.30% |
Romania | 0.90% |
Slovakia | 2.80% |
Slovenia | 0.00% |
Spain | 0.20% |
Sweden | 0.40% |
Iceland | 0.00% |
Liechtenstein | 0.00% |
Norway | 0.60% |
Total EU | 0.40% |
Total EEA | 0.40% |
Measure 31.3
Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.
QRE 31.3.1
Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.
Measure 31.4
Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.
QRE 31.4.1
Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.
Commitment 32
Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.
We signed up to the following measures of this commitment
Measure 32.1 Measure 32.2 Measure 32.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 32.2
Relevant Signatories that showcase User Generated Content (UGC) will provide appropriate interfaces, automated wherever possible, for fact-checking organisations to be able to access information on the impact of contents on their platforms and to ensure consistency in the way said Signatories use, credit and provide feedback on the work of fact-checkers.
QRE 32.1.1
Relevant Signatories will provide details on the interfaces and other tools put in place to provide fact-checkers with the information referred to in Measures 32.1 and 32.2.
SLI 32.1.1
Relevant Signatories will provide quantitative information on the use of the interfaces and other tools put in place to provide fact-checkers with the information referred to in Measures 32.1 and 32.2 (such as monthly users for instance).
Methodology of data measurement: no quantitative data is available for this SLI for the reporting period; the reported figure is 0 for every EU and EEA country.
Measure 32.3
Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.
QRE 32.3.1
Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.
Transparency Centre
Commitment 34
To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.
We signed up to the following measures of this commitment
Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 35
Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.
We signed up to the following measures of this commitment
Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 36
Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.
We signed up to the following measures of this commitment
Measure 36.1 Measure 36.2 Measure 36.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 36.3
Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.
QRE 36.1.1
With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
QRE 36.1.2
Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
SLI 36.1.1
Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.
Between 1 July 2024 and 31 December 2024, the common Transparency Centre was visited by 20,255 unique visitors. The Signatories’ reports were downloaded 5,626 times by 1,275 unique visitors. More specifically, TikTok’s previous COPD report was downloaded 302 times by 135 visitors.
Country-level breakdowns are not available for this SLI; the reported figure is 0 for every EU and EEA country.
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Across the European Union, we have thousands of trust and safety professionals dedicated to keeping our platform safe. We also recognise the importance of local knowledge and expertise as we work to ensure online safety for our users. We take a similar approach to our third-party partnerships.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 40
Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.
We signed up to the following measures of this commitment
Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 41
Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.
We signed up to the following measures of this commitment
Measure 41.1 Measure 41.2 Measure 41.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- We have been an active participant in the working group dedicated to developing Structural Indicators.
- We supported the publication of the second analysis of Structural Indicators in September 2024, expanding it to cover 4 markets and increasing the sample size.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 42
Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 43
Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Participated in the monitoring and reporting working group.
- Published transparency report in September 2024.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Commitment 44
Relevant Signatories that are providers of Very Large Online Platforms commit, seeking alignment with the DSA, to be audited at their own expense, for their compliance with the commitments undertaken pursuant to this Code. Audits should be performed by organisations, independent from, and without conflict of interest with, the provider of the Very Large Online Platform concerned. Such organisations shall have proven expertise in the area of disinformation, appropriate technical competence and capabilities and have proven objectivity and professional ethics, based in particular on adherence to auditing standards and guidelines.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?