TikTok

Report March 2025

TikTok's mission is to inspire creativity and bring joy. In a global community such as ours, with millions of users, it is natural for people to have different opinions, so we seek to operate on a shared set of facts and reality when it comes to topics that impact people’s safety. Ensuring a safe and authentic environment for our community is critical to achieving our goals, which includes making sure our users have a trustworthy experience on TikTok. As part of creating a trustworthy environment, transparency is essential to enable online communities and wider society to assess TikTok's approach to its regulatory obligations. TikTok is committed to providing insights into the actions we are taking as a signatory to the Code of Practice on Disinformation (the Code).

Our full executive summary is available as part of our report, which can be downloaded by following the link below.

Download PDF

Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment:
Measure 14.1, Measure 14.2, Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Building on our new AI-generated content label, which lets creators disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
    • Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC (an illustrative sketch of this labelling flow follows this list).
    • Supporting the coalition’s working groups as a C2PA General Member.
    • Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
    • Publishing a new Transparency Center article Supporting responsible, transparent AI-generated content.
    • Building on the AI-generated content label and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, in countries including Brazil, Germany, France, Mexico and the UK, teaching our community how to spot and label AI-generated content. This AIGC transparency campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M users in Germany and 9.5M in France.
  • Continued to participate with industry partners in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • Continued to participate in the working groups on integrity of services and Generative AI.
  • We have continued to enhance our ability to detect covert influence operations (CIOs). To provide more regular and detailed updates about the covert influence operations we disrupt, we publish a dedicated transparency report on covert influence operations, which is available in TikTok’s Transparency Center. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
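To illustrate the C2PA-based labelling flow referenced above, the following minimal Python sketch decides whether to auto-apply an AI-generated label from a C2PA manifest. The manifest shape follows the C2PA convention of an assertions list whose c2pa.actions entries may carry an IPTC digitalSourceType; reading the manifest from a media file is left to a real C2PA SDK and is not shown. This is a hedged illustration only and does not reflect TikTok's actual implementation.

```python
# Illustrative sketch: auto-labelling AIGC from C2PA Content Credentials.
# The manifest layout mirrors the C2PA manifest JSON convention; obtaining
# it from a file would be done with a real C2PA SDK (not shown here).
from typing import Optional

# IPTC digital source type used in the C2PA ecosystem for AI-generated media
IPTC_AI_SOURCE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def should_auto_label_aigc(manifest: Optional[dict]) -> bool:
    """Return True if the active C2PA manifest declares AI-generated media."""
    if manifest is None:
        return False  # no Content Credentials; other detection may still apply
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == IPTC_AI_SOURCE:
                return True  # provenance metadata marks the media as AI-generated
    return False

# Toy manifest of the kind a C2PA reader would return for generated media
example = {
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [
             {"action": "c2pa.created", "digitalSourceType": IPTC_AI_SOURCE}]}}
    ]
}
print(should_auto_label_aigc(example))  # True -> apply the AIGC label
```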
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report.
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
In addition to safeguarding against harmful misinformation (see QRE 18.2.1), the Integrity and Authenticity (I&A) policies in our Community Guidelines (CGs) also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our I&A policies addressing spam and deceptive account behaviours expressly prohibit account behaviours that may spam or mislead our community. Users may set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.

We also do not allow impersonation, including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers

Our I&A policies addressing fake engagement prohibit the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to:

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics), is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
Our policies prohibit impersonation, which refers to accounts that pose as another real person or entity, or present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner.

In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and is not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts in our For You feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our CGs expressly prohibit users from providing others with access to their account credentials or enabling others to conduct activities that violate our CGs. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

  • Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation);
  • Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers);
  • Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and
  • Coordinated mass reporting of non-violative opposing content or accounts.

We fight against CIOs, as our policies prohibit attempts to sway public opinion while also misleading our systems or users about the identity, origin, approximate location, popularity or overall purpose of an operation.

When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform, which is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop appropriate product and policy solutions in the long term. We voluntarily publish all of the CIO networks we identify and remove in a dedicated report within our Transparency Center.

Use of “hack-and-leak” operations (which may or may not include doctored content)

We have a number of policies that address hack-and-leak related threats (some examples are below):
  • Our hack-and-leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities and organisations that may be implicated or exposed by such disclosures;
  • Our CIO policy, which addresses the use of leaked documents to sway public opinion as part of a wider operation;
  • Our Edited Media and AI-Generated Content (AIGC) policy, which captures materials that have been digitally altered without an appropriate disclosure;
  • Our harmful misinformation policies, which combat conspiracy theories related to unfolding events and dangerous misinformation; and
  • Our Trade of Regulated Goods and Services policy, which prohibits the trading of hacked goods.

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)  

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes or use a particular artistic style, such as painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC that features:
  • Realistic-appearing people under the age of 18
  • The likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users who post about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle we make available to users. We also provide functionality that enables users to report suspected undisclosed branded content; this reminds the user who posted the content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 CG refresh, by adding more information about how we police this policy and providing specific examples.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our platform before they are reported to us by users or third parties.

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To that end, we have created and use detection models and rule engines that:

  • prevent inauthentic accounts from being created, based on malicious patterns; and
  • remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).
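
As a hedged illustration of what such rule engines can look like, the sketch below shows registration-time checks in Python. Every signal name and threshold is invented for illustration; the report does not disclose the actual signals or values used.

```python
# Minimal illustrative sketch of a registration-time rule engine.
# All signal names and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    accounts_from_device_last_24h: int  # burst registrations from one device
    captcha_failures: int               # repeated automation-style failures
    known_bad_network: bool             # e.g. a previously flagged proxy range

def block_registration(s: SignupSignals) -> bool:
    """Return True if the signup should be blocked as likely inauthentic."""
    if s.known_bad_network:
        return True
    if s.accounts_from_device_last_24h > 5:  # hypothetical burst threshold
        return True
    if s.captcha_failures >= 3:              # hypothetical automation signal
        return True
    return False

# A burst of registrations from one device trips the hypothetical rule
print(block_registration(SignupSignals(8, 0, False)))  # True
```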

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters of similar inauthentic behaviour.

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the I&A policies in our CGs, including providing case banks of harmful misinformation claims to support their moderation work, and allow them to route new or evolving content to our fact-checking partners for assessment.

In addition, where content reaches certain levels of popularity in terms of the number of video views, it will be flagged for further review. Such review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our CGs including our I&A policies.
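
As a simplified, hypothetical sketch of this kind of popularity-triggered escalation (the actual thresholds and tiering are not disclosed in this report):

```python
# Illustrative sketch of popularity-triggered re-review. The view-count
# tiers are invented for illustration only.
REVIEW_TIERS = (10_000, 100_000, 1_000_000)  # hypothetical view-count tiers

def needs_escalated_review(view_count: int, reviews_done: int) -> bool:
    """Flag a video for another moderation pass each time its views cross
    a tier it has not yet been reviewed at."""
    tiers_crossed = sum(1 for t in REVIEW_TIERS if view_count >= t)
    return tiers_crossed > reviews_done

# A video at 150k views that was last reviewed below 100k is flagged again
print(needs_escalated_review(150_000, 1))  # True
```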

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, resulting in an increase in the number of videos removed for policy violations. This also resulted in the number of visitors per video decreasing over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We have also set up specially trained teams that are focused on investigating and detecting CIOs on our platform. We've built international trust and safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. As described under QRE 14.1.1, when we investigate and remove these operations we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok's systems or our community.

Accounts that engage in influence operations often avoid posting content that would, by itself, violate platform guidelines. That is why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
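
To make the linkage analysis described above concrete, here is a deliberately simplified Python sketch that groups accounts into candidate networks when they share a technical indicator (a single device fingerprint in this toy example). The field names, the single shared-device signal, and the graph-based clustering are illustrative assumptions; real investigations combine many behavioural and technical signals and end with human review.

```python
# Simplified sketch: group accounts sharing a technical indicator into
# candidate coordinated clusters for investigator review.
from collections import defaultdict
from itertools import combinations
import networkx as nx  # widely used open-source graph library

accounts = [  # toy data; real signals are far richer
    {"id": "a1", "device": "d-77"},
    {"id": "a2", "device": "d-77"},
    {"id": "a3", "device": "d-12"},
]

g = nx.Graph()
g.add_nodes_from(a["id"] for a in accounts)

by_device = defaultdict(list)
for a in accounts:
    by_device[a["device"]].append(a["id"])

for ids in by_device.values():
    for u, v in combinations(ids, 2):
        g.add_edge(u, v, reason="shared-device")  # technical linkage edge

# Each connected component with more than one account is a candidate
# network; analysts would then assess intent and narrative coordination.
clusters = [c for c in nx.connected_components(g) if len(c) > 1]
print(clusters)  # [{'a1', 'a2'}]
```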