TikTok

Report September 2025

TikTok’s mission is to inspire creativity and bring joy. With a global community of more than a billion users, it’s natural for people to hold different opinions. That’s why we focus on a shared set of facts when it comes to issues that affect people’s safety. A safe, authentic, and trustworthy experience is essential to achieving our goals. Transparency plays a key role in building that trust, allowing online communities and society to assess how TikTok meets its regulatory obligations. As a signatory to the Code of Conduct on Disinformation (the Code), TikTok is committed to sharing clear insights into the actions we take.

TikTok takes disinformation extremely seriously. We are committed to preventing its spread, promoting authoritative information, and supporting media literacy initiatives that strengthen community resilience.

We prioritise proactive content moderation, with the vast majority of violative content removed before it is viewed or reported. In H1 2025, more than 97% of videos violating our Integrity and Authenticity policies were removed proactively worldwide.

We continue to address emerging behaviours and risks through our Digital Services Act (DSA) compliance programme, under which the Code has operated since July 2025. This includes a range of measures to protect users, detailed on our European Online Safety Hub. Our actions under the Code demonstrate TikTok’s strong commitment to combating disinformation while ensuring transparency and accountability to our community and regulators.

Our full executive summary can be read in the downloadable PDF version of this report.


Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment:
Measure 14.1, Measure 14.2, Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
  • Building on our AI-generated content label for creators and our implementation of C2PA Content Credentials, we completed our AIGC media literacy campaign series in Mexico and the UK. These campaigns, which ran in Brazil, Germany, France, Mexico and the UK across H2 2024 and H1 2025, were developed with guidance from expert organisations like MediaWise and WITNESS to teach our community how to spot and label AI-generated content. They reached more than 90M users globally, including more than 27M in Mexico and 10M in the UK.
  • Continued to participate, alongside industry partners, in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • Continued to participate in the working groups on Integrity of Services and Generative AI.
  • We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s Transparency Centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.
  • We continue to update and refine our policies around Covert Influence Operations in order to stay agile to changing behaviours and tactics on the platform and to ensure more granular detail is enshrined in our policy rationales. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
N/A
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Our Integrity & Authenticity policies in our Community Guidelines safeguard against harmful misinformation (see QRE 18.2.1) and also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and the ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our Integrity & Authenticity policies, which address Spam and Deceptive Account Behaviours, expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam, including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.

We also do not allow impersonation, including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it; and
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our Integrity & Authenticity policies, which address fake engagement, do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others into boosting engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics), is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora, or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
As noted above, our policies prohibit impersonation, which covers accounts that pose as another real person or entity, or present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner.
In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts in our For You feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our Community Guidelines expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our Community Guidelines. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.  

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

  • Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation);
  • Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers);
  • Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and
  • Coordinated mass reporting of non-violative opposing content or accounts.

We fight against covert influence operations (CIOs): our policies prohibit attempts to sway public opinion while also misleading our systems or users about the identity, origin, approximate location, popularity, or overall purpose of an account or operation.

When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to re-establish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop product and policy solutions as appropriate in the long term. We have published details of all the CIO networks we identified and removed in H1 2025 in a dedicated monthly report within our Transparency Centre.

In H1 2025, under our Deceptive Behaviours policies, we worked on a number of initiatives to continue developing and adapting our strategies for combating manipulative behaviours and practices, and we continue to make progress through several updates and development schemes.

Use of “hack and leak” operations (which may or may not include doctored content)

We have a number of policies that address hack-and-leak related threats (some examples are below):
  • Our hack-and-leak policy aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities, and organisations that may be implicated or exposed by such disclosures.
  • Our CIO policy addresses the use of leaked documents to sway public opinion as part of a wider operation.
  • Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure.
  • Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation.
  • Our Trade of Regulated Goods and Services policy prohibits the trading of hacked goods.

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...) 

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts, including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or altering their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
 
In accordance with our policy, we prohibit AIGC that features the following (illustrated in the sketch after this list):
  • The likeness of young people or realistic-appearing people under the age of 18.
  • The likeness of adult private figures, if we become aware that it was used without their permission.
  • Misleading AIGC or edited media that falsely show:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation.
    • A crisis event, such as a conflict or natural disaster.
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour.
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election).
      • being politically endorsed or condemned by an individual or group.
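For illustration only, the nested prohibitions above can be read as a decision structure. The sketch below expresses that structure as a minimal Python function; the field names are invented for this sketch, and real enforcement relies on detection models and human review rather than simple booleans.

```python
# Illustrative only: the AIGC prohibitions above expressed as a decision
# function. All field names are hypothetical, not TikTok's actual schema.
from dataclasses import dataclass

@dataclass
class AigcVideo:
    shows_minor_likeness: bool = False
    shows_private_adult_without_permission: bool = False
    is_misleading_edit: bool = False           # falsely shows something
    fakes_authoritative_source: bool = False
    fakes_crisis_event: bool = False
    fakes_public_figure_context: bool = False  # degradation, endorsement, etc.

def violates_aigc_policy(v: AigcVideo) -> bool:
    """Return True if the video trips any prohibition in the policy list."""
    if v.shows_minor_likeness or v.shows_private_adult_without_permission:
        return True
    # Misleading AIGC or edited media is violative only when it falsely
    # shows one of the enumerated scenarios.
    return v.is_misleading_edit and (
        v.fakes_authoritative_source
        or v.fakes_crisis_event
        or v.fakes_public_figure_context
    )

assert violates_aigc_policy(AigcVideo(shows_minor_likeness=True))
assert not violates_aigc_policy(AigcVideo(is_misleading_edit=True))
```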

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users who post about a brand or product in return for any payment or other incentive to disclose this by enabling the branded content toggle, which we make available to users. We also provide functionality that enables users to report suspected undisclosed branded content; this reminds the user who posted the suspected undisclosed branded content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 Community Guidelines refresh, expanding the information about how we enforce this policy and providing specific examples.

We also don't allow paid political advertising. This includes creators being compensated for making branded political content, and the use of other promotional tools on the platform, such as Promote. We prohibit advertising of any kind by political figures and entities, and suspected paid political advertising is ineligible for the For You feed.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our Platform before they are reported to us by users or third parties. 

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To that end, we have built and deployed detection models and rule engines (see the illustrative sketch after this list) that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (e.g., uncommon behaviour on the platform).
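
For illustration, a minimal rule engine of this kind might look like the following sketch. The signal names, weights, and threshold are all hypothetical and do not reflect our production systems; they simply show how registration signals can be combined into an allow/block decision.

```python
# Illustrative only: a minimal rule engine scoring a new-account
# registration against invented risk signals.
from dataclasses import dataclass

@dataclass
class Registration:
    """A simplified view of a new-account registration event."""
    accounts_from_same_device: int = 1
    signups_from_ip_last_hour: int = 0
    used_known_automation_client: bool = False

# Each rule pairs a predicate over the registration with a risk weight.
RULES = [
    ("many_accounts_per_device", lambda r: r.accounts_from_same_device > 5, 0.5),
    ("ip_signup_burst",          lambda r: r.signups_from_ip_last_hour > 20, 0.3),
    ("automation_client",        lambda r: r.used_known_automation_client,   0.6),
]

BLOCK_THRESHOLD = 0.7  # hypothetical cut-off for refusing a registration

def decide(reg: Registration) -> str:
    """Sum the weights of every rule the registration trips and decide."""
    fired = [(name, weight) for name, pred, weight in RULES if pred(reg)]
    total = sum(weight for _, weight in fired)
    if total >= BLOCK_THRESHOLD:
        return "block: " + ", ".join(name for name, _ in fired)
    return "allow"

print(decide(Registration(accounts_from_same_device=9,
                          used_known_automation_client=True)))
# -> block: many_accounts_per_device, automation_client
print(decide(Registration()))  # -> allow
```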

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters of similar inauthentic behaviour.

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the Integrity & Authenticity policies in our Community Guidelines, including providing case banks of harmful misinformation claims to support their moderation work, and allowing them to route new or evolving content to our fact-checking partners for assessment. 

In addition, where content reaches certain popularity levels in terms of the number of video views, it will be flagged for further review. Such a review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our Community Guidelines including our Integrity & Authenticity policies.
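
In rough terms, this popularity-based escalation can be pictured as threshold routing. The queue names and view thresholds in the sketch below are hypothetical, not our production values; the report states only that videos crossing certain view counts are flagged for additional review.

```python
# Illustrative only: route videos to review queues by view count.
from typing import Optional

# Hypothetical queue names and view thresholds, checked highest-first.
REVIEW_TIERS = [
    (1_000_000, "senior_review"),
    (100_000, "standard_review"),
]

def escalation_queue(view_count: int) -> Optional[str]:
    """Return the review queue a video is routed to, if any."""
    for threshold, queue in REVIEW_TIERS:
        if view_count >= threshold:
            return queue
    return None  # below every threshold: no popularity-triggered review

assert escalation_queue(2_500_000) == "senior_review"
assert escalation_queue(150_000) == "standard_review"
assert escalation_queue(500) is None
```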

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, increasing the number of videos removed for policy violations. This also decreased the number of viewers per video over the reporting period, demonstrating an effective control strategy as the scope of enforcement increased.

We have also set up specially trained teams that are focused on investigating and detecting CIOs on our platform. We have built international trust & safety teams with specialised expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would, by itself, violate platform guidelines. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or work together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location or use fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision-making, beliefs, and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
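
For illustration, the sketch below shows one simplified form of the linkage analysis described above: grouping accounts that share a technical signal (here, device fingerprints) into candidate clusters for human investigation. The accounts, fingerprints, and grouping logic are invented for this sketch; real investigations weigh many more signals, and clustering alone never results in enforcement.

```python
# Illustrative only: union accounts that share device fingerprints into
# candidate clusters for analyst review. Data is fabricated.
from collections import defaultdict
from itertools import combinations

# account -> device fingerprints it has been seen on (hypothetical data)
DEVICES = {
    "acct_a": {"dev1", "dev2"},
    "acct_b": {"dev2"},
    "acct_c": {"dev3"},
    "acct_d": {"dev3", "dev4"},
    "acct_e": {"dev5"},
}

def shared_device_clusters(devices: dict[str, set[str]]) -> list[set[str]]:
    """Union-find over accounts linked by at least one shared fingerprint."""
    parent = {a: a for a in devices}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in combinations(devices, 2):
        if devices[a] & devices[b]:  # any shared fingerprint links them
            parent[find(a)] = find(b)

    groups: dict[str, set[str]] = defaultdict(set)
    for a in devices:
        groups[find(a)].add(a)
    # Only multi-account clusters are interesting as coordination candidates.
    return [g for g in groups.values() if len(g) > 1]

print(shared_device_clusters(DEVICES))
# e.g. [{'acct_a', 'acct_b'}, {'acct_c', 'acct_d'}] -- candidates for
# human review, not automatic enforcement: shared devices alone are not
# proof of a covert influence operation.
```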