In addition to the I&A policies in our CGs that safeguard against harmful misinformation (see QRE 18.2.1), our I&A policies also expressly prohibit deceptive behaviours. These policies relate to the TTPs as follows:
TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible:
Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)
Our I&A policies addressing Spam and Deceptive Account Behaviours expressly prohibit account behaviours that spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.
We do not allow spam, including:
- Operating large networks of accounts controlled by a single entity, or through automation;
- Bulk distribution of a high volume of spam; and
- Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.
We also do not allow impersonation, including:
- Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
- Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform
If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.
Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers
Our I&A policies which address fake engagement do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to:
- facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
- provide instructions on how to artificially increase engagement on TikTok.
If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others into engaging, such as “like-for-like” promises or false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics), is ineligible for our For You feed.
Creation of inauthentic pages, groups, chat groups, fora, or domains
TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.
Account hijacking or Impersonation
Again, our policies prohibit impersonation, which covers accounts that pose as another real person or entity, or that present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner.
In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited, and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings about suspected impersonation accounts and do not recommend those accounts on our For You feed.
We also have a number of policies that address account hijacking. The privacy and security policies in our CGs expressly prohibit users from giving others access to their account credentials or from enabling others to conduct activities that breach our CGs. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users who are concerned that their account may have been hacked.
TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views:
- Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation);
- Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about the popularity of content, including by influencers);
- Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers; and
- Coordinated mass reporting of non-violative opposing content or accounts.
We fight against CIOs: our policies prohibit attempts to sway public opinion while misleading our systems or users about an account's identity, origin, approximate location, popularity, or overall purpose.
When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine whether actors are engaging in a coordinated effort to mislead TikTok's systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and that networks may attempt to re-establish a presence on our platform. That is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and to develop appropriate product and policy solutions over the long term. We voluntarily publish all of the CIO networks we identify and remove in a dedicated report within our transparency centre here.
Use of “hack and leak” operations (which may or may not include doctored content)
We have a number of policies that address hack and leak related threats (some examples are below):
- Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities and organisations that may be implicated or exposed by such disclosures
- Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation
- Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure
- Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation
- Our Trade of Regulated Goods and Services policy prohibits trading of hacked goods
Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)
Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.
For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes. It may include images of real people, show highly realistic-appearing scenes, or use a particular artistic style, such as painting, cartoon, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real-world events.
In accordance with our policy, we prohibit AIGC that features:
- Realistic-appearing people under the age of 18
- The likeness of adult private figures, if we become aware it was used without their permission
- Misleading AIGC or edited media that falsely shows:
- Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
- A crisis event, such as a conflict or natural disaster
- A public figure who is:
- being degraded or harassed, or engaging in criminal or antisocial behaviour
- taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
- being politically endorsed or condemned by an individual or group
As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.
Non-transparent compensated messages or promotions by influencers
Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose that content by enabling the branded content toggle we make available to users. We also provide functionality for users to report suspected undisclosed branded content; this reminds the user who posted the suspected undisclosed branded content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 CG refresh, by expanding the information about how we enforce this policy and providing specific examples.
In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.