In addition to systematically removing content that violates our I&A policies, we continue to dedicate significant resources to: expanding our in-app measures that show users additional context on certain content; redirecting them to authoritative information; and making these tools available in 23 EU official languages (plus, for EEA users, Norwegian & Icelandic).
We work with external experts to combat harmful misinformation. For example, we work with the World Health Organisation (WHO) on medical information and with our global fact-checking partners, taking their feedback, as well as user feedback, into account to continually identify new topics and to consider which tools are best suited to raising awareness of each topic.
We deploy a combination of in-app user intervention tools on topical issues such as elections, the Israel-Hamas Conflict, Holocaust Education, Mpox and the War in Ukraine.
Video notice tags. A video notice tag is an information bar at the bottom of a video which is automatically applied to a specific word or hashtag (or set of hashtags). The information bar is clickable and invites users to “Learn more about [the topic]”. Users will be directed to an in-app guide or a reliable third-party resource, as appropriate.
Search intervention. If users search for terms associated with a topic, they will be presented with a banner encouraging them to verify the facts and providing a link to a trusted source of information. Search interventions are not deployed for search terms that violate our Community Guidelines, which are actioned according to our policies. A simplified sketch of this logic follows the examples below.
- For example, the four new ongoing general media literacy and critical thinking skills campaigns rolled out in France, Georgia, Moldova, and Portugal are all supported with search guides that direct users to authoritative sources.
- Our COP29 global search intervention, which ran from 29 October to 25 November, pointed users to authoritative climate-related content and was viewed 400k times.
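The precise trigger lists and policy checks behind these interventions are internal; as a purely illustrative aid, the sketch below shows one way such a decision could be expressed in code. The topic term lists, banner text, URLs, and the guideline check are all hypothetical assumptions, not TikTok's actual implementation.

```python
# Hypothetical sketch of a search-intervention decision; not TikTok's actual
# code. Topic term lists, banner text, URLs, and the guideline check are
# illustrative assumptions.
from typing import Optional

# Illustrative mapping from intervention topics to trigger terms and a
# trusted source linked from the banner.
SEARCH_INTERVENTIONS = {
    "climate": {
        "terms": {"cop29", "climate change", "global warming"},
        "source_url": "https://www.un.org/en/climatechange",
    },
    "mpox": {
        "terms": {"mpox", "monkeypox"},
        "source_url": "https://www.who.int/health-topics/monkeypox",
    },
}

def violates_community_guidelines(query: str) -> bool:
    """Placeholder for the separate policy check: violating search terms are
    actioned under the Community Guidelines rather than shown a banner."""
    blocked_terms = {"example-violating-term"}  # illustrative only
    return any(term in query.lower() for term in blocked_terms)

def search_banner(query: str) -> Optional[dict]:
    """Return a banner payload for the first matching topic, or None."""
    if violates_community_guidelines(query):
        return None  # handled by enforcement, not by an intervention banner
    normalized = query.lower()
    for topic, config in SEARCH_INTERVENTIONS.items():
        if any(term in normalized for term in config["terms"]):
            return {
                "topic": topic,
                "message": "Check the facts before sharing.",
                "learn_more_url": config["source_url"],
            }
    return None

print(search_banner("cop29 latest news"))  # climate banner payload
print(search_banner("cooking recipes"))    # None: no intervention
```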
Public service announcement (PSA). If users search for a hashtag on the topic, they will be served a public service announcement reminding them about our Community Guidelines and presenting them with links to trusted sources of information.
Unverified content label. In addition to the above-mentioned tools, we apply warning labels to content related to an emergency or unfolding event that has been assessed by our fact-checking partners but cannot be verified as accurate (i.e., ‘unverified content’), and we prompt people to reconsider sharing such content. This encourages users to consider the reliability of that content. Details of these warning labels are included in our Community Guidelines.
Where users continue to post despite the warning (see the sketch after this list):
- To limit the spread of potentially misleading information, the video will become ineligible for recommendation in the For You feed.
- The video's creator is also notified that their video was flagged as unsubstantiated content and is provided with additional information about why the warning label has been added to their content. Again, this is to raise the creator’s awareness of the credibility of the content that they have shared.
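To make the flow above concrete, here is a minimal sketch of how the unverified-content handling could be modelled. The data fields, status values, and notification text are hypothetical assumptions; only the behaviour it encodes (warning label, share prompt, For You ineligibility, creator notification) is taken from the description above.

```python
# Hypothetical sketch of the unverified-content flow described above.
# Field names, statuses, and messages are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Video:
    creator: str
    fact_check_status: str            # e.g. "verified", "unverified", "false"
    posted_despite_warning: bool = False
    labels: list = field(default_factory=list)
    eligible_for_for_you: bool = True
    creator_notices: list = field(default_factory=list)

def apply_unverified_content_policy(video: Video) -> Video:
    if video.fact_check_status != "unverified":
        return video  # other statuses are handled by other policies

    # 1. Warning label plus a prompt to reconsider sharing.
    video.labels.append("unverified_content_warning")
    video.creator_notices.append("Please reconsider sharing unverified content.")

    # 2. If the creator posts anyway, limit spread and notify them.
    if video.posted_despite_warning:
        video.eligible_for_for_you = False
        video.creator_notices.append(
            "Your video was flagged as unsubstantiated content; "
            "a warning label explains why."
        )
    return video

v = apply_unverified_content_policy(
    Video(creator="@example", fact_check_status="unverified",
          posted_despite_warning=True)
)
print(v.labels, v.eligible_for_for_you)  # ['unverified_content_warning'] False
```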
State-controlled media label. Our state-affiliated media policy is to label accounts run by entities whose editorial output or decision-making process is subject to control or influence by a government. We apply a prominent label to all content and accounts from state-controlled media. The user is also shown a screen pop-up providing information about what the label means, inviting them to “learn more”, and redirecting them to an in-app page. The measure brings transparency to our community, raises users’ awareness, and encourages users to consider the reliability of the source. We continue to work with experts to inform our approach and to explore how we can expand the use of this label.
In the EU, Iceland and Liechtenstein, we have also taken steps to restrict access to content from the entities sanctioned by the EU in 2024:
- RT - Russia Today UK
- RT - Russia Today Germany
- RT - Russia Today France
- RT - Russia Today Spanish
- Sputnik
- Rossiya RTR / RTR Planeta
- Rossiya 24 / Russia 24
- TV Centre International
- NTV/NTV Mir
- Rossiya 1
- REN TV
- Pervyi Kanal / Channel 1
- RT Arabic
- Sputnik Arabic
- RT Balkan
- Oriental Review
- Tsargrad
- New Eastern Outlook
- Katehon
- Voice of Europe
- RIA Novosti
- Izvestija
- Rossiiskaja Gazeta
AI-generated content labels. As more creators take advantage of Artificial Intelligence (AI) to enhance their creativity, we want to support transparent and responsible content creation practices. In 2023, TikTok launched an AI-generated content label for creators to disclose content that is completely AI-generated or significantly edited by AI. The launch of this new tool to help creators label their AI-generated content was accompanied by a creator education campaign, a Help Center page, and a Newsroom post. In May 2024, we started using the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enable our systems to instantly recognize and automatically label AI-generated content (AIGC). In the interests of transparency, we also renamed TikTok AI effects to explicitly include "AI" in their name and corresponding effects label, and updated our guidelines for Effect House creators to do the same.
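Content Credentials embed a signed C2PA manifest in the media file; when the manifest's assertions record a generative-AI source, the upload can be labelled automatically without relying on creator self-disclosure. The sketch below is only an illustration of that idea: the manifest-reading helper is a stand-in for a real C2PA SDK, and the simplified assertion layout is an assumption rather than a description of TikTok's systems.

```python
# Simplified illustration of labelling AI-generated content (AIGC) from C2PA
# Content Credentials. The manifest reader is a placeholder for a real C2PA
# SDK, and the assertion layout is an assumption, not TikTok's implementation.
from typing import Optional

# IPTC digital-source types commonly used in C2PA manifests to signal
# generative-AI provenance.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def read_c2pa_manifest(file_path: str) -> Optional[dict]:
    """Placeholder: a real pipeline would use a C2PA SDK to read and verify
    the signed manifest embedded in the uploaded media file."""
    return None  # illustrative stub

def should_apply_aigc_label(manifest: Optional[dict]) -> bool:
    """Label the upload if its Content Credentials declare an AI source."""
    if not manifest:
        return False  # no credentials: fall back to creator self-disclosure
    for assertion in manifest.get("assertions", []):
        if assertion.get("digitalSourceType") in AI_SOURCE_TYPES:
            return True
    return False

# Example with a hand-built manifest dict standing in for parsed credentials.
sample = {"assertions": [{"digitalSourceType":
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"}]}
print(should_apply_aigc_label(sample))  # True: apply the AIGC label
```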
Dedicated online and in-app information resources. The above-mentioned tools provide users with links to accurate and up-to-date information from trusted sources. Depending on the topic or the relevant EU country, users may be directed to an external authoritative source (e.g., a national government website or an independent national electoral commission), an in-app information centre (e.g., War in Ukraine), or a dedicated page in the TikTok Safety Center or Transparency Center.
We use our Safety Center to inform our community about our approach to safety, privacy, and security on our platform. Relevant to combating harmful misinformation, we have dedicated information on:
Users can learn more about our transparency efforts in our dedicated Transparency Center, available in a number of EU languages, which houses our transparency reports, including the standalone Covert Influence Operations report and the reports we have published under this Code, as well as information on our commitments to maintaining platform integrity (e.g., Protecting the integrity of elections, Combating misinformation, Countering influence operations, and Supporting responsible, transparent AI-generated content), and details of Government Removal Requests.