Twitch

Report March 2026

Submitted

Executive summary


Twitch is a live streaming service built around interactive, real-time communities, where creators engage in a wide variety of activities, including gaming, art, cooking, and other forms of creative content.

At Twitch, we are committed to fostering a dynamic and inclusive environment that enables streamers to express themselves safely while ensuring a positive, engaging environment for viewers, free of illegal and harmful interactions. This starts with Twitch’s Community Guidelines, which balance user expression with community safety and set the rules for behaviour on Twitch. Developed in consultation with external safety, human rights, and policy experts, these guidelines are regularly reviewed and updated to reflect and respond to the community’s evolving needs.

We identify and address potential safety risks using a combination of automated detection, proactive human review, and user reporting. Our global Trust and Safety team reviews and evaluates content and accounts flagged by users, as well as signals generated by our automated detection models. The speed at which we respond to user reports is critical given the live nature of Twitch, and during the 2025 reporting period we responded to 85% of reports in under 10 minutes and 96% of reports in under an hour. Twitch employs extensive human review to help ensure that enforcement actions remain accurate and fair for our community members.
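The response-time figures above are, in effect, service-level indicators computed over pairs of report and response timestamps. As a minimal illustration of how such percentages can be derived (the function name and the sample data below are hypothetical, not Twitch's internal tooling):

```python
from datetime import datetime, timedelta

def response_sli(report_pairs, thresholds):
    """Given (reported_at, responded_at) timestamp pairs, return the
    fraction of reports handled within each response-time threshold."""
    deltas = [responded - reported for reported, responded in report_pairs]
    total = len(deltas)
    return {t: sum(d <= t for d in deltas) / total for t in thresholds}

# Illustrative sample data only.
pairs = [
    (datetime(2025, 3, 1, 12, 0), datetime(2025, 3, 1, 12, 4)),   # 4 min
    (datetime(2025, 3, 1, 13, 0), datetime(2025, 3, 1, 13, 8)),   # 8 min
    (datetime(2025, 3, 1, 14, 0), datetime(2025, 3, 1, 14, 45)),  # 45 min
    (datetime(2025, 3, 1, 15, 0), datetime(2025, 3, 1, 16, 30)),  # 90 min
]
slis = response_sli(pairs, [timedelta(minutes=10), timedelta(hours=1)])
# With this sample: 2 of 4 reports within 10 minutes, 3 of 4 within an hour.
```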

Twitch recognizes the risks posed by misinformation, and we believe that individuals who use online services to spread harmful misinformation at scale do not have a place in our community. We maintain a dedicated policy to address this: our Harmful Misinformation Actor policy. It targets individuals whose online presence (whether on or off Twitch) is dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation, such as conspiracies that promote violence. We prohibit harmful misinformation actors who meet all three of these criteria because, taken together, these factors create the highest risk of harm, including the incitement of real-world violence.

Detecting and responding to disinformation 

Our enforcement numbers under this policy are relatively low due to several factors. First, the structure of Twitch makes coordinated misinformation campaigns difficult to sustain. It is extremely difficult for a new streamer to garner large numbers of concurrent viewers; it takes time to grow an audience on Twitch. Most Twitch content is also long-form and ephemeral. Once a livestream ends, the content typically disappears or is only available in limited formats. The ephemeral nature of content on Twitch makes large-scale misinformation campaigns much more difficult to execute compared to other user-generated video and social media services where content remains indefinitely and can be amplified more easily. Second, our policy focuses on individuals who persistently share harmful misinformation. Due to the long-form nature of Twitch’s content, we evaluate a streamer’s aggregated content rather than isolated statements within a longer piece of content. Finally, when the Harmful Misinformation Actor policy was introduced in 2022, we took swift enforcement action against accounts that posed a clear risk to our community. We believe enforcement of our policy since its introduction has been an effective deterrent to harmful misinformation actors; we have not seen significant numbers of such actors attempt to join our service.

Even if someone is not a Harmful Misinformation Actor, Twitch may still take enforcement action under other policies. For example, misinformation that targets specific communities may violate our Hateful Conduct or Harassment policies, and we take action on content that encourages others to engage in physically harmful behaviour under our Self-Destructive Behaviour policy.

In addition to misinformation, Twitch invests significant resources to ban bots, spammers, impersonators, and other types of bad actors that attempt to manipulate activity or evade enforcement on our service. We have automated and proactive detection systems that work in tandem with our reporting system to identify and remove bots, known bad actors, and those who are trying to evade a suspension or ban.

Elections and industry collaboration

While misinformation is not currently prevalent on Twitch, we recognize the harm that this content can cause, particularly when it is related to civic processes and elections. Twitch maintains cross-functional coordination across Product, Policy, Operations, Legal, Risk Management, and Content teams to review potential misinformation risks and respond if necessary. Our Trust & Safety team did not observe any misinformation-related—or hateful conduct, harassment, or violence-related—threats related to elections that took place during the reporting period.

We continuously refine our approach to safety, drawing on insights from experts and evolving trends in our community. Recognizing that the prevalence of harmful misinformation can shift, we actively collaborate with industry, academia, and civil society to assess emerging risks and adjust our strategies accordingly. Twitch is a signatory of the Australian Code of Practice on Disinformation and Misinformation, fostering stronger cross-sector collaboration and information sharing. Additionally, we participate in a variety of global industry knowledge-sharing initiatives, including the New Zealand Code of Practice for Online Safety and Harms (which also addresses disinformation), the EU Hate Speech Code, and the EU Internet Forum. Twitch also recently stepped into an at-large Operating Board seat for the Global Internet Forum to Counter Terrorism (GIFCT), further supporting cross-industry coordination and information sharing on emerging online harms.

The EU Code of Practice on Disinformation serves as a valuable mechanism for information sharing and collaboration that will help strengthen industry’s abilities to react quickly to the spread of misinformation. As a signatory, Twitch aims to make meaningful contributions while continuing to learn from expert organisations and industry peers. We are committed to combating misinformation on Twitch in an effective yet targeted manner that balances freedom of expression with keeping our communities safe.
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment:
Measure 1.1, Measure 1.2, Measure 1.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
  • first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
  • second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
  • third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
In accordance with the Code’s accommodation of signatories that do not provide very large online services, Twitch has adapted this QRE as follows:
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: 
  • first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses
  • second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
From Twitch’s Community Guidelines:
“We remove users whose online presence is dedicated to (1) persistently sharing (2) widely disproven and broadly shared (3) harmful misinformation topics.
This policy is focused on Twitch users who persistently share harmful misinformation. It will not be applied to users based upon individual statements or discussions that occur on the channel. We will evaluate whether a user violates the policy by assessing both their on-platform behaviour as well as their off-platform behaviour. You can report these actors by sending an email to our internal investigations team with the account name and any available supporting evidence.
Under this policy we cover the following topic areas, and will continue to update this list as new trends emerge: 

  • Misinformation that targets protected groups, which is already prohibited under our Hateful Conduct & Harassment Policy
  • Harmful health misinformation and widespread conspiracy theories related to dangerous treatments, COVID-19, and COVID-19 vaccine misinformation
    • Discussions of treatments that are known to be harmful without noting the dangers of such treatments
    • For COVID-19—and any other WHO-declared Public Health Emergency of International Concern (PHEIC)—misinformation that causes imminent physical harm or is part of a broad conspiracy
  • Misinformation promoted by conspiracy networks tied to violence and/or promoting violence
  • Civic misinformation that undermines the integrity of a civic or political process
    • Promotion of verifiably false claims related to the outcome of a fully vetted political process, including election rigging, ballot tampering, vote tallying, or election fraud
  • In instances of public emergencies (e.g., wildfires, earthquakes, active shootings), we may also act on misinformation that may impact public safety.”

Additionally, Twitch launched an Ads Safety Report in December 2025 highlighting how we work to ensure that ads appear alongside content that aligns with our Community Guidelines.
SLI 1.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.
In accordance with the Code’s accommodation of signatories that do not provide very large online services, Twitch has adapted this SLI as follows: Actions taken to enforce each of the policies mentioned in the qualitative part of this service level indicator. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

This data represents the total number of indefinite suspensions issued between January and December 2025. Twitch’s enforcement is focused on indefinitely suspending all dedicated misinformation actors and any related accounts, and removing any associated content.
  • Level: Page/Domain 
  • Data: 1