YouTube

Report September 2025

Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission, and as technology evolves, helping users find useful, relevant and high-quality information across our services is of utmost importance. 

Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. We value openness and accessibility: We lean towards keeping content accessible by providing access to an open and diverse information ecosystem.

2. We respect user choice: If users search for content that is not illegal or prohibited by our policies, they should be able to find it.

3. We build for everyone: Our services are used around the world by users from different cultures, languages, and backgrounds, and at different stages in their lives. We take the diversity of our users into account in policy development and policy enforcement decisions.

With these principles in mind, Google has long invested in ranking systems and has teams around the world working to connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most. 

How companies like Google address information quality concerns has an impact on society and on the trust users place in our services. We are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to fully tackle on their own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Conduct on Disinformation. 

Alongside our participation in the EU Code of Conduct on Disinformation, we continue to work closely with regulators to ensure that our services appropriately comply with the EU Digital Services Act (EU DSA), in full respect of EU fundamental rights such as freedom of expression.

The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 January 2025 to 30 June 2025.

Updates to highlight in this report include (but are not limited to): 

  • 2025 Elections across EU Member States: In H1 2025 (1 January 2025 to 30 June 2025), voters cast their ballots in Germany, Portugal, Romania, and Poland. Google supported these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. In addition, Google put in place a number of policies and other measures that helped people navigate political content that was AI-generated, including ad disclosures, content labels on YouTube, and digital watermarking tools. 

  • Advances in Artificial Intelligence (AI): In H1 2025, we announced new AI safeguards to help protect against misuse. We introduced SynthID Detector, a verification portal to identify AI-generated content made with Google AI. The portal, still in the early stages of tester mode, provides detection capabilities across different modalities in one place, and provides essential transparency in the rapidly evolving landscape of generative media.
    • When we launched SynthID — a state-of-the-art tool that embeds imperceptible watermarks and enables the identification of AI-generated content — our aim was to provide a suite of novel technical solutions to help minimise misinformation and misattribution.
    • SynthID not only preserves the content’s quality but also acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations. While originally focused on AI-generated imagery only, we have since expanded SynthID to include AI-generated text, audio and video content, including content generated by our Gemini, Imagen, Lyria and Veo models. Over 10 billion pieces of content have already been watermarked with SynthID.
    • How SynthID Detector works: When you upload an image, audio track, video or piece of text created using Google's AI tools, the portal will scan the media for a SynthID watermark. If a watermark is detected, the portal will highlight specific portions of the content most likely to be watermarked. For audio, the portal pinpoints specific segments where a SynthID watermark is detected, and for images, it indicates areas where a watermark is most likely.

  • In addition to our continued work and investment in new tools, we are also committed to working with the greater ecosystem to help others benefit from and improve on the advances we are making. As such, we have open-sourced SynthID text watermarking through our updated Responsible Generative AI Toolkit. In addition, as a member of the Coalition for Content Provenance and Authenticity (C2PA), we collaborate with Adobe, Microsoft, OpenAI, Meta, startups, and many others to build and implement the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
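The detection flow described for SynthID Detector, scanning uploaded media and highlighting the portions most likely to carry a watermark, can be sketched in outline as follows. All names, scores, and thresholds below are hypothetical illustrations, not the actual SynthID API.

```python
# Hypothetical sketch of a segment-level watermark detection flow, in the
# spirit of the SynthID Detector portal description: scan a piece of media
# segment by segment and flag the portions most likely to be watermarked.
# All names and thresholds are illustrative, not Google's implementation.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float            # segment start offset (e.g. seconds)
    end: float              # segment end offset
    watermark_score: float  # detector confidence in [0, 1] (assumed input)

def flag_watermarked_segments(segments, threshold=0.8):
    """Return the segments whose detector confidence meets the threshold."""
    return [s for s in segments if s.watermark_score >= threshold]

# Example: an audio track split into three segments; only the middle
# segment carries a strong watermark signal.
track = [
    Segment(0.0, 10.0, 0.12),
    Segment(10.0, 20.0, 0.93),
    Segment(20.0, 30.0, 0.41),
]
flagged = flag_watermarked_segments(track)
print([(s.start, s.end) for s in flagged])  # → [(10.0, 20.0)]
```

In this sketch the per-segment confidence scores are assumed to come from an upstream detector model; the portal behaviour described above (pinpointing audio segments, highlighting likely image regions) corresponds to reporting only the segments above the threshold.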

Google has been working on AI for over a decade to solve society’s biggest challenges and also power Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns on topics such as disinformation. Google is committed to developing technology responsibly and first published AI Principles in 2018 to guide our work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While we recently updated our Principles to adapt to shifts in technology, the global conversation, and the AI ecosystem, our deep commitment to responsible AI development remains unchanged. 

Through our philanthropic arm Google.org, we have supported organisations that are using AI to tackle important societal issues. Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google continuously works to integrate new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies in jointly committing to advance responsible practices in the development of artificial intelligence, supporting efforts by the G7, the Organisation for Economic Co-operation and Development (OECD), and national governments. Going forward, we will continue to report on and expand Google-developed AI tools, and we remain committed to advancing bold and responsible AI to maximise its benefits and minimise its risks.

Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 January 2025 to 30 June 2025.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and so is expected to fluctuate over the course of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time. 
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood against the background context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and to the war in Ukraine. Additionally, an annex on Google’s response to the recent elections in Romania, Portugal, Poland and Germany is included in this report.
  • The term ‘disinformation’ in this report refers to the definition included in the EU Code of Conduct on Disinformation.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.

Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

See response to QRE 14.1.1 to see how YouTube’s Community Guidelines map to the TTPs. These policies seek to, among other things, limit the spread of misleading or deceptive content that poses a serious risk of egregious harm. 

Community Guidelines Enforcement
After a creator’s first Community Guidelines violation, they will typically get a warning with no penalty to their channel. They will have the chance to take a policy training to allow the warning to expire after 90 days. Creators may also receive a separate warning for a violation in a different policy category. If the same policy is violated within that 90-day window, the creator’s channel will be given a strike.

If the creator receives three strikes in the same 90-day period, their channel may be removed from YouTube. In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. YouTube may also remove content for reasons other than Community Guidelines violations, such as a first-party privacy complaint or a court order. In these cases, creators will not be issued a strike.

If a creator’s channel gets a strike, they will receive an email and can also opt to receive mobile and desktop notifications. The emails and notifications explain the action taken on their content and which of YouTube’s policies the content violated. More detailed guidelines of YouTube’s processes and policies on strikes can be found here.

YouTube also reserves the right to restrict a creator's ability to create content on YouTube at its discretion. A channel may be turned off or restricted from using any YouTube features. If this happens, users are prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on the YouTube channel. A violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all existing YouTube channels of the user, any new channels created or acquired, and channels in which the user is repeatedly or prominently featured.
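The enforcement ladder described above (an initial warning, then strikes within a rolling 90-day window, with possible channel removal at three active strikes) can be sketched as a simple state model. This is a simplified illustration for clarity only, not YouTube’s actual enforcement system, which also involves policy trainings, per-policy warnings, and discretionary actions for severe abuse.

```python
# Simplified illustration of the warning/strike ladder described above:
# the first violation yields a warning; further violations yield strikes,
# and three strikes within the same 90-day period may lead to channel
# removal. A sketch only, not YouTube's actual implementation.

WINDOW_DAYS = 90

class Channel:
    def __init__(self):
        self.warned_on = None  # day of the initial warning, if any
        self.strikes = []      # days on which strikes were issued

    def record_violation(self, day):
        """Apply a violation on `day`; return the resulting action."""
        if self.warned_on is None:
            self.warned_on = day
            return "warning"
        # Keep only strikes still inside the rolling 90-day window.
        self.strikes = [d for d in self.strikes if day - d < WINDOW_DAYS]
        self.strikes.append(day)
        if len(self.strikes) >= 3:
            return "channel may be terminated"
        return f"strike {len(self.strikes)}"

ch = Channel()
print(ch.record_violation(0))   # → warning
print(ch.record_violation(10))  # → strike 1
print(ch.record_violation(40))  # → strike 2
print(ch.record_violation(70))  # → channel may be terminated
```

Note that in this sketch strikes outside the 90-day window expire and no longer count toward termination, mirroring the report’s description of the 90-day period.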

Refer to SLI 18.2.1 on YouTube’s enforcement at an EEA Member State level.
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
(1) Number of videos removed for violations of YouTube’s Misinformation Policies in H1 2025 (1 January 2025 to 30 June 2025), broken down by EEA Member State;

(2) Views threshold on videos removed for violations of YouTube’s Misinformation Policies in H1 2025 broken down by EEA Member State.

For SLI 18.2.1 (2): Starting March 2025, YouTube updated the terminology used for Shorts view counts. This terminology change does not apply to YouTube’s transparency reporting view-related metrics, which remain the same in name and methodology. Learn more here.
Country | Number of videos removed | Removed with 0 views | Removed with 1-10 views | Removed with 11-100 views | Removed with 101-1,000 views | Removed with 1,001-10,000 views | Removed with >10,000 views
Austria 62 6 26 16 12 1 1
Belgium 49 3 29 8 7 1 1
Bulgaria 90 28 23 14 15 7 3
Croatia 18 2 6 1 3 4 2
Cyprus 39 6 6 9 12 5 1
Czech Republic 70 17 27 7 10 5 4
Denmark 53 4 20 12 12 3 2
Estonia 30 2 9 4 10 4 1
Finland 41 8 7 17 5 3 1
France 528 70 209 124 74 29 22
Germany 902 108 339 194 138 74 49
Greece 76 4 14 15 17 21 5
Hungary 37 3 20 9 3 1 1
Ireland 136 19 52 26 23 13 3
Italy 311 30 119 67 54 24 17
Latvia 44 4 10 10 11 6 3
Lithuania 30 6 7 10 4 2 1
Luxembourg 3 0 2 1 0 0 0
Malta 6 1 3 0 1 1 0
Netherlands 320 46 134 72 43 17 8
Poland 155 26 47 31 24 21 6
Portugal 65 11 26 12 9 6 1
Romania 95 19 31 18 21 4 2
Slovakia 13 1 5 4 1 1 1
Slovenia 48 10 9 6 13 9 1
Spain 747 89 215 141 130 128 44
Sweden 92 8 34 15 12 17 6
Iceland 3 0 2 0 0 1 0
Norway 47 4 15 11 13 2 2
Total EU 4,060 531 1,429 843 664 407 186
Total EEA 4,110 535 1,446 854 677 410 188