Google Search

Report September 2025

Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission as technology evolves, it is of utmost importance that we help users find useful, relevant and high-quality information across our services.

Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. We value openness and accessibility: We lean towards keeping content accessible, supporting an open and diverse information ecosystem.

2. We respect user choice: If users search for content that is not illegal or prohibited by our policies, they should be able to find it.

3. We build for everyone: Our services are used around the world by users from different cultures, languages, and backgrounds, and at different stages in their lives. We take the diversity of our users into account in policy development and policy enforcement decisions.

With these principles in mind, Google has long invested in ranking systems, with teams around the world working to connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most.

How companies like Google address information quality concerns has an impact on society and on the trust users place in our services. We are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to fully tackle on their own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Conduct on Disinformation. 

Alongside our participation in the EU Code of Conduct on Disinformation, we continue to work closely with regulators to ensure that our services appropriately comply with the EU Digital Services Act (EU DSA), in full respect of EU fundamental rights such as freedom of expression.

The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 January 2025 to 30 June 2025.

Updates to highlight in this report include (but are not limited to): 

  • 2025 Elections across EU Member States: In H1 2025 (1 January 2025 to 30 June 2025), voters cast their ballots in Germany, Portugal, Romania, and Poland. Google supported these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. In addition, Google put in place a number of policies and other measures to help people navigate AI-generated political content, including ad disclosures, content labels on YouTube, and digital watermarking tools.

  • Advances in Artificial Intelligence (AI): In H1 2025, we announced new AI safeguards to help protect against misuse. We introduced SynthID Detector, a verification portal that identifies AI-generated content made with Google AI. The portal, currently in an early tester phase, offers detection capabilities across different modalities in one place, providing essential transparency in the rapidly evolving landscape of generative media.
    • When we launched SynthID — a state-of-the-art tool that embeds imperceptible watermarks and enables the identification of AI-generated content — our aim was to provide a suite of novel technical solutions to help minimise misinformation and misattribution.
    • SynthID not only preserves the content’s quality, but also acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations. While originally focused on AI-generated imagery, we have since expanded SynthID to include AI-generated text, audio and video content, including content generated by our Gemini, Imagen, Lyria and Veo models. Over 10 billion pieces of content have already been watermarked with SynthID.
    • How SynthID Detector works: When you upload an image, audio track, video or piece of text created using Google's AI tools, the portal scans the media for a SynthID watermark. If a watermark is detected, the portal highlights the portions of the content most likely to carry it: for audio, it pinpoints the specific segments where the watermark is detected; for images, it indicates the areas where a watermark is most likely.

  • In addition to our continued work and investment in new tools, we are also committed to working with the wider ecosystem to help others benefit from and build on the advances we are making. As such, we have open-sourced SynthID text watermarking through our updated Responsible Generative AI Toolkit (see the first sketch after this list). Complementing our advancements in AI, as a member of the Coalition for Content Provenance and Authenticity (C2PA), we collaborate with Adobe, Microsoft, OpenAI, Meta, startups, and many others to build and implement the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks thanks to stricter technical requirements for validating the history of the content’s provenance (see the second sketch after this list).
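
As a concrete, hedged illustration of the open-sourced SynthID text watermarking mentioned above, the sketch below uses the Hugging Face Transformers integration of SynthID Text. It is a minimal example under stated assumptions, not Google’s production pipeline: the model name, prompt, and watermarking keys are illustrative placeholders, and confirming a watermark in generated text requires a separately trained Bayesian detector, which is out of scope here.

```python
# Minimal sketch: generating SynthID-watermarked text with the open-source
# Hugging Face Transformers integration (transformers >= 4.46).
# The model, prompt, and keys below are illustrative placeholders.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# The watermark is seeded by a private list of integer keys; the same keys
# (plus a trained detector) are needed to test content for the mark later.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # placeholder keys
    ngram_len=5,
)

inputs = tokenizer(["Write a short note about provenance."], return_tensors="pt")
out = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # re-weights sampling to embed the mark
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```

Because the watermark only re-weights token sampling, the generated text reads normally; the mark is statistical rather than visible, which is what allows it to survive ordinary sharing and light transformation.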
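
Similarly, Content Credentials attached under the C2PA standard can be inspected with the coalition’s open-source tooling. The sketch below is an assumption-laden illustration rather than Google tooling: it shells out to the open-source c2patool CLI, which must be installed separately; the file name is a placeholder, and the exact JSON layout of the manifest report may vary across tool versions.

```python
# Sketch: reading C2PA Content Credentials from a signed asset using the
# open-source c2patool CLI (https://github.com/contentauth/c2patool).
# Assumes c2patool is on PATH; file name and JSON layout are assumptions.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "signed_image.jpg"],  # prints the asset's manifest store as JSON
    capture_output=True,
    text=True,
    check=True,
)
store = json.loads(result.stdout)

# The active manifest records the asset's provenance history: who produced
# it, which tool signed it, and what actions (e.g. AI generation) were applied.
active_label = store.get("active_manifest")
if active_label:
    print(json.dumps(store["manifests"][active_label], indent=2))
else:
    print("No Content Credentials found in this asset.")
```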

Google has been working on AI for over a decade, both to help solve society’s biggest challenges and to power the Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns about topics such as disinformation. Google is committed to developing technology responsibly and first published its AI Principles in 2018 to guide this work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While we recently updated our Principles to adapt to shifts in technology, the global conversation, and the AI ecosystem, our deep commitment to responsible AI development remains unchanged.

Through our philanthropic arm, Google.org, we have supported organisations that are using AI to tackle important societal issues. Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google continuously works to integrate new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies in jointly committing to advance responsible practices in the development of artificial intelligence, supporting efforts by the G7, the Organisation for Economic Co-operation and Development (OECD), and national governments. Going forward, we will continue to report on and expand Google-developed AI tools, and we remain committed to advancing bold and responsible AI to maximise AI’s benefits and minimise its risks.

Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 January 2025 to 30 June 2025.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and is therefore expected to fluctuate over the course of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time.
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood against the background context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and the war in Ukraine. Additionally, this report includes an annex on Google’s response to the recent elections in Romania, Portugal, Poland and Germany.
  • The term ‘disinformation’ in this report refers to the definition included in the EU Code of Conduct on Disinformation.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.


Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:

  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No

If yes, list these implementation measures here
N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No

If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Note: The below QRE response has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share for this period.

Google Search’s systems are designed to elevate high-quality information and combat the threats listed in Commitment 14. While many of those tactics, techniques, and procedures (TTPs) are not relevant to search engines (e.g. TTPs 1 through 5, TTP 11), by seeking to elevate trustworthy, high-quality information, Search’s ranking systems directly tackle threats like inauthentic domains (TTP 4), obfuscation (TTP 6), deceptive manipulated media (TTP 7), hack and leak operations (TTP 8), inauthentic coordination (TTP 9), and a broad range of deceptive practices (TTP 10). More information about the design of Search’s ranking systems is outlined in the User Empowerment chapter.
 
Google Search’s Overall Content Policies outline that Search takes action against spam: content that exhibits deceptive or manipulative behaviour designed to deceive users or game search systems. Learn more in the Google Search Webmaster Guidelines.

In line with these policies, Search deploys spam protection tools. These efforts address a range of deceptive practices and help reduce the spread on Google Search of low-quality content propagated through the inauthentic behaviours outlined in the relevant TTPs.

Moreover, Search has policies and community guidelines specifically governing what can appear in Google Search features (e.g. knowledge panels, content advisories, ‘About This Result’, etc.) to ensure that Search shows high-quality and helpful content, while also taking action against content that may promote harmful mis-/disinformation. Relevant policies for the threats listed above include the following:
 
  • Deceptive Practices Policy: This policy prohibits content that impersonates any person or organisation, misrepresentation or concealment of ownership or primary purpose, and engagement in inauthentic or coordinated behaviour to deceive, defraud, or mislead. This policy does not cover content with certain artistic, educational, historical, documentary, or scientific considerations, or other substantial benefits to the public.
  • Manipulated Media Policy: This policy prohibits audio, video, or image content that has been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably did not take place. 
  • Transparency Policy: This policy notes that news sources on Google should provide clear dates and bylines, as well as information about authors, the publication, the publisher, company or network behind it, and contact information.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
Note: The below QRE response has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share for this period.

Google Search uses a variety of proactive detection efforts to counter spam, which overlaps significantly with the tactics, techniques, and procedures (TTPs) used to disseminate disinformation. As outlined in the overall Google Search Content Policies and Community Guidelines for user-generated content, action is taken against spam: content that exhibits deceptive or manipulative behaviour designed to deceive users or game search systems.

Pursuant to the Spam Content Policy, Google Search deploys spam protection tools, such as SpamBrain (Google’s AI-based spam-prevention system), to protect search quality and user safety. Addressing a wider range of content than mis-/disinformation alone, these efforts help reduce the spread of low-quality content on Google Search. Additional information can be found in the 2022 Google Search Webspam Report. In March 2024, Google Search released an update to its Spam Policies that addresses ‘scaled content abuse’: artificially generated content (including AI-generated content) that seeks to manipulate Google’s search rankings.

In addition, Google’s Threat Analysis Group (TAG) and Trust and Safety Teams are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.