Google Search

Report March 2026


Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission as technology evolves, it is of utmost importance that we help users find useful, relevant and high-quality information across our services. 

Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. We value openness and accessibility: We lean towards keeping content accessible by providing access to an open and diverse information ecosystem.

2. We respect user choice: If users search for content that is not illegal or prohibited by our policies, they should be able to find it.

3. We build for everyone: Our services are used around the world by users from different cultures, languages, and backgrounds, and at different stages in their lives. We take the diversity of our users into account in policy development and policy enforcement decisions.

With these principles in mind, Google has long invested in ranking systems, supported by teams around the world, that connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most. 

How companies like Google address information quality concerns has an impact on society and on the trust users place in our services. We are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to fully tackle on their own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Conduct on Disinformation. 

Alongside our participation in the EU Code of Conduct on Disinformation, we continue to work closely with regulators to ensure that our services appropriately comply with the EU Digital Services Act (EU DSA), in full respect of EU fundamental rights such as freedom of expression.

The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 July 2025 to 31 December 2025.

Updates to highlight in this report include (but are not limited to):
 
  • 2025 Elections across EU Member States: During the reporting period, voters cast their ballots in Moldova, Czech Republic, Portugal, Ireland, and the Netherlands. Google supported these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. In addition, Google put in place a number of policies and other measures that helped people navigate political content that was AI-generated, including ad disclosures, content labels on YouTube, and digital watermarking tools. 

  • Advances in Artificial Intelligence (AI): In H1 2025, we announced new AI safeguards to help protect against misuse. We introduced SynthID Detector, a verification portal to identify AI-generated content made with Google AI. The portal, which we have rolled out to early testers, provides detection capabilities across different modalities in one place, and provides essential transparency in the rapidly evolving landscape of generative media.
    • When we launched SynthID — a state-of-the-art tool that embeds imperceptible watermarks and enables the identification of AI-generated content — our aim was to provide a suite of novel technical solutions to help minimise misinformation and misattribution.
    • SynthID not only preserves the content’s quality; it also acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations. While originally focused on AI-generated imagery only, we have since expanded SynthID to include AI-generated text, audio and video content, including content generated by our Gemini, Imagen, Lyria and Veo models. Over 10 billion pieces of content have already been watermarked with SynthID.
    • How SynthID Detector works: When you upload an image, audio track, video or piece of text created using Google's AI tools, the portal will scan the media for a SynthID watermark. If a watermark is detected, the portal will highlight specific portions of the content most likely to be watermarked. For audio, the portal pinpoints specific segments where a SynthID watermark is detected, and for images, it indicates areas where a watermark is most likely.

  • In addition to our continued work and investment in new tools, we are committed to working with the broader ecosystem to help others benefit from, and improve on, the advances we are making. To that end, we have open-sourced SynthID text watermarking through our updated Responsible Generative AI Toolkit. Complementing our advancements in AI, as a member of the Coalition for Content Provenance and Authenticity (C2PA) we collaborate with Adobe, Microsoft, OpenAI, Meta, startups, and many others to build and implement the newest version (2.2) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
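The SynthID Detector workflow described above (upload media, scan for a watermark, highlight the regions most likely watermarked) can be sketched as a toy pipeline. SynthID's actual detection internals are not public, so everything below is illustrative: the literal byte marker stands in for what is in reality an imperceptible statistical watermark.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Toy stand-in for a watermark signature; real SynthID watermarks are
# statistical and imperceptible, not literal byte sequences.
TOY_MARK = b"\x00SID\x00"

@dataclass
class DetectionResult:
    watermark_found: bool
    # (start, end) offsets of the regions most likely watermarked,
    # mirroring the portal's highlighting of specific segments
    likely_spans: List[Tuple[int, int]] = field(default_factory=list)

def scan_for_watermark(media: bytes) -> DetectionResult:
    """Scan uploaded media and report likely-watermarked regions (toy)."""
    spans = []
    start = media.find(TOY_MARK)
    while start != -1:
        spans.append((start, start + len(TOY_MARK)))
        start = media.find(TOY_MARK, start + 1)
    return DetectionResult(watermark_found=bool(spans), likely_spans=spans)
```

The shape matters more than the mechanics: the detector returns both a verdict and localised spans, which is what lets the portal pinpoint segments in audio or areas in an image rather than giving only a yes/no answer.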

Google has been working on AI for over a decade to help solve society’s biggest challenges and to power the Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns on topics such as disinformation. Google is committed to developing technology responsibly, and first published its AI Principles in 2018 to guide this work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. Through our philanthropic arm, Google.org, we have supported organisations that are using AI to tackle important societal issues.

Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google continuously works to integrate new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies in jointly committing to advance responsible practices in the development of artificial intelligence, supporting efforts by the G7, the Organisation for Economic Co-operation and Development (OECD), and national governments. Going forward, we will continue to report on and expand Google-developed AI tools, and we remain committed to advancing bold and responsible AI in order to maximise AI’s benefits and minimise its risks.

Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 July 2025 to 31 December 2025.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and so is expected to fluctuate over the time of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time. 
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood in the context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and to the war in Ukraine. Additionally, an annex on Google’s response toward the recent elections in Moldova, Czech Republic, Portugal, Ireland, and the Netherlands is included in this report.
  • The term ‘disinformation’ in this report refers to the definition included in the EU Code of Conduct on Disinformation.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.


Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
N/A
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
Google Researcher Program
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube provide eligible researchers (including non-academics who meet predefined eligibility criteria) with access to limited metadata scraping for public data. For researchers who are not affiliated with an EU institution and do not meet the qualifications for the EU program, Google also offers a global alternative. This program aims to enhance the public’s understanding of Google’s services and their impact. For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API for eligible academic researchers from around the world who are affiliated with an accredited higher-learning institution. Learn more about the data available in the YouTube API reference.

Transparency into paid content on YouTube
YouTube provides users with a bespoke front-end search page for accessing publicly available data on organic content with paid product placements, sponsorships and endorsements, as disclosed by creators. This enables users to understand that creators may receive goods or services in exchange for promotion. This search page complements YouTube’s existing process of displaying a disclosure message when creators disclose to YouTube that their content contains paid promotions. Learn more about adding paid product placements, sponsorships & endorsements here.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
Google Researcher Program
Approved researchers will receive permissions and access to public data for Search and YouTube in the following ways: 
  • Search: Access to an API for limited scraping with a budget for quota;
  • YouTube: Permission for scraping limited to metadata.

For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. The program allows eligible academic researchers around the world to independently analyse the data they collect, including generating new/derived metrics for their research. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data.
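As a minimal sketch of what working with such Data API responses looks like, the helper below pulls the metadata fields mentioned above (title, channel, views, likes, comments) out of a `videos.list`-style response with `part=snippet,statistics`. The sample payload is illustrative; researcher-program authentication, quotas, and the full response schema are documented in the YouTube API reference.

```python
# Sketch: flattening a YouTube Data API `videos.list` response into rows.
# The payload below is a hand-written illustrative sample, not real data.

def summarise_videos(response: dict) -> list:
    """Extract per-video metadata fields from an API response dict."""
    rows = []
    for item in response.get("items", []):
        snippet = item.get("snippet", {})
        stats = item.get("statistics", {})
        rows.append({
            "id": item.get("id"),
            "title": snippet.get("title"),
            "channel": snippet.get("channelTitle"),
            # statistics counts arrive as strings in the JSON payload
            "views": int(stats.get("viewCount", 0)),
            "likes": int(stats.get("likeCount", 0)),
            "comments": int(stats.get("commentCount", 0)),
        })
    return rows

sample = {
    "items": [{
        "id": "abc123",
        "snippet": {"title": "Example video", "channelTitle": "Example channel"},
        "statistics": {"viewCount": "1200", "likeCount": "34", "commentCount": "5"},
    }]
}
```

Researchers would feed rows like these into their own analyses, including the new or derived metrics the program permits them to generate.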

Transparency into paid content on YouTube
The information provided via the bespoke front-end search page allows users to view videos with active paid product placements, sponsorships, and endorsements that have been declared on YouTube.
  • Paid product placements
    • Videos made about a product or service because there is a connection between the creator and the maker of the product or service;
    • Videos created for a company or business in exchange for compensation or free products/services; 
    • Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free products to make the video.
  • Endorsements - Videos created for an advertiser or marketer that contain a message reflecting the opinions, beliefs, or experiences of the creator.
  • Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.

Definitions can be found on the YouTube Help Centre.

Additional data points are provided in SLI 26.1.1 and 26.2.1.
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
Total number of applications under the Google Researcher Program, broken down by: 
  • Applications Approved
  • Applications Rejected
  • Applications Under Review 

We note that most of the applications Google Search received in the reporting period either constituted spam, or did not contribute to the detection, identification and understanding of systemic risks in the EU.
Total number of applications approved | Total number of applications rejected | Total number of applications under review
  • Per-Member-State figures (Austria through Sweden) and figures for Iceland, Liechtenstein and Norway: no per-country breakdown reported for this period
  • Total EU: not reported separately
  • Total EEA: 4 approved, 8 rejected, 0 under review