Google

Report September 2025


Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission, and as technology evolves, helping users find useful, relevant and high-quality information across our services is of utmost importance. 

Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. We value openness and accessibility: We lean towards keeping content accessible by providing access to an open and diverse information ecosystem.

2. We respect user choice: If users search for content that is not illegal or prohibited by our policies, they should be able to find it.

3. We build for everyone: Our services are used around the world by users from different cultures, languages, and backgrounds, and at different stages in their lives. We take the diversity of our users into account in policy development and policy enforcement decisions.

With these principles in mind, Google has long invested in ranking systems, with teams around the world working to connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most. 

How companies like Google address information quality concerns has an impact on society and on the trust users place in our services. We are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to tackle fully on its own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Conduct on Disinformation. 

Alongside our participation in the EU Code of Conduct on Disinformation, we continue to work closely with regulators to ensure that our services appropriately comply with the EU Digital Services Act (EU DSA), in full respect of EU fundamental rights such as freedom of expression.

The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 January 2025 to 30 June 2025.

Updates to highlight in this report include (but are not limited to): 

  • 2025 Elections across EU Member States: In H1 2025 (1 January 2025 to 30 June 2025), voters cast their ballots in Germany, Portugal, Romania, and Poland. Google supported these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. In addition, Google put in place a number of policies and other measures that helped people navigate political content that was AI-generated, including ad disclosures, content labels on YouTube, and digital watermarking tools. 

  • Advances in Artificial Intelligence (AI): In H1 2025, we announced new AI safeguards to help protect against misuse. We introduced SynthID Detector, a verification portal to identify AI-generated content made with Google AI. The portal, currently available to early testers, offers detection capabilities across different modalities in one place and provides essential transparency in the rapidly evolving landscape of generative media.
    • When we launched SynthID — a state-of-the-art tool that embeds imperceptible watermarks and enables the identification of AI-generated content — our aim was to provide a suite of novel technical solutions to help minimise misinformation and misattribution.
    • SynthID not only preserves the content’s quality, it acts as a robust watermark that remains detectable even when the content is shared or undergoes a range of transformations. While originally focused on AI-generated imagery only, we’ve since expanded SynthID to include AI-generated text, audio and video content, including content generated by our Gemini, Imagen, Lyria and Veo models. Over 10 billion pieces of content have already been watermarked with SynthID.
    • How SynthID Detector works: When you upload an image, audio track, video or piece of text created using Google's AI tools, the portal will scan the media for a SynthID watermark. If a watermark is detected, the portal will highlight specific portions of the content most likely to be watermarked. For audio, the portal pinpoints specific segments where a SynthID watermark is detected, and for images, it indicates areas where a watermark is most likely.

  • In addition to our continued work and investment in new tools, we are also committed to working with the greater ecosystem to help others benefit from and improve on the advances we are making. As such, we have open-sourced SynthID text watermarking through our updated Responsible Generative AI Toolkit; a brief sketch of what this looks like in practice follows this list. Complementing our advancements in AI, as a member of the Coalition for Content Provenance and Authenticity (C2PA), we collaborate with Adobe, Microsoft, OpenAI, Meta, startups, and many others to build and implement the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
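
To give a concrete sense of the open-sourced text watermarking mentioned above, below is a minimal sketch using the SynthID Text integration in the Hugging Face Transformers library (version 4.46 or later). The model name, watermarking keys and prompt are illustrative placeholders, not values used in Google's production systems.

```python
# Minimal sketch of SynthID text watermarking via the Hugging Face
# Transformers integration (transformers >= 4.46). The model, keys and
# prompt are illustrative placeholders, not production values.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

MODEL_ID = "google/gemma-2-2b-it"  # any causal LM that supports generate()

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# The watermark is seeded from a private list of integer keys; whoever
# holds the keys can later test text for the watermark's signature.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],  # example keys only
    ngram_len=5,  # token n-gram length used to seed the watermark
)

inputs = tokenizer(
    ["Write a short note about tide pools."],
    return_tensors="pt",
    padding=True,
)

# Sampling is required: the watermark works by subtly biasing the
# probabilities the model samples from, leaving text quality intact.
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,
    do_sample=True,
    max_new_tokens=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Detection is handled separately: the toolkit pairs this generation-side configuration with a Bayesian detector trained against the same keys, which is how text can later be checked for the watermark.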

Google has been working on AI for over a decade to solve society’s biggest challenges and also power Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns on topics such as disinformation. Google is committed to developing technology responsibly and first published AI Principles in 2018 to guide our work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While we recently updated our Principles to adapt to shifts in technology, the global conversation, and the AI ecosystem, our deep commitment to responsible AI development remains unchanged. 

Through our philanthropic arm, Google.org, we have supported organisations that are using AI to tackle important societal issues. Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google continuously works to integrate new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies in jointly committing to advance responsible practices in the development of artificial intelligence, supporting efforts by the G7, the Organisation for Economic Co-operation and Development (OECD), and national governments. Going forward, we will continue to report on and expand upon Google-developed AI tools, and we remain committed to advancing bold and responsible AI, to maximise AI’s benefits and minimise its risks.

Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 January 2025 to 30 June 2025.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and so is expected to fluctuate over the course of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time. 
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood against the background context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and to the war in Ukraine. Additionally, an annex on Google’s response to the recent elections in Romania, Portugal, Poland and Germany is included in this report.
  • The term ‘disinformation’ in this report refers to the definition included in the EU Code of Conduct on Disinformation.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.


Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in H1 2025 (1 January 2025 to 30 June 2025), Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training – with a strong focus on helping people navigate AI-generated content.
Mitigations in place
Across Google, various teams support democratic processes by connecting people to election information, like practical tips on how to register to vote, and by providing high-quality information about candidates. In 2025, a number of key elections took place around the world and across the EU in particular. In H1 2025, voters cast their votes in Germany, Poland, Portugal and Romania. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training. Across these efforts, Google also maintained an increased focus on the role of artificial intelligence (AI) and the part it can play in the disinformation landscape, while also leveraging AI models to augment its abuse-fighting efforts. 

Safeguarding Google platforms and disrupting the spread of disinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat disinformation. 
  • Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge (a toy sketch of the general technique follows this list).
  • Working with the wider ecosystem: Since Google’s inaugural commitment of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 121 projects have been funded across 28 countries so far.
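
To make the LLM-assisted enforcement idea above concrete, here is a toy sketch of the general technique (prompting a large language model to screen content against a policy), written against the public google-genai Python SDK. The model name, prompt and label set are illustrative assumptions and bear no relation to Google's internal enforcement systems.

```python
# Toy sketch of LLM-assisted policy screening, the general technique
# described above. Uses the public google-genai SDK; the model name,
# prompt and label set are illustrative and do NOT reflect Google's
# internal enforcement systems.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

POLICY_PROMPT = """You are a content-policy screening assistant.
Classify the following text as one of: OK, REVIEW, VIOLATION.
Respond with the label only.

Text: {text}"""


def screen(text: str) -> str:
    """Ask the model for a coarse policy label for one piece of text."""
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # illustrative model choice
        contents=POLICY_PROMPT.format(text=text),
    )
    return response.text.strip()


if __name__ == "__main__":
    print(screen("Polling stations are open until 8pm on election day."))
```

In a real deployment, a label like this would be one signal among many, routing content to human review rather than deciding outcomes on its own.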

Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that have helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include: 
  • Ads disclosures: Google expanded its Political Content Policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deepfakes or doctored content.
  • Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. YouTube also requires creators to disclose when they have created realistic altered or synthetic content, and displays a label indicating when the content people are watching is synthetic. For sensitive content, including election-related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
  • Additional context for users: 'About This Image' in Search helps people assess the credibility and context of images found online.
  • Industry collaboration: Google is a member of the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content; a brief sketch of how the coalition’s Content Credentials can be inspected follows this list. 
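
As a rough illustration of how the coalition's Content Credentials can be inspected, the sketch below shells out to c2patool, the open-source C2PA command-line tool, and walks the manifest it reports. The file name is a placeholder, and the JSON field names reflect the published manifest-store format as we understand it; treat both as assumptions.

```python
# Rough sketch: inspect the C2PA Content Credentials attached to a media
# file using c2patool (https://github.com/contentauth/c2pa-rs), which by
# default prints the file's manifest store as JSON. "photo.jpg" is a
# placeholder; field names follow the published manifest-store format
# and are assumptions, not guarantees.
import json
import subprocess

# A file without Content Credentials makes c2patool exit non-zero,
# which check=True surfaces as an exception.
result = subprocess.run(
    ["c2patool", "photo.jpg"],
    capture_output=True,
    text=True,
    check=True,
)
store = json.loads(result.stdout)

# Walk the claimed provenance history: each manifest carries assertions
# about how the content was created or edited.
active = store.get("active_manifest")
manifest = store.get("manifests", {}).get(active, {})
print("Claim generator:", manifest.get("claim_generator"))
for assertion in manifest.get("assertions", []):
    print("Assertion:", assertion.get("label"))
```

Tampering with a signed asset invalidates its credentials on validation, which is the property the version 2.1 hardening described earlier strengthens.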

Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Below are some of the ways Google makes it easy for people to find what they need, as deployed during the elections that took place across the EU in 2025: 
  • High-quality information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
  • Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on Google’s platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in Google’s Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. Google also limits how advertisers can target election ads. Google will stop serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation enters into force in October 2025. 

Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services. 
  • Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, the International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
  • Tackling coordinated influence operations: Google’s Threat Intelligence Group helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share information on threats and suspected election interference. Mandiant also helps organisations build holistic election security programmes and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.

Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Conduct on Disinformation. 
Policies and Terms and Conditions
Outline any changes to your policies
Policy - 50.1.1
N/A
Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2
N/A
Rationale - 50.1.3
N/A