Google Search

Report March 2025


Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission, elevating high-quality information and enhancing information quality across our services are of utmost importance. Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. Value openness and accessibility: Aim to provide access to an open and diverse information ecosystem, while maintaining a responsible approach to supporting information quality;

2. Respect user choice: Based on users’ intent, provide access to content that is not illegal or prohibited by Google’s policies, but set a high bar for information quality where users have not clearly expressed what they are looking for;

3. Build for everyone: Take into account the diversity of users (cultures, languages, backgrounds) and seek to address their needs appropriately.

With these principles in mind, Google has teams around the world working to combat harmful misinformation. Google has long invested in ranking systems that seek to connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most. We realise that fundamental rights are interdependent and are sometimes in tension with each other. When efforts to protect or advance one right may result in limiting another right, we identify and implement mitigation measures to address potential adverse impacts, such as protecting freedom of expression via appeals mechanisms or elevating high-quality content to counter lower-quality content that may appear on the platform. We comply with applicable laws by removing illegal content. We also remove content that violates our policies, and regularly evolve these policies in consultation with experts. Our work is not done, and we expect to continue improving upon these efforts in the future.

However, we are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to fully tackle on their own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Practice on Disinformation. 

As the EU Code of Practice on Disinformation is being brought under the EU Digital Services Act (DSA) framework, Google has revised its subscription to focus on reasonable, proportionate and effective measures to mitigate systemic risks related to disinformation that are tailored to our services. Accordingly, Google has exited certain commitments that are not relevant, practicable or appropriate for its services, including all commitments under the Political Advertising and Fact-Checking chapters.

Alongside our participation in the EU Code of Practice on Disinformation, we will continue to work closely with regulators to ensure that our services appropriately comply with the DSA, in full respect of EU fundamental rights such as freedom of expression. The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 July 2024 to 31 December 2024.

Updates to highlight in this report include (but are not limited to): 

  • 2024 EU Elections: In 2024, a number of elections took place around the world. In H2 2024, voters cast their ballots in the Romanian presidential election and in the second round of the French legislative election. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. In addition, Google put in place a number of policies and other measures that have helped people navigate political content that was AI-generated, including ad disclosures, content labels on YouTube, and digital watermarking tools.

  • Supporting Researchers in Technology Related to Trust & Safety: Google has continued to demonstrate its commitment to empowering the research community by hosting workshops with researchers and providing grants to support research efforts related to Trust & Safety areas of interest. These Trust & Safety workshops aim to build relationships among scholars working in different fields and to share projects and insights across the broader Trust & Safety ecosystem. Google also provides unrestricted grants to support research efforts across areas of interest related to Trust & Safety in technology through the Trust & Safety Research Awards. This program, in partnership with University Relations, is one of Google’s largest opportunities to partner with external researchers on priority Trust & Safety topics. Similarly, we announced the first-ever winners of the Google Academic Research Awards (GARA) program in October 2024. In this first funding cycle, the program will support 95 projects led by 143 researchers globally, and their work aligns with Google's commitment to responsible innovation.

  • Advances in Artificial Intelligence (AI): In H1 2024, we announced new AI safeguards to help protect against misuse. We introduced SynthID, a technology that adds imperceptible watermarks to AI-generated images and audio so they are easier to identify; this year, we are expanding SynthID’s capabilities to watermark AI-generated text, audio, images and video. YouTube also introduced a new tool in Creator Studio requiring creators to disclose to viewers when realistic content is made with altered or synthetic media, including generative AI. In addition to these new tools, we are committed to working with the greater ecosystem to help others benefit from and improve on the advances we are making. As such, we will open-source SynthID text watermarking through our updated Responsible Generative AI Toolkit. Underpinning our advancements in AI, as a member of the Coalition for Content Provenance and Authenticity (C2PA), we collaborate with Adobe, Microsoft, startups and many others to build and implement the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
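SynthID’s actual algorithm is more sophisticated than what can be shown here, but the general idea behind statistical text watermarking can be illustrated with a toy sketch. Everything below is illustrative and not Google’s implementation: a pseudorandom “green list” of tokens, keyed on the preceding token, is favoured during generation, and a detector later measures how often the text lands in that list — no secret state needs to be stored alongside the text.

```python
import hashlib
import random

# Toy vocabulary; a real system operates on a language model's tokeniser.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token, fraction=0.5):
    # Key a PRNG on a hash of the previous token, so the green list is
    # reproducible at detection time without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens, bias=0.9, seed=0):
    # Toy "language model": picks tokens uniformly, but prefers the
    # current green list with probability `bias` -- that is the watermark.
    rng = random.Random(seed)
    out = ["tok0"]
    for _ in range(n_tokens):
        greens = green_set(out[-1])
        if rng.random() < bias:
            out.append(rng.choice(sorted(greens)))
        else:
            out.append(rng.choice(VOCAB))
    return out[1:]

def green_fraction(tokens, first_prev="tok0"):
    # Detector: recompute each green list and count hits. Watermarked
    # text scores well above the ~0.5 baseline of unwatermarked text.
    prev, hits = first_prev, 0
    for tok in tokens:
        if tok in green_set(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)
```

In a production system the bias is applied to model logits rather than a uniform sampler, and detection uses a calibrated statistical score rather than a raw fraction, but the reproducible keyed partition of the vocabulary is the core trick.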

Google has been working on AI for more than a dozen years to solve society’s biggest challenges and power Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns about topics such as misinformation. Google is committed to developing technology responsibly and published AI Principles in 2018 to guide our work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While we recently updated our Principles to adapt to shifts in technology, the global conversation, and the AI ecosystem, our deep commitment to responsible AI development remains unchanged. Through our philanthropic arm Google.org, we have supported organisations that are using AI to tackle important societal issues. Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google has also announced that it will soon be integrating new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies in jointly committing to advance responsible practices in the development of artificial intelligence, which will support efforts by the G7, the OECD, and national governments. Going forward, we will continue to report on and expand Google-developed AI tools, and we remain committed to advancing bold and responsible AI to maximise its benefits and minimise its risks.


Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 July 2024 to 31 December 2024.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and so is expected to fluctuate over the time of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time. 
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood against the background context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and to the war in Ukraine. Additionally, an annex on Google’s response to the recent elections in Romania and France is included in this report. As such, while there will be references to our actions throughout this report, information specific to these events should be sought in the dedicated reports. 

Google will continue to publish subsequent versions of this report biannually, focusing on the six-month review period relevant to each filing, as requested under the Code.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.


Crisis 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated

War in Ukraine

Overview
The war in Ukraine has continued throughout 2024, and Google continues to help by providing cybersecurity and humanitarian assistance and by surfacing high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:

  1. Continued online services manipulation and coordinated influence operations;
  2. Advertising and monetisation linked to state-backed disinformation about the war in Ukraine;
  3. Threats to security and protection of digital infrastructure.


Israel-Gaza conflict

Overview
Following the outbreak of the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure its platforms and partnerships are responsive to the crisis, and counter the threat of disinformation. Google identified a few areas of focus for addressing the ongoing crisis:

  • Humanitarian and relief efforts;
  • Supporting Israeli tech firms and Palestinian businesses; and
  • Platforms and partnerships to protect our services from coordinated influence operations, hate speech, and graphic and terrorist content.

Mitigations in place

War in Ukraine

The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.

1. Online services manipulation and malign influence operations
Google’s Threat Analysis Group (TAG) is helping Ukraine by monitoring the threat landscape in Eastern Europe and disrupting coordinated influence operations from Russian threat actors. Google has also announced new long-term partnerships across Central and Eastern Europe.

In the Baltics, Google entered into long-term partnerships with the Civic Resilience Initiative and the Baltic Centre for Media Excellence. These two organisations have received €1.3 million in funding from Google to build on their impactful work towards increasing media literacy, building further resilience and actively tackling disinformation in Lithuania, Latvia and Estonia. Furthermore, Google is partnering with the Charles University in Prague, the main research centre of the Central European Digital Media Observatory (CEDMO) project, and providing €1 million in funding for CEDMO to further expand its research into information disorders, and work to increase the level of media and digital literacy in Poland, Czechia and Slovakia.

2. Advertising and monetisation linked to disinformation about the war in Ukraine
By H2 2024, Google had paused the majority of its commercial activities in Russia – including ad serving in Russia, ads on Google’s properties and networks globally for all Russian-based advertisers, new Cloud sign-ups, the payments functionality for most of Google’s services, AdSense ads on state-funded media sites, and monetisation features for YouTube viewers in Russia. Due to the war in Ukraine, Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube continue to operate in Russia. Google will continue to closely monitor developments.

3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats. 

TAG has been tracking threat actors, both before and during the war, and sharing their findings publicly and with law enforcement. TAG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, NGOs, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns. 

Google is continuing to provide critical cybersecurity and technical infrastructure support by donating 50,000 new Google Workspace licences to the Ukrainian government. By providing these licences and a year of free access to Google Workspace solutions, including Google’s cloud-first, zero-trust security model, Google can help provide Ukrainian public institutions with the security and protection they need to deal with constant threats to their digital systems. In February 2023, Google also announced an extension of the free access to premium Google Workspace for Education features for 250 universities and colleges until the end of August 2023.

Google aims to follow this approach when responding to future crises: 
  • Elevate access to high-quality information across Google services;
  • Protect Google users from harmful disinformation;
  • Continue to monitor and disrupt cyber threats;
  • Explore ways to provide assistance to support the affected areas more broadly.

Future measures
Google is continually making investments in products, programs and partnerships to help fight disinformation, both in Ukraine and globally. Google will continue to monitor the situation and take additional action as needed.


Israel-Gaza conflict

Humanitarian and relief efforts
Google.org provided $6 million in funding, with $3 million to Israeli organisations focused on mental health support and $3 million to organisations focused on humanitarian aid and relief in Gaza, including $1 million to Save the Children, $1 million to the Palestinian Red Crescent, and $1 million to International Medical Corps (IMC). Specifically, Google’s humanitarian and relief efforts with these organisations include: 
  • Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With our funding, Natal was able to scale its support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support to direct and indirect victims of trauma due to terror and war in Israel. 
  • International Medical Corps (IMC): As of October 2024, our support helped fund the delivery of two mobile operating theatres, doubling the surgical capacity of IMC’s field hospital and enabling it to provide over 210,000 health consultations and well over 7,000 (often lifesaving) surgeries, as well as other support, such as access to safe drinking water, to nearly 200,000 people.

In addition, Google employees directed more than $11 million in funding – including employee donations and matching funds from Google.org – to organisations providing aid and support in Israel and Gaza. 

Supporting Israeli tech firms and Palestinian businesses
Across Europe and Israel, Google is committed to supporting startups as they work at the forefront of innovation, striving to solve some of the most critical issues facing the world. These pioneering startups and businesses often struggle to access the support, expertise and tools they need to help them scale. In light of the Israel-Gaza conflict, Google is investing $8 million to support Israeli tech firms and Palestinian businesses. Of that investment, Google is providing $4 million to support Israeli AI startups and offer access to Google's knowledge, expertise (e.g. Cloud support), and mentorship opportunities in Israel, and $4 million to support Palestinian startups and businesses. In addition, Google has announced that it will provide loans and grants to 1,000 Palestinian small businesses in partnership with local and global non-profit organisations, and will also provide seed grants to 50 Palestinian tech startups, with the aim of preserving 4,500 jobs and creating additional job opportunities. 

Platforms and partnerships
As the conflict continues, Google is committed to tackling misinformation, hate speech, graphic content and terrorist content by continuing to find ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts, including emergency translations and localising Google content to help users, businesses and NGOs. Google has also pledged to help its partners in these extraordinary circumstances. For example, when schools closed in October 2023, the Ministry of Education in Israel used Meet as its core teach-from-home platform and Google provided support. Google has been in touch with Gaza-based partners and participants in its Palestine Launchpad program, its digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.