Google Search

Report March 2025

Submitted

Your organisation description

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:

  • The creation and use of fake accounts, account takeovers and bot-driven amplification;
  • Hack-and-leak operations;
  • Impersonation;
  • Malicious deep fakes;
  • The purchase of fake engagements;
  • Non-transparent paid messages or promotion by influencers;
  • The creation and use of accounts that participate in coordinated inauthentic behaviour;
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search
In August 2024, Google Search released new Community Guidelines for user-generated content that define the types of content and behaviour that are not allowed on Search. The guidelines cover areas such as spam, account hijacking, and deceptive practices. They also provide users with guidance on how to report different types of potentially harmful user-generated content, such as posts and profiles.

Search & YouTube
In November 2024, Google released a white paper detailing how it is addressing the growing global issue of fraud and scams. In the paper, Google explains that it fights scams and fraud by taking proactive measures to protect users from harm, delivering reliable information, and partnering to create a safer internet, through policies and built-in technological protections that help prevent, detect, and respond to harmful and illegal content. For details on YouTube and Google Search’s approaches to tackling scams, see the full report here.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search’s systems are designed to elevate high-quality information and combat the threats listed in Commitment 14. While many of those TTPs are not relevant to search engines (e.g. TTPs 1 through 5, TTP 11), by seeking to elevate authentic, original, high-quality information, Search’s ranking systems directly tackle threats like inauthentic domains (TTP 4), obfuscation (TTP 6), deceptive manipulated media (TTP 7), hack and leak operations (TTP 8), inauthentic coordination (TTP 9), and a broad range of deceptive practices (TTP 10). More information about the design of Search’s ranking systems is outlined in the User Empowerment chapter. 

Google Search’s Overall Content Policies outline that Search takes action against spam, which is content that exhibits deceptive or manipulative behaviour designed to deceive users or game search systems. Learn more in the Google Search Webmaster Guidelines.

In line with these policies, Search deploys spam protection tools. These efforts address a range of deceptive practices and help reduce the spread of low-quality content on Google Search arising through the inauthentic behaviours outlined in the relevant TTPs.

Moreover, Search has policies and community guidelines specifically governing what can appear in Google Search features (e.g. knowledge panels, content advisories, ‘About This Result’, etc.) to make sure that Search shows high-quality and helpful content, while also taking action against content that may promote harmful mis-/disinformation. Policies relevant to the threats listed above include the following:

  • Deceptive Practices Policy: This policy prohibits impersonation of any person or organisation, misrepresentation or concealment of ownership or primary purpose, and engagement in inauthentic or coordinated behaviour to deceive, defraud, or mislead. This policy does not cover content with certain artistic, educational, historical, documentary, or scientific considerations, or other substantial benefits to the public.
  • Manipulated Media Policy: This policy prohibits audio, video, or image content that has been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably did not take place. 
  • Transparency Policy: This policy notes that news sources on Google should provide clear dates and bylines, as well as information about authors, the publication, the publisher, company or network behind it, and contact information.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

Google Search uses a variety of proactive detection efforts to counter spam, which overlaps significantly with TTPs used to disseminate disinformation. As outlined in the overall Google Search Content Policies and Community Guidelines for user generated content, action is taken against spam, which is content that exhibits deceptive or manipulative behaviour designed to deceive users or game search systems. 

Pursuant to the Spam Content Policy, Google Search deploys spam protection tools, such as SpamBrain (Google’s AI-based spam-prevention system), to protect search quality and user safety. Addressing a wider range of content than only mis-/disinformation, these efforts help reduce the spread of low-quality content on Google Search. Additional information can be found in the 2022 Google Search Webspam Report. In March 2024, Google Search released an update to its Spam Policies that addresses ‘scaled content abuse’: artificially generated content (including AI-generated content) that seeks to manipulate Google’s search ranking.

In addition, Google’s Threat Analysis Group (TAG) and Trust and Safety Team are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search relies on a combination of people and technology to enforce Google Search policies. Machine learning, for example, plays a critical role in content moderation on Google Search. Google Search systems are built to identify and weigh signals of high-quality information so people can find the most reliable and timely information available. Google Search algorithms look at many factors and signals to raise high-quality content and reduce low-quality content. Google Search’s publicly available website, How Search Works, explains the key factors that help determine which results are returned for a query. Google Search works continuously to improve the quality and effectiveness of its automated systems to protect platforms and users from harmful content.

Furthermore, to ensure its algorithms meet high standards of relevance and quality, Google Search has a rigorous process that involves both live tests and thousands of trained external Search Quality Raters from around the world. Raters do not determine the ranking of an individual, specific page or website, but they help to benchmark the quality of Google Search’s results so that Google Search can meet a high bar for users all around the world. Under the Google Search Quality Rater Guidelines, raters are instructed to assign the lowest rating to pages that are potentially harmful to users or specified groups, misleading, untrustworthy, and spammy. Google Search also provides users the ability to flag content that might be violating Google Search policies.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

TTPs covered by this action, selected from the list at the top of this chapter
6. Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation); 
9. Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platforms algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers); 
10. Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers;
12. Coordinated mass reporting of non-violative opposing content or accounts. 

Methodology
(1) Manual enforcement instances under relevant policy violations (including Deceptive Practices, Manipulated Media, Medical Content, Misleading Content and Transparency Policies) on a global level in H2 2024 (1 July 2024 to 31 December 2024). 
(2) Domains affected by manual and algorithmic actions for Spam Policies for Google web search on a global level in H2 2024 (1 July 2024 to 31 December 2024). 

Response
(1) In H2 2024, there were 46,792 instances of policy enforcement globally, which resulted in the removal of false, disputed, or non-representative claims; misrepresented information; and content that contradicts scientific or medical consensus and evidence-based best practices. The actions were enforced across Search features including knowledge engine, web answers, news, discover, and image and video search.
(2) In H2 2024, there were 275,421 manual actions and 12,251,095 algorithmic actions taken to enforce the Spam Policies for Google web search. Globally, a total of 12,408,811 unique domains were affected by these manual and algorithmic actions.

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

These metrics are not feasible for Google Search as it is not known what queries a user will issue and, therefore, Google Search cannot do a before and after comparison. Google Search’s systems are trained to block policy-violating content.

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

This SLI is not applicable for Google Search, as users do not need accounts to use the search engine, and generally do not post content on Google Search.

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

These metrics are not feasible for Google Search as it is not known what queries a user will issue and, therefore, Google Search cannot do a before and after comparison. Google Search’s systems are trained to block policy-violating content.

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

The final list of TTPs agreed within the Permanent Task-force in H2 2022 was used by Signatories as part of their reports from then on, as intended. The Permanent Task-force will continue to examine and update the list as necessary in light of the state of the art. 

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search

Search & YouTube
  • After joining the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content, in February 2024, Google collaborated on the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
  • In July 2024 at the Aspen Security Forum, Google, alongside industry peers, introduced the Coalition for Secure AI (CoSAI) to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon. The first three areas of focus the coalition will tackle in collaboration with industry and academia include Software Supply Chain Security for AI systems; Preparing defenders for a changing cybersecurity landscape; and AI security governance.
  • In September 2024, Google announced a Global AI Opportunity Fund, which will invest $120 million to make AI education and training available in communities around the world. Google will provide this in local languages, in partnership with nonprofits and NGOs.
  • In October 2024, Google released its EU AI Opportunity Agenda, a series of recommendations for governments to seize the full economic and societal potential of AI. The Agenda outlines the need to revisit Europe’s workforce strategy, as well as investment in AI infrastructure and research, adoption and accessibility.
  • In October 2024, the Nobel Prize in Chemistry was awarded to Google DeepMind’s Demis Hassabis and John Jumper for their groundbreaking work on AlphaFold 2, which predicted the structures of nearly all proteins known to science. AlphaFold has been used by more than 2 million researchers around the world, accelerating scientific discovery in important areas such as malaria vaccines and cancer treatments.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Google hopes that SynthID can work together with a broad range of solutions for creators and users across society, and it is continuing to evolve the technology by gathering feedback from users, enhancing its capabilities, and exploring new features.

SynthID could be expanded for use across other AI models, and Google is excited about the potential of integrating it into more Google products and making it available to third parties in the near future, empowering people and organisations to work responsibly with AI-generated content.

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

The Manipulated Media Policy for Google Search features prohibits audio, video, or image content that has been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably did not take place. This includes if such content would cause a reasonable person to have a fundamentally different understanding or impression, such that it might cause significant harm to groups or individuals, or significantly undermine participation or trust in electoral or civic processes.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

Google’s AI Principles set out Google’s commitment to developing technology responsibly.

Google Search has published guidance on AI-generated content. This guidance explains how AI and automation can be a useful tool to create helpful content. However, if AI is used for the primary purpose of manipulating search rankings, that is a violation of Google Search’s long-standing policy against spammy automatically-generated content.

Across its services, Google has been examining the risks and challenges associated with more powerful language models. 

Improved AI systems can help bolster spam-fighting capabilities and even help combat known loss patterns. Google Search introduced a system to better identify queries seeking explicit content, so that it can avoid shocking or offending users not looking for that information and ultimately make the Google Search experience safer for everyone.

In May 2024, Google published a white paper outlining its end-to-end AI Responsibility Lifecycle: a four-phase process (Research, Design, Govern, Share) that guides responsible AI development at Google. The initial Research and Design phases foster innovation, while the Govern and Share phases focus on risk assessment, testing, monitoring, and transparency. In this paper, Google aims to share its thoughts on emerging best practices for generative AI responsibility with others across the AI ecosystem, and discusses examples of how it has taken what it has learned about new applications, extensions and risks to inform innovation. For each phase of the AI Responsibility Lifecycle, Google also outlines the specific progress it has made towards building safer products that maximise the positive benefits of AI to society, and looks ahead to what is next.

In line with Google’s principled and responsible approach to its Generative AI products, Google has prioritised testing across safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness.

To reiterate, Google’s approach to AI Principles governance rests on a company-wide, end-to-end commitment to three pillars:

  1. AI Principles serve as Google’s ethical charter and inform its product policies. Google is committed to developing technology responsibly and published AI Principles in 2018 to guide its work. Its robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While the Principles were recently updated to adapt to shifts in technology, the global conversation, and the AI ecosystem, Google’s deep commitment to responsible AI development remains unchanged.
  2. Education and resources provide ethics training and technical tools to test, evaluate and monitor the application of the AI Principles to all of Google’s products and services. Google is sharing for the first time details of a new company-wide tool for monitoring products’ responsible AI maturity, and updates on technical approaches to fairness, data transparency, and more.
  3. Structures and processes include risk assessment frameworks, ethics reviews, and executive accountability. This report provides a deep dive into how risk is identified and measured in AI Principles reviews.

See additional details here.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Analysis Group (TAG) and Trust & Safety Team work to monitor malicious actors around the globe, disable their accounts, and remove the content that they post, including but not limited to coordinated information operations and other operations that may affect EEA Member States. 

One of TAG’s missions is to understand and disrupt coordinated information operations threat actors. TAG’s work enables Google teams to make enforcement decisions backed by rigorous analysis. TAG’s investigations do not focus on making judgements about the content on Google platforms, but rather examine technical signals, heuristics, and behavioural patterns to assess whether activity constitutes coordinated inauthentic behaviour.

TAG regularly publishes its TAG Bulletin, updated quarterly here, which provides updates on coordinated influence operation campaigns terminated on Google’s platforms, as well as additional periodic blog posts. TAG also engages with other platform Signatories to receive and, when strictly necessary for security purposes, share information related to threat actor activity, in compliance with applicable laws. To learn more, refer to SLI 16.1.1.

See Google’s disclosure policies about handling security vulnerabilities for developers and security professionals.

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

Google’s Threat Analysis Group (TAG) posts a quarterly Bulletin, which includes disclosure of coordinated influence operation campaigns terminated on Google’s products and services, as well as additional periodic blog posts. In the Bulletin, TAG often notes when findings are similar to or supported by those reported by other platforms. The publicly available H2 2024 TAG Bulletins (1 July 2024 - 31 December 2024) show 81,773 YouTube channels across 57 separate actions were involved in Coordinated Influence Operation Campaigns. Industry partners supported two of those separate actions by providing leads. The TAG Bulletin and periodic blog posts are Google’s, including YouTube’s, primary public source of information on coordinated influence operations and TTP-related issues.

As reported in the Bulletin, some channels YouTube took action on were part of campaigns that uploaded content in EEA languages, specifically: French (546 channels), German (460 channels), Polish (389 channels), Italian (362 channels), Spanish (128 channels), Romanian (15 channels), Czech (12 channels), and Hungarian (12 channels). Certain campaigns may have uploaded content in multiple languages, or in countries outside the EEA region using EEA languages. Please note that there may be many languages for any one coordinated influence campaign, and that the presence of content in an EEA Member State language does not necessarily entail a particular focus on that Member State. For more information, please see the TAG Bulletin.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In May 2024, Google announced that it would open-source SynthID text watermarking through its updated Responsible Generative AI Toolkit to help others benefit from and improve on the advances Google is making. In addition, Google expanded SynthID’s capabilities to include watermarking AI-generated text in the Gemini app and web experience, as well as video in Veo, its recently announced and most capable generative video model.
  • Google Search expanded the ‘About This Image’ tool to 40 additional languages around the world, including French, German, Hindi, Italian, Japanese, Korean, Portuguese, Spanish and Vietnamese.
  • In July 2024, Google announced ‘About This Image’ is now available on Circle to Search and Google Lens, giving users more ways to quickly get context on images that they see wherever they come across them.
  • In September 2024, Google announced that it would bring the latest version of the Coalition for Content Provenance and Authenticity (C2PA) technical standard, Content Credentials, to Search’s ‘About This Image’ feature. If an image contains C2PA metadata, users will be able to use the feature to see if an image was created or edited with AI tools.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

No

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search aims to connect users with high quality information, and help users understand and evaluate that information. Google Search has deeply invested in both information quality and information literacy. Some ways in which Google Search does this include:

  • ‘About This Result’: Next to most results on Google Search, there is a menu icon that users can tap to learn more about the result or feature and where the information is coming from. With this additional context, users can make a more informed decision about the sites they want to visit and what results will be most useful for them. When available, users will see a description of the website from Wikipedia, which provides free, reliable information about tens of millions of sites on the web. If a website does not have a Wikipedia description, Google Search will show additional context that may be available, such as when Google Search first indexed the site. Users will also be able to quickly see if their connection to the site is secure based on its use of the HTTPS protocol, which encrypts all data between the website and the browser they are using, to help them stay safe as they browse the web. More information on the ‘About This Result’ feature can be found here, and here.  
    • The ‘More About This Page’ link within the ‘About This Result’ feature provides additional insights about sources and topics users find on Google Search. When a user taps the three dots on any search result, they will be able to learn more about the page. Users can: 
      • See more information about the source: Users will be able to read what a site says about itself in its own words, when that information is available.
      • Learn more about the topic: In the ‘About the topic’ section, users can find information about the same topic from other sources.
      • Find what others on the web have said about a site: Reading what others on the web have written about a site can help users better evaluate sources.
    • In December 2023, Google Search expanded this feature to 40 new languages, including Bulgarian, Croatian, Czech, Danish, Estonian, Finnish, Greek, Hungarian, Latvian, Lithuanian, Maltese, Polish, Romanian, Slovak, Slovenian, and Swedish.
    • Additional information can be found in the Google Search blog post here.

  • ‘About This Image’: With added insights in ‘About This Image’, users will know if an image may have been generated with Google’s AI tools when they come across it in Search or Chrome. All images generated with Imagen 2 in Google’s consumer products will be marked by SynthID, a tool developed by Google DeepMind that adds a digital watermark directly into the pixels of images generated. SynthID watermarks are imperceptible to the human eye but detectable for identification. In addition, Search expanded the ‘About This Image’ tool to 40 additional languages around the world, including French, German, Hindi, Italian, Japanese, Korean, Portuguese, Spanish and Vietnamese.
    • Consistent with its AI principles, Google Search also conducted extensive adversarial testing and red teaming to identify and mitigate potential harmful and problematic content. Google Search is also applying filters to avoid generating images of named people. Google Search will continue investing in new techniques to improve the safety and privacy protections of its models. 
    • More information on the ‘About This Image’ feature can be found here.

  • Content Advisory Notices: Helpful notices for users that highlight when information is scarce or when interest in a topic is travelling faster than facts. These are specifically designed to address data voids: queries for which content is limited or nonexistent, or for which a topic is rapidly evolving and reliable information is not yet available.
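
As an illustration of how such advisories might be triggered, the logic can be sketched as a small decision function over three hypothetical signals. The function name, signal names, and thresholds below are invented assumptions for illustration only, not Google Search's actual triggering logic; the three return values mirror the advisory categories reported under SLI 17.1.1.

```python
from typing import Optional

# Toy decision sketch only: the signals and thresholds are invented and do not
# reflect Google Search's real systems.
def advisory_for(num_relevant_results: int,
                 mean_reliability: float,
                 interest_growth_rate: float) -> Optional[str]:
    if interest_growth_rate > 10.0:
        # Interest is travelling faster than facts.
        return "rapidly changing results"
    if num_relevant_results < 5:
        # Content on the topic is limited or nonexistent.
        return "low relevance results"
    if mean_reliability < 0.3:
        return "potentially unreliable set of results"
    return None

# A query with very little relevant coverage triggers a low-relevance advisory.
print(advisory_for(num_relevant_results=3, mean_reliability=0.9,
                   interest_growth_rate=1.0))
```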

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

(1) Impression proportion estimate of content advisories for low relevance results in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State;

(2) Impression proportion estimate of content advisories for rapidly changing results in H2 2024, broken down by EEA Member State;

(3) Impression proportion estimate of content advisories for potentially unreliable sets of results in H2 2024, broken down by EEA Member State;

Note: metrics 1-3 are estimated proportions. Metric 1 represents the number of content advisories for low relevance results out of all queries over the reporting period; metrics 2 and 3 follow the same logic for content advisories for rapidly changing results and for potentially unreliable sets of results, respectively.
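
Concretely, each of these proportion estimates is a simple ratio expressed as a percentage. A minimal sketch, in which the function name and the example figures are illustrative rather than data from this report:

```python
def advisory_impression_proportion(advisory_impressions: int,
                                   total_queries: int) -> float:
    """Share of all queries on which a content advisory was shown, as a percentage."""
    if total_queries <= 0:
        raise ValueError("total_queries must be positive")
    return 100.0 * advisory_impressions / total_queries

# e.g. 9 advisories shown over 100,000 queries:
print(f"{advisory_impression_proportion(9, 100_000):.3f}%")  # prints 0.009%
```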

(4) Number of times the ‘More About This Page’ feature was viewed in H2 2024, broken down by EEA Member State;

(5) Number of times the ‘Source’ section of the ‘About This Result’ panel was viewed in H2 2024, broken down by EEA Member State;

(6) Number of times the 'Your Search and this result' section of the ‘About This Result’ panel was viewed in H2 2024, broken down by EEA Member State; 

(7) Number of times the 'Personalisation' section of the ‘About This Result’ panel was viewed in H2 2024, broken down by EEA Member State.

Country | (1) Low relevance advisories, impression proportion (%) | (2) Rapidly changing advisories, impression proportion (%) | (3) Potentially unreliable results advisories, impression proportion (%) | (4) ‘More About This Page’ views | (5) ‘Source’ section views | (6) ‘Your Search and this result’ section views | (7) ‘Personalisation’ section views
Austria | 0.009% | 0.00007% | 0.0000079% | 492,336 | 10,161,776 | 10,002,348 | 8,443,368
Belgium | 0.008% | 0.00006% | 0.0000062% | 694,484 | 12,635,380 | 12,451,036 | 10,479,372
Bulgaria | 0.015% | 0.00004% | 0.0000075% | 418,008 | 5,520,908 | 5,427,300 | 4,564,196
Croatia | 0.011% | 0.00004% | 0.0000067% | 306,460 | 5,169,452 | 5,050,320 | 4,322,968
Cyprus | 0.019% | 0.00010% | 0.0000329% | 106,152 | 1,194,376 | 1,177,588 | 1,003,436
Czech Republic | 0.009% | 0.00003% | 0.0000034% | 658,648 | 9,308,096 | 9,185,980 | 7,769,180
Denmark | 0.008% | 0.00010% | 0.0000045% | 290,416 | 5,980,168 | 5,903,776 | 5,005,248
Estonia | 0.015% | 0.00011% | 0.0000233% | 86,100 | 1,521,468 | 1,506,608 | 1,285,236
Finland | 0.009% | 0.00012% | 0.0000057% | 333,476 | 7,408,228 | 7,328,700 | 6,237,504
France | 0.007% | 0.00005% | 0.0000059% | 4,434,580 | 86,918,604 | 85,351,180 | 72,299,248
Germany | 0.011% | 0.00010% | 0.0000061% | 4,744,012 | 96,729,900 | 94,959,380 | 80,156,384
Greece | 0.017% | 0.00003% | 0.0000044% | 753,108 | 11,418,624 | 11,171,708 | 9,546,464
Hungary | 0.012% | 0.00003% | 0.0000046% | 590,120 | 8,314,240 | 8,170,540 | 6,923,408
Ireland | 0.010% | 0.00010% | 0.0000054% | 432,868 | 7,267,604 | 7,130,680 | 6,061,364
Italy | 0.015% | 0.00004% | 0.0000016% | 4,336,524 | 79,300,560 | 77,454,444 | 66,299,316
Latvia | 0.018% | 0.00010% | 0.0000184% | 113,092 | 1,673,808 | 1,653,056 | 1,398,732
Lithuania | 0.016% | 0.00006% | 0.0000127% | 171,912 | 2,746,672 | 2,716,224 | 2,288,856
Luxembourg | 0.013% | 0.00015% | 0.0000432% | 37,252 | 707,288 | 695,948 | 591,364
Malta | 0.015% | 0.00018% | 0.0000540% | 49,240 | 703,072 | 691,912 | 594,788
Netherlands | 0.009% | 0.00006% | 0.0000022% | 1,347,040 | 23,309,716 | 22,909,616 | 19,352,032
Poland | 0.006% | 0.00002% | 0.0000008% | 2,240,116 | 47,084,536 | 46,397,932 | 39,155,200
Portugal | 0.007% | 0.00006% | 0.0000052% | 754,972 | 11,476,200 | 11,293,812 | 9,653,604
Romania | 0.011% | 0.00003% | 0.0000037% | 887,976 | 12,030,632 | 11,798,700 | 10,039,572
Slovakia | 0.013% | 0.00003% | 0.0000104% | 328,608 | 4,527,972 | 4,463,864 | 3,762,372
Slovenia | 0.015% | 0.00004% | 0.0000182% | 128,052 | 2,411,732 | 2,380,088 | 2,020,268
Spain | 0.006% | 0.00008% | 0.0000070% | 4,078,508 | 58,946,872 | 57,742,752 | 49,286,500
Sweden | 0.007% | 0.00012% | 0.0000030% | 614,072 | 13,062,580 | 12,902,600 | 10,946,140
Iceland | 0.013% | 0.00021% | 0.0000000% | 13,536 | 376,920 | 371,588 | 316,908
Liechtenstein | 0.013% | 0.00104% | 0.0000000% | 1,588 | 39,644 | 39,068 | 32,924
Norway | 0.006% | 0.00009% | 0.0000043% | 315,956 | 5,814,192 | 5,747,832 | 4,853,060
Total EU | 0.010% | 0.00006% | 0.0000053% | 29,428,132 | 527,530,464 | 517,918,092 | 439,486,120
Total EEA | 0.010% | 0.00006% | 0.0000053% | 29,759,212 | 533,761,220 | 524,076,580 | 444,689,012

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

Grants
In H2 2024, Google.org supported a number of organisations that seek to help build a safer and more tolerant online world and to promote media literacy. This includes:
  • Google.org announced $10 million in funding to the Raspberry Pi Foundation to further expand access to Experience AI. This educational program was co-created with Google DeepMind as part of Google.org’s broader commitment to support organisations helping young people build AI literacy.
    • Experience AI provides teachers with the training and resources needed to both educate and inspire young people aged 11-14 about AI.
    • The curriculum focuses on a structured learning journey, ethical considerations, real-world examples and role models, and culturally relevant content to engage learners in understanding AI and how to use it responsibly. Raspberry Pi Foundation and Google DeepMind continued to develop further resources, including three new lessons centred around AI safety: AI and Your Data, Media Literacy in the Age of AI, and Using Generative AI Responsibly.

Search
To raise awareness of its features and build literacy across society, Google Search is working with information literacy experts to help design tools in a way that allows users to feel confident and in control of the information they consume and the choices they make. 

In addition, Google Search builds capacity for librarians to empower their patrons and the general public with information literacy. At the end of September 2022, in cooperation with Google Search’s partner, ‘Public Libraries 2030’, Google Search launched a Training of Trainers program called ‘Super Searchers’ for librarians and library staff that seeks to achieve the following objectives: (a) provide librarians and library staff with the skills to build the information literacy capacity of the general public; and (b) increase the information literacy capacity of library patrons and the general public. Since the launch, Google and ‘Public Libraries 2030’ have provided Super Searchers training in Ireland, Italy, Portugal, and the UK. Note: Public Libraries 2030 (PL2030), Google Search’s implementing partner, shared feedback that language barriers and lack of interest from patrons made it challenging to scale this program across the EU. While the agreement with PL2030 ended in H1 2023, the pilot program continued to expand in non-EU countries (e.g. in the US through the Public Library Association).

YouTube
YouTube remains committed to supporting efforts that deepen users’ collective understanding of misinformation. To empower users to think critically and use YouTube’s products safely and responsibly, YouTube invests in media literacy campaigns that improve users’ experiences on YouTube. In 2022, YouTube launched ‘Hit Pause’, a global media literacy campaign that is live in all EEA Member States and all official EU languages, and has run in more than 40 additional countries around the world.

The program seeks to teach viewers critical media literacy skills through engaging and educational public service announcements (PSAs) served via the YouTube home feed, pre-roll ads, and a dedicated YouTube channel. The channel hosts videos from the YouTube Trust & Safety team that explain how YouTube protects its community from misinformation and other harmful content, as well as additional campaign content that helps members of the YouTube community sharpen their critical thinking skills around identifying the manipulation tactics used to spread misinformation, from emotional language to cherry-picking information. The content of this campaign helps to amplify other in-product interventions, such as information panels, which provide context for topics that are often subject to misinformation.

EEA Member State coverage of 'Hit Pause' media literacy impressions can be found in SLI 17.2.1.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

In H2 2024 (1 July 2024 to 31 December 2024), as part of the Super Searchers Program, 517 librarians were trained across 17 training sessions held in Europe. Specifically, in Portugal, after a successful pilot programme, MiudosSegurosNa.Net and Agarrados à Net launched Super Searchers in October 2024 in collaboration with the National Schools Coordinator. Working through schools and municipal libraries, the programme has already trained 490 trainers in 16 sessions around the country, with a current estimated reach of 14,700 students.

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

See response to QRE 17.2.1.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search has the following policies which complement the Content Policies outlined in QRE 14.1.1:

Medical Content Policy: This policy prohibits content that contradicts or runs contrary to scientific or medical consensus and evidence-based best practices. 

Misleading Content Policy: This policy prohibits, in Search features and News, preview content that misleads users into engaging with it by promising details that are not reflected in the underlying content. 

These policies also provide users with information on how to report specific types of content that violate those policies. Google Search removes content for policy violations based on user reports as well as through its internal content moderation processes. More extensive policies are deployed for Search features, and can be found at the Content Policies Help Centre. 

In addition, Google Search removes content that has been determined to be unlawful under applicable law, in response to a notification from a third party, such as a user or an authority. Examples include material in relation to which Google Search has received a valid ‘right to be forgotten request’ or material in relation to which Google Search has received a valid court order. Google Search measures the number of court and government Legal Removal requests biannually (across all products), and publishes this information in transparency reports. 

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

See response to SLI 14.2.1.

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

Google, including YouTube, works with industry leaders across the technology sector, government, and civil society to set good policies, remain abreast of emerging challenges, and establish, share, and learn from industry best practices and research. 

Described below are examples that demonstrate Google’s, including YouTube’s, commitment to these actions:

Jigsaw-led Research
Jigsaw is a unit within Google that explores threats to open societies and builds technology that inspires scalable solutions. Jigsaw began conducting research on ‘information interventions’ more than 10 years ago and has since contributed research and technology on ways to make people more resilient to disinformation. Its research efforts are based on behavioural science and ethnographic studies that examine when people might be vulnerable to specific messages and how to provide helpful information when people need it most. These interventions provide a methodology for proactively addressing a range of threats to people online, as a complement to approaches that focus on removing or downranking material online.

A notable example of a Jigsaw research effort run on and with YouTube is:
  • Accuracy Prompts (APs): APs remind users to think about accuracy by serving bite-sized digital literacy tips at a moment when they might matter. Lab studies conducted across 16 countries with over 30,000 participants suggest that APs increase engagement with accurate information and decrease engagement with less accurate information. Small experiments on YouTube suggest users enjoy the experience and report that it makes them feel safer online.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No. Search has not recently introduced new implementation measures related to this Commitment, but it updates its internal systems and processes related to its recommendation system on an ongoing basis.

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search’s ranking systems sort through hundreds of billions of web pages and other content in the Search index to present the most relevant, useful results in a fraction of a second. Main parameters that help determine which results are returned for a user’s query include: 
  • Meaning of your query: To return relevant results, Google Search first needs to establish the intent behind a user’s query. Google Search builds language models to decipher how the words that a user enters into the search box match up to the most useful content available.
  • Relevance of content: Next, Google Search systems analyse the content to assess whether it contains information that might be relevant to what the user is looking for. The most basic signal that information is relevant is when content contains the same keywords as the user’s search query. 
  • Quality of content: Google Search systems prioritise content that seems most helpful by identifying signals that can help determine which content demonstrates expertise, high quality, and trustworthiness. For example, one of several factors that Google Search uses to help determine this is by understanding if other prominent websites link or refer to the content. Aggregated feedback from the Google Search quality evaluation process is used to further refine how Google Search systems discern the quality of information.
  • Usability: Google Search systems also consider the usability of content. When all things are relatively equal, content that people will find more accessible may perform better.
  • Context and settings: Information such as user location, past Google Search history, and Search settings all help Google Search ensure user results are what is most useful and relevant at that moment. Google Search uses the user’s country and location to deliver content relevant to their area. For instance, if a user in Chicago searches ‘football’, Google Search will likely show results about American football and the Chicago Bears first, whereas a user searching ‘football’ in London will likely see results about soccer and the Premier League. Google Search settings are also an important indicator of which results a user is likely to find useful, such as if they set a preferred language or opted in to SafeSearch (a tool that helps filter out explicit results). Google Search also includes features that personalise results based on the activity in their Google account. Users can control what Google Search activity is used to improve their experience, including adjusting what data is saved to their Google account at myaccount.google.com. To disable Google Search personalisation based on account activity, the user can turn off personal results in Search. Users can also prevent activity being stored to their account or delete particular history items in Web & App Activity. Google Search systems are designed to match a user’s interests, but they are not designed to infer sensitive characteristics like race, religion or political party.
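
As a purely hypothetical sketch, the interplay of these parameter families can be pictured as a weighted combination of per-page signals. The signal values, weights, and linear combination below are invented for illustration; Google Search's actual ranking systems are far more complex and are not public.

```python
from dataclasses import dataclass

# Toy illustration only: a hypothetical linear scorer over the parameter
# families described above. All values and weights are invented.
@dataclass
class Page:
    relevance: float    # how well content matches the query (0..1)
    quality: float      # expertise/trustworthiness signals (0..1)
    usability: float    # accessibility of the content (0..1)
    context_fit: float  # fit with the user's locale and settings (0..1)

def score(page: Page, weights=(0.4, 0.3, 0.1, 0.2)) -> float:
    w_rel, w_qual, w_use, w_ctx = weights
    return (w_rel * page.relevance + w_qual * page.quality
            + w_use * page.usability + w_ctx * page.context_fit)

pages = [Page(0.9, 0.8, 0.7, 0.6), Page(0.7, 0.9, 0.9, 0.9)]
ranked = sorted(pages, key=score, reverse=True)
```

Sorting by such a score is what 'prioritising' amounts to in this toy model: pages with a higher combined score appear first.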

The How Search Works website explains the ins and outs of Google Search. The following links provide additional information about helping people and businesses learn how Search works and how results are automatically generated.

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

Number of impressions on the personal results control for logged-in users in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State.

Country | Number of impressions on the personal results control for logged-in users
Austria | 52,907
Belgium | 56,149
Bulgaria | 27,334
Croatia | 21,902
Cyprus | 5,496
Czech Republic | 46,097
Denmark | 18,909
Estonia | 6,491
Finland | 36,868
France | 367,129
Germany | 496,001
Greece | 51,962
Hungary | 40,270
Ireland | 27,259
Italy | 366,985
Latvia | 9,414
Lithuania | 16,238
Luxembourg | 2,675
Malta | 1,785
Netherlands | 104,851
Poland | 205,946
Portugal | 44,586
Romania | 67,616
Slovakia | 26,380
Slovenia | 9,011
Spain | 303,773
Sweden | 47,411
Iceland | 968
Liechtenstein | 138
Norway | 23,503
Total EU | 2,461,445
Total EEA | 2,486,054

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 22.7

Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.

QRE 22.7.1

Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.

Google Search deploys the following feature:
  • ‘SOS Alerts’: Structured content that appears on a Google Search page, including high-quality help links and locally relevant information when a crisis strikes. The alerts aim to make emergency information more accessible during a crisis. Google brings together relevant and high-quality content from the web, media, and Google products, and then highlights that information across Google products such as Google Search and Google Maps. See the Help Centre for more information.

SLI 22.7.1

Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question)).

Number of views/impressions on the following Google Search features in H2 2024 (1 July 2024 to 31 December 2024), for EEA Member States:
  • Crisis Response (e.g. ‘SOS Alerts’, ‘Public Alerts’);
  • Structured features for COVID-19.

In H2 2024, the following number of views/impressions were made on the Google Search features below:
  • 92,530,020 views/impressions on Crisis Response alerts (e.g. ‘SOS Alerts’, ‘Public Alerts’);
  • 2,800 views/impressions on COVID-19 Structured Features.

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In August 2024, Google Search released new Community Guidelines for user generated content that define the types of content and behaviour that are not allowed on Search and incorporate Search’s overall content policies. The guidelines also provide users with guidance on how to report different types of potentially harmful user generated content, such as posts and profiles.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search aims to make the process of submitting removal requests as easy as possible, and has built reporting tools, which allow users in all EU Member States to report potentially violative content for review under Search’s Content Policies and Community Guidelines for user generated content. The Report Content On Google tool, for example, guides users to the right reporting form to provide the necessary information for the legal or policy issue they seek to flag.

Google Search has reporting tools for Search features, such as knowledge panels and featured snippets. For overall Search results, users can flag content via the three dots that appear alongside Search features and standard web results (the ‘10 blue links’). Using the Send Feedback option in ‘About This Result’, users can then send feedback about the result, describing the issue and attaching a screenshot. 

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Search carefully scrutinises most removal complaints that arrive in Google Search systems. Removal requests are handled according to the product area, issue type, and region, by a global team of front-line reviewers and Policy Leads who have expertise in a range of product areas, issue types (such as defamation or copyright infringement), local laws, and languages. Removal requests are processed in accordance with the mission of complying with the law and Google’s policies while maximising access to information and preserving user expression.

For most classes of requests, trained reviewers manually assess the removals. In some cases, such as copyright takedowns, Google Search deploys automation to speed the processing of high-volume complaints. To avoid abuse in this process, Google Search relies upon:

1) Limitations on who may submit high volumes of requests through flows like the Trusted Copyright Removals Program, ensuring that participants in this program are organisations with bona fide copyright interests unlikely to abuse their rights to suppress unrelated content;

2) Legal protections, such as those found in the Digital Services Act, or the possibility for Google or webmasters to file suit against submitters of bad-faith copyright complaints;

3) Handling counter-notifications from affected webmasters;

4) Tracking patterns of abusive behaviour and adjusting Google Search automation to avoid automatically honouring abusive takedowns of a kind Google Search has become aware of.

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Trends
Google Search and YouTube provide publicly available data via Google Trends, which offers access to a largely unfiltered sample of actual search requests made on Google Search and YouTube’s search function. The data is anonymised (no one is personally identified), categorised (determined by the topic of a search query), and aggregated (grouped together). This allows Google Trends to display interest in a particular topic from around the globe, down to city-level geography. See the Trends Help Centre for details.

Google Fact Check Explorer
Google Search also provides tools such as the Fact Check Explorer and the Google FactCheck Claim Search API. Fact Check Explorer allows anyone to explore fact-check articles that use the ClaimReview markup. Additional information about ClaimReview markup can be found here.

Using the Google FactCheck Claim Search API, users can query the same set of Fact Check results available via the Fact Check Explorer, and developers can continuously retrieve the latest updates on a particular query. Use of the FactCheck Claim Search API is subject to Google’s API Terms of Service. To learn more, see the detailed API documentation.
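As an illustrative sketch of how a developer might use the FactCheck Claim Search API: the endpoint path and parameter names below follow the public Fact Check Tools API documentation, and the API key value is a placeholder that a real caller would obtain from Google Cloud.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Public endpoint of the Google FactCheck Claim Search API (v1alpha1).
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query: str, api_key: str,
                           language_code: str = "en", page_size: int = 10) -> str:
    """Build a request URL for the Claim Search API."""
    params = {
        "query": query,               # free-text claim or topic to look up
        "languageCode": language_code,
        "pageSize": page_size,
        "key": api_key,               # placeholder; obtain a key via Google Cloud
    }
    return f"{FACT_CHECK_ENDPOINT}?{urlencode(params)}"

def search_claims(query: str, api_key: str) -> list:
    """Fetch matching claims; each result carries claimReview entries
    (publisher, textualRating, reviewed URL)."""
    with urllib.request.urlopen(build_claim_search_url(query, api_key)) as resp:
        return json.load(resp).get("claims", [])

# Constructing the request URL (no network call is made here):
url = build_claim_search_url("flat earth", "YOUR_API_KEY")
```

Each returned claim mirrors the fields shown in Fact Check Explorer, so the same analysis can be run interactively or programmatically.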

Google Researcher Program
As of 28 August 2023, eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics that meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact. For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API for eligible academic researchers from around the world who are affiliated with an accredited higher-learning institution. Learn more about the data available in the YouTube API reference.
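A minimal sketch of how a researcher might request public video metadata through the YouTube Data API: the v3 endpoint and `part` parameter reflect the public API reference, while the key and video ID below are placeholders.

```python
from urllib.parse import urlencode

# Base URL of the public YouTube Data API v3.
YOUTUBE_API_BASE = "https://www.googleapis.com/youtube/v3"

def build_video_metadata_url(video_ids: list, api_key: str) -> str:
    """Build a videos.list request URL returning snippet (title, description,
    channel) and statistics (views, likes, comments) for the given videos."""
    params = {
        "part": "snippet,statistics",
        "id": ",".join(video_ids),    # up to 50 video IDs per request
        "key": api_key,               # placeholder credential
    }
    return f"{YOUTUBE_API_BASE}/videos?{urlencode(params)}"

# Example request URL (placeholder video ID; no network call is made here):
url = build_video_metadata_url(["VIDEO_ID"], "YOUR_API_KEY")
```

Fetching this URL with valid credentials returns a JSON payload whose `items` list contains the per-video metadata fields described above.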

Transparency into paid content on YouTube
YouTube provides users a bespoke front-end search page to access publicly available data on organic content with paid product placements, sponsorships, and endorsements as disclosed by creators. This enables users to understand that creators may receive goods or services in exchange for promotion. The search page complements YouTube’s existing process of displaying a disclosure message when creators disclose to YouTube that their content contains paid promotions. Learn more about adding paid product placements, sponsorships, and endorsements here.

Users can also query the same set of results using the YouTube Data API. Use is subject to YouTube’s API Terms of Service.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Trends
The information provided via Google Trends is a sample of all Google Search and YouTube search activity. Two different samples of Google Trends data can be accessed:
  • Real-time data - a sample covering the last seven days;
  • Non-real-time data - a separate sample from real-time data that goes back as far as 2004 and up to 72 hours before one’s search.

Only a sample of Google Search and YouTube searches is used in Google Trends (a publicly available research tool) because Google, including YouTube, handles billions of searches per day, and the entire dataset would be too large to process quickly. Sampling yields a dataset representative of all searches on Google, including YouTube, while surfacing insights within minutes of an event happening in the real world. See the Trends Help Centre for details.

Google Fact Check Explorer
The Fact Check Explorer includes the following information, from fact-check articles using the ClaimReview markup:
  • Claim made by: Name of the publisher making the claim;
  • Rating text: True or False;
  • Fact Check article: The fact-checking article on the publisher's site;
  • Claim reviewed: A short summary of the claim being evaluated;
  • Tags: The tags that show up next to the claim.

For additional details on fields included on Google Fact Check API, see API documentation.

Google Researcher Program
Approved researchers will receive permissions and access to public data for Search and YouTube in the following ways: 
  • Search: Access to an API for limited scraping with a budget for quota;
  • YouTube: Permission for scraping limited to metadata.

For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. The program allows eligible academic researchers around the world to independently analyse the data they collect, including generating new/derived metrics for their research. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data.

Transparency into paid content on YouTube
The information provided via the bespoke front end search page allows users to view videos with active paid product placements, sponsorships, and endorsements that have been declared on YouTube.
  • Paid product placements
    • Videos about a product or service because there is a connection between the creator and the maker of the product or service;
    • Videos created for a company or business in exchange for compensation or free of charge products/services; 
    • Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free of charge products to make the video.
  • Endorsements - Videos created for an advertiser or marketer that contains a message that reflects the opinions, beliefs, or experiences of the creator.
  • Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.

Definitions can be found on the YouTube Help Centre.

Additional data points are provided in SLI 26.1.1 and 26.2.1.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

(1) Number of Fact Check API tool requests from users in H2 2024 (1 July 2024 to 31 December 2024), globally;
(2) Number of Fact Check Explorer tool users in H2 2024, broken down by EEA Member State (see table below);
(3) Number of users of the Google Trends online tool to research information relating to Google Search in H2 2024, broken down by EEA Member State (see table below).

(1) In H2 2024, the Fact Check Search API received approximately 211,980 requests from Google Search users, globally. 

Country | Fact Check Explorer tool users | Google Trends users researching Google Search
Austria | 485 | 258,045
Belgium | 732 | 256,912
Bulgaria | 373 | 345,279
Croatia | 216 | 116,183
Cyprus | 80 | 69,882
Czech Republic | 702 | 258,373
Denmark | 521 | 158,667
Estonia | 80 | 50,727
Finland | 344 | 124,093
France | 3,592 | 1,222,099
Germany | 4,364 | 2,059,534
Greece | 421 | 538,344
Hungary | 368 | 448,879
Ireland | 479 | 14,021,955
Italy | 1,821 | 1,589,810
Latvia | 104 | 84,437
Lithuania | 104 | 120,403
Luxembourg | 67 | 61,699
Malta | 25 | 16,651
Netherlands | 1,439 | 619,239
Poland | 1,372 | 811,290
Portugal | 538 | 283,543
Romania | 453 | 481,999
Slovakia | 289 | 116,546
Slovenia | 129 | 56,493
Spain | 5,530 | 1,388,768
Sweden | 734 | 292,843
Iceland | 25 | 6,396
Liechtenstein | 9 | 697
Norway | 974 | 147,732
Total EU | 25,362 | 25,852,693
Total EEA | 26,370 | 26,007,518

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Trends
For Google Trends, users have an option to report an issue by taking a screenshot of the malfunction area and then submitting it for feedback via the Send Feedback option on the Google Trends page. Additionally, users can access the Trends Help Centre to troubleshoot any issues they may be experiencing.

Google Fact Check Explorer
Within Google Search’s Fact Check Explorer, the Report Issue option provides users the ability to report issues to Google.

Google Researcher Program
For the Google Researcher Program, the most up to date information is captured in the Program description on the Transparency Centre, and also on the Acceptable Use Policy page. Google Search offers additional Help Centre support via its Search Researcher Result API guidelines.

YouTube Researcher Program
For the YouTube Researcher Program, there is support available via email. Researchers can contact YouTube, with questions and to report technical issues or other suspected faults, via a unique email alias, provided upon acceptance into the program. Questions are answered by YouTube’s Developer Support team and by other relevant internal parties as needed.

Google is not aware of any malfunctions during the reporting period that would have prevented access to these reporting systems.

Cooperation with Researchers

Commitment 28

Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
  • In 2024, Google hosted a workshop at the Trust & Safety Forum in Lille, France, with over 30 attendees, including academics, exploring Safety by Design frameworks and implementation constraints, including those relating to misinformation.
  • In October 2024, Google announced the first-ever Google Academic Research Award (GARA) winners. In this first cycle, the program will support 95 projects led by 143 researchers globally.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

Google has a longstanding commitment to transparency and has led the way in transparency reporting of content removals and government requests for user data for more than a decade.

Tools such as the Lumen Database, Google Trends, and Fact Check Explorer show some of the ways that Google helps not only researchers, but also journalists and others, understand more about Google and YouTube’s products, processes, and practices.

Please refer to QRE 26.1.1, QRE 26.1.2, and QRE 26.3.1 for further information about Google Fact Check Tool API and Google Trends.

Google
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics that meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact. 

Google has teams that operate the Google Researcher Program. They manage the researcher application process and evaluate potential updates and developments for the Google Researcher Program. Additional information can be found on the Google Transparency Centre. Google Search offers additional Help Centre support via its Search Researcher Result API guidelines.

Additionally, Google partners with Lumen, an independent research project managed by the Berkman Klein Centre for Internet & Society at Harvard Law School. The Lumen database houses millions of content takedown requests that have been voluntarily shared by various companies, including Google. Its purpose is to facilitate academic and industry research concerning the availability of online content. As part of Google’s partnership with Lumen, information about the legal notices Google receives may be sent to the Lumen project for publication. Google informs users about its Lumen practices under the 'Transparency at our core' section of the Legal Removals Help Centre. Additional information on Lumen can be found here.

Trust & Safety Research partners internally with Google.org's Scientific Progress team to strategically fund and engage with academics working on cutting-edge interdisciplinary research in areas of mutual interest and societal benefit. In October 2024, Google announced the first-ever Google Academic Research Award (GARA) winners. Overall, the program supported 95 projects led by 143 researchers globally; within the Trust & Safety topic, Google funded 21 projects across 12 countries. 

YouTube
The YouTube Researcher Program provides eligible academic researchers from around the world with scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data. (See YouTube API reference for more information).

YouTube has teams that operate the YouTube Researcher Program. They manage the researcher application process and provide technical support throughout the research project. They also evaluate potential updates and developments for the YouTube Researcher Program. Researchers can use any of the options below to obtain support: 


In addition, Google Search and YouTube’s Product and Policy teams regularly communicate with researchers who reach out with questions about the functioning of YouTube or seek to receive feedback on past or future research projects.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

See response to QRE 28.1.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

Google Search and YouTube continue to engage constructively with the Code of Practice’s Permanent Task-force and with EDMO. As of the time of this report, no annual consultation has yet taken place, but Google Search and YouTube stand ready to collaborate with EDMO to that end in 2025. 

Additionally, refer to QRE 26.1.1 to learn more about how Google, including YouTube, provides opportunities for researchers on its platforms.

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

In 2021, Google contributed €25 million to help launch the European Media and Information Fund (EMIF).

The EMIF was established by the European University Institute and the Calouste Gulbenkian Foundation. The European Digital Media Observatory (EDMO) agreed to play a scientific advisory role in the evaluation and selection of projects that will receive the fund’s support, but does not receive Google funding. Google has no role in the assessment of applications. To date, at least 107 projects related to information quality, across 25 countries (including 23 EEA Member States), have been awarded a total of €17.70 million.