
Report March 2025
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Search & YouTube
In November 2024, Google released a white paper detailing how it is addressing the growing global issue of fraud and scams. In the paper, Google explains that it fights scams and fraud by taking proactive measures to protect users from harm, delivering reliable information, and partnering to create a safer internet, through policies and built-in technological protections that help prevent, detect, and respond to harmful and illegal content. For details on YouTube and Google Search’s approaches to tackling scams, see the full report here.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
- Deceptive Practices Policy: This policy prohibits content that impersonates any person or organisation, misrepresents or conceals ownership or primary purpose, or engages in inauthentic or coordinated behaviour to deceive, defraud, or mislead. This policy does not cover content with certain artistic, educational, historical, documentary, or scientific considerations, or other substantial benefits to the public.
- Manipulated Media Policy: This policy prohibits audio, video, or image content that has been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably did not take place.
- Transparency Policy: This policy notes that news sources on Google should provide clear dates and bylines, as well as information about authors, the publication, the publisher, company or network behind it, and contact information.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
Furthermore, to ensure its algorithms meet high standards of relevance and quality, Google Search has a rigorous process that involves both live tests and thousands of trained external Search Quality Raters from around the world. Raters do not determine the ranking of any individual page or website, but they help to benchmark the quality of Google Search’s results so that Google Search can meet a high bar for users all around the world. Under the Google Search Quality Rater Guidelines, raters are instructed to assign the lowest rating to pages that are potentially harmful to users or specified groups, misleading, untrustworthy, or spammy. Google Search also provides users the ability to flag content that might violate Google Search policies.
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
SLI 14.2.2
Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.
SLI 14.2.3
Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).
SLI 14.2.4
Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In September 2024, Google announced that it would bring the latest version of the Coalition for Content Provenance and Authenticity (C2PA) technical standard, Content Credentials, to Search’s ‘About This Image’ feature. If an image contains C2PA metadata, users will be able to use the feature to see if an image was created or edited with AI tools.
- After joining the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content, in February 2024, Google collaborated on the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
- In July 2024 at the Aspen Security Forum, Google, alongside industry peers, introduced the Coalition for Secure AI (CoSAI) to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon. The first three areas of focus the coalition will tackle in collaboration with industry and academia include Software Supply Chain Security for AI systems; Preparing defenders for a changing cybersecurity landscape; and AI security governance.
- In September 2024, Google announced a Global AI Opportunity Fund, which will invest $120 million to make AI education and training available in communities around the world. Google will provide this in local languages, in partnership with nonprofits and NGOs.
- In October 2024, Google released its EU AI Opportunity Agenda, a series of recommendations for governments to seize the full economic and societal potential of AI. The Agenda outlines the need to revisit Europe’s workforce strategy, as well as investment in AI infrastructure and research, adoption and accessibility.
- In October 2024, The Nobel Prize was awarded to Google DeepMind’s Demis Hassabis and John Jumper for their groundbreaking work with AlphaFold 2, which predicted the structures for nearly all proteins known to science. It has been used by more than 2 million researchers around the world, accelerating scientific discovery in important areas like malaria vaccines, cancer treatments, and more.
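The C2PA Content Credentials mentioned above travel inside the image file itself: for JPEGs, the manifest is embedded in APP11 marker segments as JUMBF boxes. A minimal heuristic for spotting such a manifest is sketched below. This is an illustration only: `has_c2pa_marker` is an invented helper name, the check detects presence rather than validity, and real verification should use the official C2PA tooling, which also validates the manifest’s signatures.

```python
def has_c2pa_marker(jpeg_bytes: bytes) -> bool:
    """Heuristic: scan JPEG APP11 (0xFFEB) marker segments for the ASCII
    label 'c2pa', which C2PA Content Credentials embed via JUMBF boxes.
    Detects presence only; it does not validate the manifest."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost marker alignment
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):              # EOI, or start of scan data
            break
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:
            return True
        i += 2 + seg_len                        # advance to next marker
    return False
```

A plain JPEG with no APP11 segment yields `False`; a file carrying a C2PA JUMBF payload yields `True`.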
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
The Manipulated Media Policy for Google Search features prohibits audio, video, or image content that has been manipulated to deceive, defraud, or mislead by means of creating a representation of actions or events that verifiably did not take place. This applies where such content would cause a reasonable person to have a fundamentally different understanding or impression, such that it might cause significant harm to groups or individuals, or significantly undermine participation or trust in electoral or civic processes.
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
- AI Principles: These serve as Google’s ethical charter and inform its product policies. Google is committed to developing technology responsibly and published its AI Principles in 2018 to guide this work. Its internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While the Principles were recently updated to adapt to shifts in technology, the global conversation, and the AI ecosystem, Google’s deep commitment to responsible AI development remains unchanged.
- Education and resources: Google provides ethics training and technical tools to test, evaluate, and monitor how the AI Principles apply across its products and services. Google is sharing for the first time details of a new company-wide tool for monitoring products’ responsible AI maturity, along with updates on technical approaches to fairness, data transparency, and more.
- Structures and processes: These include risk assessment frameworks, ethics reviews, and executive accountability. This report provides a deep dive into how risk is identified and measured in AI Principles reviews.
See additional details here.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- Google’s Threat Analysis Group (TAG) published its Q3 2024 and Q4 2024 Quarterly Bulletins, which provide updates on coordinated influence operation campaigns terminated on Google’s platforms.
- In H2 2024 (1 July 2024 to 31 December 2024), Google TAG published 2 examples of information sharing and learnings in the TAG Blog.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
See Google’s disclosure policies about handling security vulnerabilities for developers and security professionals.
SLI 16.1.1
Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).
As reported in the Bulletin, some channels YouTube took action on were part of campaigns that uploaded content in EEA languages, specifically: French (546 channels), German (460 channels), Polish (389 channels), Italian (362 channels), Spanish (128 channels), Romanian (15 channels), Czech (12 channels), and Hungarian (12 channels). Certain campaigns may have uploaded content in multiple languages, or in countries outside the EEA using EEA languages. Please note that any one coordinated influence campaign may involve many languages, and the presence of content in an EEA Member State language does not necessarily entail a particular focus on that Member State. For more information, please see the TAG Bulletin.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In May 2024, Google announced that it would open-source SynthID text watermarking through its updated Responsible Generative AI Toolkit to help others benefit from and improve on the advances Google is making. In addition, Google expanded SynthID’s capabilities to include watermarking AI-generated text in the Gemini app and web experience, as well as video in Veo, its recently announced and most capable generative video model.
- Google Search expanded the ‘About This Image’ tool to 40 additional languages around the world, including French, German, Hindi, Italian, Japanese, Korean, Portuguese, Spanish and Vietnamese.
- In July 2024, Google announced that ‘About This Image’ is now available in Circle to Search and Google Lens, giving users more ways to quickly get context on images wherever they come across them.
- In September 2024, Google announced that it would bring the latest version of the Coalition for Content Provenance and Authenticity (C2PA) technical standard, Content Credentials, to Search’s ‘About This Image’ feature. If an image contains C2PA metadata, users will be able to use the feature to see if an image was created or edited with AI tools.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
- ‘About This Result’: Next to most results on Google Search, there is a menu icon that users can tap to learn more about the result or feature and where the information is coming from. With this additional context, users can make a more informed decision about the sites they want to visit and what results will be most useful for them. When available, users will see a description of the website from Wikipedia, which provides free, reliable information about tens of millions of sites on the web. If a website does not have a Wikipedia description, Google Search will show additional context that may be available, such as when Google Search first indexed the site. Users will also be able to quickly see if their connection to the site is secure based on its use of the HTTPS protocol, which encrypts all data between the website and the browser they are using, to help them stay safe as they browse the web. More information on the ‘About This Result’ feature can be found here, and here.
- The ‘More About This Page’ link within the ‘About This Result’ feature provides additional insights about sources and topics users find on Google Search. When a user taps the three dots on any search result, they will be able to learn more about the page. Users can:
- See more information about the source: Users will be able to read what a site says about itself in its own words, when that information is available.
- Learn more about the topic: In the ‘About the topic’ section, users can find information about the same topic from other sources.
- Find what others on the web have said about a site: Reading what others on the web have written about a site can help users better evaluate sources.
- In December 2023, Google Search expanded this feature to 40 new languages, including Bulgarian, Croatian, Czech, Danish, Estonian, Finnish, Greek, Hungarian, Latvian, Lithuanian, Maltese, Polish, Romanian, Slovak, Slovenian, and Swedish.
- Additional information can be found in the Google Search blog post here.
- ‘About This Image’: With added insights in ‘About This Image’, users will know if an image may have been generated with Google’s AI tools when they come across it in Search or Chrome. All images generated with Imagen 2 in Google’s consumer products will be marked by SynthID, a tool developed by Google DeepMind that adds a digital watermark directly into the pixels of images generated. SynthID watermarks are imperceptible to the human eye but detectable for identification. In addition, Search expanded the ‘About This Image’ tool to 40 additional languages around the world, including French, German, Hindi, Italian, Japanese, Korean, Portuguese, Spanish and Vietnamese.
- Consistent with its AI principles, Google Search also conducted extensive adversarial testing and red teaming to identify and mitigate potential harmful and problematic content. Google Search is also applying filters to avoid generating images of named people. Google Search will continue investing in new techniques to improve the safety and privacy protections of its models.
- More information on the ‘About This Image’ feature can be found here.
- Content Advisory Notices: Helpful notices that highlight when information is scarce or when interest is travelling faster than facts. These are specifically designed to address data voids: situations where content on a topic is limited or nonexistent, or where a topic is rapidly evolving and reliable information is not yet available.
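The secure-connection indicator described under ‘About This Result’ reduces, at its simplest, to whether the result URL is served over HTTPS. A minimal sketch follows; `connection_is_secure` is an invented helper name, and a real browser additionally verifies the site’s TLS certificate.

```python
from urllib.parse import urlparse

def connection_is_secure(url: str) -> bool:
    """Return True when the URL is served over HTTPS, i.e. traffic between
    the browser and the site is encrypted with TLS. (Illustrative helper;
    a browser also validates the site's certificate chain.)"""
    return urlparse(url).scheme == "https"
```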
SLI 17.1.1
Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.
Country | Impression proportion estimate of content advisories for low relevance results (%) | Impression proportion estimate of content advisories for rapidly changing results (%) | Impression proportion estimate of content advisories for potentially unreliable set of results (%) | Number of times the 'More About This Page' feature was viewed | Number of times the 'Source' section of the ‘About This Result’ panel was viewed | Number of times the 'Your Search and this result' section of the ‘About This Result’ panel was viewed | Number of times the 'Personalisation' section of the ‘About This Result’ panel was viewed |
---|---|---|---|---|---|---|---|
Austria | 0.009% | 0.00007% | 0.0000079% | 492,336 | 10,161,776 | 10,002,348 | 8,443,368 |
Belgium | 0.008% | 0.00006% | 0.0000062% | 694,484 | 12,635,380 | 12,451,036 | 10,479,372 |
Bulgaria | 0.015% | 0.00004% | 0.0000075% | 418,008 | 5,520,908 | 5,427,300 | 4,564,196 |
Croatia | 0.011% | 0.00004% | 0.0000067% | 306,460 | 5,169,452 | 5,050,320 | 4,322,968 |
Cyprus | 0.019% | 0.00010% | 0.0000329% | 106,152 | 1,194,376 | 1,177,588 | 1,003,436 |
Czech Republic | 0.009% | 0.00003% | 0.0000034% | 658,648 | 9,308,096 | 9,185,980 | 7,769,180 |
Denmark | 0.008% | 0.00010% | 0.0000045% | 290,416 | 5,980,168 | 5,903,776 | 5,005,248 |
Estonia | 0.015% | 0.00011% | 0.0000233% | 86,100 | 1,521,468 | 1,506,608 | 1,285,236 |
Finland | 0.009% | 0.00012% | 0.0000057% | 333,476 | 7,408,228 | 7,328,700 | 6,237,504 |
France | 0.007% | 0.00005% | 0.0000059% | 4,434,580 | 86,918,604 | 85,351,180 | 72,299,248 |
Germany | 0.011% | 0.00010% | 0.0000061% | 4,744,012 | 96,729,900 | 94,959,380 | 80,156,384 |
Greece | 0.017% | 0.00003% | 0.0000044% | 753,108 | 11,418,624 | 11,171,708 | 9,546,464 |
Hungary | 0.012% | 0.00003% | 0.0000046% | 590,120 | 8,314,240 | 8,170,540 | 6,923,408 |
Ireland | 0.010% | 0.00010% | 0.0000054% | 432,868 | 7,267,604 | 7,130,680 | 6,061,364 |
Italy | 0.015% | 0.00004% | 0.0000016% | 4,336,524 | 79,300,560 | 77,454,444 | 66,299,316 |
Latvia | 0.018% | 0.00010% | 0.0000184% | 113,092 | 1,673,808 | 1,653,056 | 1,398,732 |
Lithuania | 0.016% | 0.00006% | 0.0000127% | 171,912 | 2,746,672 | 2,716,224 | 2,288,856 |
Luxembourg | 0.013% | 0.00015% | 0.0000432% | 37,252 | 707,288 | 695,948 | 591,364 |
Malta | 0.015% | 0.00018% | 0.0000540% | 49,240 | 703,072 | 691,912 | 594,788 |
Netherlands | 0.009% | 0.00006% | 0.0000022% | 1,347,040 | 23,309,716 | 22,909,616 | 19,352,032 |
Poland | 0.006% | 0.00002% | 0.0000008% | 2,240,116 | 47,084,536 | 46,397,932 | 39,155,200 |
Portugal | 0.007% | 0.00006% | 0.0000052% | 754,972 | 11,476,200 | 11,293,812 | 9,653,604 |
Romania | 0.011% | 0.00003% | 0.0000037% | 887,976 | 12,030,632 | 11,798,700 | 10,039,572 |
Slovakia | 0.013% | 0.00003% | 0.0000104% | 328,608 | 4,527,972 | 4,463,864 | 3,762,372 |
Slovenia | 0.015% | 0.00004% | 0.0000182% | 128,052 | 2,411,732 | 2,380,088 | 2,020,268 |
Spain | 0.006% | 0.00008% | 0.0000070% | 4,078,508 | 58,946,872 | 57,742,752 | 49,286,500 |
Sweden | 0.007% | 0.00012% | 0.0000030% | 614,072 | 13,062,580 | 12,902,600 | 10,946,140 |
Iceland | 0.013% | 0.00021% | 0.0000000% | 13,536 | 376,920 | 371,588 | 316,908 |
Liechtenstein | 0.013% | 0.00104% | 0.0000000% | 1,588 | 39,644 | 39,068 | 32,924 |
Norway | 0.006% | 0.00009% | 0.0000043% | 315,956 | 5,814,192 | 5,747,832 | 4,853,060 |
Total EU | 0.010% | 0.00006% | 0.0000053% | 29,428,132 | 527,530,464 | 517,918,092 | 439,486,120 |
Total EEA | 0.010% | 0.00006% | 0.0000053% | 29,759,212 | 533,761,220 | 524,076,580 | 444,689,012 |
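As a consistency check, the per-country ‘More About This Page’ view counts in the table above can be summed to reproduce the ‘Total EU’ and ‘Total EEA’ rows, using the figures exactly as reported:

```python
# Per-country 'More About This Page' view counts from the SLI 17.1.1 table.
more_about_views_eu = {
    "Austria": 492_336, "Belgium": 694_484, "Bulgaria": 418_008,
    "Croatia": 306_460, "Cyprus": 106_152, "Czech Republic": 658_648,
    "Denmark": 290_416, "Estonia": 86_100, "Finland": 333_476,
    "France": 4_434_580, "Germany": 4_744_012, "Greece": 753_108,
    "Hungary": 590_120, "Ireland": 432_868, "Italy": 4_336_524,
    "Latvia": 113_092, "Lithuania": 171_912, "Luxembourg": 37_252,
    "Malta": 49_240, "Netherlands": 1_347_040, "Poland": 2_240_116,
    "Portugal": 754_972, "Romania": 887_976, "Slovakia": 328_608,
    "Slovenia": 128_052, "Spain": 4_078_508, "Sweden": 614_072,
}
total_eu = sum(more_about_views_eu.values())     # matches the 'Total EU' row
# The EEA total adds Iceland, Liechtenstein, and Norway.
total_eea = total_eu + 13_536 + 1_588 + 315_956  # matches the 'Total EEA' row
```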
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
- In H2 2024, Google.org announced $10 million in funding to the Raspberry Pi Foundation to further expand access to Experience AI. This educational program was co-created with Google DeepMind as part of Google.org’s broader commitment to support organisations helping young people build AI literacy.
- Experience AI provides teachers with the training and resources needed to both educate and inspire young people aged 11-14 about AI.
- The curriculum focuses on a structured learning journey, ethical considerations, real-world examples and role models, and culturally relevant content to engage learners in understanding AI and how to use it responsibly. Raspberry Pi Foundation and Google DeepMind continued to develop further resources, including three new lessons centred around AI safety: AI and Your Data, Media Literacy in the Age of AI, and Using Generative AI Responsibly.
SLI 17.2.1
Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
- Accuracy Prompts (APs): APs remind users to think about accuracy by serving bite-sized digital literacy tips at a moment when it might matter. Lab studies conducted across 16 countries with over 30,000 participants suggest that APs increase engagement with accurate information and decrease engagement with less accurate information. Small experiments on YouTube suggest users enjoy the experience and report that it makes them feel safer online.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
- Meaning of your query: To return relevant results, Google Search first needs to establish the intent behind a user’s query. Google Search builds language models to decipher how the words that a user enters into the search box match up to the most useful content available.
- Relevance of content: Next, Google Search systems analyse the content to assess whether it contains information that might be relevant to what the user is looking for. The most basic signal that information is relevant is when content contains the same keywords as the user’s search query.
- Quality of content: Google Search systems prioritise content that seems most helpful by identifying signals that can help determine which content demonstrates expertise, high quality, and trustworthiness. For example, one of several factors Google Search uses is whether other prominent websites link or refer to the content. Aggregated feedback from the Google Search quality evaluation process is used to further refine how Google Search systems discern the quality of information.
- Usability: Google Search systems also consider the usability of content. When all things are relatively equal, content that people will find more accessible may perform better.
- Context and settings: Information such as user location, past Google Search history, and Search settings helps Google Search ensure results are the most useful and relevant at that moment. Google Search uses the user’s country and location to deliver content relevant to their area. For instance, if a user in Chicago searches ‘football’, Google Search will likely show results about American football and the Chicago Bears first, whereas a user searching ‘football’ in London will see results about soccer and the Premier League. Google Search settings are also an important indicator of which results a user is likely to find useful, such as whether they have set a preferred language or opted in to SafeSearch (a tool that helps filter out explicit results). Google Search also includes features that personalise results based on activity in the user’s Google account. Users can control what Google Search activity is used to improve their experience, including adjusting what data is saved to their Google account at myaccount.google.com. To disable personalisation based on account activity, users can turn off personal results in Search; they can also prevent activity from being stored to their account, or delete particular history items, in Web & App Activity. Google Search systems are designed to match a user’s interests, but they are not designed to infer sensitive characteristics like race, religion or political party.
The How Search Works website explains the ins and outs of Google Search. The following links provide additional information about helping people and businesses learn how Search works and how results are automatically generated.
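The signal families described above can be illustrated with a toy model. This is an invented sketch for illustration only: the weights, scoring, and class names below are our assumptions, not Google’s actual ranking algorithm, which combines far more signals in far more sophisticated ways.

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    """Toy per-page scores for the signal families described above."""
    relevance: float   # keyword/topical match to the query
    quality: float     # expertise and trustworthiness signals
    usability: float   # accessibility and page experience
    context: float     # fit with location, language, and settings

def rank(pages: dict[str, PageSignals]) -> list[str]:
    """Order pages by a simple weighted sum of signal scores.

    The weights are arbitrary; they only illustrate that multiple
    signal families are combined into a single ordering.
    """
    def score(s: PageSignals) -> float:
        return (0.4 * s.relevance + 0.3 * s.quality
                + 0.15 * s.usability + 0.15 * s.context)
    return sorted(pages, key=lambda name: score(pages[name]), reverse=True)
```

In this sketch, a page that scores well across all families outranks one that matches only on keywords, mirroring the report’s point that relevance alone does not determine ordering.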
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
Country | Number of impressions on the personal results control for logged in users |
---|---|
Austria | 52,907 |
Belgium | 56,149 |
Bulgaria | 27,334 |
Croatia | 21,902 |
Cyprus | 5,496 |
Czech Republic | 46,097 |
Denmark | 18,909 |
Estonia | 6,491 |
Finland | 36,868 |
France | 367,129 |
Germany | 496,001 |
Greece | 51,962 |
Hungary | 40,270 |
Ireland | 27,259 |
Italy | 366,985 |
Latvia | 9,414 |
Lithuania | 16,238 |
Luxembourg | 2,675 |
Malta | 1,785 |
Netherlands | 104,851 |
Poland | 205,946 |
Portugal | 44,586 |
Romania | 67,616 |
Slovakia | 26,380 |
Slovenia | 9,011 |
Spain | 303,773 |
Sweden | 47,411 |
Iceland | 968 |
Liechtenstein | 138 |
Norway | 23,503 |
Total EU | 2,461,445 |
Total EEA | 2,486,054 |
Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 22.7
Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.
QRE 22.7.1
Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.
- ‘SOS Alerts’: Structured content that appears on a Google Search page, including high-quality help links and local relevant information when a crisis strikes. The alerts aim to make emergency information more accessible during a crisis. Google brings together relevant and high-quality content from the web, media, and Google products, and then highlights that information across Google products such as Google Search and Google Maps. See Help Centre for more information.
SLI 22.7.1
Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question)).
- Crisis Response (e.g. ‘SOS Alerts’, ‘Public Alerts’);
- Structured features for COVID-19.
In H2 2024, the following number of views/impressions were made on the Google Search features below:
- 92,530,020 views/impressions on Crisis Response alerts (e.g. ‘SOS Alerts’, ‘Public Alerts’);
- 2,800 views/impressions on COVID-19 Structured Features.
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In August 2024, Google Search released new Community Guidelines for user-generated content that define the types of content and behaviour not allowed on Search and incorporate Search’s overall content policies. The guidelines also provide users with guidance on how to report different types of potentially harmful user-generated content, such as posts and profiles.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
Users can also query the same set of results using the YouTube Data API. Use is subject to YouTube’s API Terms of Service.
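As an illustration, a request to the public YouTube Data API v3 search endpoint can be constructed as follows. This is a minimal sketch: the query and `YOUR_API_KEY` are placeholders, real use requires an API key from the Google Cloud console, and all use is subject to YouTube’s API Terms of Service.

```python
from urllib.parse import urlencode

API_URL = "https://www.googleapis.com/youtube/v3/search"

def build_search_url(query: str, api_key: str, max_results: int = 10) -> str:
    """Build a YouTube Data API v3 search.list request URL."""
    params = {
        "part": "snippet",       # return basic metadata for each result
        "q": query,              # free-text search query
        "type": "video",         # restrict results to videos
        "maxResults": max_results,
        "key": api_key,          # API key placeholder
    }
    return f"{API_URL}?{urlencode(params)}"

# No network call is made here; this only builds the request URL.
url = build_search_url("example query", "YOUR_API_KEY")
```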
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
- Real-time data - a sample covering the last seven days;
- Non-real-time data - a separate sample from the real-time data that goes back as far as 2004, up to 72 hours before one’s search.
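The two sampling windows described above can be sketched as a small helper. The function name and structure are ours; the cutoffs mirror the description (the last seven days for real-time data; 2004 through 72 hours before the search for non-real-time data).

```python
from datetime import datetime, timedelta

def trends_window(point_time: datetime, now: datetime) -> str:
    """Classify a data point into the sampling windows described above.

    Real-time data covers the last seven days; non-real-time data runs
    from 2004 up to 72 hours before the search. (Helper name and return
    labels are illustrative assumptions.)
    """
    if now - timedelta(days=7) <= point_time <= now:
        return "real-time"
    if datetime(2004, 1, 1) <= point_time <= now - timedelta(hours=72):
        return "non-real-time"
    return "outside both samples"
```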
- Claim made by: Name of the publisher making the claim;
- Rating text: The publisher’s rating of the claim (e.g. True or False);
- Fact Check article: The fact-checking article on the publisher's site;
- Claim reviewed: A short summary of the claim being evaluated;
- Tags: The tags that show up next to the claim.
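The data points above correspond to fields returned by the public Fact Check Tools API (`claims:search` endpoint). The sketch below flattens an invented sample response into those fields; the sample payload is fabricated for illustration, and the mapping of response fields to the listed data points is our assumption based on the API’s documented response shape.

```python
import json

# Invented sample payload shaped like a claims:search response.
sample = json.loads("""
{
  "claims": [{
    "text": "Example claim being evaluated",
    "claimant": "Example Publisher",
    "claimReview": [{
      "publisher": {"name": "Example Fact Checker"},
      "url": "https://factchecker.example/article",
      "textualRating": "False"
    }]
  }]
}
""")

def summarise_claims(payload: dict) -> list[dict]:
    """Flatten a claims:search-style response into the listed data points."""
    rows = []
    for claim in payload.get("claims", []):
        for review in claim.get("claimReview", []):
            rows.append({
                "claim_reviewed": claim.get("text"),        # short claim summary
                "claim_made_by": claim.get("claimant"),     # who made the claim
                "rating_text": review.get("textualRating"), # e.g. True or False
                "fact_check_article": review.get("url"),    # publisher's article
            })
    return rows
```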
- Search: Access to an API for limited scraping with a budget for quota;
- YouTube: Permission for scraping limited to metadata.
- Paid product placements
- Videos about a product or service because there is a connection between the creator and the maker of the product or service;
- Videos created for a company or business in exchange for compensation or free of charge products/services;
- Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free of charge products to make the video.
- Endorsements - Videos created for an advertiser or marketer that contain a message reflecting the opinions, beliefs, or experiences of the creator.
- Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
(1) In H2 2024, the Fact Check Search API received approximately 211,980 requests from Google Search users, globally.
Country | Number of Fact Check Explorer tool users | Number of Google Trends users researching Google Search |
---|---|---|
Austria | 485 | 258,045 |
Belgium | 732 | 256,912 |
Bulgaria | 373 | 345,279 |
Croatia | 216 | 116,183 |
Cyprus | 80 | 69,882 |
Czech Republic | 702 | 258,373 |
Denmark | 521 | 158,667 |
Estonia | 80 | 50,727 |
Finland | 344 | 124,093 |
France | 3,592 | 1,222,099 |
Germany | 4,364 | 2,059,534 |
Greece | 421 | 538,344 |
Hungary | 368 | 448,879 |
Ireland | 479 | 14,021,955 |
Italy | 1,821 | 1,589,810 |
Latvia | 104 | 84,437 |
Lithuania | 104 | 120,403 |
Luxembourg | 67 | 61,699 |
Malta | 25 | 16,651 |
Netherlands | 1,439 | 619,239 |
Poland | 1,372 | 811,290 |
Portugal | 538 | 283,543 |
Romania | 453 | 481,999 |
Slovakia | 289 | 116,546 |
Slovenia | 129 | 56,493 |
Spain | 5,530 | 1,388,768 |
Sweden | 734 | 292,843 |
Iceland | 25 | 6,396 |
Liechtenstein | 9 | 697 |
Norway | 974 | 147,732 |
Total EU | 25,362 | 25,852,693 |
Total EEA | 26,370 | 26,007,518 |
Measure 26.3
Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.
QRE 26.3.1
Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.
Commitment 28
Cooperation with Researchers
Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In 2024, Google hosted a workshop with over 30 attendees, including academics, at the Trust & Safety Forum in Lille, France exploring Safety by Design frameworks and implementation constraints, including misinformation.
- In October 2024, Google announced the first-ever Google Academic Research Award (GARA) winners. In this first cycle, the program will support 95 projects led by 143 researchers globally.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics who meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact.
- YouTube provides a contact email alias to researchers who have been granted access to the program;
- YouTube API Code Samples at GitHub.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
Measure 28.4
As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.
QRE 28.4.1
Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.