Google Ads

Report March 2026

Submitted

Your organisation description

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.3 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google sets a high bar for information quality on services that involve advertising and content monetisation. Given that many bad actors may seek to make money by spreading harmful content, raising the bar for monetisation can also diminish their incentives to misuse Google services. For example, Google prohibits deceptive behaviour on Google advertising products.

Google Ads also provides advertisers with additional controls and helps them exclude types of content that, while in compliance with AdSense policies, may not fit their brand or business. These controls let advertisers apply content filters or exclude certain types of content or terms from their video, display, and search ad campaigns. Advertisers can exclude content such as politics, news, sports, beauty, fashion and many other categories. These categories are listed in the Google Ads Help Centre.

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
  • First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA.
  • Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google participates in audits conducted by independent accreditation organisations, such as the Media Rating Council (MRC), and maintains this accreditation through annual MRC audit cycles.

The current MRC accreditation certifies that:

  • Google Ads display and search clicks measurement methodology and AdSense ad serving technologies adhere to the industry standards for click measurement.
  • Google Ads video impression and video viewability measurement as reported in the Video Viewability Report adheres to the industry standards for video impression and viewability measurement.
  • The processes supporting these technologies are accurate. This applies to Google’s measurement technology, which is used across all device types: desktop, mobile, and tablet, in both browser and mobile app environments.

For more information about what this accreditation means, please see this help page.

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

See response to QRE 1.5.1.

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
  • To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
  • Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
  • Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google Ads provides its advertising partners with features that enable them to maintain control over where their ads appear, the format in which their ads run, and their intended audience. 

Since April 2021, advertisers have been able to use dynamic exclusion lists that can be updated seamlessly and continuously over time. These lists can be created by advertisers themselves or by a third party they trust, such as brand safety organisations and industry groups. Once advertisers upload a dynamic exclusion list to their Google Ads account, they can schedule automatic updates as new web pages or domains are added, ensuring that their exclusion lists remain effective and up to date.

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

Not relevant for Google Ads (intended for Signatories that purchase ads).

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

Not relevant for Google Ads (intended for Signatories that provide brand safety tools).

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

Not relevant for Google Ads (intended for Signatories that rate sources).

SLI 1.6.1

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

Not relevant for Google Ads (intended for Signatories that purchase ads).

Country In view of steps taken to integrate brand safety tools: % of advertising/media investment protected by such tools
Austria
Belgium
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Latvia
Lithuania
Luxembourg
Malta
Netherlands
Poland
Portugal
Romania
Slovakia
Slovenia
Spain
Sweden
Iceland
Liechtenstein
Norway

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

All newly created ads, and ads that are edited by users, are reviewed for policy violations. The review of new ads is performed by one of the following, or a combination of both:
  • Automated mechanisms; and
  • Manual reviews performed by human reviewers.

For more information on how the ad review process works, please see the ‘About the ad review process’ page.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

See response to QRE 2.2.1. 

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

Number of own-initiative actions taken on advertisements that affect the availability, visibility, and accessibility of information provided by recipients of Google Ads services during the reporting period, broken down by EEA Member State billing country and policy. These actions include enforcement against ads and ad assets that violate any of the policy topics in scope for reporting.

Google takes content moderation actions on content which violates or may be shown to violate Google Ads policies, or where the content is illegal. These can encompass both proactive and reactive enforcement actions. Proactive enforcement takes place when potentially policy-violating content has been flagged internally, for example, via algorithms or contractors. Reactive enforcement takes place in response to external notifications, such as user policy flags or legal complaints (e.g. an Article 9 order or an Article 16 notice under the Digital Services Act). 

To ensure a safe and positive experience for users, Google requires that advertisers comply with all applicable laws and regulations in addition to the Google Ads policies. Ads, assets, destinations, and other content that violate Google Ads policies can be blocked on the Google Ads platform and associated networks.

Ad or asset disapproval
Ads and assets that do not follow Google Ads policies will be disapproved. A disapproved ad will not be able to run until the policy violation is fixed and the ad is reviewed.

Account suspension
Google Ads Accounts may be suspended if Google finds violations of its policies or the Terms and Conditions.

For more information on what happens when an ad or account is violating Google Ads policies, please see the 'What happens if you violate our policies' page. 

Policies in scope: 
  • Destination Requirements (Insufficient Original Content); 
  • Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
  • Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).

Country Number of actions taken, for Destination Requirements Number of actions taken, for Inappropriate Content Number of actions taken, for Misrepresentation
Austria 8,558,947 176,712 550,104
Belgium 10,146,325 174,131 1,103,042
Bulgaria 13,012,850 73,255 4,099,084
Croatia 2,565,291 22,775 149,653
Cyprus 11,047,741 214,616 2,612,130
Czech Republic 20,018,821 403,011 6,074,846
Denmark 16,354,214 192,766 1,380,073
Estonia 2,905,799 21,023 822,081
Finland 4,989,902 74,304 512,802
France 252,736,323 752,422 4,689,004
Germany 208,184,248 924,430 6,529,947
Greece 3,147,306 31,903 238,327
Hungary 7,064,451 113,487 345,110
Ireland 18,054,072 2,318,725 10,614,303
Italy 55,127,772 349,416 39,759,721
Latvia 2,169,264 24,493 5,758,223
Lithuania 7,975,427 92,851 898,983
Luxembourg 11,806,902 98,041 84,250
Malta 3,354,788 3,429 327,889
Netherlands 171,424,901 915,585 64,374,444
Poland 40,259,580 636,331 3,340,905
Portugal 3,760,047 101,150 579,452
Romania 13,628,774 476,512 810,376
Slovakia 7,547,063 433,271 308,335
Slovenia 3,241,865 28,183 291,623
Spain 62,458,598 690,980 5,132,572
Sweden 19,790,524 171,056 23,135,985
Iceland 156,178 2,575 185,293
Liechtenstein 142,661 1,680 6,486
Norway 3,585,167 44,325 654,742
Total EU 981,331,795 9,514,858 184,523,264
Total EEA 985,215,801 9,563,438 185,369,785

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Notification
Ads that do not follow Google Ads policies will be disapproved or (if appropriate) limited in where and when they can show. This will be shown in the ‘Status’ column as ‘Disapproved’ or ‘Eligible (limited)’, and the ad may not be able to run until the policy violation is fixed and the ad is re-reviewed. Hovering the cursor over the status of the ad reveals additional information, including the policy violation affecting the ad. For more information on how to fix a disapproved ad, see the external Help Centre page.

Appeal process
Advertisers have multiple options and pathways to appeal a policy decision directly from their Google Ads account, for instance via the ‘ads and assets’ table, the Policy Manager, or the Disapproved Ads and Policy Questions form. For more information about the appeal process, see the Help Centre page. For account suspensions, advertisers can also appeal by following the ‘submit an appeal’ process.

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

Number of content moderation complaints received from advertisers located in EEA Member States during the reporting period, broken down by EEA Member State and by complaint outcome. Advertiser complaints were received via Google Ads’ standardised path for appealing policy decisions.

Complaint outcomes include ‘initial decision upheld’ and ‘initial decision reversed’. An ‘initial decision’ refers to the first enforcement of Google’s terms of service or product policies. These decisions may be reversed in light of additional information provided by the appellant as part of an appeal, or following an additional automated or manual review of the content.

Policies in scope:
  • Destination Requirements (Insufficient Original Content);
  • Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
  • Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).

Country Number of Ads Appeals Number of Successful Appeals Number of Failed Appeals
Austria 38,933 17,857 21,076
Belgium 36,390 11,707 24,683
Bulgaria 32,515 12,766 19,749
Croatia 11,393 3,269 8,124
Cyprus 190,226 64,443 125,783
Czech Republic 90,148 25,832 64,316
Denmark 32,482 16,230 16,252
Estonia 37,309 13,246 24,063
Finland 10,533 6,872 3,661
France 207,588 44,094 163,494
Germany 293,021 83,235 209,786
Greece 23,996 7,689 16,307
Hungary 29,374 12,891 16,483
Ireland 31,397 6,997 24,400
Italy 538,950 66,743 472,207
Latvia 24,049 7,896 16,153
Lithuania 156,934 27,714 129,220
Luxembourg 4,400 1,048 3,352
Malta 25,669 9,221 16,448
Netherlands 170,283 81,051 89,232
Poland 189,587 86,444 103,143
Portugal 16,658 4,815 11,843
Romania 56,164 27,616 28,548
Slovakia 11,569 6,911 4,658
Slovenia 90,015 20,156 69,859
Spain 166,245 49,507 116,738
Sweden 116,774 64,384 52,390
Iceland 221 151 70
Liechtenstein 1,673 112 1,561
Norway 14,146 4,448 9,698
Total EU 2,632,602 780,634 1,851,968
Total EEA 2,648,642 785,345 1,863,297

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

Google Advertising works across industry and with civil society to facilitate the flow of information relevant to tackling disinformation. For example, Google participates in the EU Code of Conduct on Disinformation Permanent Task-force’s dedicated Working Groups, such as the Working Group on elections, which involves civil society and Industry Signatories.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

Google takes part in the EU Code of Conduct on Disinformation Permanent Task-force’s Working Group on elections - as mentioned in response to QRE 3.1.1. In addition, Google’s Threat Intelligence Group (GTIG) continues to engage with other Industry Signatories to the Code in order to stay abreast of cross-platform deceptive practices, such as operations leveraging fake or impersonated accounts.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google Advertising frequently engages with third-party organisations in order to explain, collect feedback on, and improve Google Advertising policies. Google Advertising has also exchanged views with experts at numerous policy roundtables, conferences, and workshops - both in Brussels and in the EU capitals.

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in the reporting period, Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training – with a strong focus on helping people navigate AI-generated content.

Mitigations in place

Across Google, various teams support democratic processes by connecting people to election information, like practical tips on how to register to vote, or by providing high-quality information about candidates. In 2025, a number of key elections took place around the world, and across the EU in particular: during the reporting period, voters cast their votes in Moldova, the Czech Republic, Portugal, Ireland and the Netherlands. Across its efforts, Google also had an increased focus on the role of artificial intelligence (AI) and the part it can play in the disinformation landscape, while also leveraging AI models to augment Google’s abuse-fighting efforts.

Safeguarding Google platforms and disrupting the spread of disinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat disinformation. 
  • Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
  • Working with the wider ecosystem: Since Google’s inaugural commitment of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 121 projects have been funded across 28 countries so far.

Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that have helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include: 
  • Ads disclosures: Google expanded its Political Content Policies in November 2023 to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content. In September 2025, Google updated its Political Content Policies to restrict political advertising in the European Union.
  • Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm — and YouTube requires creators to disclose when they have created realistic altered or synthetic content, and will display a label that indicates for people when the content they are watching is synthetic. For sensitive content, including election related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
  • Providing users with additional context: 'About This Image' in Search helps people assess the credibility and context of images found online. Google continues to look at ways to integrate integrity signals more directly throughout the Search experience, with a view to enhancing the user experience and providing users with the context needed to make informed decisions about the information they see online. For example, Google is exploring embedding image provenance into Google Search features in order to enable users to check image provenance more seamlessly.
  • Industry collaboration: Google is a member of the Coalition for Content Provenance and Authenticity (C2PA) and its standard, a cross-industry effort to help provide more transparency and context for people on AI-generated content.

Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways in which Google makes it easy for people to find what they need, all of which were deployed during the elections that took place across the EU in 2025: 
  • High-quality Information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
  • Ongoing transparency on Election Ads: Starting September 2025, Google restricted political advertising in the European Union under new regulations. Since mid-August 2025, advertisers have been asked to declare whether they intend to run political advertising. EU Election Ads previously shown in the Political Ads Transparency Report will remain publicly accessible in the Ads Transparency Centre, subject to retention policies.

Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services. 
  • Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
  • Tackling coordinated influence operations: Google’s Threat Intelligence Group (GTIG) helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share threat information and suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.

Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Conduct on Disinformation. 

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 50.1.1

Please see the ‘Scrutiny of Ads Placement’ section below.

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2

Please see the ‘Scrutiny of Ads Placement’ section below.

Rationale - 50.1.3

Please see the ‘Scrutiny of Ads Placement’ section below.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.2.1

Political Content Policy

Google stopped serving political advertising in the EU before the EU’s Transparency and Targeting of Political Advertising (TTPA) Regulation entered into force in October 2025.

Description of intervention - 50.2.2

In 2019, Google launched the EU Election Ads Policy, which required advertisers wishing to run EU Election Ads in the EU to complete a two-step verification process.

Once an advertiser had completed EU Election Ads verification, all their EU Election Ads contained a disclosure that identified who paid for the ad. All EU Election Ads run by verified EU election advertisers in the EU were also subject to targeting restrictions. To provide transparency for users, Google published a Political Advertising transparency report and a political ads library. Only ads that were in scope of the Election Ads Policy, and that were run by verified election advertisers, were included in the report. 

In July 2024, Google updated the Disclosure requirements for synthetic content under the Political Content Policy, requiring advertisers to disclose election ads that contained synthetic or digitally altered content that inauthentically depicted real or realistic-looking people or events. 

In June 2024, Google updated the policy for EU Election Ads to include restrictions in Italy that required advertisers to comply with applicable local electoral laws, including pausing ads as required during periods defined by law as silence periods. Google did not allow EU Election Ads, as defined by Ads’ policies, to serve in Italy during a silence period.

In September 2025, Google updated the Political Content Policy to include a regional restriction for the EU. Under this new restriction (as well as the previous Election Ads Policy), forms of political advertising, as defined by EU Regulation 2024/900, are no longer permitted to serve on Google platforms in EU countries.

Indication of impact - 50.2.3

No applicable metrics to report at this time.

Specific Action applied - 50.2.4

Misrepresentation Policy

Description of intervention - 50.2.5

AdSense policies that publishers must adhere to, and that disrupt the monetisation incentives of malicious and misrepresentative actors related to politics in the AdSense ecosystem, include the Manipulated Media and Deceptive Practices policies.

Google Ads provides a way for advertisers and businesses to reach new customers as they search on Google for words related to an advertiser’s business, or browse websites with related themes. However, Google Ads enforces policies that do not allow ads or destinations related to politics that display Inappropriate Content or Misrepresentation. Policies that prohibit political ads and destinations that display Inappropriate Content include the Sensitive Events Policy and the Hacked Political Materials Policy. Policies that prohibit political ads and destinations that display Misrepresentation include the Coordinated Deceptive Practices and Manipulated Media Policies.

In March 2024, Google Advertising updated the Unacceptable business practices portion of the Misrepresentation Policy to include enticing users to part with money or information by impersonating or falsely implying affiliation with or endorsement by a public figure, brand, or organisation. Google Advertising began enforcing this policy in March 2024 for advertisers outside of France. For advertisers in France, Google Advertising began enforcing this policy in April 2024. The reason for this was that toward the end of 2023 and into 2024, Google Advertising faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deep fakes. When Google Advertising detected this threat, it created a dedicated team to respond immediately. It also pinpointed patterns in the bad actors’ behaviour, trained its automated enforcement models to detect similar ads and began removing them at scale. Google Advertising also updated its Misrepresentation Policy to better enable it to rapidly suspend the accounts of bad actors.

In October 2025, the Google Ads Misrepresentation policy concerning Dishonest Pricing Practices was updated to ensure greater transparency and prevent user deception. These updates require advertisers to clearly and conspicuously disclose the payment model or full expense that a user will bear, and prohibit pricing practices that create a false or misleading impression of the cost of a product or service, leading to inflated or unexpected charges.

Indication of impact - 50.2.6

Please refer to SLI 2.3.1 for metrics related to these policies.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War in Ukraine
Overview
In response to the ongoing war in Ukraine, which has continued through 2025, Google remains committed to helping by providing cybersecurity and humanitarian assistance, and by providing high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:

  1. Continued online services manipulation and coordinated influence operations;
  2. Advertising and monetisation linked to state-backed disinformation about Russia and Ukraine;
  3. Threats to security and protection of digital infrastructure.


Israel-Gaza conflict
Overview
In response to the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure platforms and partnerships are responsive to the current crisis, and counter the threat of disinformation. Google identified a few areas of focus for addressing the ongoing crisis:

  • Humanitarian and relief efforts;
  • Platforms and partnerships to protect our services from coordinated influence operations, hate speech, and graphic and terrorist content.

Mitigations in place

War in Ukraine
The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.

1. Online services manipulation and malign influence operations
Google’s Threat Intelligence Group (GTIG) is helping Ukraine by monitoring the threat landscape in Eastern Europe and disrupting coordinated influence operations from Russian threat actors. 

2. Advertising and monetisation linked to Russia and Ukraine disinformation
During the reporting period, Google continued to pause the majority of commercial activities in Russia – including ads serving in Russia via Google demand and third-party bidding, ads on Google’s properties and networks globally for all Russian-based advertisers, AdSense ads on state-funded media sites, and monetisation features for YouTube viewers in Russia. Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube are still operating in Russia. Google will continue to closely monitor developments.

3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats. 

GTIG has been tracking threat actors, both before and during the war, and sharing their findings publicly and with law enforcement. GTIG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, nonprofit organisations, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns. 

Future measures
Google aims to continue the following approach when responding to future crisis situations: 
  • Elevate access to high-quality information across Google services;
  • Protect Google users from harmful disinformation;
  • Continue to monitor and disrupt cyber threats;
  • Explore ways to provide assistance to support the affected areas more broadly.

Google will continue to monitor the situation and take additional action as needed.


Israel-Gaza conflict
Humanitarian and relief efforts
Google.org has provided more than $18 million to nonprofits providing relief to civilians affected in Israel and Gaza. This includes more than $11 million raised globally by Google employees with company match and $1 million in donated Search Ads to nonprofits so they can better connect with people in need and provide information to those looking to help. We also provided $6 million in Google.org grant funding, including $3 million provided to Natal, an apolitical nonprofit organisation focused on psychological treatment of victims of trauma. The remaining funds were provided to organisations focused on humanitarian aid and relief in Gaza, including $1 million to Save the Children, $1 million to the Palestinian Red Crescent and $1 million to International Medical Corps.

Specifically, Google’s humanitarian and relief efforts with these organisations include: 
  • Natal – Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With our funding, they were able to scale their support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support to direct and indirect victims of trauma due to terror and war in Israel. 
  • [See two-year detailed report] After more than two years and thanks to Google’s support, International Medical Corps continues to deliver lifesaving health and humanitarian services across Gaza. In addition to the two field hospitals they have been operating in Deir al-Balah and Al Zawaida, they announced that they opened a third field hospital in Gaza City in November 2025, significantly expanding access to critical care for civilians in the north. As of late January 2026, International Medical Corps has: 
    • Provided 533,119 outpatient consultations;
    • Performed more than 19,771 surgeries;
    • Supported 9,238 deliveries, including 1,930 caesarean sections;
    • Screened 154,473 children under 5 and pregnant and lactating women for malnutrition; and much more. 


Platforms and partnerships
As the conflict continues, Google is committed to tackling disinformation, hate speech, graphic content and terrorist content by continuing to find ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts including emergency translations, and localising Google content to help users, businesses and nonprofit organisations. Google has also pledged to help its partners in these extraordinary circumstances. For example, when schools closed in October 2023, the Ministry of Education in Israel used Meet as their core teach-from-home platform and Google provided support. Google has been in touch with Gaza-based partners and participants in its Palestine Launchpad program, its digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 51.1.1

War in Ukraine: Enforcement of existing policies

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2

War in Ukraine: Google Ads continued to enforce all Google Ads policies during the war in Ukraine, including its Sensitive Events Policy.

Rationale - 51.1.3

War in Ukraine: No changes to Ads policies and to Terms and Conditions were made as a result of the war in Ukraine during this reporting period. Google Ads continues to enforce all Google Ads policies, including the ones mentioned in this report. 

Policy - 51.1.4

Israel-Gaza conflict: Enforcement of existing policies

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5

Israel-Gaza conflict: Google Ads continued to enforce all Google Ads policies during the Israel-Gaza conflict.

Rationale - 51.1.6

Israel-Gaza conflict: No changes to Ads policies were made as a result of the Israel-Gaza conflict. Google Advertising continues to enforce all Google Ads policies, including the ones mentioned in this report.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.2.1

War in Ukraine: Google Ads enforces the Misrepresentation Policy, which focuses on ensuring ads are honest and transparent, providing users with the information needed to make informed decisions. This policy covers various forms of deception, including unacceptable business practices and misleading representations.

Description of intervention - 51.2.2

War in Ukraine: Specifically for the war in Ukraine, Google Ads focused on the Manipulated Media sub-category in the Misrepresentation Policy which disallows the practice of deceptively doctoring media related to politics, social issues or matters of public concern.

Google Ads also enforced the Clickbait Ads Policy which is a sub-category under the Misrepresentation Policy. This policy prohibits ads that use clickbait tactics or sensationalist text or imagery to drive traffic. 

Indication of impact - 51.2.3

War in Ukraine: Please refer to SLI 2.3.1 for more details on Google Ads Misrepresentation Policy, including Manipulated Media and Clickbait Ads sub-categories.

Specific Action applied - 51.2.4

War in Ukraine: As noted above, Google Ads enforces the Sensitive Events Policy which does not allow ads that potentially profit from or exploit a sensitive event with significant social, cultural, or political impact, such as civil emergencies, natural disasters, public health emergencies, terrorism and related activities, conflict, or mass acts of violence.

Description of intervention - 51.2.5

War in Ukraine: In March 2022, due to the war in Ukraine, Google Ads enforced the Sensitive Events Policy and paused ads on pages containing content that exploits, dismisses, or condones the invasion. This was in addition to the pausing of ads from and on Russian Federation state-funded media in February 2022. 

Indication of impact - 51.2.6

War in Ukraine: Google Advertising remains vigilant in enforcing all relevant policies, including the Sensitive Events Policy, related to the war in Ukraine.

Specific Action applied - 51.2.7

War in Ukraine: Enforces the Inappropriate Content Policy which does not allow ads or destinations that display shocking content or that promote hatred, intolerance, discrimination, or violence.

Description of intervention - 51.2.8

War in Ukraine: Due to the war in Ukraine, Google Ads focused on enforcing the Dangerous or Derogatory and Shocking Content sub-categories of the Inappropriate Content Policy. The Dangerous or Derogatory sub-category does not allow content that incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or any other characteristic that is associated with systemic discrimination or marginalisation. The Shocking Content sub-category does not allow promotions containing violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma.

Indication of impact - 51.2.9

War in Ukraine: Please refer to SLI 2.3.1 for more details on Google Ads Inappropriate Content Policy.

Specific Action applied - 51.2.10

War in Ukraine: Enforces the Other Restricted Businesses Policy which restricts certain kinds of businesses from advertising with Google Ads to prevent users from being exploited, even if individual businesses appear to comply with other policies.

Description of intervention - 51.2.11

War in Ukraine: In order to protect users, Google Ads specifically focused on enforcing the Government Documents and Official Services Policy which disallows the promotion of documents and/or services that facilitate the acquisition, renewal, replacement or lookup of official documents or information that are available directly from a government or government delegated provider.

Indication of impact - 51.2.12

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.13

War in Ukraine: Enforces policies, such as the Misleading Representation Policy and the Dangerous or Derogatory Content Policy, on AdSense that aim to prohibit the monetisation of content that has misleading representation, unreliable and harmful claims, deceptive practices, manipulated media, or is deemed dangerous or derogatory.

Description of intervention - 51.2.14

War in Ukraine: Google AdSense will continue to monitor and prevent monetisation of content that violates these policies. 

Indication of impact - 51.2.15

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.16

War in Ukraine: Paused Google AdSense’s monetisation of Russian Federation state-funded media.

Description of intervention - 51.2.17

War in Ukraine: Beginning in February 2022, Google AdSense prohibited the monetisation of any Russian Federation state-funded media (i.e. sites, apps, YouTube channels). It is important to note that Google’s current Publisher Policies and advertiser-friendly guidelines already prohibited many forms of content related to the war in Ukraine from monetising. In addition, Google Advertising paused the monetisation of content that exploits, dismisses, or condones the invasion across services.

Indication of impact - 51.2.18

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.19

War in Ukraine: Paused the ability of Russian-based publishers to monetise with AdSense, AdMob, and Ad Manager.

Description of intervention - 51.2.20

War in Ukraine: In August 2024, due to ongoing developments in Russia, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager.

Indication of impact - 51.2.21

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.22

War in Ukraine: Paused ads from and for Russian Federation state-funded media since February 2022.

Description of intervention - 51.2.23

War in Ukraine: Google also paused ads from and for Russian Federation state-funded media.

Indication of impact - 51.2.24

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.25

War in Ukraine: Enforced the Coordinated Deceptive Practices Policy, which prohibits advertisers from promoting content related to matters of public concern while misrepresenting or concealing their identity or country of origin. 

Description of intervention - 51.2.26

War in Ukraine: Accounts found to be engaging in Coordinated Deceptive Practices are suspended immediately and without prior warning. 

Clickbait ads are disapproved upon detection. Repeated violations of this policy can lead to an account suspension.

Indication of impact - 51.2.27

War in Ukraine: No applicable metrics to report at this time.

Specific Action applied - 51.2.28

Israel-Gaza conflict: Google AdSense enforces the Dangerous or Derogatory Content Policy which does not allow monetisation of content that incites hatred against, promotes discrimination of, or disparages an individual or group of people on the basis of their race or ethnic origin, religion, or nationality.

Description of intervention - 51.2.29

Israel-Gaza conflict: In order to protect users and advertisers, Google requires that all publishers comply with Google Publisher Policies in order to monetise on AdSense. 

Due to the Israel-Gaza conflict, Google AdSense focused on enforcing the Dangerous or Derogatory Content Policy. Under this policy, Google AdSense does not allow monetisation of content that incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalisation. Nor is content allowed that harasses, intimidates, or bullies an individual or group of individuals. In addition, content that threatens or advocates for physical or mental harm to oneself or others is also not allowed. Google also does not allow content that seeks to exploit others, such as through extortion or blackmail.

Indication of impact - 51.2.30

Israel-Gaza conflict: No applicable metrics to report at this time.

Specific Action applied - 51.2.31

Israel-Gaza conflict: Implementation of a Sensitive Event

Description of intervention - 51.2.32

Israel-Gaza conflict: Since 7 October 2023, Google Ads has taken several measures across its platforms in response to the Israel-Gaza conflict, including implementing a sensitive event to help prevent exploitative ads around this conflict. Google’s mission to elevate high-quality information and enhance information quality across its services is of utmost importance, and Google Ads has rigorously enforced its policies and will continue to do so.

Google Ads often institutes sensitive events following natural disasters or other tragic events. When a sensitive event is declared, Google Ads does not allow ads that exploit or capitalise on these tragedies.

Google does not allow ads that potentially profit from or exploit a sensitive event with significant social, cultural, or political impact, such as civil emergencies, natural disasters, public health emergencies, terrorism and related activities, conflict, or mass acts of violence. Google does not allow ads that claim victims of a sensitive event were responsible for their own tragedy or similar instances of victim blaming, or ads that claim victims of a sensitive event are not deserving of remedy or support.

Indication of impact - 51.2.33

Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy. 

Specific Action applied - 51.2.34

Israel-Gaza conflict: Within the Inappropriate Content Policy, Google Advertising does not allow Shocking Content.

Description of intervention - 51.2.35

Israel-Gaza conflict: Under this policy, Google does not allow promotions containing:
  • Violent language, gruesome or disgusting imagery, or graphic images or accounts of physical trauma;
  • Gratuitous portrayals of bodily fluids or waste;
  • Obscene or profane language;
  • Content that is likely to shock or scare.

Indication of impact - 51.2.36

Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy.

Specific Action applied - 51.2.37

Israel-Gaza conflict: Google Advertising enforces the Misrepresentation Policy, which includes Clickbait ads.

Description of intervention - 51.2.38

Israel-Gaza conflict: Google does not allow ads that use clickbait tactics or sensationalist text or imagery to drive traffic. Google also does not allow ads that use negative life events such as death, accidents, illness, arrests or bankruptcy to induce fear, guilt or other strong negative emotions to pressure the viewer to take immediate action.

Indication of impact - 51.2.39

Israel-Gaza conflict: See SLI 2.3.1 for metrics on this policy.

Specific Action applied - 51.2.40

Israel-Gaza conflict: No changes to the enforcement of Ads Policies as a result of the Israel-Gaza conflict.

Description of intervention - 51.2.41

Israel-Gaza conflict: To ensure a safe and positive experience for users, Google requires that advertisers comply with all applicable laws and regulations in addition to the Google Ads policies. Ads, assets, destinations, and other content that violate these policies can be blocked on the Google Ads platform and associated networks. Google Ads policy violations can lead to ad or asset disapproval, or account suspension. 

Indication of impact - 51.2.42

Israel-Gaza conflict: No applicable metrics to report at this time.

Specific Action applied - 51.2.43

Israel-Gaza conflict: Teams across the company are dedicating resources as part of an urgent escalations workforce to respond to the Israel-Gaza conflict and take quick measures as needed.

Description of intervention - 51.2.44

Israel-Gaza conflict: Google Advertising invests heavily in the enforcement of its policies. Google Advertising has a team of thousands working around the clock to create and enforce its policies at scale. 

Indication of impact - 51.2.45

Israel-Gaza conflict: No applicable metrics to report at this time.