
Report March 2025
Your organisation description
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.5 Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
Google Ads also provides advertisers with additional controls to help them exclude types of content that, while in compliance with AdSense policies, may not fit their brand or business. These controls let advertisers apply content filters or exclude certain types of content or terms from their video, display, and search ad campaigns. Advertisers can exclude content categories such as politics, news, sports, beauty, and fashion, among many others. These categories are listed in the Google Ads Help Centre.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA.
- Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
- Google Ads display and search click measurement methodology and AdSense ad-serving technologies adhere to industry standards for click measurement.
- Google Ads video impression and video viewability measurement, as reported in the Video Viewability Report, adheres to industry standards for video impression and viewability measurement.
- The processes supporting these technologies are accurate. This applies to Google’s measurement technology, which is used across all device types (desktop, mobile, and tablet) in both browser and mobile app environments.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
Since April 2021, advertisers have been able to use dynamic exclusion lists that can be updated seamlessly and continuously over time. These lists can be created by advertisers themselves or by a third party they trust, such as brand safety organisations and industry groups. Once advertisers upload a dynamic exclusion list to their Google Ads account, they can schedule automatic updates as new web pages or domains are added, ensuring that their exclusion lists remain effective and up to date.
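As a minimal sketch of the workflow described above, the example below assembles a combined exclusion list from an advertiser's own entries and a third-party list. The file layout (one domain or page URL per line), the third-party URL, and all helper names are assumptions made for illustration; they are not taken from Google Ads documentation.

```python
"""Illustrative sketch: assembling a dynamic exclusion list.

Assumption (not from Google documentation): the list is a plain-text file
with one domain or page URL per line, regenerated on a schedule so that the
copy used by the advertiser account stays current.
"""
from urllib.request import urlopen

# Hypothetical inputs for illustration only.
OWN_EXCLUSIONS = {"example-lowquality.com", "example.org/known-bad-page"}
THIRD_PARTY_LIST_URL = "https://brand-safety.example/exclusions.txt"

def fetch_third_party_exclusions(url: str) -> set[str]:
    """Download a third-party exclusion list (one entry per line, '#' comments ignored)."""
    with urlopen(url) as response:
        lines = response.read().decode("utf-8").splitlines()
    return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

def build_exclusion_file(path: str) -> None:
    """Merge own and third-party exclusions and write the combined, de-duplicated list."""
    entries = OWN_EXCLUSIONS | fetch_third_party_exclusions(THIRD_PARTY_LIST_URL)
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(sorted(entries)) + "\n")

if __name__ == "__main__":
    build_exclusion_file("dynamic_exclusion_list.txt")
```

Regenerating this file on a schedule is one way to keep an uploaded list current without manual edits in the advertiser account.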
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
SLI 1.6.1
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In July 2024, Google updated the disclosure requirements for synthetic content under the Political Content Policy. Advertisers are now required to disclose election ads that contain synthetic or digitally altered content inauthentically depicting real or realistic-looking people or events, by selecting the checkbox in the ‘Altered or synthetic content’ section of their campaign settings. For certain ad formats, Google then generates an in-ad disclosure based on that checkbox (a purely illustrative sketch of this disclosure logic follows this list).
- In February 2024, Google joined the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content. Google has since announced that it has begun integrating C2PA metadata into its ads systems and aims to use C2PA signals to inform how it enforces key policies.
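A minimal, purely illustrative sketch of the disclosure logic described in the first item above: the field names, the way the rule is encoded, and the generated disclosure text are assumptions made for this example and do not reflect Google's internal systems or the exact in-ad wording.

```python
"""Illustrative sketch of the synthetic-content disclosure logic described above.

All field names and the disclosure text are assumptions for illustration;
they are not taken from Google's systems.
"""
from dataclasses import dataclass

@dataclass
class Ad:
    is_election_ad: bool
    has_synthetic_or_altered_content: bool   # synthetic or digitally altered content
    depicts_real_people_or_events: bool      # inauthentic depiction of real or realistic-looking people or events
    altered_content_checkbox: bool           # set by the advertiser in campaign settings

def disclosure_required(ad: Ad) -> bool:
    """Disclosure applies to election ads with synthetic or digitally altered
    content that inauthentically depicts real or realistic-looking people or events."""
    return (ad.is_election_ad
            and ad.has_synthetic_or_altered_content
            and ad.depicts_real_people_or_events)

def in_ad_disclosure(ad: Ad) -> str | None:
    """If the advertiser has ticked the checkbox, an in-ad disclosure is generated
    for certain formats (format-specific handling omitted; wording is hypothetical)."""
    if disclosure_required(ad) and ad.altered_content_checkbox:
        return "Altered or synthetic content"
    return None
```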
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
Google identifies and takes action on content and sources that contravene the policies mentioned in Measure 2.1 through a combination of:
- Automated mechanisms; and
- Manual reviews performed by human reviewers.
For more information on how the ad review process works, please see the ‘About the ad review process’ page.
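As a purely illustrative sketch (not a description of Google's actual review systems), a combination of automated mechanisms and manual review of the kind listed above is commonly structured so that an automated risk score handles clear-cut cases and ambiguous ones are queued for human reviewers. All function names and thresholds below are assumptions.

```python
"""Illustrative sketch of a combined automated-plus-manual ad review pipeline.

The scoring function, thresholds, and decision labels are assumptions for
illustration only; they are not taken from Google's ad review process.
"""
from typing import Callable

APPROVE_BELOW = 0.2   # hypothetical threshold: low policy-risk score -> auto-approve
REJECT_ABOVE = 0.9    # hypothetical threshold: high policy-risk score -> auto-disapprove

def review_ad(ad_text: str,
              risk_model: Callable[[str], float],
              human_review_queue: list[str]) -> str:
    """Return a decision, escalating uncertain cases to human reviewers."""
    risk = risk_model(ad_text)  # automated mechanism: estimated policy risk in [0, 1]
    if risk < APPROVE_BELOW:
        return "approved"
    if risk > REJECT_ABOVE:
        return "disapproved"
    # Manual review: ambiguous cases are queued for a human reviewer.
    human_review_queue.append(ad_text)
    return "pending_human_review"

if __name__ == "__main__":
    queue: list[str] = []
    # Stand-in risk model used only to make the example runnable.
    decision = review_ad("Miracle cure, guaranteed results!",
                         risk_model=lambda text: 0.95 if "guaranteed" in text else 0.1,
                         human_review_queue=queue)
    print(decision)  # -> "disapproved" with this stand-in model
```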
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
The figures below cover actions taken under the following policies:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
Country | Number of actions taken for Destination Requirements | Number of actions taken for Inappropriate Content | Number of actions taken for Misrepresentation |
---|---|---|---|
Austria | 7,422,101 | 60,174 | 66,717 |
Belgium | 12,660,562 | 59,045 | 116,586 |
Bulgaria | 6,971,115 | 88,399 | 155,851 |
Croatia | 2,727,827 | 34,436 | 37,895 |
Cyprus | 52,668,089 | 113,444 | 963,259 |
Czech Republic | 22,154,687 | 309,514 | 219,848 |
Denmark | 156,943,475 | 136,645 | 395,612 |
Estonia | 2,021,982 | 16,377 | 108,880 |
Finland | 2,956,655 | 43,135 | 60,524 |
France | 196,126,998 | 540,361 | 2,367,010 |
Germany | 131,475,890 | 955,572 | 2,443,336 |
Greece | 2,720,410 | 30,688 | 135,403 |
Hungary | 4,030,059 | 87,838 | 138,459 |
Ireland | 40,613,267 | 1,040,422 | 25,643,951 |
Italy | 55,368,074 | 328,135 | 2,220,113 |
Latvia | 1,961,748 | 49,753 | 127,796 |
Lithuania | 7,357,129 | 149,638 | 198,308 |
Luxembourg | 1,904,111 | 48,285 | 639,716 |
Malta | 2,342,282 | 3,807 | 153,093 |
Netherlands | 75,660,484 | 540,200 | 1,733,070 |
Poland | 19,165,056 | 714,955 | 2,112,907 |
Portugal | 2,438,751 | 44,576 | 183,139 |
Romania | 5,415,231 | 118,864 | 343,813 |
Slovakia | 3,671,184 | 32,633 | 101,007 |
Slovenia | 5,550,505 | 28,316 | 53,231 |
Spain | 107,768,933 | 380,582 | 5,457,434 |
Sweden | 19,021,742 | 343,419 | 248,193 |
Iceland | 90,480 | 1,296 | 25,059 |
Liechtenstein | 1,220,132 | 322 | 1,442 |
Norway | 3,432,920 | 18,154 | 128,489 |
Total EU | 949,118,347 | 6,299,213 | 46,425,151 |
Total EEA | 953,861,879 | 6,318,985 | 46,580,141 |
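The aggregate rows in the table above are related in a straightforward way: the EEA total equals the EU total plus the three reported EFTA countries (Iceland, Liechtenstein, and Norway). A short consistency check for the Destination Requirements column, using only figures from the table:

```python
# Consistency check for the SLI 2.3.1 aggregate rows (Destination Requirements column).
# All figures are taken directly from the table above.
eu_total = 949_118_347
efta = {"Iceland": 90_480, "Liechtenstein": 1_220_132, "Norway": 3_432_920}
eea_total = 953_861_879

assert eu_total + sum(efta.values()) == eea_total  # 949,118,347 + 4,743,532 = 953,861,879
```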
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
The figures below cover appeals of decisions taken under the following policies:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
Country | Number of Ads Appeals | Number of Successful Appeals | Number of Failed Appeals |
---|---|---|---|
Austria | 14,234 | 4,207 | 10,027 |
Belgium | 18,261 | 10,279 | 7,982 |
Bulgaria | 15,513 | 4,350 | 11,163 |
Croatia | 5,071 | 2,472 | 2,599 |
Cyprus | 113,836 | 39,665 | 74,171 |
Czech Republic | 46,001 | 9,706 | 36,295 |
Denmark | 48,601 | 32,199 | 16,402 |
Estonia | 14,257 | 6,882 | 7,375 |
Finland | 4,739 | 2,199 | 2,540 |
France | 66,428 | 20,094 | 46,334 |
Germany | 200,343 | 47,937 | 152,406 |
Greece | 3,758 | 1,407 | 2,351 |
Hungary | 15,212 | 5,850 | 9,362 |
Ireland | 23,656 | 13,854 | 9,802 |
Italy | 93,382 | 32,128 | 61,254 |
Latvia | 5,108 | 1,555 | 3,553 |
Lithuania | 55,362 | 23,029 | 32,333 |
Luxembourg | 1,215 | 440 | 775 |
Malta | 31,292 | 8,573 | 22,719 |
Netherlands | 323,775 | 137,084 | 186,691 |
Poland | 141,849 | 37,149 | 104,700 |
Portugal | 14,029 | 5,704 | 8,325 |
Romania | 34,736 | 13,454 | 21,282 |
Slovakia | 8,169 | 6,016 | 2,153 |
Slovenia | 32,944 | 9,524 | 23,420 |
Spain | 130,730 | 36,745 | 93,985 |
Sweden | 39,057 | 12,096 | 26,961 |
Iceland | 68 | 22 | 46 |
Liechtenstein | 1,748 | 248 | 1,500 |
Norway | 3,172 | 1,127 | 2,045 |
Total EU | 1,501,558 | 524,598 | 976,960 |
Total EEA | 1,506,546 | 525,995 | 980,551 |
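SLI 2.4.1 also asks for the proportion of appeals that led to a change of the initial decision; that proportion can be derived from the counts reported above, at the country or aggregate level. For example, using the EEA totals from the table:

```python
# Deriving the proportion of appeals that overturned the initial decision
# from the counts in the SLI 2.4.1 table (EEA totals).
total_appeals = 1_506_546
successful_appeals = 525_995

proportion_overturned = successful_appeals / total_appeals
print(f"{proportion_overturned:.1%}")  # approximately 34.9%
```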
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1 Measure 3.2 Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.