YouTube

Report March 2025

Submitted


Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification;
  • Hack-and-leak operations;
  • Impersonation;
  • Malicious deep fakes;
  • The purchase of fake engagements;
  • Non-transparent paid messages or promotion by influencers;
  • The creation and use of accounts that participate in coordinated inauthentic behaviour;
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

YouTube
YouTube regularly updates its internal systems and processes for detecting content that violates its policies, including through ongoing investment in automated detection systems.

Search & YouTube
In November 2024, Google released a white paper detailing how it is addressing the growing global issue of fraud and scams. In the paper, Google explains that it fights scams and fraud by taking proactive measures to protect users from harm, deliver reliable information, and partner to create a safer internet, through policies and built-in technological protections that help it prevent, detect, and respond to harmful and illegal content. For details on YouTube and Google Search’s approaches to tackling scams, see the full report here.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Similar to Google Search, YouTube’s systems are designed to connect people with high-quality content.

In addition, YouTube has various policies which set out what is not allowed on YouTube. These policies, which can be accessed via this landing page in YouTube’s Help Centre, address relevant TTPs. Notably, YouTube’s policies tend to be broader than the identified TTPs. As such, related SLIs providing information about actions taken related to the TTP may be overinclusive.

YouTube’s Community Guidelines, commitment to promote high-quality content and curb the spread of harmful misinformation, disclosure requirements for paid product placements, sponsorships & endorsements, and ongoing work with Google’s Threat Analysis Group (TAG) broadly address TTPs: 1, 2, 3, 5, 7, 8, 9, 10, and 11 - and notably, go beyond these TTPs.

In this report, YouTube has provided information relating to TTPs 1, 5, 7 and 9. Removals relating to the remaining TTPs are included, in part or in whole, in the Community Guidelines enforcement report, but YouTube does not have more detailed removal reporting at this time. TTPs do not necessarily map one-to-one onto Community Guidelines, which makes more granular per-TTP reporting challenging.

YouTube continues to assess, evaluate, and update its policies on a regular basis. The latest updated policies, including the Community Guidelines, can be found here.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

YouTube’s approach to combating misinformation involves removing content that violates YouTube’s policies as quickly as possible, raising high quality information in rankings and recommendations, curbing the spread of harmful misinformation, and rewarding trusted, eligible creators and artists. YouTube applies these principles globally, including across the EU. 

A YouTube channel may be permanently terminated if the creator receives three strikes in the same 90-day period, or the channel is determined to be wholly dedicated to violating YouTube’s guidelines (as may be the case with spam accounts). In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. When a channel is terminated, all of its videos are removed.
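As a rough illustration of the strike logic described above, a hypothetical termination check might look like the following sketch (function and parameter names are illustrative assumptions, not YouTube’s actual implementation):

```python
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)  # strikes counted within the same 90-day period

def should_terminate(strike_dates, severe_abuse=False, wholly_violative=False):
    """Illustrative sketch of the termination conditions described above:
    a single case of severe abuse, a channel wholly dedicated to violating
    the guidelines, or three strikes within the same 90-day period."""
    if severe_abuse or wholly_violative:
        return True
    dates = sorted(strike_dates)
    # Any three strikes falling within one rolling 90-day window.
    for i in range(len(dates) - 2):
        if dates[i + 2] - dates[i] <= STRIKE_WINDOW:
            return True
    return False
```

The sketch covers only the decision itself; as noted above, actual termination also removes all of the channel’s videos.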

A user’s channel may be turned off or restricted from using any YouTube features. If this happens, the user is prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on their YouTube channel. Violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all their existing YouTube channels, any new channels that they create or acquire, and channels in which they are repeatedly or prominently featured.

YouTube uses a combination of people and machine learning to detect problematic content automatically and at scale. Machine learning is well-suited to detect patterns, including harmful misinformation, which helps YouTube find content similar to other content that YouTube has already removed, even before it is viewed. Every quarter, YouTube publishes data in the Community Guidelines enforcement report about removals that were first detected by automated means. 

YouTube’s Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure YouTube’s teams are prepared to address them before they can become a larger issue.

In addition, Google’s Threat Analysis Group (TAG) and Google and YouTube’s Trust and Safety Teams are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.

YouTube continues to invest in automated detection systems, and relies on both human evaluators and machine learning to train its systems on new data. YouTube’s engineering teams also continue to update and improve these detection systems regularly. YouTube aims to leverage an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives its main classifier does not catch. Over time, this will continue to make YouTube faster and more accurate at catching viral misinformation narratives.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube enforces a broad range of policies to help build a safer community. These policies include, but are not limited to, YouTube’s Community Guidelines, which include policies covering Spam, Scams, and Deceptive Practices, Impersonation Policy and Fake Engagement Policy. YouTube applies these policies globally, including across the EEA Member States.

Implementing and enforcing YouTube policies
In general, enforcement of YouTube’s policies is a joint effort between people and machine learning technology. YouTube starts by giving its most experienced team of content moderators enforcement guidelines (detailed explanations of what makes content violative and non-violative), and asks them to differentiate between violative and non-violative material. If the new guidelines allow them to achieve a very high level of accuracy, YouTube expands the testing group to include hundreds of moderators across different backgrounds, languages and experience levels. 

YouTube may then begin revising the guidelines so that they can be accurately interpreted across a larger, more diverse set of moderators; this process is complete only once the group reaches a similarly high degree of accuracy. These findings then help train YouTube’s machine learning technology to detect potentially violative content at scale. As with its content moderators, YouTube also tests its models to understand whether it has provided enough context for them to make accurate assessments about what to surface for people to review.

Once models are trained to identify potentially violative content, the role of content moderators remains essential throughout the enforcement process. Machine learning identifies potentially violative content at scale and nominates content that may be against YouTube’s Community Guidelines for human review. Content moderators then help confirm or deny whether the content should be removed.

This collaborative approach helps improve the accuracy of YouTube’s models over time, as models continuously learn and adapt based on content moderator feedback. It also means YouTube’s enforcement systems can manage the sheer scale of content that is uploaded to YouTube, while still digging into the nuances that determine whether a piece of content is violative.

For TTPs 1, 5, 7 and 9, YouTube provides details around mapping to its policies. To learn more about these methodologies, refer to SLI 14.2.1, SLI 14.2.2, and SLI 14.2.4.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

TTP 1

(1) Number of channels for TTP 1, identified for potential removal by EEA Member State for reporting period H2 2024 (1 July 2024 to 31 December 2024);
(2) Number of removals of channels for TTP 1 by EEA Member State for reporting period H2 2024.

TTP 5

(3) Number of channels for TTP 5, identified for potential removal by EEA Member State for reporting period H2 2024 (1 July 2024 to 31 December 2024);
(4) Number of removals of channels for TTP 5 by EEA Member State for reporting period H2 2024;
(5) Number of videos for TTP 5, identified for potential removal by EEA Member State for reporting period H2 2024;
(6) Number of removals of videos for TTP 5 by EEA Member State for reporting period H2 2024.

TTP 7

(7) Number of videos for TTP 7, identified for potential removal, by EEA Member State for reporting period H2 2024 (1 July 2024 to 31 December 2024);
(8) Number of removals of videos for TTP 7, by EEA Member State for reporting period H2 2024.

TTP 9

(9) Number of channels for TTP 9, identified for potential removal by EEA Member State for reporting period H2 2024 (1 July 2024 to 31 December 2024);
(10) Number of removals of channels for TTP 9 by EEA Member State for reporting period H2 2024;
(11) Number of videos for TTP 9, identified for potential removal by EEA Member State for reporting period H2 2024;
(12) Number of removals of videos for TTP 9 by EEA Member State for reporting period H2 2024.


Where possible, each TTP has been mapped to relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline. In addition, the number of removals may represent an overcount, as the respective Community Guidelines may cover more policy-violative activity than is identified by the TTP alone.

YouTube’s Community Guidelines, commitment to promote high-quality content and curb the spread of harmful misinformation, disclosure requirements for paid product placements, sponsorships & endorsements, and ongoing work with Google’s Threat Analysis Group (TAG) broadly address TTPs: 1, 2, 3, 5, 7, 8, 9, 10, and 11 - and notably, go beyond these TTPs.

In this report, YouTube has provided information relating to TTPs 1, 5, 7 and 9. Removals relating to the remaining TTPs are included, in part or in whole, in the Community Guidelines enforcement report, but YouTube does not have more detailed removal reporting at this time. TTPs do not necessarily map one-to-one onto Community Guidelines, which makes more granular per-TTP reporting challenging.

YouTube continues to assess, evaluate, and update its policies on a regular basis. The latest updated policies, including the Community Guidelines, can be found here.

Country TTP OR ACTION 1 - Number of channels identified TTP OR ACTION 1 - Number of channels removed TTP OR ACTION 5 - Number of channels identified TTP OR ACTION 5 - Number of channels removed TTP OR ACTION 5 - Number of videos identified TTP OR ACTION 5 - Number of videos removed TTP OR ACTION 7 - Number of videos identified TTP OR ACTION 7 - Number of videos removed TTP OR ACTION 9 - Number of channels identified TTP OR ACTION 9 - Number of channels removed TTP OR ACTION 9 - Number of videos identified TTP OR ACTION 9 - Number of videos removed
Austria 3,789 3,789 170 170 4 4 19 19 90 90 67 67
Belgium 2,193 2,193 237 237 1 1 38 38 144 144 81 81
Bulgaria 1,335 1,335 145 145 2 2 26 26 74 74 30 30
Croatia 505 505 66 66 0 0 4 4 36 36 13 13
Cyprus 450 450 51 51 2 2 26 26 68 68 47 47
Czech Republic 1,830 1,830 159 159 7 7 19 19 139 139 183 183
Denmark 633 633 65 65 12 12 8 8 92 92 54 54
Estonia 213 213 27 27 0 0 3 3 17 17 19 19
Finland 686 686 90 90 45 45 19 19 58 58 38 38
France 14,635 14,635 1,277 1,277 237 237 236 236 603 603 394 394
Germany 14,157 14,157 1,749 1,749 1,440 1,440 338 338 947 947 641 641
Greece 3,831 3,831 124 124 6 6 96 96 115 115 59 59
Hungary 1,945 1,945 129 129 2 2 10 10 82 82 23 23
Ireland 5,823 5,823 116 116 102 102 55 55 61 61 101 101
Italy 5,651 5,651 828 828 250 250 85 85 319 319 177 177
Latvia 611 611 46 46 0 0 8 8 34 34 45 45
Lithuania 10,170 10,170 49 49 10 10 5 5 142 142 26 26
Luxembourg 429 429 20 20 0 0 5 5 13 13 8 8
Malta 193 193 13 13 0 0 2 2 17 17 5 5
Netherlands 5,055 5,055 406 406 195 195 143 143 556 556 866 866
Poland 13,947 13,947 697 697 6 6 45 45 1,040 1,040 982 982
Portugal 2,488 2,488 165 165 50 50 27 27 146 146 14 14
Romania 3,062 3,062 460 460 5 5 27 27 235 235 96 96
Slovakia 583 583 47 47 4 4 5 5 53 53 49 49
Slovenia 306 306 25 25 0 0 2 2 39 39 30 30
Spain 3,812 3,812 907 907 177 177 125 125 413 413 133 133
Sweden 1,642 1,642 211 211 10 10 45 45 139 139 110 110
Iceland 211 211 9 9 0 0 0 0 8 8 4 4
Liechtenstein 6 6 0 0 0 0 0 0 0 0 0 0
Norway 1,389 1,389 97 97 19 19 20 20 122 122 74 74
Total EU 99,974 99,974 8,279 8,279 2,567 2,567 1,421 1,421 5,672 5,672 4,291 4,291
Total EEA 101,580 101,580 8,385 8,385 2,586 2,586 1,441 1,441 5,802 5,802 4,369 4,369

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

TTP 1

Methodology
(1) Views threshold on video removals for TTP 1 by EEA Member State for reporting period H2 2024;
(2) Interaction/engagement before action for TTP 1 by EEA Member State for reporting period H2 2024;
(3) Views/impressions after action for TTP 1 by video by EEA Member State for reporting period H2 2024;
(4) Interaction/engagement after action for TTP 1 by EEA Member State for reporting period H2 2024.

Where possible, each TTP has been mapped to relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

For SLI 14.2.2 (3): actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) N/A;
(2) N/A;
(3) N/A;
(4) N/A. 


TTP 5

Methodology
(1) Views threshold on video removals for TTP 5 by EEA Member State for reporting period H2 2024;
(2) Interaction/engagement before action for TTP 5 by EEA Member State for reporting period H2 2024;
(3) Views/impressions after action for TTP 5 by video by EEA Member State for reporting period H2 2024;
(4) Interaction/engagement after action for TTP 5 by EEA Member State for reporting period H2 2024.

Where possible, each TTP has been mapped to relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

For SLI 14.2.2 (3): actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below;
(2) N/A;
(3) Please see table below;
(4) N/A. 


TTP 7

Methodology
(1) Views threshold on video removals for TTP 7 by EEA Member State for reporting period H2 2024;
(2) Interaction/engagement before action for TTP 7 by EEA Member State for reporting period H2 2024;
(3) Views/impressions after action for TTP 7 by video by EEA Member State for reporting period H2 2024;
(4) Interaction/engagement after action for TTP 7 by EEA Member State for reporting period H2 2024.

Where possible, each TTP has been mapped to relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

For SLI 14.2.2 (3): actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below;
(2) N/A;
(3) Please see table below;
(4) N/A.


TTP 9

Methodology
(1) Views threshold on video removals for TTP 9 by EEA Member State for reporting period H2 2024;
(2) Interaction/engagement before action for TTP 9 by EEA Member State for reporting period H2 2024;
(3) Views/impressions after action for TTP 9 by video by EEA Member State for reporting period H2 2024;
(4) Interaction/engagement after action for TTP 9 by EEA Member State for reporting period H2 2024.

Where possible, each TTP has been mapped to relevant YouTube Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

For SLI 14.2.2 (3): actions in this context constitute removal of the videos themselves; therefore, there should be no views after YouTube removes the content.

Response
(1) Please see table below; 
(2) N/A;
(3) Please see table below;
(4) N/A.
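The view-count breakdown reported in the table below can be reproduced with a simple bucketing routine. The bucket edges are taken from the table headers; everything else (names, inputs) is an illustrative assumption, not YouTube’s reporting pipeline:

```python
# Bucket edges assumed from the table headers: 0, 1-10, 11-100, 101-1,000,
# 1,001-10,000, and >10,000 views at the time of removal.
BUCKETS = [
    ("0 views", 0, 0),
    ("1-10 views", 1, 10),
    ("11-100 views", 11, 100),
    ("101-1,000 views", 101, 1_000),
    ("1,001-10,000 views", 1_001, 10_000),
    (">10,000 views", 10_001, float("inf")),
]

def bucket_removals(view_counts):
    """Count removed videos per view-threshold bucket."""
    counts = {label: 0 for label, _, _ in BUCKETS}
    for views in view_counts:
        for label, low, high in BUCKETS:
            if low <= views <= high:
                counts[label] += 1
                break
    return counts
```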

Country TTP OR ACTION 5 - Number of videos removed with 0 views TTP OR ACTION 5 - Number of videos removed with 1-10 views TTP OR ACTION 5 - Number of videos removed with 11-100 views TTP OR ACTION 5 - Number of videos removed with 101-1,000 views TTP OR ACTION 5 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 5 - Number of videos removed with >10,000 views TTP OR ACTION 5 - Views after action TTP OR ACTION 7 - Number of videos removed with 0 views TTP OR ACTION 7 - Number of videos removed with 1-10 views TTP OR ACTION 7 - Number of videos removed with 11-100 views TTP OR ACTION 7 - Number of videos removed with 101-1,000 views TTP OR ACTION 7 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 7 - Number of videos removed with >10,000 views TTP OR ACTION 7 - Views after action TTP OR ACTION 9 - Number of videos removed with 0 views TTP OR ACTION 9 - Number of videos removed with 1-10 views TTP OR ACTION 9 - Number of videos removed with 11-100 views TTP OR ACTION 9 - Number of videos removed with 101-1,000 views TTP OR ACTION 9 - Number of videos removed with 1,001- 10,000 views TTP OR ACTION 9 - Number of videos removed with >10,000 views TTP OR ACTION 9 - Views after action
Austria 1 0 0 2 1 0 0 1 7 6 0 2 3 0 0 1 13 19 13 21 0
Belgium 1 0 0 0 0 0 0 8 9 5 1 10 5 0 0 5 13 23 21 19 0
Bulgaria 1 0 0 0 1 0 0 4 3 5 7 4 3 0 0 1 3 6 10 10 0
Croatia 0 0 0 0 0 0 0 1 2 1 0 0 0 0 0 2 1 3 5 2 0
Cyprus 0 0 0 0 0 2 0 1 5 2 6 8 4 0 0 1 5 11 24 6 0
Czech Republic 1 0 0 1 3 2 0 2 2 7 2 1 5 0 0 2 25 69 61 26 0
Denmark 2 1 8 1 0 0 0 1 5 2 0 0 0 0 0 3 12 16 16 7 0
Estonia 0 0 0 0 0 0 0 0 1 1 0 0 1 0 0 1 2 9 6 1 0
Finland 21 3 3 10 8 0 0 3 8 6 2 0 0 0 0 1 4 20 7 6 0
France 15 10 26 123 62 1 0 28 74 63 23 23 25 0 3 16 58 143 116 58 0
Germany 984 190 88 86 63 29 0 70 95 59 48 35 31 0 3 26 89 266 150 107 0
Greece 1 4 1 0 0 0 0 13 11 10 14 24 24 0 0 1 8 23 13 14 0
Hungary 2 0 0 0 0 0 0 2 4 1 2 1 0 0 0 1 5 5 7 5 0
Ireland 3 1 2 75 21 0 0 11 12 8 5 12 7 0 2 5 18 38 22 16 0
Italy 3 1 109 104 31 2 0 15 29 16 11 7 7 0 1 13 19 58 62 24 0
Latvia 0 0 0 0 0 0 0 1 6 0 0 1 0 0 0 1 7 20 12 5 0
Lithuania 2 3 0 5 0 0 0 0 1 2 0 1 1 0 0 0 2 12 7 5 0
Luxembourg 0 0 0 0 0 0 0 3 0 2 0 0 0 0 0 0 1 5 2 0 0
Malta 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 1 0 1 2 1 0
Netherlands 28 66 27 38 19 17 0 18 36 16 15 20 38 0 6 34 149 349 273 55 0
Poland 1 0 1 0 2 2 0 8 10 11 4 6 6 0 128 258 209 231 105 51 0
Portugal 3 2 27 18 0 0 0 4 12 2 4 3 2 0 0 0 0 7 5 2 0
Romania 3 0 0 2 0 0 0 2 5 5 5 6 4 0 0 6 12 30 23 25 0
Slovakia 0 0 0 2 2 0 0 0 2 0 1 1 1 0 0 5 12 14 13 5 0
Slovenia 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 4 10 9 7 0
Spain 8 7 24 40 35 63 0 14 34 23 28 15 11 0 1 7 18 41 38 28 0
Sweden 1 1 1 4 3 0 0 6 16 10 6 4 3 0 0 2 14 42 36 16 0
Iceland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 1 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 2 1 1 15 0 0 0 1 12 2 2 0 3 0 0 4 10 27 20 13 0
Total EU 1,081 289 317 511 251 118 0 217 389 264 186 184 181 0 144 393 703 1,471 1,058 522 0
Total EEA 1,083 290 318 526 251 118 0 218 401 266 188 184 184 0 144 397 714 1,500 1,079 535 0

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

TTP 1

Views are a measure of penetration/impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and views/impressions on the platform after action has been taken.

TTP 5

Views are a measure of penetration/impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and views/impressions on the platform after action has been taken.

TTP 7

Views are a measure of penetration/impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and views/impressions on the platform after action has been taken.

TTP 9

Views are a measure of penetration/impact on the platform. SLI 14.2.2 provides data on video removals by view threshold and views/impressions on the platform after action has been taken.

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

TTP 1

Methodology
(1) Percentage of TTP 1 channel removals out of all related channel removals by EEA Member State for reporting period H2 2024;
(2) N/A;
(3) N/A.

Response
(1) Please see table below;
(2, 3) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.
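The percentages reported in the table at the end of this SLI are shares of this kind. A minimal sketch of the computation, with two-decimal rounding assumed from the table’s format:

```python
def removal_share(ttp_removals: int, all_related_removals: int) -> float:
    """Percentage of TTP-attributed removals out of all related removals,
    rounded to two decimal places to match the table format (an assumption)."""
    if all_related_removals == 0:
        return 0.0
    return round(100.0 * ttp_removals / all_related_removals, 2)
```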


TTP 5

Methodology
(1) Percentage of TTP 5 channel removals out of all related channel removals by EEA Member State for reporting period H2 2024;
(2) Percentage of TTP 5 video removals out of all related video removals by EEA Member State for reporting period H2 2024;
(3) N/A;
(4) N/A.

Response
(1) Please see table below;
(2) Please see table below;
(3, 4) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.


TTP 7

Methodology
(1) Percentage of TTP 7 video removals out of all related video removals by EEA Member State for reporting period H2 2024;
(2) N/A;
(3) N/A.

Response
(1) Please see table below;
(2, 3) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.


TTP 9

Methodology
(1) Percentage of TTP 9 channel removals out of all related channel removals by EEA Member State for reporting period H2 2024;
(2) Percentage of TTP 9 video removals out of all related video removals by EEA Member State for reporting period H2 2024;
(3) N/A;
(4) N/A.

Response
(1) Please see table below;
(2) Please see table below;
(3, 4) The Community Guidelines enforcement report provides information regarding views on videos before they are removed for Community Guidelines violations.

Country TTP OR ACTION 1 - Percentage of TTP 1 channel removals out of all related channel removals TTP OR ACTION 5 - Percentage of TTP 5 channel removals out of all related channel removals TTP OR ACTION 5 - Percentage of TTP 5 video removals out of all related video removals TTP OR ACTION 7 - Percentage of TTP 7 video removals out of all related video removals TTP OR ACTION 9 - Percentage of TTP 9 channel removals out of all related channel removals TTP OR ACTION 9 - Percentage of TTP 9 video removals out of all related video removals
Austria 43.12% 1.93% 0.03% 0.13% 1.02% 0.46%
Belgium 32.95% 3.56% 0.00% 0.16% 2.16% 0.35%
Bulgaria 27.23% 2.96% 0.01% 0.12% 1.51% 0.14%
Croatia 20.87% 2.73% 0.00% 0.05% 1.49% 0.17%
Cyprus 31.38% 3.56% 0.04% 0.51% 4.74% 0.93%
Czech Republic 28.11% 2.44% 0.02% 0.05% 2.14% 0.51%
Denmark 23.00% 2.36% 0.07% 0.05% 3.34% 0.33%
Estonia 12.63% 1.60% 0.00% 0.05% 1.01% 0.33%
Finland 24.04% 3.15% 0.41% 0.17% 2.03% 0.35%
France 27.01% 2.36% 0.18% 0.18% 1.11% 0.30%
Germany 23.67% 2.92% 0.65% 0.15% 1.58% 0.29%
Greece 9.14% 0.30% 0.04% 0.63% 0.27% 0.38%
Hungary 37.43% 2.48% 0.01% 0.05% 1.58% 0.12%
Ireland 53.86% 1.07% 0.65% 0.35% 0.56% 0.64%
Italy 30.51% 4.47% 0.25% 0.09% 1.72% 0.18%
Latvia 29.59% 2.23% 0.00% 0.09% 1.65% 0.53%
Lithuania 66.89% 0.32% 0.10% 0.05% 0.93% 0.26%
Luxembourg 37.27% 1.74% 0.00% 0.40% 1.13% 0.65%
Malta 31.08% 2.09% 0.00% 0.14% 2.74% 0.36%
Netherlands 21.13% 1.70% 0.27% 0.20% 2.32% 1.18%
Poland 37.06% 1.85% 0.01% 0.05% 2.76% 1.06%
Portugal 30.35% 2.01% 0.18% 0.10% 1.78% 0.05%
Romania 18.71% 2.81% 0.01% 0.03% 1.44% 0.12%
Slovakia 23.10% 1.86% 0.03% 0.04% 2.10% 0.39%
Slovenia 32.83% 2.68% 0.00% 0.07% 4.18% 1.03%
Spain 20.06% 4.77% 0.15% 0.10% 2.17% 0.11%
Sweden 27.20% 3.50% 0.04% 0.16% 2.30% 0.39%
Iceland 40.50% 1.73% 0.00% 0.00% 1.54% 0.38%
Liechtenstein 15.38% 0.00% 0.00% 0.00% 0.00% 0.00%
Norway 30.95% 2.16% 0.14% 0.14% 2.72% 0.54%
Total EU 27.61% 2.29% 0.23% 0.13% 1.57% 0.39%
Total EEA 27.67% 2.28% 0.23% 0.13% 1.58% 0.39%

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

The final list of TTPs agreed within the Permanent Task-force in H2 2022 was used by Signatories as part of their reports from then on, as intended. The Permanent Task-force will continue to examine and update the list as necessary in light of the state of the art. 

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
  • In February 2024, Google joined the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content. Google subsequently collaborated on the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
  • In July 2024 at the Aspen Security Forum, Google, alongside industry peers, introduced the Coalition for Secure AI (CoSAI) to advance comprehensive security measures for addressing the unique risks that come with AI, both for issues that arise in real time and for those over the horizon. The first three areas of focus the coalition will tackle in collaboration with industry and academia are: software supply chain security for AI systems; preparing defenders for a changing cybersecurity landscape; and AI security governance.
  • In September 2024, Google announced a Global AI Opportunity Fund, which will invest $120 million to make AI education and training available in communities around the world. Google will provide this in local languages, in partnership with nonprofits and NGOs.
  • In October 2024, Google released its EU AI Opportunity Agenda, a series of recommendations for governments to seize the full economic and societal potential of AI. The Agenda outlines the need to revisit Europe’s workforce strategy, as well as investment in AI infrastructure and research, adoption and accessibility.
  • In October 2024, the Nobel Prize in Chemistry was awarded to Google DeepMind’s Demis Hassabis and John Jumper for their groundbreaking work on AlphaFold 2, which predicted the structures of nearly all proteins known to science. AlphaFold has been used by more than 2 million researchers around the world, accelerating scientific discovery in important areas like malaria vaccines and cancer treatments.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

YouTube requires that creators disclose when they have created altered or synthetic content that is realistic, including using AI tools. YouTube applies a label to the description panel indicating that some of the content was altered or synthetic, and a more prominent label on the video player for certain types of content about sensitive topics. 

YouTube continually invests in the ability to detect policy-violative accounts and evolves this work accordingly. 

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

All content uploaded to YouTube is subject to its Community Guidelines—regardless of how it is generated.

YouTube’s long-standing Misinformation Policies prohibit content that has been technically manipulated or doctored in a way that misleads users (usually beyond clips taken out of context) and may pose a serious risk of egregious harm. YouTube detects content that violates Community Guidelines using a combination of machine learning and human review. YouTube also has policies on Spam & Deceptive Practices that prohibit, for example, spam, scams, and other deceptive practices that take advantage of the YouTube community, Impersonation, and Fake Engagement.

Refer to QRE 18.2.1 for how YouTube enforces these policies.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

Google’s AI Principles set out its commitment to developing technology responsibly and identify specific application areas it will not pursue. YouTube applies these principles across all its products. 

YouTube’s approach to responsible AI innovation
All content uploaded to YouTube is subject to its Community Guidelines—regardless of how it is generated. 

YouTube requires creators to disclose when they have created altered or synthetic content that is realistic, including using AI tools. YouTube also informs viewers that content may be altered or synthetic in two ways. A label may be added to the description panel indicating that some of the content was altered or synthetic. For certain types of content about sensitive topics, YouTube will apply a more prominent label to the video player. Examples of content that require disclosures can be found here.

YouTube has noted feedback from its community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them. YouTube makes it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using its privacy request process. Not all such content will be removed from YouTube; YouTube considers a variety of factors when evaluating these requests, and some examples can be found here.

Additionally, YouTube has highlighted how it will build responsibility into its AI tools and features for creators. This includes significant, ongoing work to develop guardrails that will prevent its AI tools from generating the type of content that does not belong on YouTube.

YouTube incorporates user feedback to continuously improve protections. Within YouTube, dedicated teams such as the intelligence desk focus specifically on adversarial testing and threat detection to ensure YouTube’s systems meet new challenges as they emerge. All content generated by YouTube’s AI tools will include a SynthID watermark, a tool for watermarking and identifying AI-generated images. Across the industry, Google, including YouTube, continues to help increase transparency around digital content, including through its work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).

Deploying AI technology to power content moderation
YouTube has always used a combination of people and machine learning technologies to enforce its Community Guidelines. AI classifiers help YouTube detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is continuously increasing both the speed and accuracy of YouTube’s content moderation systems.

Improved speed and accuracy of YouTube’s systems also allows it to reduce the amount of harmful content human reviewers are exposed to.

Google’s Commitment to Safe and Secure AI
Google has a long history of supporting collective security through the Vulnerability Rewards Program (VRP), Project Zero, and work in the field of open source software security.
Google believes that incentivising research around AI safety and security, and bringing potential issues to light, will ultimately make AI safer for everyone. The latest updates about Google’s AI efforts can be found here.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Analysis Group (TAG) and Trust & Safety Team work to monitor malicious actors around the globe, disable their accounts, and remove the content that they post, including but not limited to coordinated information operations and other operations that may affect EEA Member States. 

One of TAG’s missions is to understand and disrupt coordinated information operations threat actors. TAG’s work enables Google teams to make enforcement decisions backed by rigorous analysis. TAG’s investigations do not focus on making judgements about the content on Google platforms; rather, they examine technical signals, heuristics, and behavioural patterns to assess whether activity constitutes coordinated inauthentic behaviour.

TAG regularly publishes its TAG Bulletin, updated quarterly here, which provides updates around coordinated influence operation campaigns terminated on Google’s platforms, as well as additional periodic blog posts. TAG also engages with other platform Signatories to receive and, when strictly necessary for security purposes, share information related to threat actor activity – in compliance with applicable laws. To learn more, refer to SLI 16.1.1.

See Google’s disclosure policies about handling security vulnerabilities for developers and security professionals.

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

Google’s Threat Analysis Group (TAG) posts a quarterly Bulletin, which discloses coordinated influence operation campaigns terminated on Google’s products and services, as well as additional periodic blog posts. In the Bulletin, TAG often notes when findings are similar to or supported by those reported by other platforms. The publicly available H2 2024 TAG Bulletins (1 July 2024 - 31 December 2024) show that 81,773 YouTube channels, across 57 separate actions, were involved in coordinated influence operation campaigns. Industry partners supported two of those actions by providing leads. The TAG Bulletin and periodic blog posts are Google’s, including YouTube’s, primary public source of information on coordinated influence operations and TTP-related issues.

As reported in the Bulletin, some channels YouTube took action on were part of campaigns that uploaded content in EEA languages, specifically: French (546 channels), German (460 channels), Polish (389 channels), Italian (362 channels), Spanish (128 channels), Romanian (15 channels), Czech (12 channels), and Hungarian (12 channels). A single campaign may upload content in multiple languages, or use EEA languages in countries outside the EEA region; the presence of content in an EEA Member State language does not necessarily entail a particular focus on that Member State. For more information, please see the TAG Bulletin.

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Analysis Group (TAG) and Trust & Safety Teams work to monitor malicious actors around the globe, disable their accounts, and remove the content that they posted, including but not limited to coordinated information operations and other operations that may affect EU Member States. 

Refer to the TAG Bulletin articles that cover the reporting period to learn more about the number of YouTube channels terminated as part of TAG’s investigation into coordinated influence operations linked to Russia, Poland, and other countries around the world. 

The most recent examples of specific tactics, techniques, and procedures (TTPs) used to lure victims, as well as how Google collaborates and shares information, can be found in Google’s TAG Blog.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

YouTube plans to continue rolling out new content as part of its ‘Hit Pause’ campaign.

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube takes its responsibility seriously, outlining clear policies used to moderate content on the platform and providing tools that users can leverage to improve their media literacy and better evaluate which content and sources to trust. 

Information panels may also appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about the content they are viewing. For example, topics that are more prone to misinformation may have information panels that show basic background info, sourced from independent, third-party partners, to give more context on the topic. If a user wants to learn more, the panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. 

During election periods, text-based information panels about a candidate, how to vote, and election results may also be displayed to users.

Further EEA Member State coverage can be found in SLI 17.1.1.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

Impressions of information panels (excluding fact-check panels, crisis resource panel, non-COVID medical panels) in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State.

Note: Due to a technical issue, some information panel impressions were undercounted. YouTube relies on a number of systems to calculate this metric and makes a best effort to be as accurate as possible.

Country Impressions of information panels (excluding fact-check panels, crisis resource panels and non-COVID medical panels)
Austria 35,930,356
Belgium 140,278,448
Bulgaria 34,494,718
Croatia 45,489,297
Cyprus 4,709,568
Czech Republic 76,755,521
Denmark 18,153,587
Estonia 13,864,098
Finland 14,400,111
France 909,171,599
Germany 1,931,996,858
Greece 29,433,930
Hungary 55,347,902
Ireland 68,473,652
Italy 559,067,820
Latvia 39,720,895
Lithuania 36,214,789
Luxembourg 2,825,126
Malta 2,407,322
Netherlands 357,656,982
Poland 187,672,506
Portugal 36,677,338
Romania 90,075,363
Slovakia 25,112,829
Slovenia 14,414,991
Spain 417,913,504
Sweden 109,624,386
Iceland 1,117,258
Liechtenstein 149,057
Norway 19,947,673
Total EU 5,257,883,496
Total EEA 5,279,097,484

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

Grants
In H2 2024, Google.org supported a number of organisations that seek to help build a safer and more tolerant online world and promote media literacy. This includes: 
  • Google.org announced $10 million in funding to the Raspberry Pi Foundation to further expand access to Experience AI. This educational program was co-created with Google DeepMind as part of Google.org’s broader commitment to supporting organisations that help young people build AI literacy.
    • Experience AI provides teachers with the training and resources needed to both educate and inspire young people aged 11-14 about AI.
    • The curriculum focuses on a structured learning journey, ethical considerations, real-world examples and role models, and culturally relevant content to engage learners in understanding AI and how to use it responsibly. Raspberry Pi Foundation and Google DeepMind continued to develop further resources, including three new lessons centred around AI safety: AI and Your Data, Media Literacy in the Age of AI, and Using Generative AI Responsibly.

Search
To raise awareness of its features and build literacy across society, Google Search is working with information literacy experts to help design tools in a way that allows users to feel confident and in control of the information they consume and the choices they make. 

In addition, Google Search builds capacity for librarians to empower their patrons and the general public with information literacy. At the end of September 2022, in cooperation with its partner ‘Public Libraries 2030’, Google Search launched a Training of Trainers program called ‘Super Searchers’ for librarians and library staff. The program seeks to: (a) provide librarians and library staff with the skills to build the information literacy capacity of the general public; and (b) increase the information literacy capacity of library patrons and the general public. Since the launch, Google and ‘Public Libraries 2030’ have provided Super Searchers training in Ireland, Italy, Portugal, and the UK. Note that Public Libraries 2030 (PL2030), Google Search’s implementing partner, shared feedback that language barriers and a lack of interest from patrons made it challenging to scale this program across the EU. While the agreement with PL2030 ended in H1 2023, the pilot program continued to expand in non-EU countries (e.g. in the US through the Public Library Association).

YouTube
YouTube remains committed to supporting efforts that deepen users’ collective understanding of misinformation. To empower users to think critically and use YouTube’s products safely and responsibly, YouTube invests in media literacy campaigns that improve users’ experiences on YouTube. In 2022, YouTube launched ‘Hit Pause’, a global media literacy campaign that is live in all EEA Member States, in all official EU languages, and has run in 40+ additional countries around the world.

The program seeks to teach viewers critical media literacy skills through engaging and educational public service announcements (PSAs) shown on the YouTube home feed, in pre-roll ads, and on a dedicated YouTube channel. The YouTube channel hosts videos from the YouTube Trust & Safety team that explain how YouTube protects its community from misinformation and other harmful content, as well as additional campaign content that helps members of the YouTube community sharpen their critical thinking around the manipulation tactics used to spread misinformation, from using emotional language to cherry-picking information. The campaign content also amplifies other in-product interventions, such as information panels, which provide context for topics that are often subject to misinformation.

EEA Member State coverage of 'Hit Pause' media literacy impressions can be found in SLI 17.2.1.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

Media Literacy campaign impressions in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State.

Country Impressions from YouTube's media literacy campaigns
Austria 3,051,110
Belgium 4,052,296
Bulgaria 5,131,069
Croatia 3,451,905
Cyprus 633,076
Czech Republic 10,984,244
Denmark 4,651,692
Estonia 622,296
Finland 3,382,016
France 11,950,238
Germany 33,707,796
Greece 11,984,135
Hungary 3,544,139
Ireland 3,009,023
Italy 25,554,069
Latvia 1,016,128
Lithuania 1,958,077
Luxembourg 177,872
Malta 202,535
Netherlands 10,340,630
Poland 45,007,000
Portugal 5,495,194
Romania 9,622,147
Slovakia 5,236,812
Slovenia 1,365,628
Spain 28,050,584
Sweden 6,719,603
Iceland 112,624
Liechtenstein 18,828
Norway 1,119,845
Total EU 240,901,314
Total EEA 242,152,611

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube partners with media literacy experts and researchers to identify unique and engaging ways to build up the YouTube Community’s media literacy. For example, to inform the ‘Hit Pause’ global campaign, YouTube partnered with the National Association for Media Literacy Education (NAMLE), a U.S.-based organisation, to identify which competency areas the campaign should focus on. 

For additional information about YouTube’s ‘Hit Pause’ campaign, please refer to QRE 17.2.1.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

YouTube has long been updating, on a regular and ongoing basis, its internal systems and processes related to the detection of content that violates its policies. This includes investment in automated detection systems.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

See response to QRE 14.1.1 to see how YouTube’s Community Guidelines map to the TTPs. These policies seek to, among other things, limit the spread of misleading or deceptive content that poses a serious risk of egregious harm. 

Community Guidelines Enforcement
After a creator’s first Community Guidelines violation, they will typically get a warning with no penalty to their channel. They will have the chance to take a policy training that allows the warning to expire after 90 days. Creators can also receive a warning in another policy category. If the same policy is violated within that 90-day window, the creator’s channel will be given a strike.

If the creator receives three strikes in the same 90-day period, their channel may be permanently removed from YouTube. In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. YouTube may also remove content for reasons other than Community Guidelines violations, such as a first-party privacy complaint or a court order. In these cases, creators will not be issued a strike.

If a creator’s channel gets a strike, they will receive an email and can also be notified through mobile and desktop notifications. The emails and notifications explain the action taken on their content and which of YouTube’s policies the content violated. More detailed guidelines on YouTube’s strike processes and policies can be found here.

YouTube also reserves the right to restrict a creator's ability to create content on YouTube at its discretion. A channel may be turned off or restricted from using any YouTube features. If this happens, users are prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on the YouTube channel. A violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all existing YouTube channels of the user, any new channels created or acquired, and channels in which the user is repeatedly or prominently featured.

Refer to SLI 18.2.1 on YouTube’s enforcement at an EEA Member State level.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

(1) Number of videos removed for violations of YouTube’s Misinformation Policies in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State;

(2) View counts of videos removed for violations of YouTube’s Misinformation Policies in H2 2024, broken down by EEA Member State.

Country Number of videos removed Number of videos removed with 0 views Number of videos removed with 1-10 views Number of videos removed with 11-100 views Number of videos removed with 101-1,000 views Number of videos removed with 1,001-10,000 views Number of videos removed with >10,000 views
Austria 196 19 59 55 28 26 9
Belgium 200 30 53 56 32 22 7
Bulgaria 128 28 27 20 30 15 8
Croatia 54 9 12 17 8 6 2
Cyprus 71 5 15 13 14 17 7
Czech Republic 95 7 19 31 16 8 14
Denmark 112 7 23 36 30 12 4
Estonia 74 5 22 19 17 9 2
Finland 110 19 32 22 22 13 2
France 1,019 115 312 283 150 102 57
Germany 1,808 205 453 498 362 201 89
Greece 225 30 45 42 42 39 27
Hungary 69 6 26 19 9 5 4
Ireland 614 68 233 158 87 52 16
Italy 1,279 123 390 348 243 113 62
Latvia 87 8 26 24 14 11 4
Lithuania 88 8 27 27 13 9 4
Luxembourg 9 3 0 5 0 1 0
Malta 10 1 2 1 3 3 0
Netherlands 723 77 183 200 123 85 55
Poland 237 38 54 66 38 25 16
Portugal 242 39 66 54 50 23 10
Romania 154 24 39 38 31 18 4
Slovakia 34 3 13 10 3 2 3
Slovenia 90 8 28 24 27 2 1
Spain 2,075 271 539 488 428 244 105
Sweden 303 26 82 69 76 37 13
Iceland 10 1 1 2 6 0 0
Liechtenstein 1 0 0 0 0 1 0
Norway 227 17 43 51 65 43 8
Total EU 10,106 1,182 2,780 2,623 1,896 1,100 525
Total EEA 10,344 1,200 2,824 2,676 1,967 1,144 533

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

Google, including YouTube, works with industry leaders across the technology sector, government, and civil society to set good policies, remain abreast of emerging challenges, and establish, share, and learn from industry best practices and research. 

Described below are examples that demonstrate Google’s, including YouTube’s, commitment to these actions:

Jigsaw-led Research
Jigsaw is a unit within Google that explores threats to open societies and builds technology that inspires scalable solutions. Jigsaw began conducting research on 'information interventions' more than 10 years ago and has since contributed research and technology on ways to make people more resilient to disinformation. Its research is based on behavioural science and ethnographic studies that examine when people might be vulnerable to specific messages and how to provide helpful information when people need it most. These interventions provide a methodology for proactively addressing a range of threats to people online, complementing approaches that focus on removing or downranking material online.

An example of a notable research effort by Jigsaw run on and with YouTube is:
  • Accuracy Prompts (APs): APs remind users to think about accuracy. The prompts work by serving users bite-sized digital literacy tips at a moment when it might matter. Lab studies conducted across 16 countries with over 30,000 participants suggest that APs increase engagement with accurate information and decrease engagement with less accurate information. Small experiments on YouTube suggest users enjoy the experience and report that it makes them feel safer online.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No, YouTube has not recently introduced new implementation measures related to this Commitment, but it regularly updates its internal systems and processes related to its recommendation system on an ongoing basis. 

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

On YouTube, recommendations help users discover more of the videos they love, whether it is a great new recipe to try or finding their next favourite song. 

Users can find recommendations across the platform, including the homepage, the ‘Up Next’ panel, and the Shorts tab:

  • Homepage: A user’s homepage is what they typically see when they first open YouTube.
  • Up Next: The Up Next panel appears when a user is watching a video. It suggests additional content based on what they are currently watching and personalised signals (details below).
  • Shorts: Shorts are ranked based on their performance and relevancy to that individual viewer. 

YouTube understands that individuals have unique viewing habits and uses signals to recommend content. YouTube’s system compares a user’s viewing habits with those of similar users, and uses that information to suggest other content.

YouTube’s recommendation system is constantly evolving, learning every day from over 80 billion pieces of information or 'signals,' the primary ones being:
  • Watch history: YouTube’s system uses the videos a user watches to give better recommendations, remember where a user left off, and more.
  • Search history: YouTube’s system uses what a user searches for on YouTube to influence future recommendations.
  • Channel subscriptions: YouTube’s system uses information about the channels a user subscribes to in order to recommend videos they may like.
  • Likes: YouTube’s system uses a user’s likes information to try to predict the likelihood that they will be interested in similar videos in the future.
  • Dislikes: YouTube’s system uses videos a user dislikes to inform what to avoid recommending in the future.
  • 'Not interested' feedback selections: YouTube’s system uses videos a user marks as 'Not interested' to inform what to avoid recommending in the future.
  • 'Don’t recommend channel' feedback selections: YouTube’s system uses 'Don’t recommend channel' feedback selections as a signal that the channel content likely is not something a user enjoyed watching.
  • Satisfaction surveys: YouTube’s system uses user surveys that ask a user to rate videos that they watched, which helps the system understand satisfaction, not just watch time.

Different YouTube features rely on certain recommendation signals more than others. For example, YouTube uses the video a user is currently watching as an important signal when suggesting a video to play next. The influence of each signal on recommendations can vary based on many variables, including but not limited to the user’s device type and the type of content they are watching. This is why the same user will see different recommendations on a mobile phone vs. a television. 
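The idea that different features weight the same signals differently can be sketched with a toy scorer. This is purely illustrative: the signal names, surfaces, and weights below are invented for the example and do not reflect YouTube’s actual model or parameters.

```python
# Purely illustrative sketch of per-surface signal weighting.
# Signal names and weights are invented and are NOT YouTube's actual system.
SURFACE_WEIGHTS = {
    # 'Up Next' leans heavily on the currently watched video;
    # the homepage leans on longer-term history.
    "up_next":  {"current_video_similarity": 0.6, "watch_history": 0.3, "likes": 0.1},
    "homepage": {"current_video_similarity": 0.0, "watch_history": 0.6, "likes": 0.4},
}

def score(candidate_signals: dict, surface: str) -> float:
    """Combine normalised signal values (0..1) using surface-specific weights."""
    weights = SURFACE_WEIGHTS[surface]
    return sum(weights[name] * candidate_signals.get(name, 0.0) for name in weights)

# The same candidate video scores differently depending on the surface.
signals = {"current_video_similarity": 0.9, "watch_history": 0.2, "likes": 0.5}
print(round(score(signals, "up_next"), 2))
print(round(score(signals, "homepage"), 2))
```

The point of the sketch is only that the influence of each signal can vary by context, which is why the same user sees different recommendations on different surfaces and devices.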

Recommendations
Recommendations connect viewers to high-quality information and complement the work done by the robust Community Guidelines that define what is and is not allowed on YouTube. For certain topics where quality is key, YouTube raises up videos in search and recommendations. Human evaluators, trained using publicly available guidelines, assess the quality of information from a variety of channels and videos. 

These human evaluations are used to train YouTube’s system to model their decisions, and YouTube then scales their assessments to all videos across the platform. Learn more about how YouTube elevates high-quality information on the How YouTube Works website and the YouTube Blog.

Controls to personalise recommendations 
YouTube has built controls that help users decide how much data they want to provide. Users can view, delete, or turn on or off their YouTube watch and search history whenever they want. And, if users do not want to see recommendations at all on the homepage or on the Shorts tab, they can turn off and clear their YouTube watch history. For users with YouTube watch history off and no significant prior watch history, the homepage will show the search bar and the Guide menu, with no feed of recommended videos.

Users can also tell YouTube when it is recommending something a user is not interested in. For example, buttons on the homepage and in the ‘Up next' section allow users to filter and choose recommendations by specific topics. Users can also click on 'Not interested' and/or 'Don’t recommend channel' to tell YouTube that a video or channel is not what a user wanted to see at that time, and YouTube will consider that when generating recommendations for that viewer in the future.

Additional information about how a user can manage their recommendation settings is outlined here in YouTube’s Help Centre. 

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

YouTube is sharing the percentage of Daily Active Users that are signed in to the platform. Signed-in users are able to amend their settings in their YouTube or Google Accounts.

The average percentage of signed in Daily Active Users over H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State.

Country Percentage of daily active users that are signed in
Austria 69%
Belgium 71%
Bulgaria 73%
Croatia 74%
Cyprus 75%
Czech Republic 72%
Denmark 65%
Estonia 73%
Finland 70%
France 72%
Germany 69%
Greece 74%
Hungary 73%
Ireland 67%
Italy 76%
Latvia 73%
Lithuania 75%
Luxembourg 67%
Malta 74%
Netherlands 70%
Poland 75%
Portugal 76%
Romania 76%
Slovakia 73%
Slovenia 72%
Spain 76%
Sweden 66%
Iceland 67%
Liechtenstein 57%
Norway 63%
Total EU 72%
Total EEA 72%

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 22.7

Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.

QRE 22.7.1

Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube highlights information from high-quality, third-party sources using information panels. As users navigate YouTube, they might see a variety of different information panels. These panels provide additional context, with each designed to help users make their own decisions about the content they find. 

These information panels will show regardless of what opinions or perspectives are expressed in a video. If users want to learn more, most panels also link to the third-party partner’s website.

Information panels on YouTube include, but are not limited to:
  • Panels on topics prone to misinformation: Topics that are prone to misinformation, such as the moon landing, may display an information panel at the top of search results or under a video. These information panels show basic background information, sourced from independent, third-party partners, to give more context on a topic. The panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. More details can be found here.
  • Election information panels: The election-related features are only available in select countries/regions during election cycles. Users may see candidate information panels, voting information panels, election integrity information panels, or election results information panels. More details can be found here.
  • Health-related information panels: Health-related topics, such as cancer treatment misinformation, may have a health information panel in search results. These panels show information like symptoms, prevention, and treatment options. More details can be found here.
  • Crisis resource panels: These panels let users connect with live support, 24/7, from recognised service partners. The panels may surface on the Watch page, when a user watches videos on topics related to suicide or self-harm, or in search results, when a user searches for topics related to certain health crises or emotional distress. More details can be found here.

Additional data points and EEA Member State coverage are provided in SLI 22.7.1.

SLI 22.7.1

Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).

Impressions of information panels (excluding fact-check panels, crisis resource panels and non-COVID medical panels) in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State.

Note: Due to a technical issue, some information panel impressions were undercounted. YouTube relies on a number of systems to calculate this metric and makes the best effort to be as accurate as possible.

Country Impressions of information panels (excluding fact-check panels, crisis resource panels and non-COVID medical panels)
Austria 35,930,356
Belgium 140,278,448
Bulgaria 34,494,718
Croatia 45,489,297
Cyprus 4,709,568
Czech Republic 76,755,521
Denmark 18,153,587
Estonia 13,864,098
Finland 14,400,111
France 909,171,599
Germany 1,931,996,858
Greece 29,433,930
Hungary 55,347,902
Ireland 68,473,652
Italy 559,067,820
Latvia 39,720,895
Lithuania 36,214,789
Luxembourg 2,825,126
Malta 2,407,322
Netherlands 357,656,982
Poland 187,672,506
Portugal 36,677,338
Romania 90,075,363
Slovakia 25,112,829
Slovenia 14,414,991
Spain 417,913,504
Sweden 109,624,386
Iceland 1,117,258
Liechtenstein 149,057
Norway 19,947,673
Total EU 5,257,883,496
Total EEA 5,279,097,484

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

YouTube's approach to combating misinformation involves removing content that violates YouTube’s policies as quickly as possible, and surfacing high-quality information in ranking and recommendations. YouTube applies these principles globally, including across the EU.

Implementing and enforcing YouTube policies
YouTube’s policies are carefully thought through so that they are consistent, well-informed, and can be applied to content from around the world. They are developed in consultation with a wide range of external industry and policy experts, as well as YouTube Creators. New policies go through testing before they go live to ensure YouTube’s global team of content reviewers can apply them accurately and consistently. 

Flagging inappropriate or harmful content on YouTube
YouTube offers YouTube users an opportunity to report or flag content that they believe violates YouTube’s Community Guidelines or other policies. Users can report content using YouTube’s flagging feature, which is available to signed-in users in all EU Member States via computer (desktop or laptop), mobile devices, and other surfaces. Details on how to report different types of content using YouTube’s flagging feature are outlined in YouTube’s Help Centre.

In addition to user flagging, YouTube uses machine learning technology to flag videos for review. YouTube has developed powerful machine learning that detects content that may violate YouTube’s policies and sends it for human review. In some cases, that same machine learning automatically takes action if there is high confidence that content is violative, based on information about similar or related content that has previously been removed.

Machine learning identifies potentially violative content at scale and nominates it for review when it may be against YouTube’s Community Guidelines. Content moderators then help confirm whether the content should be removed or remain on the platform. YouTube relies on this combination of humans and machine learning technology to flag inappropriate content and enforce YouTube’s Community Guidelines. This collaborative approach helps improve the accuracy of these models over time. It also means that the enforcement systems can manage the sheer scale of content that is uploaded to YouTube, while still digging into the nuances that determine whether a piece of content is violative.

Information about YouTube’s content moderation efforts across the official EU Member State languages can be found in the Human Resources involved in Content Moderation section of the VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Reporting illegal content
While YouTube’s Community Guidelines are policies that apply globally, YouTube is available in more than 100 different countries; therefore, processes are in place to review and appropriately act on requests from users, courts, and governments about content that violates local laws. Users can report illegal content using webforms dedicated to specific legal issues such as trademark, copyright, counterfeit and defamation. Webforms may also be accessed via the flagging feature after selecting Legal Issue as the report reason. To expedite the review, users should report content that violates the legal policies outlined here in YouTube’s Help Centre.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Content can be flagged by YouTube users, YouTube’s machine learning technology, and human content moderators. All users agree to not 'misuse any reporting, flagging, complaint, dispute, or appeals process, including by making groundless, vexatious, or frivolous submissions' in YouTube’s Terms of Service.

Additionally, YouTube ensures integrity of its systems through: 
  • Having a dedicated team to identify and mitigate the impact of sophisticated bad actors on YouTube at scale, while protecting the broader community;
  • Partnering with Google’s Threat Analysis Group (TAG) and Trust & Safety Teams to monitor malicious actors around the globe, disable their accounts, and remove the content that they post (See QRE 16.1.1 and QRE 16.2.1);
  • Educating users about Community Guidelines violations through its guided policy experience;
  • Providing clear communication on appeals processes and notifications, and regular policy updates on its Help Centre; 
  • Investing in automated systems to provide efficient detection of content to be evaluated by human reviewers.

Where appropriate, YouTube makes it clear to users that it has taken action on their content and provides them the opportunity to appeal that decision.

For more detailed information about YouTube’s complaint handling systems (i.e. appeals), please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

YouTube has long been updating, on a regular and ongoing basis, its internal systems and processes related to the detection of content that violates its policies. This includes investment in automated detection systems.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

As noted in QRE 18.2.1, if a creator’s channel gets a strike, they will receive an email and can receive mobile and desktop notifications. The emails and notifications received by the creator explain what content was removed or age restricted, which policies the content violated, how it affects the user’s channel, and what the creator can do next. More detailed guidelines on YouTube’s processes and policies for strikes are available here.

Sometimes a single case of severe abuse will result in channel termination without warning.

The appeals processes below are available in all Member States and are outlined in the YouTube Help Centre: 

After a creator submits an appeal
After a creator submits an appeal, they will get an email from YouTube letting them know the appeal outcome. One of the following will happen:

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, YouTube will reinstate it and remove the strike from their channel. If a user appeals a warning and the appeal is granted, the next offence will be a warning.

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, but is not appropriate for all audiences, YouTube will apply an age-restriction. If it is a video, it will not be visible to users who are signed out, are under 18 years of age, or have Restricted Mode turned on. If it is a custom thumbnail, it will be removed.

  • If YouTube finds that a user’s content was in violation of YouTube’s Community Guidelines, the strike will stay and the video will remain off the site. There is no additional penalty for appeals that are rejected.

For a more granular Member State level breakdown, refer to SLI 24.1.1.

For more information about YouTube’s median time needed to action a complaint, please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

(1) Appeals following video removal for violations of YouTube’s Misinformation Policies in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State;

(2) Video reinstatements following a successful appeal against content removals for violations of YouTube’s Misinformation Policies in H2 2024, broken down by EEA Member State.

Country | Number of videos removed that were subsequently appealed | Number of videos removed that were then reinstated following a creator’s appeal
Austria 46 8
Belgium 26 4
Bulgaria 16 0
Croatia 10 1
Cyprus 4 1
Czech Republic 11 3
Denmark 18 3
Estonia 13 2
Finland 28 2
France 127 18
Germany 282 34
Greece 47 4
Hungary 11 2
Ireland 101 5
Italy 186 33
Latvia 7 0
Lithuania 9 1
Luxembourg 0 0
Malta 3 0
Netherlands 121 12
Poland 63 7
Portugal 26 1
Romania 25 2
Slovakia 3 0
Slovenia 13 0
Spain 276 28
Sweden 32 4
Iceland 1 0
Liechtenstein 0 0
Norway 25 2
Total EU 1,504 175
Total EEA 1,530 177

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Trends
Google Search and YouTube provide publicly available data via Google Trends, which offers access to a largely unfiltered sample of actual search requests made to Google Search and YouTube’s search function. The data is anonymised (no one is personally identified), categorised (determined by the topic for a search query), and aggregated (grouped together). This allows Google Trends to display interest in a particular topic from around the globe or down to city-level geography. See the Trends Help Centre for details.

Google Fact Check Explorer
Google Search also provides tools like Fact Check Explorer and the Google FactCheck Claim Search API. Google Search Fact Check Explorer allows anyone to explore the Fact Check articles that use the ClaimReview markup. Additional information about ClaimReview markup can be found here.

Using the Google FactCheck Claim Search API, users can query the same set of Fact Check results available via the Fact Check Explorer, or a developer could continuously get the latest updates on a particular query. Use of the FactCheck Claim Search API is subject to Google’s API Terms of Service. To learn more, see the detailed API documentation.
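As a minimal sketch of how a developer might call the Claim Search API described above: it is a REST endpoint that takes a free-text query. The endpoint path and parameter names (`query`, `languageCode`, `pageSize`, `key`) follow the published API documentation; `YOUR_API_KEY` is a placeholder credential.

```python
from urllib.parse import urlencode

# Published v1alpha1 endpoint of the Google FactCheck Claim Search API.
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query: str, api_key: str,
                           language_code: str = "en", page_size: int = 10) -> str:
    """Build the GET URL for a Claim Search request."""
    params = urlencode({
        "query": query,               # free-text claim to search for
        "languageCode": language_code,
        "pageSize": page_size,
        "key": api_key,               # placeholder; obtain via Google Cloud Console
    })
    return f"{FACT_CHECK_ENDPOINT}?{params}"

url = build_claim_search_url("moon landing", "YOUR_API_KEY")
print(url)
```

Fetching this URL (with a valid key) returns JSON containing the same fields surfaced in Fact Check Explorer, such as the claim text, claimant, and the reviewing publisher’s rating.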

Google Researcher Program
As of 28 August 2023, eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics that meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact. For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API for eligible academic researchers from around the world, who are affiliated with an accredited, higher-learning institution. Learn more about the data available in the YouTube API reference.

Transparency into paid content on YouTube
YouTube provides users a bespoke front end search page to access publicly available data containing organic content with paid product placements, sponsorships, and endorsements as disclosed by creators. This is to enable users to understand that creators may receive goods or services in exchange for promotion. This search page complements YouTube’s existing process of displaying a disclosure message when creators disclose to YouTube that their content contains paid promotions. Learn more about adding paid product placements, sponsorships, and endorsements here.

Users can also query the same set of results using the YouTube Data API. Use is subject to YouTube’s API Terms of Service.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Trends
The information provided via Google Trends is a sample of all Google Search and YouTube search activity. The two samples of Google Trends data that can be accessed are:
  • Real-time data - a sample covering the last seven days;
  • Non-realtime data - a separate sample from real-time data that goes as far back as 2004 and up to 72 hours before one’s search.

Only a sample of Google Search and YouTube searches is used in Google Trends (a publicly available research tool), because Google, including YouTube, handles billions of searches per day; the entire dataset would be too large to process quickly. By sampling data, Google can look at a dataset representative of all searches on Google, including YouTube, while finding insights that can be processed within minutes of an event happening in the real world. See the Trends Help Centre for details.

Google Fact Check Explorer
The Fact Check Explorer includes the following information, from fact-check articles using the ClaimReview markup:
  • Claim made by: Name of the person or organisation making the claim;
  • Rating text: The publisher’s rating of the claim (e.g. True or False);
  • Fact Check article: The fact-checking article on the publisher’s site;
  • Claim reviewed: A short summary of the claim being evaluated;
  • Tags: The tags that show up next to the claim.

For additional details on fields included on Google Fact Check API, see API documentation.
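A minimal sketch of how these fields surface programmatically, assuming the publicly documented Fact Check Tools API `claims:search` endpoint and its response shape (`API_KEY` and the sample claim below are placeholders for illustration):

```python
from urllib.parse import urlencode

FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def claim_search_url(query: str, api_key: str, language: str = "en") -> str:
    """Build a claims:search request for fact checks matching a query."""
    params = {"query": query, "languageCode": language, "key": api_key}
    return f"{FACT_CHECK_ENDPOINT}?{urlencode(params)}"

def summarise_claim(claim: dict) -> dict:
    """Map one entry of the response's 'claims' list onto the fields
    listed above (claimant, rating text, fact-check article, claim)."""
    review = claim.get("claimReview", [{}])[0]
    return {
        "claim_made_by": claim.get("claimant"),
        "claim_reviewed": claim.get("text"),
        "rating_text": review.get("textualRating"),
        "fact_check_article": review.get("url"),
    }
```

The helper names are hypothetical; only the endpoint and response field names are taken from the API documentation.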

Google Researcher Program
Approved researchers will receive permissions and access to public data for Search and YouTube in the following ways: 
  • Search: Access to an API for limited scraping with a budget for quota;
  • YouTube: Permission for scraping limited to metadata.

For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. The program allows eligible academic researchers around the world to independently analyse the data they collect, including generating new/derived metrics for their research. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data.
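
To illustrate the metadata listed above, a sketch of extracting those fields from a Data API v3 `videos.list` response item (the response fragment below is fabricated for illustration; field names follow the public API reference):

```python
def extract_metadata(item: dict) -> dict:
    """Pull the metadata fields named above from one videos.list item."""
    snippet = item.get("snippet", {})
    stats = item.get("statistics", {})
    return {
        "title": snippet.get("title"),
        "description": snippet.get("description"),
        # The API returns counts as strings; convert for analysis.
        "views": int(stats.get("viewCount", 0)),
        "likes": int(stats.get("likeCount", 0)),
        "comments": int(stats.get("commentCount", 0)),
    }

# Illustrative fragment shaped like a Data API v3 videos.list item.
sample_item = {
    "snippet": {"title": "Example video", "description": "An example."},
    "statistics": {"viewCount": "1200", "likeCount": "34", "commentCount": "5"},
}
meta = extract_metadata(sample_item)
```

Researchers accepted into the program would obtain such items via authorised Data API calls rather than from hardcoded fragments.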

Transparency into paid content on YouTube
The information provided via the bespoke front end search page allows users to view videos with active paid product placements, sponsorships, and endorsements that have been declared on YouTube.
  • Paid product placements
    • Videos about a product or service because there is a connection between the creator and the maker of the product or service;
    • Videos created for a company or business in exchange for compensation or free of charge products/services; 
    • Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free of charge products to make the video.
  • Endorsements - Videos created for an advertiser or marketer that contains a message that reflects the opinions, beliefs, or experiences of the creator.
  • Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.

Definitions can be found on the YouTube Help Centre.

Additional data points are provided in SLI 26.1.1 and 26.2.1.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Number of users of the Google Trends online tool to research information relating to YouTube in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State (see table below).

Country Number of Google Trends users researching YouTube
Austria 1,535
Belgium 1,748
Bulgaria 1,366
Croatia 702
Cyprus 607
Czech Republic 1,582
Denmark 1,403
Estonia 390
Finland 1,073
France 11,281
Germany 16,423
Greece 1,947
Hungary 2,107
Ireland 2,030
Italy 9,452
Latvia 550
Lithuania 784
Luxembourg 203
Malta 210
Netherlands 5,890
Poland 6,697
Portugal 3,111
Romania 3,000
Slovakia 712
Slovenia 454
Spain 10,955
Sweden 2,726
Iceland 63
Liechtenstein 10
Norway 1,408
Total EU 88,938
Total EEA 90,419

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share for this reporting period.

Google Researcher Program
The Google Researcher Program, which includes YouTube, has a 3-step application process:

  1. Review and confirm the applicant’s eligibility;
  2. Submit an application, which requires a Google account;
  3. If approved, the applicant gains permission to access public data relevant to their research.

Once an application has been reviewed, accepted researchers will be notified via email.

YouTube Researcher Program
The YouTube Researcher Program has a 3-step application process: 

  1. YouTube verifies the applicant is an academic researcher affiliated with an accredited, higher-learning institution;
  2. The Researcher creates an API project in the Google Cloud Console and enables the relevant YouTube APIs. They can learn more by visiting the enabled APIs page;
  3. The Researcher applies with their institutional email (e.g. with a .edu suffix), includes as much detail as possible, and confirms that all of their information is accurate.

Once an application has been submitted, YouTube’s operations team will conduct a review and let applicants know if they are accepted into the program. 

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

(1-4) Applications received, approved, rejected or under review for the YouTube Researcher Program in H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member States (* indicates applications that were rejected on the basis of incorrect/incomplete application);

(5) Total number of unique researchers accessing the YouTube Researcher Program API in H2 2024, broken down by EEA Member States;

(6) Median application resolution time in days in H2 2024, reported at the EU and EEA level.

Please note the following:
  • Cells with '0' signify that there were no applications received, approved, rejected, or under review from researchers in that country.

  • Applications under review reflect those applications still being processed at the end of the reporting period. The outcomes of these applications will be included in the next reporting period. 

  • Researchers accessing the Researcher Program API from 1 July 2024 to 31 December 2024 may have been approved before H2 2024. There can be more than one researcher per application. 

  • Median Application Resolution time is the median number of days from application creation to application resolution. Applications may go back and forth between the applicant and API Ops Agents throughout the approval process. This metric does not reflect YouTube’s first response back to the applicant.

Country Applications Received Applications Approved Applications Rejected Applications under Review Number of unique researchers accessing the API Median application resolution time
Austria 3 0 1 2 2 -
Belgium 1 0 0 1 2 -
Bulgaria 0 0 0 0 0 -
Croatia 0 0 0 0 0 -
Cyprus 0 0 0 0 0 -
Czech Republic 0 0 0 0 0 -
Denmark 1 0 1 0 1 -
Estonia 0 0 0 0 0 -
Finland 1 0 1 0 0 -
France 2 0 1 1 5 -
Germany 14 3 5 6 16 -
Greece 0 0 0 0 0 -
Hungary 0 0 0 0 0 -
Ireland 0 0 0 0 0 -
Italy 4 1 1 2 6 -
Latvia 0 0 0 0 0 -
Lithuania 0 0 0 0 0 -
Luxembourg 0 0 0 0 0 -
Malta 0 0 0 0 0 -
Netherlands 4 2 2 0 3 -
Poland 1 1 0 0 0 -
Portugal 0 0 0 0 0 -
Romania 0 0 0 0 1 -
Slovakia 0 0 0 0 0 -
Slovenia 0 0 0 0 0 -
Spain 4 3 0 1 3 -
Sweden 1 1 0 0 2 -
Iceland 0 0 0 0 0 -
Liechtenstein 0 0 0 0 0 -
Norway 0 0 0 0 0 -
Total EU 36 11 12 13 41 24.0 days
Total EEA 36 11 12 13 41 24.0 days

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report, as there is no new information to share for this reporting period.

Google Trends
For Google Trends, users have an option to report an issue by taking a screenshot of the malfunction area and then submitting it for feedback via the Send Feedback option on the Google Trends page. Additionally, users can access the Trends Help Centre to troubleshoot any issues they may be experiencing.

Google Fact Check Explorer
Within Google Search’s Fact Check Explorer, the Report Issue option provides users the ability to report issues to Google.

Google Researcher Program
For the Google Researcher Program, the most up to date information is captured in the Program description on the Transparency Centre, and also on the Acceptable Use Policy page. Google Search has additional Help Centre support via their Search Researcher Result API guidelines.

YouTube Researcher Program
For the YouTube Researcher Program, support is available via email. Researchers can contact YouTube with questions, technical issues, or other suspected faults via a unique email alias provided upon acceptance into the program. Questions are answered by YouTube’s Developer Support team and by other relevant internal parties as needed.

Google is not aware of any malfunctions during the reporting period that would have prevented access to these reporting systems.

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
  • In 2024, Google hosted a workshop with over 30 attendees, including academics, at the Trust & Safety Forum in Lille, France, exploring Safety by Design frameworks and implementation constraints, including in relation to misinformation.
  • In October 2024, Google announced the first-ever Google Academic Research Award (GARA) winners. In this first cycle, the program will support 95 projects led by 143 researchers globally.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

Google has a longstanding commitment to transparency, and has led the way in transparency reporting of content removals and government requests for user data for more than a decade.

Tools such as the Lumen Database, Google Trends, and Fact Check Explorer illustrate some of the ways Google supports not only researchers, but journalists and others, in understanding more about Google’s and YouTube’s products, processes, and practices.

Please refer to QRE 26.1.1, QRE 26.1.2, and QRE 26.3.1 for further information about Google Fact Check Tool API and Google Trends.

Google
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics who meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact.

Google has teams that operate the Google Researcher Program. They manage the researcher application process and evaluate potential updates and developments for the Google Researcher Program. Additional information can be found on the Google Transparency Centre. Google Search has additional Help Centre support via their Search Researcher Result API guidelines.

Additionally, Google partners with Lumen, an independent research project managed by the Berkman Klein Centre for Internet & Society at Harvard Law School. The Lumen database houses millions of content takedown requests that have been voluntarily shared by various companies, including Google. Its purpose is to facilitate academic and industry research concerning the availability of online content. As part of Google’s partnership with Lumen, information about the legal notices Google receives may be sent to the Lumen project for publication. Google informs users about its Lumen practices under the 'Transparency at our core' section of the Legal Removals Help Centre. Additional information on Lumen can be found here.

Trust & Safety Research partners internally with Google.org's Scientific Progress team to strategically fund and engage with academics working on cutting-edge interdisciplinary research in areas of mutual interest and societal benefit. In October 2024, Google announced the first-ever Google Academic Research Award (GARA) winners. Overall, the program supported 95 projects led by 143 researchers globally; within the Trust & Safety topic, Google funded 21 projects across 12 countries. 

YouTube
The YouTube Researcher Program provides eligible academic researchers from around the world with scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data. (See YouTube API reference for more information).

YouTube has teams that operate the YouTube Researcher Program. They manage the researcher application process and provide technical support throughout the research project. They also evaluate potential updates and developments for the YouTube Researcher Program. Researchers can obtain support via the dedicated email alias provided upon acceptance into the program (see QRE 26.3.1).

In addition, Google Search and YouTube’s Product and Policy teams regularly communicate with researchers who reach out with questions about the functioning of YouTube or seek to receive feedback on past or future research projects.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

See response to QRE 28.1.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

Google Search and YouTube continue to engage constructively with the Code of Practice’s Permanent Task-force and with EDMO. As of the time of this report, no annual consultation has yet taken place, but Google Search and YouTube stand ready to collaborate with EDMO to that end in 2025. 

Additionally, refer to QRE 26.1.1 to learn more about how Google, including YouTube, provides opportunities for researchers on its platforms.

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

In 2021, Google contributed €25M to help launch the European Media and Information Fund (EMIF).

The EMIF was established by the European University Institute and the Calouste Gulbenkian Foundation. The European Digital Media Observatory (EDMO) agreed to play a scientific advisory role in the evaluation and selection of projects that will receive the fund’s support, but does not receive Google funding. Google has no role in the assessment of applications. To date, at least 107 projects related to information quality across 25 countries (including 23 EEA Member States) have been granted €17.70 million.