YouTube

Report March 2026

Submitted

Your organisation description

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

See QRE 14.1.2

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube’s systems are designed to connect people with high-quality content.

In addition, YouTube has various policies which set out what is not allowed on YouTube. These policies, which can be accessed in YouTube’s Help Centre, address relevant TTPs. Notably, YouTube’s policies tend to be broader than the identified TTPs; as such, SLIs reporting actions taken in relation to a TTP may be over-inclusive.

YouTube’s Community Guidelines, commitment to promote high-quality content and curb the spread of harmful misinformation, disclosure requirements for paid product placements, sponsorships & endorsements, and ongoing work with Google’s Threat Intelligence Group (GTIG) broadly address TTPs 1, 2, 3, 5, 7, 8, 9, 10, and 11, and notably go beyond these TTPs.

In this report, YouTube has provided data relating to TTPs 1, 5, 7 and 9. Removals relating to the remaining TTPs are included, in part or in whole, in the Community Guidelines enforcement report, but YouTube does not have more detailed removal reporting at this time. A TTP does not necessarily map to a single Community Guideline, which makes more granular mapping of TTPs challenging.

YouTube continues to assess, evaluate, and update its policies on a regular basis. The latest policies, including the Community Guidelines, can be found here.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube’s approach to combating misinformation involves removing content that violates YouTube’s policies, raising high-quality information in rankings and recommendations, curbing the spread of harmful misinformation, and rewarding trusted, eligible creators and artists. YouTube applies these principles globally, including across the EU.

YouTube uses a combination of people and machine learning to detect problematic content automatically and at scale. Machine learning is well-suited to detect patterns, including harmful misinformation, which helps YouTube find content similar to other content that YouTube has already removed, even before it is viewed. Every quarter, YouTube publishes data in the Community Guidelines enforcement report about removals that were first detected by automated means. 

YouTube’s Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure YouTube’s teams are prepared to address them before they can become a larger issue.

In addition, Google’s Threat Intelligence Group (GTIG) and Google and YouTube’s Trust and Safety Teams are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.

YouTube continues to invest in automated detection systems, and relies on both human evaluators and machine learning to train its systems on new data. YouTube’s engineering teams also continue to update and improve their detection systems regularly.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube enforces a broad range of policies to help build a safer community. These policies include, but are not limited to, YouTube’s Community Guidelines, which include policies covering Spam, Deceptive Practices & Scams; Impersonation; and Fake Engagement. YouTube applies these policies globally, including across the EEA Member States.

Implementing and enforcing YouTube policies
In general, enforcement of YouTube’s policies is a joint effort between people and machine learning technology. YouTube starts by giving a team of experienced content moderators enforcement guidelines (detailed explanations of what makes content violative and non-violative), and asks them to differentiate between violative and non-violative material. If the new guidelines allow them to achieve a very high level of accuracy, YouTube expands the testing group to include moderators across different backgrounds, languages and experience levels. 

Then YouTube may begin revising the guidelines so that they can be accurately interpreted across a larger, more diverse set of moderators. These findings then help train YouTube’s machine learning technology to detect potentially violative content at scale. As done with its content moderators, YouTube also tests its models to understand whether it has provided enough context for them to make accurate assessments about what to surface for people to review.

Once models are trained to identify potentially violative content, the role of content moderators remains essential throughout the enforcement process. Machine learning helps identify potentially violative content at scale and content moderators may then help assess whether the content should be removed. In some cases, YouTube’s systems may take automated action, such as when there is high confidence that the content is violative given similar content that was previously removed.

This collaborative approach helps improve the accuracy of YouTube’s models over time, as models continuously learn and adapt based on content moderator feedback. It also means YouTube’s enforcement systems can manage the sheer scale of content that is uploaded to YouTube, while still digging into the nuances that determine whether a piece of content is violative.
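
The triage logic described above can be sketched as a simple threshold rule. This is a hypothetical, illustrative sketch only: the thresholds, function names, and labels are assumptions, not YouTube's actual implementation.

```python
# Hypothetical sketch of a human-in-the-loop moderation triage.
# Thresholds and labels are illustrative assumptions, not real YouTube values.

AUTO_REMOVE_THRESHOLD = 0.98   # high confidence: closely matches previously removed content
REVIEW_THRESHOLD = 0.60        # uncertain: route to a human content moderator

def triage(model_score: float) -> str:
    """Route a piece of content based on a model's violation score."""
    if model_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"       # automated action at high confidence
    if model_score >= REVIEW_THRESHOLD:
        return "human_review"      # a moderator assesses whether to remove
    return "no_action"

def record_feedback(training_set: list, content_id: str, moderator_label: str) -> None:
    """Moderator decisions become labelled examples for retraining the model."""
    training_set.append((content_id, moderator_label))

# Example flow: only the uncertain item is queued for human review.
queue = []
for cid, score in [("a", 0.99), ("b", 0.75), ("c", 0.10)]:
    if triage(score) == "human_review":
        queue.append(cid)
# queue == ["b"]
```

The feedback loop (moderator labels flowing back into the training set) is what the report describes as models "continuously learning and adapting based on content moderator feedback".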

YouTube provides data for TTPs 1, 5, 7 and 9.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

Where possible, each TTP has been mapped to relevant Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines, and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1
(1) Number of channels identified and removed for TTP 1 during the reporting period, broken down by EEA Member State.


TTP 5
(2) Number of channels identified and removed for TTP 5 during the reporting period, broken down by EEA Member State;
(3) Number of videos identified and removed for TTP 5 during the reporting period, broken down by EEA Member State.


TTP 7
(4) Number of videos identified and removed for TTP 7 during the reporting period, broken down by EEA Member State.


TTP 9
(5) Number of channels identified and removed for TTP 9 during the reporting period, broken down by EEA Member State;
(6) Number of videos identified and removed for TTP 9 during the reporting period, broken down by EEA Member State.


The number of removals may represent an overcount, as the respective Community Guidelines may be inclusive of more policy-violative activity than identified by the TTP alone. 

Columns: Country | (1) TTP 1 channels identified and removed | (2) TTP 5 channels identified and removed | (3) TTP 5 videos identified and removed | (4) TTP 7 videos identified and removed | (5) TTP 9 channels identified and removed | (6) TTP 9 videos identified and removed
Austria 1,676 188 1 14 36 1
Belgium 1,507 395 822 12 47 9
Bulgaria 1,688 313 3 11 32 2
Croatia 367 121 4 4 14 0
Cyprus 76,212 98 111 6 25 24
Czech Republic 3,576 326 363 23 93 7
Denmark 1,954 151 1 3 26 6
Estonia 402 48 2 2 9 6
Finland 34,451 148 56 8 37 7
France 28,462 1,964 268 101 310 27
Germany 70,558 2,200 998 194 615 186
Greece 1,508 257 10 12 30 1
Hungary 602 209 33 5 26 2
Ireland 1,549 210 936 26 31 1
Italy 16,274 1,655 571 42 143 10
Latvia 7581 65 0 4 30 3
Lithuania 1,271 86 268 2 22 3
Luxembourg 277 17 3 2 4 0
Malta 167 14 0 2 6 0
Netherlands 38,549 666 254 117 298 115
Poland 28,648 1,134 108 56 235 18
Portugal 925 320 26 13 32 2
Romania 4,525 887 98 18 86 6
Slovakia 579 140 2 3 22 0
Slovenia 153 48 0 0 13 1
Spain 8,053 1,269 2,309 72 197 12
Sweden 4,240 379 69 27 62 2
Iceland 136 12 1 0 2 0
Liechtenstein 8 1 0 0 1 0
Norway 1,036 235 19 5 40 1
Total EU 335,754 13,308 7,316 779 2,481 451
Total EEA 336,934 13,556 7,336 784 2,524 452

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

Where possible, each TTP has been mapped to relevant Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines, and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 5
(1) Number of video removals for TTP 5 by views threshold during the reporting period, broken down by EEA Member State.


TTP 7
(2) Number of video removals for TTP 7 by views threshold during the reporting period, broken down by EEA Member State.


TTP 9
(3) Number of video removals for TTP 9 by views threshold during the reporting period, broken down by EEA Member State.


Actions in this context constitute removal of the videos themselves; therefore, there should be no views, actions, or engagement after YouTube removes the content.

Columns: Country | TTP 5 videos removed with 0 / 1-10 / 11-100 / 101-1,000 / 1,001-10,000 / >10,000 views | TTP 7 videos removed with 0 / 1-10 / 11-100 / 101-1,000 / 1,001-10,000 / >10,000 views | TTP 9 videos removed with 0 / 1-10 / 11-100 / 101-1,000 / 1,001-10,000 / >10,000 views. (The template’s “Views after action” columns are not populated, as removed videos receive no further views.)
Austria 0 0 0 0 0 1 1 5 3 4 1 0 1 0 0 0 0 0
Belgium 11 1 3 719 84 4 1 4 2 2 2 1 0 0 0 2 3 4
Bulgaria 1 0 2 0 0 0 0 4 2 4 1 0 0 0 0 1 1 0
Croatia 0 1 0 0 3 0 0 1 1 2 0 0 0 0 0 0 0 0
Cyprus 20 10 56 24 1 0 1 3 0 2 0 0 0 0 3 1 15 5
Czech Republic 120 25 110 98 10 0 4 9 4 4 1 1 1 0 0 2 2 2
Denmark 1 0 0 0 0 0 1 1 0 0 1 0 0 0 2 2 1 1
Estonia 0 1 0 1 0 0 0 0 1 1 0 0 0 2 0 2 2 0
Finland 15 0 15 15 9 2 0 1 3 2 2 0 1 0 1 4 0 1
France 54 34 65 82 30 3 13 44 16 17 7 4 2 0 3 2 5 15
Germany 146 51 197 286 287 31 22 70 35 27 26 14 3 2 5 50 87 39
Greece 4 1 1 1 2 1 1 3 2 3 2 1 0 0 0 0 1 0
Hungary 16 0 4 12 1 0 4 0 1 0 0 0 0 0 0 0 2 0
Ireland 5 4 15 161 621 130 0 12 4 4 4 2 0 0 0 1 0 0
Italy 56 33 171 202 75 34 4 20 9 6 2 1 0 2 1 4 1 2
Latvia 0 0 0 0 0 0 0 1 1 0 0 2 0 1 0 1 1 0
Lithuania 0 0 2 0 9 257 0 0 0 0 0 2 0 0 1 0 2 0
Luxembourg 1 0 0 0 0 2 0 1 0 1 0 0 0 0 0 0 0 0
Malta 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0
Netherlands 26 34 45 84 36 29 19 44 24 18 4 8 4 2 6 19 63 21
Poland 15 14 38 27 10 4 9 18 10 8 5 6 1 3 1 2 4 7
Portugal 9 1 2 14 0 0 2 5 2 3 1 0 0 0 0 1 0 1
Romania 14 24 13 5 20 22 5 6 4 2 0 1 0 0 2 0 2 2
Slovakia 1 0 0 0 1 0 0 0 2 0 1 0 0 0 0 0 0 0
Slovenia 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
Spain 93 143 815 905 263 90 13 22 12 13 5 7 0 0 1 0 5 6
Sweden 22 3 13 21 8 2 7 13 4 1 2 0 0 1 0 1 0 0
Iceland 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 8 0 1 9 1 0 0 1 3 1 0 0 0 0 0 0 1 0
Total EU 630 380 1,567 2,657 1,470 612 107 289 142 124 67 50 13 14 26 95 197 106
Total EEA 638 380 1,568 2,667 1,471 612 107 290 145 125 67 50 13 14 26 95 198 106
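
As a consistency check, the EU-wide view-bucket counts above sum to the total video removals reported in SLI 14.2.1 (7,316 for TTP 5; 779 for TTP 7; 451 for TTP 9). A minimal Python sketch of this check, using only figures quoted in this report:

```python
# Cross-check: per-view-bucket removal counts in SLI 14.2.2 (Total EU row)
# should sum to the EU-wide video removal totals reported in SLI 14.2.1.

buckets_eu = {
    # TTP: removals with 0, 1-10, 11-100, 101-1,000, 1,001-10,000, >10,000 views
    "TTP 5": [630, 380, 1567, 2657, 1470, 612],
    "TTP 7": [107, 289, 142, 124, 67, 50],
    "TTP 9": [13, 14, 26, 95, 197, 106],
}
totals_sli_14_2_1 = {"TTP 5": 7316, "TTP 7": 779, "TTP 9": 451}

for ttp, buckets in buckets_eu.items():
    assert sum(buckets) == totals_sli_14_2_1[ttp], ttp
```

The two SLIs therefore report the same underlying removals, with SLI 14.2.2 adding the distribution by view count at time of removal.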

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

Where possible, each TTP has been mapped to relevant Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines, and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 5
Refer to SLI 14.2.2, which provides data on video removals by view threshold and view / impressions on the platform after action has been taken. Views are a measure of penetration / impact on the platform.


TTP 7
Refer to SLI 14.2.2, which provides data on video removals by view threshold and view / impressions on the platform after action has been taken. Views are a measure of penetration / impact on the platform.


TTP 9
Refer to SLI 14.2.2, which provides data on video removals by view threshold and view / impressions on the platform after action has been taken. Views are a measure of penetration / impact on the platform. 

No Member State-level figures are reported under this SLI; refer to SLI 14.2.2 for view-based removal data, which serves as the measure of penetration and impact on the platform.

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

Where possible, each TTP has been mapped to relevant Community Guidelines. However, there is not an exact one-to-one mapping: content might violate more than one of YouTube’s Community Guidelines, and so could be labelled under more than one policy violation. This means the data presented here is a best estimate of relevant TTP activity under the relevant Community Guideline.

Refer to QRE 14.1.1 for more information on YouTube’s efforts to broadly address these TTPs.

TTP 1
(1) Percentage of TTP 1 channel removals out of all related channel removals during the reporting period, broken down by EEA Member State.

Refer to the Community Guidelines enforcement report for more information regarding removed violative content.


TTP 5
(2) Percentage of TTP 5 channel removals out of all related channel removals during the reporting period, broken down by EEA Member State;
(3) Percentage of TTP 5 video removals out of all related video removals during the reporting period, broken down by EEA Member State.

Refer to the Community Guidelines enforcement report for more information regarding removed violative content.


TTP 7
(4) Percentage of TTP 7 video removals out of all related video removals during the reporting period, broken down by EEA Member State.

Refer to the Community Guidelines enforcement report for more information regarding removed violative videos.


TTP 9
(5) Percentage of TTP 9 channel removals out of all related channel removals during the reporting period, broken down by EEA Member State;
(6) Percentage of TTP 9 video removals out of all related video removals during the reporting period, broken down by EEA Member State.

Refer to the Community Guidelines enforcement report for more information regarding removed violative videos.

Columns: Country | (1) Percentage of TTP 1 channel removals out of all related channel removals | (2) Percentage of TTP 5 channel removals out of all related channel removals | (3) Percentage of TTP 5 video removals out of all related video removals | (4) Percentage of TTP 7 video removals out of all related video removals | (5) Percentage of TTP 9 channel removals out of all related channel removals | (6) Percentage of TTP 9 video removals out of all related video removals
Austria 15.20% 1.70% 0.01% 0.09% 0.33% 0.01%
Belgium 30.69% 8.04% 3.13% 0.05% 0.96% 0.03%
Bulgaria 20.07% 3.72% 0.01% 0.04% 0.38% 0.01%
Croatia 17.95% 5.92% 0.06% 0.06% 0.68% 0.00%
Cyprus 59.74% 0.08% 1.33% 0.07% 0.02% 0.29%
Czech Republic 36.74% 3.35% 0.71% 0.04% 0.96% 0.01%
Denmark 39.77% 3.07% 0.01% 0.02% 0.53% 0.04%
Estonia 15.04% 1.80% 0.03% 0.03% 0.34% 0.10%
Finland 31.86% 0.14% 0.47% 0.07% 0.03% 0.06%
France 19.52% 1.35% 0.18% 0.07% 0.21% 0.02%
Germany 19.82% 0.62% 0.59% 0.11% 0.17% 0.11%
Greece 32.65% 5.57% 0.06% 0.07% 0.65% 0.01%
Hungary 25.17% 8.74% 0.17% 0.03% 1.09% 0.01%
Ireland 26.39% 3.58% 4.44% 0.12% 0.53% 0.00%
Italy 41.07% 4.18% 0.50% 0.04% 0.36% 0.01%
Latvia 24.57% 0.21% 0.00% 0.04% 0.10% 0.03%
Lithuania 33.39% 2.26% 2.52% 0.02% 0.58% 0.03%
Luxembourg 17.98% 1.10% 0.24% 0.16% 0.26% 0.00%
Malta 13.14% 1.10% 0.00% 0.12% 0.47% 0.00%
Netherlands 20.53% 0.35% 0.30% 0.14% 0.16% 0.13%
Poland 37.77% 1.50% 0.12% 0.06% 0.31% 0.02%
Portugal 15.35% 5.31% 0.08% 0.04% 0.53% 0.01%
Romania 38.00% 7.45% 0.11% 0.02% 0.72% 0.01%
Slovakia 26.56% 6.42% 0.01% 0.02% 1.01% 0.00%
Slovenia 16.76% 5.26% 0.00% 0.00% 1.42% 0.03%
Spain 22.06% 3.48% 1.72% 0.05% 0.54% 0.01%
Sweden 20.35% 1.82% 0.26% 0.10% 0.30% 0.01%
Iceland 24.55% 2.17% 0.10% 0.00% 0.36% 0.00%
Liechtenstein 9.09% 1.14% 0.00% 0.00% 1.14% 0.00%
Norway 31.20% 7.08% 0.11% 0.03% 1.20% 0.01%
Total EU 27.67% 1.10% 0.63% 0.07% 0.20% 0.04%
Total EEA 27.68% 1.11% 0.62% 0.07% 0.21% 0.04%

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

The final list of TTPs agreed within the Permanent Task-force in H2 2022 was used by Signatories as part of their reports from then on, as intended. The Permanent Task-force will continue to examine and update the list as necessary in light of technical advancements and evolving disinformation tactics.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

See QRE 15.1.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

All content uploaded to YouTube is subject to its Community Guidelines—regardless of how it is generated.

YouTube requires creators to disclose when they have created altered or synthetic content that is realistic, including using AI tools. YouTube also informs viewers that content may be altered or synthetic in two ways: a label may be added to the description panel indicating that some of the content was altered or synthetic, and, for certain types of content about sensitive topics, YouTube will apply a more prominent label to the video player. Examples of content that requires disclosure can be found here.

YouTube has noted feedback from its community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them. YouTube makes it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using its privacy request process. Not all content will be removed from YouTube, and YouTube will consider a variety of factors when evaluating these requests; some examples can be found here.

Additionally, YouTube has highlighted how it will build responsibility into its AI tools and features for creators. This includes significant, ongoing work to develop guardrails that will prevent its AI tools from generating the type of content that does not belong on YouTube.

YouTube works to continuously improve protections, and within YouTube, dedicated teams like the Intelligence Desk are specifically focused on adversarial testing and threat detection to ensure YouTube’s systems meet new challenges as they emerge. Content generated by YouTube’s AI tools includes a SynthID watermark; SynthID is a tool for watermarking and identifying AI-generated images. Across the industry, Google, including YouTube, continues to help increase transparency around digital content. This includes its work as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).

YouTube’s Misinformation Policies prohibit content that has been technically manipulated or doctored in a way that misleads users (usually beyond clips taken out of context) and may pose a serious risk of egregious harm. YouTube detects content that violates Community Guidelines using a combination of machine learning and human review. YouTube also has policies on: 

Refer to QRE 18.2.1 for how YouTube enforces these policies.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube has always used a combination of people and machine learning technologies to enforce its Community Guidelines. AI helps YouTube detect potentially violative content at scale, while humans provide critical oversight. AI is continuously increasing both the speed and accuracy of YouTube’s content moderation systems.

Improved speed and accuracy of YouTube’s systems also allows it to reduce the amount of harmful content human reviewers are exposed to.

Refer to QRE 14.2.1 for information on how YouTube implements and enforces its policies, including through machine learning technology.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
Google’s Threat Intelligence Group (GTIG) published its Q3 2025 and Q4 2025 Quarterly Bulletins, which provide updates on coordinated influence operation campaigns terminated on Google’s platforms.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Intelligence Group (GTIG) and Trust & Safety Teams work to monitor malicious actors around the globe, disable their accounts, and remove the content that they post, including but not limited to coordinated information operations and other operations that may affect EEA Member States. 

One of GTIG’s missions is to understand and disrupt coordinated information operations threat actors. Their work enables Google teams to make enforcement decisions backed by rigorous analysis. Their investigations do not focus on making judgements about the content on Google platforms, but rather on examining technical signals, heuristics, and behavioural patterns to assess whether activity constitutes coordinated inauthentic behaviour.

GTIG regularly publishes the TAG Bulletin, updated quarterly here, which provides updates around coordinated influence operation campaigns terminated on Google’s platforms, as well as additional periodic blog posts. GTIG also engages with other platform Signatories to receive and, when permitted by law and strictly necessary for security purposes, share information related to threat actor activity. To learn more, refer to SLI 16.1.1.

See Google’s disclosure policies about handling security vulnerabilities for developers and security professionals.

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

Google’s Threat Intelligence Group (GTIG) posts a quarterly Bulletin, which includes disclosure of coordinated influence operation campaigns terminated on Google’s products and services, as well as additional periodic blog posts. In the Bulletin, they often note when findings are similar to or supported by those reported by other platforms.

YouTube
The publicly available TAG Bulletins that were published for the reporting period show:
- The number of actions taken on YouTube channels involved in Coordinated Influence Operation Campaigns.
- The languages of the uploaded content that were part of campaigns.
- Brief descriptions of the campaigns.
- Instances when industry partners supported YouTube’s actions by providing leads.

Certain campaigns may have uploaded content in multiple languages, or in EEA languages in countries outside the EEA region. Please note that any one coordinated influence campaign may involve many languages, and the presence of content in an EEA Member State language does not necessarily entail a particular focus on that Member State.

The TAG Bulletin and periodic blog posts are Google’s, including YouTube’s, primary public source of information on coordinated influence operations and TTP-related issues.


The EU Code of Conduct on Disinformation Rapid Response System (RRS) is a collaborative initiative involving both non-platform and platform Signatories of the Code of Conduct to provide a means for cooperation and communication between them for a period ahead of, during, and after the election period.

The RRS allows non-platform Signatories of the Code of Conduct to report time-sensitive content or accounts that they deem may present serious or systemic risks to the integrity of the electoral process, and enables discussion with the platform Signatories in light of their respective policies.

The disclosures below also include reporting through the RRS of allegedly illegal content. Although the Article 16 Digital Services Act (DSA) mechanism should be used by non-platform Signatories to report allegedly illegal content, Google reviews such notifications, too, as part of the RRS, provided the non-platform Signatory has already used the Article 16 DSA mechanism to submit them and shares the appropriate notification reference with Google through the RRS.

Search
  • Czech Republic - no notifications were received through RRS.
  • Ireland - no notifications were received through RRS.
  • The Netherlands - no notifications were received through RRS.
  • Portugal - no notifications were received through RRS.
  • Moldova - no notifications were received through RRS.

YouTube
  • Czech Republic - no notifications were received through RRS.
  • Ireland - 31 notifications were received through RRS;
    • 24 flags were found to be non-violative; 
    • 7 flags led to the removal of content or accounts. 
  • The Netherlands - no notifications were received through RRS.
  • Portugal - no notifications were received through RRS.
  • Moldova - 142 notifications were received through RRS;
    • 86 flags were found to be non-violative; 
    • 56 flags led to the removal of content or accounts. 

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google’s Threat Intelligence Group (GTIG) and Trust & Safety Teams work to monitor malicious actors around the globe, disable their accounts, and remove the content that they post, including but not limited to coordinated information operations and other operations that may affect EU Member States. 

Refer to the Bulletin articles that cover the reporting period to learn more about the number of YouTube channels terminated as part of investigations into coordinated influence operations linked to Russia, Poland, and other countries around the world. 

The most recent examples of specific tactics, techniques, and procedures (TTPs) used to lure victims, as well as how Google collaborates and shares information, can be found on Google’s TAG Blog and Threat Intelligence website.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

See QRE 17.1.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube takes its responsibility seriously, outlining clear policies used to moderate content on the platform and providing tools that users can leverage to improve their media literacy and better evaluate which content and sources to trust. 

Information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about the content they are viewing. For example, topics that are more prone to misinformation may have information panels that show basic background information, sourced from independent, third-party partners, to give more context on the topic. If a user wants to learn more, the panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. 

During election periods, text-based information panels about a candidate, how to vote, and election results may also be displayed to users.

For information about YouTube’s altered and synthetic disclosures and labels, please refer to QRE 15.1.1.

Further EEA Member State coverage can be found in SLI 17.1.1.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

(1) Number of impressions on information panels during the reporting period, broken down by EEA Member State.

(2) Number of impressions on labels indicating altered or synthetic content during the reporting period, broken down by EEA Member State.

Note: Since the H2 2024 report, YouTube has derived these metrics from a subset of the data, using random sampling to estimate the number of impressions.

Country Impressions of information panels Impressions on labels indicating altered or synthetic content
Austria 35,436,900 97,074,600
Belgium 139,752,900 99,382,200
Bulgaria 52,956,700 61,140,600
Croatia 52,579,800 41,299,300
Cyprus 4,353,000 15,044,500
Czech Republic 117,762,400 108,281,500
Denmark 18,210,000 50,458,400
Estonia 16,565,900 19,797,300
Finland 16,550,200 55,616,500
France 860,771,700 700,025,700
Germany 1,817,072,400 1,053,238,700
Greece 28,401,500 86,025,500
Hungary 44,661,000 54,791,100
Ireland 66,024,000 88,340,500
Italy 355,847,700 692,892,600
Latvia 46,580,400 38,949,600
Lithuania 49,898,800 37,585,600
Luxembourg 2,629,900 7,255,000
Malta 2,572,500 6,504,700
Netherlands 457,391,700 325,040,900
Poland 199,452,000 490,280,400
Portugal 23,057,900 120,349,000
Romania 89,976,900 163,318,700
Slovakia 23,796,300 38,073,100
Slovenia 15,106,100 18,283,300
Spain 353,322,900 752,917,000
Sweden 94,865,900 112,427,500
Iceland 1,098,900 4,603,500
Liechtenstein 181,600 462,300
Norway 23,963,100 76,312,900
Total EU 4,985,597,400 5,334,393,800
Total EEA 5,010,841,000 5,415,772,500

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

Grants

In H2 2025, Google.org supported a number of organisations that seek to help build a safer online world and promote media literacy. This includes over $7M in funding to further expand access to Be Internet Awesome and other child safety curricula.

Search
Super Searchers is our scaled information literacy education program, launched globally in 2022. The program was developed in consultation with information literacy experts and is a train-the-trainer program delivered through local partner organisations. It teaches foundational information literacy skills using evidence-based approaches such as the SIFT Method (a useful framework which encourages users to Stop, Investigate the source, Find better coverage, and Trace claims back to their original context).

The content was refreshed in November 2025 to reflect innovations in Search, such as AI Mode and AI Overviews. These tools are now incorporated into the Super Searchers curriculum, which helps users identify and evaluate the accuracy of information found online. 

YouTube
YouTube remains committed to supporting efforts that deepen users’ collective understanding of misinformation. To empower users to think critically and use YouTube’s products safely and responsibly, YouTube invests in media literacy campaigns to improve users’ experiences on YouTube. In 2022, YouTube launched ‘Hit Pause’, a global media literacy campaign, which is live in all EEA Member States and all official EU languages, and has run in more than 40 additional countries around the world.

The program seeks to teach viewers critical media literacy skills through engaging and educational public service announcements (PSAs) shown on the YouTube home feed, as pre-roll ads, and on a dedicated YouTube channel. The YouTube channel hosts videos from the YouTube Trust & Safety team that explain how YouTube protects the YouTube community from misinformation and other harmful content. Additional campaign content gives members of the YouTube community the opportunity to build critical thinking skills for identifying the different manipulation tactics used to spread misinformation, from using emotional language to cherry-picking information. 

EEA Member State coverage of 'Hit Pause' media literacy impressions can be found in SLI 17.2.1.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

Media Literacy campaign impressions during the reporting period, broken down by EEA Member State.

Country Impressions from YouTube's media literacy campaigns
Austria 3,107,142
Belgium 2,038,464
Bulgaria 2,506,221
Croatia 1,941,695
Cyprus 200,595
Czech Republic 4,772,598
Denmark 1,832,197
Estonia 251,623
Finland 1,853,365
France 28,332,778
Germany 28,667,395
Greece 4,168,868
Hungary 4,028,295
Ireland 1,854,044
Italy 22,611,323
Latvia 410,984
Lithuania 976,023
Luxembourg 197,541
Malta 409,677
Netherlands 6,223,229
Poland 16,142,521
Portugal 4,463,198
Romania 7,302,666
Slovakia 2,024,229
Slovenia 687,317
Spain 23,417,687
Sweden 3,712,435
Iceland 239,963
Liechtenstein 23,369
Norway 1,559,768
Total EU 174,134,110
Total EEA 175,957,210

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube partners with media literacy experts and researchers to identify unique and engaging ways to build up the YouTube Community’s media literacy. For example, to inform the ‘Hit Pause’ global campaign, YouTube partnered with the National Association for Media Literacy Education (NAMLE), a U.S.-based organisation, to identify which competency areas the campaign should focus on. 

For additional information about YouTube’s ‘Hit Pause’ campaign, please refer to QRE 17.2.1.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

See response to QRE 14.1.1 to see how YouTube’s Community Guidelines map to the TTPs. These policies seek to, among other things, limit the spread of misleading or deceptive content that poses a serious risk of egregious harm. 

Community Guidelines Enforcement
After a creator’s first Community Guidelines violation, they will typically get a warning with no penalty to their channel. They will have the chance to take a policy training to allow the warning to expire after 90 days. Creators will also have the chance to receive a warning in a different policy category. If the same policy is violated within that 90-day window, the creator’s channel will be given a strike.

If the creator receives three strikes in the same 90-day period, their channel may be removed from YouTube. In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. YouTube may also remove content for reasons other than Community Guidelines violations, such as a first-party privacy complaint or a court order. In these cases, creators will not be issued a strike.

If a creator’s channel gets a strike, they will receive an email and can also be alerted through mobile and desktop notifications. These emails and notifications explain the action taken on their content and which of YouTube’s policies the content violated. More detailed guidelines on YouTube’s strike processes and policies can be found here.

YouTube also reserves the right to restrict a creator's ability to create content on YouTube at its discretion. A channel may be turned off or restricted from using any YouTube features. If this happens, users are prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on the YouTube channel. A violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all existing YouTube channels of the user, any new channels created or acquired, and channels in which the user is repeatedly or prominently featured.

Refer to SLI 18.2.1 on YouTube’s enforcement at an EEA Member State level.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

(1) Number of videos removed for violations of YouTube’s Misinformation Policies in the reporting period, broken down by EEA Member State;

(2) Number of videos removed for violations of YouTube’s Misinformation Policies in the reporting period, broken down by views threshold and EEA Member State.

Country Number of videos removed Number of videos removed with 0 views Number of videos removed with 1-10 views Number of videos removed with 11-100 views Number of videos removed with 101-1,000 views Number of videos removed with 1,001- 10,000 views Number of videos removed with >10,000 views
Austria 85 5 18 17 23 14 8
Belgium 34 4 11 10 5 3 1
Bulgaria 48 18 10 4 7 8 1
Croatia 7 0 3 2 2 0 0
Cyprus 17 3 5 1 4 2 2
Czech Republic 55 10 18 10 9 5 3
Denmark 12 1 7 0 2 1 1
Estonia 78 1 3 3 8 63 0
Finland 22 3 9 4 2 4 0
France 235 29 69 55 41 29 12
Germany 799 89 188 161 171 125 65
Greece 24 2 5 3 4 6 4
Hungary 16 4 2 4 4 2 0
Ireland 84 17 25 18 13 8 3
Italy 76 8 29 18 11 5 5
Latvia 34 3 7 5 10 7 2
Lithuania 17 1 3 5 3 3 2
Luxembourg 3 0 2 0 1 0 0
Malta 2 0 2 0 0 0 0
Netherlands 216 35 82 44 32 8 15
Poland 115 26 33 19 15 13 9
Portugal 50 7 14 13 12 4 0
Romania 36 7 12 7 4 3 3
Slovakia 9 4 2 2 0 1 0
Slovenia 15 4 2 1 4 4 0
Spain 648 76 154 149 145 85 39
Sweden 52 10 20 10 7 5 0
Iceland 1 0 0 0 0 0 1
Liechtenstein 0 0 0 0 0 0 0
Norway 30 15 6 5 3 1 0
Total EU 2,789 367 735 565 539 408 175
Total EEA 2,820 382 741 570 542 409 176

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

Google, including YouTube, works with stakeholders across the technology sector, government, and civil society to set good policies, remain abreast of emerging challenges, and establish, share, and learn from industry best practices and research. 

In 2024, Google published a paper on determining trustworthiness through context and provenance, showing how better assessment tools can empower people to make informed decisions about what they’re seeing on the internet.

In July 2025, Google introduced Backstory, an experimental artificial intelligence (AI) tool that surfaces information and helps people learn more about the context of images seen online.

When given an image and a written prompt, Backstory investigates whether an image was AI-generated, when and where it’s previously been used online, and whether it’s been digitally altered. It quickly equips users with helpful information, responds to further prompts, describes whether and how an image has been used, and how its story may have changed over time. Backstory also generates easy-to-read reports of its findings.

As Google continues to conduct research and develop Backstory, we are working closely with trusted testers, including content creators and expert information practitioners, who manage, organise and disseminate high-quality information. Over the past few months, we have been gathering feedback about examples, user experiences and more to improve our technology and make it more helpful. We welcomed over 140 industry practitioners to our five Backstory Signal Sessions (Buenos Aires, London, Kuala Lumpur, Coimbra (Portugal), and Delhi).

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

On YouTube, recommendations help users discover more of the videos they love, whether it is a great new recipe to try or their next favourite song. 

Users can find recommendations across the platform, including the homepage, the ‘Up Next’ panel, and the Shorts tab:

  • Homepage: A user’s homepage is what they typically see when they first open YouTube.
  • Up Next: The Up Next panel appears when a user is watching a video. It suggests additional content based on what they are currently watching and personalised signals (details below).
  • Shorts: Shorts are ranked based on their performance and personalisation. 

YouTube understands that individuals have unique viewing habits and uses signals to recommend content. YouTube’s system compares a user’s viewing habits with those of similar users, and uses that information to suggest other content.

YouTube’s recommendation system is constantly evolving, learning every day from over 80 billion pieces of information or 'signals,' the primary ones being:
  • Watch history: YouTube’s system uses the videos a user watches to give better recommendations, remember where a user left off, and more.
  • Search history: YouTube’s system uses what a user searches for on YouTube to influence future recommendations.
  • Channel subscriptions: YouTube’s system uses information about the channels a user subscribes to in order to recommend videos they may like.
  • Likes: YouTube’s system uses a user’s likes information to try to predict the likelihood that they will be interested in similar videos in the future.
  • Dislikes: YouTube’s system uses videos a user dislikes to inform what to avoid recommending in the future.
  • 'Not interested' feedback selections: YouTube’s system uses videos a user marks as 'Not interested' to inform what to avoid recommending in the future.
  • 'Don’t recommend channel' feedback selections: YouTube’s system uses 'Don’t recommend channel' feedback selections as a signal that the channel content likely is not something a user enjoyed watching.

Different YouTube features rely on certain recommendation signals more than others. For example, YouTube uses the video a user is currently watching as an important signal when suggesting a video to play next. The influence of each signal on recommendations can vary based on many variables, including but not limited to the user’s device type and the type of content they are watching. This is why the same user will see different recommendations on a mobile phone vs. a television. 

Recommendations
Recommendations connect viewers to high-quality information and complement the work done by the Community Guidelines, which define what is and is not allowed on YouTube. YouTube raises up videos in search and recommendations for certain topics where quality is key. Human evaluators, trained using publicly available guidelines, assess the quality of information from a variety of channels and videos. 

These human evaluations are used to train YouTube’s system to model their decisions, and YouTube then scales those assessments to all videos across the platform. Learn more about how YouTube elevates high-quality information on the How YouTube Works website and the YouTube Blog.

Controls to personalise recommendations 
YouTube has built controls that help users decide how much data they want to provide. Users can view, delete, or turn on or off their YouTube watch and search history whenever they want. And, if users do not want to see recommendations at all on the homepage or on the Shorts tab, they can turn off and clear their YouTube watch history. For users with YouTube watch history off and no significant prior watch history, the homepage will show the search bar and the Guide menu, with no feed of recommended videos.

Users can also tell YouTube when it is recommending something a user is not interested in. For example, buttons on the homepage and in the ‘Up next' section allow users to filter and choose recommendations by specific topics. Users can also click on 'Not interested' and/or 'Don’t recommend channel' to tell YouTube that a video or channel is not what a user wanted to see at that time, and YouTube will consider that when generating recommendations for that viewer in the future.

Additional information about how a user can manage their recommendation settings is outlined here in YouTube’s Help Centre. 

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

YouTube shares the number of Monthly Active Recipients across the official EU Member States, broken down into those that are signed in to the platform and those that are not, in the latest published report on Information about Monthly Active Recipients under the Digital Services Act (EU). Signed-in users are able to amend their settings in their YouTube or Google Accounts.

Country
Austria
Belgium
Bulgaria
Croatia
Cyprus
Czech Republic
Denmark
Estonia
Finland
France
Germany
Greece
Hungary
Ireland
Italy
Latvia
Lithuania
Luxembourg
Malta
Netherlands
Poland
Portugal
Romania
Slovakia
Slovenia
Spain
Sweden
Iceland
Liechtenstein
Norway

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

See QRE 22.7.1

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 22.7

Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.

QRE 22.7.1

Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube highlights information from high-quality, third-party sources using information panels. As users navigate YouTube, they might see a variety of different information panels. These panels provide additional context, with each designed to help users make their own decisions about the content they find. 

These information panels will show regardless of what opinions or perspectives are expressed in a video. If users want to learn more, most panels link to the third-party partner’s website.

Information panels on YouTube include, but are not limited to:
  • Panels on topics prone to misinformation: Topics that are prone to misinformation, such as the moon landing, may display an information panel at the top of search results or under a video. These information panels show basic background information, sourced from independent, third-party partners, to give more context on a topic. The panels also link to the third-party partner’s website. YouTube continues to assess and update the topics prone to misinformation that receive additional context from information panels. More details can be found here.
  • Election information panels: Election-related features are only available in select countries/regions during election cycles. Users may see candidate information panels, voting information panels, election integrity information panels, or election results information panels. More details can be found here.

Additionally, learn more about health-related information panels and crisis resource panels in YouTube’s Help Centre.

For information about YouTube’s altered and synthetic disclosures and labels, please refer to QRE 15.1.1.

Additional data points and EEA Member State coverage are provided in SLI 17.1.1.

SLI 22.7.1

Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).

Please refer to SLI 17.1.1 for relevant metrics related to impressions of information panels and impressions on labels indicating altered or synthetic content.


Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

YouTube's approach to combating misinformation involves removing content that violates YouTube’s policies, and surfacing high-quality information in ranking and recommendations. YouTube applies these principles globally, including across the EU.

Implementing and enforcing YouTube policies
Each of YouTube’s policies is carefully thought through so that it is consistent, well informed, and applicable to content from around the world. Policies are developed in consultation with a wide range of external experts, as well as YouTube Creators. New policies go through testing before they go live to ensure YouTube’s global team of content reviewers can apply them accurately and consistently. 

Flagging inappropriate or harmful content on YouTube
YouTube offers users the ability to report or flag content that they believe violates YouTube’s Community Guidelines or other policies. Users can report content using YouTube’s flagging feature, which is available to signed-in users in all EU Member States via computer (desktop or laptop), mobile devices, and other surfaces. Details on how to report different types of content using YouTube’s flagging feature are outlined in YouTube’s Help Centre.

In addition to user flagging, YouTube uses machine learning technology to flag videos for review. YouTube has developed machine learning systems that detect content that may violate its policies. In some cases, YouTube’s systems may take automated action, such as when there is high confidence that content is violative given its similarity to content that was previously removed.

YouTube relies on this combination of people and machine learning technology to flag inappropriate content and enforce YouTube’s community guidelines. 

Information about YouTube’s content moderation efforts, specifically regarding human resources dedicated to content moderation across the official EU Member State languages, can be found in relevant sections of the VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Reporting illegal content
While YouTube’s Community Guidelines are policies that apply globally, YouTube is available in more than 100 different countries; therefore, processes are in place to review and appropriately act on requests from users, courts, and governments about content that violates local laws. Users can report illegal content using webforms dedicated to specific legal issues such as trademark, copyright, counterfeit and defamation. Webforms may also be accessed via the flagging feature after selecting Legal Issue as the report reason. Users can learn more about YouTube’s legal policies and how to report legal violations here in YouTube’s Help Centre.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Content can be flagged by YouTube users, YouTube’s machine learning technology, and human content moderators. In YouTube’s Terms of Service, all users agree not to 'misuse any reporting, flagging, complaint, dispute, or appeals process, including by making groundless, vexatious, or frivolous submissions'.

Additionally, YouTube ensures integrity of its systems through: 
  • Having a dedicated team to identify and mitigate the impact of sophisticated bad actors on YouTube at scale, while protecting the broader community;
  • Partnering with Google’s Threat Intelligence Group (GTIG) and Trust & Safety Teams to monitor malicious actors around the globe, disable their accounts, and remove the content that they post (See QRE 16.1.1 and QRE 16.2.1);
  • Legal protections, such as those found in the Digital Services Act;
  • Educating users about Community Guidelines violations through its guided policy experience;
  • Providing clear communication on appeals processes and notifications, and regular policy updates on its Help Centre; and, 
  • Investing in automated systems to provide efficient detection of content to be evaluated by human reviewers.

Where appropriate, YouTube makes it clear to users that it has taken action on their content and provides them the opportunity to appeal that decision.

For more detailed information about YouTube’s complaint handling systems (i.e. appeals), please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

As noted in QRE 18.2.1, if a creator’s channel gets a strike, they will receive an email, and can have notifications sent to them through their mobile and desktop notifications. The emails and notifications received by the creator explain what content was removed or age restricted, which policies the content violated, how it affects the user’s channel, and what the creator can do next. More detailed guidelines on YouTube’s processes and policies on strikes are available here.

Sometimes a single case of severe abuse will result in channel termination without warning.

The appeals processes below, which are outlined in the YouTube Help Centre, are available in all Member States: 

After a creator submits an appeal
After a creator submits an appeal, they will receive an email from YouTube informing them of the outcome. One of the following will happen:

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, YouTube will reinstate it and remove the strike from their channel. If a user appeals a warning and the appeal is granted, the next offence will be a warning.

  • If YouTube finds that a user’s content followed YouTube’s Community Guidelines, but is not appropriate for all audiences, YouTube will apply an age-restriction. If it is a video, it will not be visible to users who are signed out, are under 18 years of age, or have Restricted Mode turned on. If it is a custom thumbnail, it will be removed.

  • If YouTube finds that a user’s content was in violation of YouTube’s Community Guidelines, the strike will stay and the video will remain down from the site. There is no additional penalty for appeals that are rejected.

For a more granular Member State level breakdown, refer to SLI 24.1.1.

For more information about YouTube’s median time needed to action a complaint, please see the latest VLOSE/VLOP Transparency Report under the European Union Digital Services Act (EU DSA).

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

(1) Video appeals against content removals for violations of YouTube’s Misinformation Policies in the reporting period, broken down by EEA Member State;

(2) Video reinstatements following a successful appeal against content removals for violations of YouTube’s Misinformation Policies in the reporting period, broken down by EEA Member State.

Country Number of videos removed that were subsequently appealed Number of videos removed that were then reinstated following a creator's appeal
Austria 22 5
Belgium 12 4
Bulgaria 11 0
Croatia 0 0
Cyprus 9 1
Czech Republic 12 5
Denmark 1 1
Estonia 14 1
Finland 9 0
France 98 38
Germany 202 54
Greece 8 1
Hungary 6 2
Ireland 40 13
Italy 60 39
Latvia 11 2
Lithuania 4 0
Luxembourg 1 0
Malta 0 0
Netherlands 55 22
Poland 51 13
Portugal 11 2
Romania 19 9
Slovakia 4 3
Slovenia 5 2
Spain 133 36
Sweden 10 2
Iceland 0 0
Liechtenstein 0 0
Norway 7 5
Total EU 808 255
Total EEA 815 260
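
As a consistency check, the EU and EEA totals above can be reproduced from the per-country rows. A minimal sketch in Python (figures transcribed from this SLI; the reinstatement rate at the end is an illustrative derived metric, not one reported by YouTube):

```python
# Per-country figures transcribed from SLI 24.1.1 above:
# (appealed removals, reinstatements following appeal)
eu_rows = {
    "Austria": (22, 5), "Belgium": (12, 4), "Bulgaria": (11, 0),
    "Croatia": (0, 0), "Cyprus": (9, 1), "Czech Republic": (12, 5),
    "Denmark": (1, 1), "Estonia": (14, 1), "Finland": (9, 0),
    "France": (98, 38), "Germany": (202, 54), "Greece": (8, 1),
    "Hungary": (6, 2), "Ireland": (40, 13), "Italy": (60, 39),
    "Latvia": (11, 2), "Lithuania": (4, 0), "Luxembourg": (1, 0),
    "Malta": (0, 0), "Netherlands": (55, 22), "Poland": (51, 13),
    "Portugal": (11, 2), "Romania": (19, 9), "Slovakia": (4, 3),
    "Slovenia": (5, 2), "Spain": (133, 36), "Sweden": (10, 2),
}
eea_only_rows = {"Iceland": (0, 0), "Liechtenstein": (0, 0), "Norway": (7, 5)}

total_eu_appealed = sum(a for a, _ in eu_rows.values())        # 808
total_eu_reinstated = sum(r for _, r in eu_rows.values())      # 255
total_eea_appealed = total_eu_appealed + sum(a for a, _ in eea_only_rows.values())    # 815
total_eea_reinstated = total_eu_reinstated + sum(r for _, r in eea_only_rows.values())  # 260

# Illustrative derived metric (not reported in the SLI): share of appealed
# removals that were reinstated.
eu_reinstatement_rate = total_eu_reinstated / total_eu_appealed
```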

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

Google Researcher Program
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube provide eligible researchers (including non-academics that meet predefined eligibility criteria) with access to limited metadata scraping for public data. For researchers who are not affiliated with an EU institution and don’t meet the qualifications for the EU program, Google also offers a global alternative. This program aims to enhance the public’s understanding of Google’s services and their impact. For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API for eligible academic researchers from around the world, who are affiliated with an accredited, higher-learning institution. Learn more about the data available in the YouTube API reference.

Transparency into paid content on YouTube
YouTube provides users with a bespoke front-end search page to access publicly available data containing organic content with paid product placements, sponsorships and endorsements as disclosed by creators. This enables users to understand that creators may receive goods or services in exchange for promotion. This search page complements YouTube’s existing process of displaying a disclosure message when creators disclose to YouTube that their content contains paid promotions. Learn more about adding paid product placements, sponsorships & endorsements here.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Google Researcher Program
Approved researchers will receive permissions and access to public data for Search and YouTube in the following ways: 
  • Search: Access to an API for limited scraping with a budget for quota;
  • YouTube: Permission for scraping limited to metadata.

For additional details, see the Researcher Program landing page.

YouTube Researcher Program
The YouTube Researcher Program provides scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. The program allows eligible academic researchers around the world to independently analyse the data they collect, including generating new/derived metrics for their research. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data.
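
For illustration, public metadata of the kind listed above is served by the YouTube Data API v3 videos.list endpoint. The sketch below builds such a request URL and extracts the listed fields from a response of the documented shape; the API key and the mocked response are illustrative assumptions, and access and quotas under the Researcher Program differ from those of the public API.

```python
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3/videos"

def build_videos_request(video_id: str, api_key: str) -> str:
    """Build a YouTube Data API v3 videos.list URL requesting snippet + statistics."""
    params = {"part": "snippet,statistics", "id": video_id, "key": api_key}
    return f"{API_BASE}?{urlencode(params)}"

def extract_metadata(response: dict) -> list:
    """Pull out fields named in QRE 26.1.2 (title, description, views, likes, comments)."""
    records = []
    for item in response.get("items", []):
        snippet = item.get("snippet", {})
        stats = item.get("statistics", {})  # the API returns counts as strings
        records.append({
            "title": snippet.get("title"),
            "description": snippet.get("description"),
            "channel": snippet.get("channelTitle"),
            "views": int(stats.get("viewCount", 0)),
            "likes": int(stats.get("likeCount", 0)),
            "comments": int(stats.get("commentCount", 0)),
        })
    return records

# Mocked response (no network call); the shape follows the YouTube API reference.
sample_response = {"items": [{
    "snippet": {"title": "Example video", "description": "An example.",
                "channelTitle": "Example Channel"},
    "statistics": {"viewCount": "1200", "likeCount": "34", "commentCount": "5"},
}]}
records = extract_metadata(sample_response)
```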

Transparency into paid content on YouTube
The information provided via the bespoke front end search page allows users to view videos with active paid product placements, sponsorships, and endorsements that have been declared on YouTube.
  • Paid product placements
    • Videos about a product or service because there is a connection between the creator and the maker of the product or service;
    • Videos created for a company or business in exchange for compensation or free of charge products/services; 
    • Videos where that company or business’s brand, message, or product is included directly in the content and the company has given the creator money or free of charge products to make the video.
  • Endorsements - Videos created for an advertiser or marketer that contains a message that reflects the opinions, beliefs, or experiences of the creator.
  • Sponsorships - Videos that have been financed in whole or in part by a company, without integrating the brand, message, or product directly into the content. Sponsorships generally promote the brand, message, or product of the third party.

Definitions can be found on the YouTube Help Centre.

Additional data points are provided in SLI 26.1.1 and 26.2.1.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Total number of unique researchers accessing the YouTube Researcher Program API during the reporting period, broken down by EEA Member States.

  • Researchers accessing the Researcher Program API during the reporting period may have been approved before the reporting period. There can be more than one researcher per application. 

Country Number of unique researchers accessing the YouTube Researcher API
Austria 2
Belgium 2
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 1
Denmark 2
Estonia 0
Finland 2
France 5
Germany 16
Greece 0
Hungary 0
Ireland 0
Italy 8
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 4
Poland 0
Portugal 0
Romania 1
Slovakia 0
Slovenia 0
Spain 26
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0
Total EU 69
Total EEA 69

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

Please refer to QRE 26.1.1 and QRE 26.1.2.

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Google Researcher Program
The Google Researcher Program, which includes YouTube, has a 3-step application process:

  1. Review and confirm the applicant’s eligibility;
  2. Submit an application, which requires a Google account;
  3. If approved, the applicant gains permission to access public data relevant to their research.

Once an application has been submitted, accepted researchers will be notified via email. 

YouTube Researcher Program
The YouTube Researcher Program has a 3-step application process: 

  1. YouTube verifies the applicant is an academic researcher affiliated with an accredited, higher-learning institution;
  2. The Researcher creates an API project in the Google Cloud Console and enables the relevant YouTube APIs. They can learn more by visiting the enabled APIs page;
  3. The Researcher applies with their institutional email (e.g. with a .edu suffix), includes as much detail as possible, and confirms that all of their information is accurate.

Once an application has been submitted, YouTube’s operations team will conduct a review and let applicants know if they are accepted into the program. 

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

(1-4) Applications received, approved, rejected or under review for the YouTube Researcher Program in the reporting period, broken down by EEA Member States.

(5) Median application resolution time in days in the reporting period, reported at the EU and EEA level.

Refer to SLI 26.1.1 for the total number of unique researchers accessing the YouTube Researcher Program API.

Please note the following:
  • Cells with '0' under applications received signify that there were no applications submitted by a researcher from that country. Similarly, cells with '0' signify that there were no applications approved, rejected, or under review for that country.
  • Median Application Resolution time is the median number of days from application creation to application resolution, which may include communication back and forth with the applicant. This metric does not reflect YouTube’s first response back to the applicant.

Country Applications Received Applications Approved Applications Rejected Applications under Review Median application resolution time
Austria 2 2 0 0 -
Belgium 0 0 0 0 -
Bulgaria 0 0 0 0 -
Croatia 0 0 0 0 -
Cyprus 0 0 0 0 -
Czech Republic 3 2 1 0 -
Denmark 2 2 0 0 -
Estonia 0 0 0 0 -
Finland 3 3 0 0 -
France 0 0 0 0 -
Germany 10 8 1 1 -
Greece 0 0 0 0 -
Hungary 0 0 0 0 -
Ireland 1 0 1 0 -
Italy 2 2 0 0 -
Latvia 0 0 0 0 -
Lithuania 0 0 0 0 -
Luxembourg 0 0 0 0 -
Malta 0 0 0 0 -
Netherlands 4 4 0 0 -
Poland 1 0 1 0 -
Portugal 0 0 0 0 -
Romania 0 0 0 0 -
Slovakia 0 0 0 0 -
Slovenia 0 0 0 0 -
Spain 50 31 18 1 -
Sweden 1 1 0 0 -
Iceland 0 0 0 0 -
Liechtenstein 0 0 0 0 -
Norway 0 0 0 0 -
Total EU 79 55 22 2 10.0 Days
Total EEA 79 55 22 2 10.0 Days
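
The median application resolution time reported above follows the definition in the note (median number of days from application creation to resolution). A minimal sketch of that computation; the three timestamps are hypothetical examples, not program data:

```python
from datetime import date
from statistics import median

def median_resolution_days(applications) -> float:
    """Median of (resolved - created) in days, per the SLI 26.2.1 definition."""
    durations = [(resolved - created).days for created, resolved in applications]
    return float(median(durations))

# Hypothetical applications: (created, resolved)
apps = [
    (date(2026, 1, 5), date(2026, 1, 12)),   # 7 days
    (date(2026, 1, 10), date(2026, 1, 20)),  # 10 days
    (date(2026, 2, 1), date(2026, 2, 15)),   # 14 days
]
print(median_resolution_days(apps))  # prints 10.0
```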

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

Google Researcher Program
For the Google Researcher Program, the most up-to-date information is captured in the program description on the Transparency Centre, and also on the Acceptable Use Policy page. Google Search has additional Help Centre support via their Search Researcher Result API guidelines.

YouTube Researcher Program
For the YouTube Researcher Program, there is support available via email. Researchers can contact YouTube, with questions and to report technical issues or other suspected faults, via a unique email alias, provided upon acceptance into the program. Questions are answered by YouTube’s Developer Support team and by other relevant internal parties as needed.

Google is not aware of any malfunctions during the reporting period that would have prevented access to these reporting systems.

Commitment 28

COOPERATION WITH RESEARCHERS

Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Search & YouTube
  • In October 2025, Google announced the recipients of the 2025 Google Academic Research Awards (GARA), committing $5.6 million to support 56 projects led by 84 researchers across 12 countries. Each recipient received up to $100,000 USD in funding and is paired with a Google research sponsor.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

Google has a longstanding commitment to transparency, and has led the way in transparency reporting of content removals and government requests for user data for more than a decade. 

Google
Eligible EU researchers can apply for access to publicly available data across some of Google’s products, including Search and YouTube, through the Google Researcher Program. Search and YouTube will provide eligible researchers (including non-academics that meet predefined eligibility criteria) with access to limited metadata scraping for public data. This program aims to enhance the public’s understanding of Google’s services and their impact. 

Google has teams that operate the Google Researcher Program. They manage the researcher application process and evaluate potential updates and developments for the Google Researcher Program. Additional information can be found on the Google Transparency Centre. Google Search has additional Help Centre support via their Search Researcher Result API guidelines.

Additionally, Google’s partnership with Lumen is an independent research project managed by the Berkman Klein Center for Internet & Society at Harvard Law School. The Lumen database houses millions of content takedown requests that have been voluntarily shared by various companies, including Google. Its purpose is to facilitate academic and industry research concerning the availability of online content. As part of Google’s partnership with Lumen, information about the legal notices Google receives may be sent to the Lumen project for publication. Google informs users about its Lumen practices under the 'Transparency at our core' section of the Legal Removals Help Centre. Additional information on Lumen can be found here.

YouTube
The YouTube Researcher Program provides eligible academic researchers from around the world with scaled, expanded access to global video metadata across the entire public YouTube corpus via a Data API. Information available via the Data API includes video title, description, views, likes, comments, channel metadata, search results, and other data. (See YouTube API reference for more information).

YouTube has teams that operate the YouTube Researcher Program. They manage the researcher application process and provide technical support throughout the research project. They also evaluate potential updates and developments for the YouTube Researcher Program. Researchers can obtain support via the channels described in QRE 26.3.1.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

See response to QRE 28.1.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share at this time.

Google continues to engage constructively with the Code of Conduct’s Permanent Task-force and with the European Digital Media Observatory (EDMO). 

Additionally, refer to QRE 26.1.1 to learn more about how Google, including YouTube, provides opportunities for researchers on its platforms.

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

In 2021, Google committed €25M to help launch the European Media & Information Fund (EMIF) and a final scheduled payment was made in February 2026. Overall, 121 projects related to information quality received grants across 28 countries (including 26 EEA Member States).

The EMIF was established by the European University Institute and the Calouste Gulbenkian Foundation. The European Digital Media Observatory (EDMO) agreed to play a scientific advisory role in the evaluation and selection of projects that will receive the fund’s support, but does not receive Google funding. Google has no role in the assessment of applications. 

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in the reporting period, Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training – with a strong focus on helping people navigate AI-generated content. 

Mitigations in place

Across Google, various teams support democratic processes by connecting people to election information, such as practical tips on how to register to vote, or by providing high-quality information about candidates. In 2025, a number of key elections took place around the world and across the EU in particular. During the reporting period, voters cast their votes in Moldova, the Czech Republic, Portugal, Ireland and the Netherlands. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with best-in-class security tools and training. Across its efforts, Google also maintained an increased focus on the role artificial intelligence (AI) can play in the disinformation landscape — while also leveraging AI models to augment Google’s abuse-fighting efforts. 

Safeguarding Google platforms and disrupting the spread of disinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat disinformation. 
  • Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
  • Working with the wider ecosystem: Since Google’s inaugural commitment of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 121 projects have been funded across 28 countries so far.

Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include: 
  • Ads disclosures: Google expanded its Political Content Policies in November 2023 to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content. In September 2025, Google updated its Political Content Policies to restrict political advertising in the European Union.
  • Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. YouTube also requires creators to disclose when they have created realistic altered or synthetic content, and displays a label to indicate when the content people are watching is synthetic. For sensitive content, including election-related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
  • Additional context for users: 'About This Image' in Search helps people assess the credibility and context of images found online. Google continues to look at ways to integrate integrity signals more directly throughout the Search experience, with a view to providing users with the context needed to make informed decisions about the information they see online. For example, Google is exploring embedding image provenance into Google Search features to enable users to check image provenance more seamlessly.
  • Industry collaboration: Google is a member of the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry standards effort to help provide more transparency and context for people on AI-generated content. 

Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways Google makes it easy for people to find what they need, and which were deployed during elections that took place across the EU in 2025: 
  • High-quality Information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
  • Ongoing transparency on Election Ads: Starting September 2025, Google restricted political advertising in the European Union under new regulations. Since mid-August 2025, advertisers have been asked to declare if they intend to run political advertising. EU Election Ads previously shown in the Political Ads Transparency Report will remain publicly accessible in the Ads Transparency Centre, subject to retention policies.

Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services. 
  • Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
  • Tackling coordinated influence operations: Google’s Threat Intelligence Group (GTIG) helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share threat information and flag suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.

Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Conduct on Disinformation.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 50.1.1

N/A

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2

N/A

Rationale - 50.1.3

N/A

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.4.1

YouTube works quickly to remove content that violates its policies. These policies apply to all forms of content, including videos, livestreams and comments, and YouTube’s policies are enforced across languages and locales.

Description of intervention - 50.4.2

YouTube continues to assess, evaluate, and update its policies on a regular basis; the latest policy updates, including to the Community Guidelines, can be found here.

Indication of impact - 50.4.3

See Commitment 14 in the EU Code of Conduct Transparency Report for more details on this effort.

Specific Action applied - 50.4.4

YouTube creators are required to disclose when they upload a video that contains realistic altered or synthetic content, after which YouTube adds a transparency label so that viewers have this important context. 

Description of intervention - 50.4.5

See Commitment 15 in the EU Code of Conduct Transparency Report for details on how YouTube approaches responsible AI innovation, which may be applied to future elections.

Indication of impact - 50.4.6

See Commitment 17 in the EU Code of Conduct Transparency Report for more details on this effort.

Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.5.1

YouTube’s systems prioritise connecting viewers with high-quality information, including on events such as elections in the EU.

Description of intervention - 50.5.2

YouTube’s recommendation system prominently surfaces news from high-quality sources on the homepage, in search results and the 'Up Next' panel. YouTube’s systems do this across every country where YouTube operates.

YouTube’s Top News and Breaking News shelves surface at the top of search results, prominently featuring content from high-quality news sources, which may include information about EU elections.

Indication of impact - 50.5.3

See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 50.5.4

Election information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about election related content they are viewing.

Description of intervention - 50.5.5

Information panels may appear alongside search results and below relevant videos to provide more context and to help people make more informed decisions about the content they are viewing. During election periods, text-based information panels about a candidate, how to vote, and election results may also be displayed to users.

Indication of impact - 50.5.6

See Commitment 17 in the EU Code of Conduct Transparency Report for more details on this effort.

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 50.6.1

YouTube established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 50.6.2

See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides eligible academic researchers access to global video metadata, which may be applied to EU elections during the reporting period. 

Indication of impact - 50.6.3

See Commitment 26 for metrics on these efforts.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War in Ukraine

Overview
In response to the ongoing war in Ukraine, which has continued through 2025, Google remains committed to help by providing cybersecurity and humanitarian assistance, and providing high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:

  1. Continued online services manipulation and coordinated influence operations;
  2. Advertising and monetisation linked to state-backed disinformation relating to Russia and Ukraine;
  3. Threats to security and protection of digital infrastructure.


Israel-Gaza conflict

Overview
In response to the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure platforms and partnerships are responsive to the current crisis, and counter the threat of disinformation. Google identified a few areas of focus for addressing the ongoing crisis:

  • Humanitarian and relief efforts;
  • Platforms and partnerships to protect our services from coordinated influence operations, hate speech, and graphic and terrorist content.

Mitigations in place

War in Ukraine

The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.

1. Online services manipulation and malign influence operations
Google’s Threat Intelligence Group (GTIG) is helping Ukraine by monitoring the threat landscape in Eastern Europe and disrupting coordinated influence operations from Russian threat actors. 

2. Advertising and monetisation linked to disinformation relating to Russia and Ukraine
During the reporting period, Google continued to pause the majority of commercial activities in Russia – including ads serving in Russia via Google demand and third-party bidding, ads on Google’s properties and networks globally for all Russian-based advertisers, AdSense ads on state-funded media sites, and monetisation features for YouTube viewers in Russia. Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube are still operating in Russia. Google will continue to closely monitor developments.

3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats. 

GTIG has been tracking threat actors, both before and during the war, and sharing their findings publicly and with law enforcement. GTIG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, nonprofit organisations, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns. 

Future measures
Google aims to continue the following approach when responding to future crisis situations: 
  • Elevate access to high-quality information across Google services;
  • Protect Google users from harmful disinformation;
  • Continue to monitor and disrupt cyber threats;
  • Explore ways to provide assistance to support the affected areas more broadly.

Google will continue to monitor the situation and take additional action as needed.


Israel-Gaza conflict

Humanitarian and relief efforts
Google.org has provided more than $18 million to nonprofits providing relief to civilians affected in Israel and Gaza. This includes more than $11 million raised globally by Google employees with company match and $1 million in donated Search Ads to nonprofits so they can better connect with people in need and provide information to those looking to help. Google also provided $6 million in Google.org grant funding, including $3 million to Natal, an apolitical nonprofit organisation focused on psychological treatment of victims of trauma. The remaining funds went to organisations focused on humanitarian aid and relief in Gaza, including $1 million each to Save the Children, the Palestinian Red Crescent and International Medical Corps.

Specifically, Google’s humanitarian and relief efforts with these organisations include: 
  • Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With Google’s funding, Natal was able to scale its support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support to direct and indirect victims of trauma due to terror and war in Israel. 
  • [See two-year detailed report] After more than two years and thanks to Google’s support, International Medical Corps continues to deliver lifesaving health and humanitarian services across Gaza. In addition to the two field hospitals it has been operating in Deir al-Balah and Al Zawaida, the organisation opened a third field hospital in Gaza City in November 2025, significantly expanding access to critical care for civilians in the north. As of late January 2026, International Medical Corps has: 
    • Provided 533,119 outpatient consultations;
    • Performed more than 19,771 surgeries;
    • Supported 9,238 deliveries, including 1,930 caesarean sections;
    • Screened 154,473 children under 5 and pregnant and lactating women for malnutrition; and much more. 


Platforms and partnerships
As the conflict continues, Google is committed to tackling disinformation, hate speech, graphic content and terrorist content by continuing to find ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts, including emergency translations, and localising Google content to help users, businesses and nonprofit organisations. Google has also pledged to help its partners in these extraordinary circumstances. For example, when schools closed in October 2023, the Ministry of Education in Israel used Meet as its core teach-from-home platform and Google provided support. Google has been in touch with Gaza-based partners and participants in its Palestine Launchpad program, its digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 51.1.1

War in Ukraine: Enforcement of existing policies

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2

War in Ukraine: YouTube continued to enforce all Community Guidelines policies during the war in Ukraine.

Rationale - 51.1.3

War in Ukraine: No changes to YouTube’s Community Guidelines and to Terms and Conditions were made as a result of the war in Ukraine during this reporting period. YouTube continues to enforce all policies, including the ones mentioned in this report.

Policy - 51.1.4

Israel-Gaza conflict: Enforcement of existing policies

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5

Israel-Gaza conflict: YouTube’s Hate Speech Policy prohibits content denying, trivialising, or minimising violent historical events, including the 7 October Hamas attacks in Israel. YouTube relies on a variety of factors to determine whether a major violent event is covered, using guidance from outside experts and governing bodies to inform its approach.

Rationale - 51.1.6

Israel-Gaza conflict: No changes to YouTube Community Guidelines and to Terms and Conditions were made as a result of the Israel-Gaza conflict. YouTube continues to enforce all policies, including the ones mentioned in this report.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.4.1

War in Ukraine: YouTube continues to enforce its Community Guidelines, including but not limited to misinformation policies, which establish what type of content and behaviour is not allowed on the platform.

Description of intervention - 51.4.2

War in Ukraine: See Commitment 14 in the EU Code of Conduct Transparency Report for information on how YouTube enforces its Community Guidelines.

Indication of impact - 51.4.3

War in Ukraine: From 24 February 2022 through 30 June 2025, YouTube took the following actions related to the ongoing war in Ukraine:
  • Removed over 160,000 videos and over 12,000 channels.
  • Blocked over 5.9 million videos and over 1,000 channels. 

Since June 2025, YouTube’s enforcement continues within its standard enforcement systems, which detect violations of its content policies, including those pertaining to misinformation, hate speech, and graphic violence. This data can be found in the Removal section of YouTube's Community Guidelines Transparency Report.

Specific Action applied - 51.4.4

Israel-Gaza conflict: YouTube’s teams have been working quickly to remove content that violates its policies including those pertaining to hate speech, violent extremism, violent or graphic content, harassment, and misinformation. These policies apply to all forms of content, including videos, livestreams and comments, and YouTube’s policies are enforced across languages and locales.

Description of intervention - 51.4.5

Israel-Gaza conflict: 
  • Per YouTube’s Hate Speech Policy, content that promotes violence or hatred against groups based on their ethnicity, nationality, race or religion is not allowed on YouTube. This includes Jewish, Muslim, and other religious or ethnic communities.
  • Per YouTube’s Violent Extremist Policy, content that praises, promotes or in any way aids violent criminal organisations is prohibited. Additionally, content produced by designated terrorist organisations, such as a Foreign Terrorist Organisation (U.S.), or organisation identified by the United Nations, is not allowed on YouTube. This includes content produced by Hamas and Palestinian Islamic Jihad (PIJ). 
    • In addition, YouTube has a dedicated button underneath every video on YouTube to flag content with the option to mark it as 'promotes terrorism.' 
  • Per YouTube’s Violent or Graphic Content Policies, YouTube prohibits violent or gory content intended to shock or disgust viewers. Additionally, content encouraging others to commit violent acts against individuals or a defined group of people, including the Jewish, Muslim and other religious communities, is not allowed on YouTube.
  • Per YouTube’s Harassment Policies, content that promotes harmful conspiracy theories or targets individuals based on their protected group status is not allowed on YouTube. Additionally, content that realistically simulates deceased minors, or victims of deadly or well-documented major violent events, describing their death or the violence they experienced is not allowed on YouTube.
  • Per YouTube’s Misinformation Policies, content containing certain types of misinformation that can cause real-world harm, including certain types of misattributed content, is not allowed on YouTube.

Indication of impact - 51.4.6

Israel-Gaza conflict: From 6 October 2023 through 30 June 2025, YouTube took the following actions after the terrorist attack by Hamas in Israel and the escalated conflict now underway in Israel and Gaza: 
  • Removed over 140,000 videos and over 6,000 channels;
  • Removed over 500 million comments.

Since June 2025, YouTube’s enforcement continues within its standard enforcement systems, which detect violations of its content policies, including those pertaining to misinformation, hate speech, and graphic violence. This data can be found in the Removal section of YouTube's Community Guidelines Transparency Report.

Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.5.1

War in Ukraine: YouTube continues its ‘Hit Pause’ global media literacy campaign, to teach viewers critical skills and to improve users’ experiences on YouTube.

Description of intervention - 51.5.2

War in Ukraine: See Commitment 17 in the EU Code of Conduct Transparency Report for details on how YouTube’s ‘Hit Pause’ campaign has been teaching viewers critical media literacy skills. These skills are important in all crisis situations, including the war in Ukraine. 

Indication of impact - 51.5.3

War in Ukraine: See Commitment 17 for metrics on these efforts.

Specific Action applied - 51.5.4

War in Ukraine: YouTube continues to surface videos from high-quality sources in search results and recommendations.

Description of intervention - 51.5.5

War in Ukraine: See Commitments 17 and 18 in the EU Code of Conduct Transparency Report for details on how YouTube surfaces videos from high-quality sources in search results and recommendations. These high-quality sources are important in all crisis situations, including the war in Ukraine.

Indication of impact - 51.5.6

War in Ukraine: See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 51.5.7

War in Ukraine: YouTube continues to provide features to enhance access to high-quality information, including Information Panels, on YouTube.

Description of intervention - 51.5.8

War in Ukraine: See Commitments 17 and 18 in the EU Code of Conduct Transparency Report for details on how YouTube enhances access to high-quality information, including information panels on topics prone to misinformation.

Indication of impact - 51.5.9

War in Ukraine: See Commitments 17 and 18 for metrics on these efforts.

Specific Action applied - 51.5.10

Israel-Gaza conflict: YouTube is continuing to actively surface high-quality news content in search results for queries about Israel and Gaza, including through its breaking news and top news shelves.  

Description of intervention - 51.5.11

Israel-Gaza conflict: YouTube’s recommendation system is prominently surfacing news from high-quality sources on the homepage, in search results and the 'Up Next' panel. YouTube’s systems do this across every country where YouTube operates.

YouTube’s Top News and Breaking News shelves are surfacing at the top of search results related to the attacks in Israel and on the homepage, prominently featuring content from high-quality news sources.

Indication of impact - 51.5.12

Israel-Gaza conflict: See Commitments 17 and 18 for metrics on these efforts.

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Specific Action applied - 51.6.1

War in Ukraine: YouTube established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 51.6.2

War in Ukraine: See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides eligible academic researchers access to global video metadata, which may include content about the ongoing war in Ukraine. 

Indication of impact - 51.6.3

War in Ukraine: See Commitment 26 for metrics on these efforts.

Specific Action applied - 51.6.4

Israel-Gaza conflict: YouTube established the YouTube Researcher Program, which continues to provide scaled, expanded access to global video metadata via a Data API for verified and affiliated academic researchers.

Description of intervention - 51.6.5

Israel-Gaza conflict: See Commitments 26 and 28 in the EU Code of Conduct Transparency Report for details on how YouTube provides eligible academic researchers access to global video metadata, which may be applied to the ongoing conflict in Israel and Gaza. 

Indication of impact - 51.6.6

Israel-Gaza conflict: See Commitment 26 for metrics on these efforts.