YouTube

Report March 2025


Executive summary


Google’s mission is to organise the world’s information and make it universally accessible and useful. To deliver on this mission, elevating high-quality information and enhancing information quality across our services is of utmost importance. Since Google was founded, Google’s product, policy, and content enforcement decisions have been guided by the following three principles:

1. Value openness and accessibility: Aim to provide access to an open and diverse information ecosystem, while maintaining a responsible approach to supporting information quality;

2. Respect user choice: Based on users’ intent, provide access to content that is not illegal or prohibited by Google’s policies, but set a high bar for information quality where users have not clearly expressed what they are looking for;

3. Build for everyone: Take into account the diversity of users (cultures, languages, backgrounds) and seek to address their needs appropriately.

With these principles in mind, Google has teams around the world working to combat harmful misinformation. Google has long invested in ranking systems that seek to connect people with high-quality content; in developing and enforcing rules that prohibit harmful behaviours and content on Google services; and in innovative ways to provide context to users when they might need it most. We realise that fundamental rights are interdependent and are sometimes in tension with each other. When efforts to protect or advance one right may result in limiting another right, we identify and implement mitigation measures to address potential adverse impacts, such as protecting freedom of expression via appeals mechanisms or raising high-quality content to address lower-quality content that may appear on the platform. We comply with applicable laws by removing illegal content. We also remove content that violates our policies, and regularly evolve these policies in consultation with experts. Our work is not done, and we expect to continue improving upon these efforts in the future.

However, we are cognisant that these are complex issues, affecting all of society, which no single actor is in a position to fully tackle on their own. That is why we have welcomed the multi-stakeholder approach put forward by the EU Code of Practice on Disinformation. 

As the EU Code of Practice on Disinformation is being brought under the EU Digital Services Act (DSA) framework, Google has revised its subscription to focus on reasonable, proportionate and effective measures to mitigate systemic risks related to disinformation that are tailored to our services. Accordingly, Google has exited certain commitments that are not relevant, practicable or appropriate for its services, including all commitments under the Political Advertising and Fact-Checking chapters.

Alongside our participation in the EU Code of Practice on Disinformation, we will continue to work closely with regulators to ensure that our services appropriately comply with the DSA, in full respect of EU fundamental rights such as freedom of expression. The work of supporting a healthy information ecosystem is never finished and we remain committed to it. This is in our interest and the interest of our users.

This report includes metrics and narrative detail for Google Search, YouTube, and Google Advertising users in the European Union (EU), and covers the period from 1 July 2024 to 31 December 2024.

Updates to highlight in this report include (but are not limited to): 

  • 2024 EU Elections: In 2024, a number of elections took place around the world. In H2 2024, voters cast their ballots in the Romanian presidential election and in the second round of the French legislative election. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse and equipping campaigns with the best-in-class security tools and training. In addition, Google put in place a number of policies and other measures that have helped people navigate political content that was AI-generated, including ad disclosures, content labels on YouTube, and digital watermarking tools.

  • Supporting Researchers in Technology Related to Trust & Safety: Google has continued to demonstrate its commitment to empowering the research community by hosting workshops with researchers and providing grants to support research efforts related to Trust & Safety areas of interest. These Trust & Safety workshops aim to build relationships among scholars working in different fields and to share projects and insights across the broader Trust & Safety ecosystem. We are also committed to supporting researchers financially: through the Trust & Safety Research Awards, Google provides unrestricted grants for research across Trust & Safety areas of interest in technology. This program, run in partnership with University Relations, is one of Google’s largest opportunities to partner with external researchers on priority Trust & Safety topics. Similarly, we announced the first-ever winners of the Google Academic Research Awards (GARA) program in October 2024. In this first funding cycle, the program will support 95 projects led by 143 researchers globally, whose work aligns with Google’s commitment to responsible innovation.

  • Advances in Artificial Intelligence (AI): In H1 2024, we announced new AI safeguards to help protect against misuse. We introduced SynthID, a technology that adds imperceptible watermarks to AI-generated images and audio so they are easier to identify; this year, we are expanding SynthID’s capabilities to watermark AI-generated text, audio, images and video. YouTube also introduced a new tool in Creator Studio requiring creators to disclose to viewers when realistic content is made with altered or synthetic media, including generative AI. In addition to these new tools, we are also committed to working with the greater ecosystem to help others benefit from and improve on the advances we are making. As such, we will open-source SynthID text watermarking through our updated Responsible Generative AI Toolkit (see the illustrative sketch after this list). Underpinning our advancements in AI, as a member of the Coalition for Content Provenance and Authenticity (C2PA), we collaborate with Adobe, Microsoft, startups and many others to build and implement the newest version (2.1) of the coalition’s technical standard, Content Credentials. This version is more secure against a wider range of tampering attacks due to stricter technical requirements for validating the history of the content’s provenance.
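
To make the watermarking concept concrete, the sketch below illustrates the general idea behind generation-time text watermarking: a secret key pseudorandomly partitions the vocabulary at each step, generation is nudged toward the keyed ‘green’ tokens, and a detector holding the key tests whether green tokens are over-represented. This follows the public watermarking literature and is not SynthID itself, whose algorithm this report does not describe; all names and parameters here are illustrative.

```python
import hashlib
import random

# Illustrative sketch of keyed "green list" text watermarking, in the
# spirit of the public watermarking literature. This is NOT Google's
# SynthID algorithm; every name and parameter here is hypothetical.

SECRET_KEY = "demo-key"                      # hypothetical shared secret
VOCAB = [f"tok{i}" for i in range(1000)]     # toy vocabulary
GREEN_FRACTION = 0.5                         # share of vocab marked "green" per step

def green_list(prev_token: str) -> set:
    """Derive the keyed pseudorandom green list from the previous token."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    step_rng = random.Random(seed)
    return set(step_rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def watermarked_choice(prev_token: str, candidates: list) -> str:
    """Generation-time bias: prefer a green candidate when one exists."""
    greens = [c for c in candidates if c in green_list(prev_token)]
    return greens[0] if greens else candidates[0]

def green_rate(tokens: list) -> float:
    """Detector: fraction of tokens that fall in the keyed green list."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

# Simulate generation with 5 candidate tokens per step (a stand-in for a
# model's top-k proposals). Watermarked text shows a green rate near
# 1 - 0.5**5 ≈ 0.97, versus the ~0.5 expected from unwatermarked text.
demo_rng = random.Random(0)
tokens = ["tok0"]
for _ in range(200):
    tokens.append(watermarked_choice(tokens[-1], demo_rng.sample(VOCAB, 5)))
print(f"green rate: {green_rate(tokens):.2f}")
```

Because the signal is statistical, detection of such watermarks degrades under heavy paraphrasing or short excerpts, which is one reason watermarking is typically paired with metadata and provenance approaches such as C2PA Content Credentials.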

Google has been working on AI for more than a dozen years to solve society’s biggest challenges and power Google services people use every day. The progress in large-scale AI models (including generative AI) has sparked additional discussion about the social impacts of AI and raised concerns about topics such as misinformation. Google is committed to developing technology responsibly and published AI Principles in 2018 to guide our work. Google’s robust internal governance focuses on responsibility throughout the AI development lifecycle, covering model development, application deployment, and post-launch monitoring. While we recently updated our Principles to adapt to shifts in technology, the global conversation, and the AI ecosystem, our deep commitment to responsible AI development remains unchanged. Through our philanthropic arm Google.org, we have supported organisations that are using AI to tackle important societal issues. Google Search has published guidance on AI-generated content, outlining its approach to maintaining a high standard of information quality and the overall helpfulness of content on Search. To help enhance information quality across its services, Google has also announced that it will soon be integrating new innovations in watermarking, metadata, and other techniques into its latest generative models. Google has also joined other leading AI companies to jointly commit to advancing responsible practices in the development of artificial intelligence, supporting efforts by the G7, the OECD, and national governments. Going forward, we will continue to report on and expand upon Google-developed AI tools, and we remain committed to advancing bold and responsible AI, to maximise AI’s benefits and minimise its risks.


Lastly, the contents of this report should be read with the following context in mind: 

  • This report discusses the key approaches across the following Google services when it comes to addressing disinformation: Google Search, YouTube, and Google Advertising. 
  • For chapters of the Code that involve the same actions across all three services (e.g. participation in the Permanent Task-force or in development of the Transparency Centre), we respond as 'Google, on behalf of related services'.
  • This report follows the structure and template laid out by the Code’s Permanent Task-force, organised around Commitments and Chapters of the Code.
  • Unless otherwise specified, metrics provided cover activities and actions during the period from 1 July 2024 to 31 December 2024.
  • The data provided in this report is subject to a range of factors, including product changes and user settings, and so is expected to fluctuate over the course of the reporting period. As Google continues to evolve its approach, in part to better address user and regulatory needs, the data reported here could vary substantially over time. 
  • We are continuously working to improve the safety and reliability of our services. We are not always in a position to pre-announce specific launch dates, details or timelines for upcoming improvements, and therefore may reply 'no' when asked whether we can disclose future plans for Code implementation measures in the coming reporting period. This 'no' should be understood against the background context that we are constantly working to improve safety and reliability and may in fact launch relevant changes without the ability to pre-announce. 
  • This report is filed concurrently with two ‘crisis reports’ about our response to the Israel-Gaza conflict and to the war in Ukraine. Additionally, an annex on Google’s response to the recent elections in Romania and France is included in this report. As such, while there will be references to our actions throughout this report, information specific to these events should be sought in those dedicated reports. 

Google will continue to publish subsequent versions of this report biannually, focusing on the 6-month review period relevant to each filing, as requested under the Code.

Google looks forward to continuing to work together with other stakeholders in the EU to address challenges related to disinformation.


Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification;
  • Hack-and-leak operations;
  • Impersonation;
  • Malicious deep fakes;
  • The purchase of fake engagements;
  • Non-transparent paid messages or promotion by influencers;
  • The creation and use of accounts that participate in coordinated inauthentic behaviour;
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
YouTube
YouTube updates its internal systems and processes for detecting policy-violating content on a regular, ongoing basis. This includes continued investment in automated detection systems. 

Search & YouTube
In November 2024, Google released a white paper detailing how it is addressing the growing global issue of fraud and scams. In the paper, Google explains that it fights scams and fraud by taking proactive measures to protect users from harm, deliver reliable information, and partner to create a safer internet, through policies and built-in technological protections that help us to prevent, detect, and respond to harmful and illegal content. For details on YouTube and Google Search’s approaches to tackling scams, see the full report here.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
N/A
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Note: The below QRE response has been reproduced (in some instances truncated in order to meet the suggested character limit) from the previous report as there is no new information to share now.

Similar to Google Search, YouTube’s systems are designed to connect people with high-quality content.

In addition, YouTube has various policies which set out what is not allowed on YouTube. These policies, which can be accessed via this landing page in YouTube’s Help Centre, address relevant TTPs. Notably, YouTube’s policies tend to be broader than the identified TTPs. As such, related SLIs providing information about actions taken related to the TTP may be overinclusive.

YouTube’s Community Guidelines, commitment to promote high-quality content and curb the spread of harmful misinformation, disclosure requirements for paid product placements, sponsorships & endorsements, and ongoing work with Google’s Threat Analysis Group (TAG) broadly address TTPs: 1, 2, 3, 5, 7, 8, 9, 10, and 11 - and notably, go beyond these TTPs.

In this report, YouTube has provided information relating to TTPs 1, 5, 7 and 9. Removals relating to the remaining TTPs are included, in part or in whole, in the Community Guidelines enforcement report, but YouTube does not have more detailed removal reporting at this time. TTPs do not necessarily map singularly to one Community Guideline, and therefore, there are challenges in providing more granular mapping for TTPs. 

YouTube continues to assess, evaluate, and update its policies on a regular basis; the latest policies, including the Community Guidelines, can be found here.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
YouTube’s approach to combating misinformation involves removing content that violates YouTube’s policies as quickly as possible, raising high-quality information in rankings and recommendations, curbing the spread of harmful misinformation, and rewarding trusted, eligible creators and artists. YouTube applies these principles globally, including across the EU. 

A YouTube channel may be permanently terminated if the creator receives three strikes in the same 90-day period, or the channel is determined to be wholly dedicated to violating YouTube’s guidelines (as may be the case with spam accounts). In some cases, YouTube may terminate a channel for a single case of severe abuse, as explained in the Help Centre. When a channel is terminated, all of its videos are removed.
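
As a worked illustration of the windowed rule above, the sketch below checks whether any 90-day span contains three strikes. It is purely illustrative: the function and data shapes are hypothetical, not YouTube’s enforcement implementation.

```python
from datetime import date, timedelta

# Illustrative check for "three strikes in the same 90-day period".
# Hypothetical helper, not YouTube's actual enforcement system.

STRIKE_LIMIT = 3
WINDOW = timedelta(days=90)

def hits_strike_limit(strike_dates: list) -> bool:
    """True if any 90-day window contains STRIKE_LIMIT or more strikes."""
    dates = sorted(strike_dates)
    for i in range(len(dates) - STRIKE_LIMIT + 1):
        # Window check: the i-th and (i+2)-th strikes fall within 90 days.
        if dates[i + STRIKE_LIMIT - 1] - dates[i] <= WINDOW:
            return True
    return False

# Two strikes in January plus one in June do not trip the rule...
assert not hits_strike_limit([date(2024, 1, 5), date(2024, 1, 20), date(2024, 6, 1)])
# ...but a third strike within 90 days of the first does.
assert hits_strike_limit([date(2024, 1, 5), date(2024, 1, 20), date(2024, 3, 1)])
```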

A user’s channel may be turned off or restricted from using any YouTube features. If this happens, the user is prohibited from using, creating, or acquiring another channel to get around these restrictions. This prohibition applies as long as the restriction remains active on their YouTube channel. Violation of this restriction is considered circumvention under YouTube’s Terms of Service, and may result in termination of all their existing YouTube channels, any new channels that they create or acquire, and channels in which they are repeatedly or prominently featured.

YouTube uses a combination of people and machine learning to detect problematic content automatically and at scale. Machine learning is well-suited to detect patterns, including harmful misinformation, which helps YouTube find content similar to other content that YouTube has already removed, even before it is viewed. Every quarter, YouTube publishes data in the Community Guidelines enforcement report about removals that were first detected by automated means. 
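
As a conceptual sketch of the ‘find content similar to already-removed content’ idea, the toy example below matches new items against an index of embeddings of removed content via cosine similarity. The embedding function is a random stand-in for a learned model, and nothing here reflects YouTube’s actual pipeline.

```python
import numpy as np

# Toy nearest-neighbour matching against embeddings of removed content.
# The "embedding" is a random stand-in (deterministic within a run) for
# a learned model; a conceptual illustration, not YouTube's pipeline.

EMBED_DIM = 64

def embed(item_id: str) -> np.ndarray:
    """Hypothetical content embedding, normalised to unit length."""
    vec = np.random.default_rng(abs(hash(item_id)) % 2**32).standard_normal(EMBED_DIM)
    return vec / np.linalg.norm(vec)

# Index of embeddings for previously removed items.
removed_index = np.stack([embed(f"removed-{i}") for i in range(100)])

def flag_if_similar(item_id: str, threshold: float = 0.9) -> bool:
    """Flag an upload whose nearest removed neighbour exceeds the threshold."""
    scores = removed_index @ embed(item_id)   # cosine similarity (unit vectors)
    return float(scores.max()) >= threshold

# An exact re-upload embeds identically to the indexed copy, so it is flagged.
assert flag_if_similar("removed-7")
```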

YouTube’s Intelligence Desk monitors the news, social media, and user reports to detect new trends surrounding inappropriate content, and works to make sure YouTube’s teams are prepared to address them before they can become a larger issue.

In addition, Google’s Threat Analysis Group (TAG) and Google and YouTube’s Trust and Safety Teams are central to Google’s work to monitor malicious actors around the globe, including but not limited to coordinated information operations that may affect EU Member States. More information about this work is outlined in QRE 16.1.1.

YouTube continues to invest in automated detection systems, and relies on both human evaluators and machine learning to train its systems on new data. YouTube’s engineering teams also continue to update and improve their detection systems regularly. YouTube aims to leverage an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives its main classifier does not catch; a schematic illustration follows below. Over time, this will continue to make YouTube faster and more accurate at catching viral misinformation narratives.
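
The sketch below schematically combines the three signals this paragraph names: a main classifier score, keyword lists in additional languages, and narrative terms supplied by regional analysts. All thresholds, keywords, and names are illustrative stand-ins, not YouTube’s systems.

```python
# Schematic combination of a main classifier with multilingual keyword
# lists and analyst-supplied narrative tags. Every threshold, keyword,
# and name is an illustrative stand-in, not YouTube's systems.

KEYWORDS = {
    "en": {"miracle cure", "rigged count"},
    "de": {"wunderheilung"},
}
ANALYST_NARRATIVES = {"hypothetical-narrative-42"}   # from regional analysts

def needs_review(text: str, classifier_score: float, narrative_tags: set) -> bool:
    """Escalate if any signal fires; keyword and analyst signals can catch
    narratives the main classifier alone would miss."""
    lowered = text.lower()
    keyword_hit = any(kw in lowered for kws in KEYWORDS.values() for kw in kws)
    analyst_hit = bool(narrative_tags & ANALYST_NARRATIVES)
    return classifier_score >= 0.8 or keyword_hit or analyst_hit

# A low classifier score can still be escalated by a German keyword match.
assert needs_review("Diese Wunderheilung wirkt sofort!", 0.2, set())
```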