Microsoft Bing

Report March 2026

Submitted

Executive summary

Microsoft Ireland Operations Limited (MIOL) – the provider of Bing’s services in the European Union (EU) – welcomes the opportunity to file this report on Bing’s compliance with the commitments and measures of the EU Code of Conduct on Disinformation that it subscribed to in its Subscription Document dated 15 January 2025. This report covers the period from 1 July to 31 December 2025 (the “Reporting Period”).

Bing is an online search engine with the primary objective of connecting users to the most relevant search results from the web. Users come to Bing with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Therefore, addressing misinformation or disinformation in organic search results often requires a different approach than may be appropriate for other types of online services, as over-moderation of content in search could have a significant negative impact on the right to access information, freedom of expression, and media plurality. 

Bing carefully balances these competing fundamental rights and interests as it works to ensure that its algorithms return the most high-quality content available that is relevant to the user’s queries, working to avoid causing harm to users without unduly limiting their ability to access answers to the questions they seek. In some cases, different features may require different interventions based on functionality and user expectations. 

While Bing’s remediation efforts may on occasion involve removal of content from search results (where legal or policy considerations warrant removal), in many cases, Bing has found that actions such as targeted ranking interventions, or additional digital literacy features such as Answers pointing to high authority sources, or content provenance indicators, are more effective. Bing regularly reviews the efficacy of its measures to identify additional areas for improvement and works with internal and external subject matter experts in key policy areas to identify new threat vectors or improved mechanisms to help prevent users from being unexpectedly exposed to harmful content in search results that they did not expressly seek to find. 

Bing offers numerous generative AI experiences for users. For example, users may see generative search results on the main search engine results page for informational and complex queries. Generative search results are clearly contained and indicated with an icon and the disclosure: “This summary was generated by AI from multiple online sources. Find the source links used for this summary under ‘Based on sources’.” A “Learn more about Bing results” link directs users to How Bing delivers search results. Users continue to see traditional search results immediately below any generative results.

Bing also offers a fully generative search experience, known as Copilot Search (see Copilot Search). Copilot Search combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs). It understands the search query, reviews millions of sources of information, dynamically matches content, and generates search results in a new AI-generated layout to fulfil the intent of the user’s query more effectively.

Bing also offers Bing Image Creator and Bing Video Creator (see Free AI Image Generator - Bing Image Creator). These experiences allow users to create images and videos simply by describing in their own words the picture they want to see and – within the Reporting Period – by uploading images for inspiration. These features can be accessed directly within Bing.com.

Bing follows the “Trustworthy Search Principles” (found at How Bing delivers search results - Microsoft Support) to guide the product design, experience, algorithms, and mitigation measures that Bing adopts to ensure users’ expectations are met while addressing potential risks or harms arising from use of the service, including across Bing’s GenAI experiences. 

As confirmed by Bing’s Year Two and Year Three Digital Services Act (DSA) Systemic Risk Assessments, the residual risks most relevant to misinformation and disinformation (i.e., those relating to Civic Discourse and Electoral Process, Public Health and Public Security) are categorised as “Low”. While Bing is a participant in the elections Rapid Response System, it received no notifications during any of the elections for which this system was activated during the Reporting Period.

Bing supports the objectives of the European Code of Conduct on Disinformation (the “Code”) and is committed to actively working with Signatories and the European Commission in the context of the Code to defend against the possible harms of disinformation on the Bing service.

Unless stated otherwise, data provided in this report covers the Reporting Period of 1 July 2025 to 31 December 2025.


Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.2 Measure 22.3 Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft works with multiple external partners on trainings, resources, and solutions for local newsrooms. Through a collaboration with the CUNY Craig Newmark Graduate School of Journalism, the Poynter Institute, and the Thomson Reuters Foundation, Microsoft trained thousands of journalists globally on developing AI policies for newsrooms and ethical AI use in news. Microsoft also partners with The Lenfest Institute to develop AI solutions for local newsrooms. The solutions focus on supporting the business sustainability of newsrooms, and many have been made public online and through in-person panels and convenings.

Microsoft continued work with the human rights-focused nonprofit WITNESS to enhance journalists’ and fact-checkers’ capacity to address AI threats to elections. Through this partnership, Microsoft and WITNESS created resources to build literacy around AI detection and how detection technology complements core information literacy approaches. These resources were debuted to global audiences at the International Journalism Festival in April 2025 and are available here: Things to know before using AI detection tools - Library.

Microsoft also supported and collaborated on the development and release of The Newsroom Toolkit from the Poynter Institute’s MediaWise. The toolkit is a resource for journalists and media professionals seeking to integrate AI literacy into their reporting and organizational practices. Poynter reported that the release of the toolkit was met with high interest and enthusiasm by industry professionals, with 431 individuals from 50 countries, including European Union countries, attending the launch webinar. The toolkit was downloaded 1,857 times in the first three months after release.

In September 2025, Microsoft and Minecraft Education rolled out educational materials for users around AI and AI literacy to continue to promote ongoing engagement and education: Build AI Literacy with Reed Smart | Minecraft Education. In parallel, Microsoft announced enhancements to Search Coach, a tool available through Teams for Education to help learners develop critical thinking skills when they are assessing online information: Empowering Learners for the Age of AI: New Information Literacy Features Coming to Search Progress | Microsoft Community Hub.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft plans a refresh of our AI Classroom Safety Toolkit in approximately six months. This toolkit is used in classrooms around the globe to advance learning objectives that include assessing the trustworthiness of AI information sources. See here for more: Unlock generative AI safely and responsibly—classroom toolkit
Measure 22.2
Relevant Signatories will give users the option of having signals relating to the trustworthiness of media sources integrated into the recommender systems or feed such signals into their recommender systems.
QRE 22.2.1
Relevant Signatories will report on whether and, if relevant, how they feed signals related to the trustworthiness of media sources into their recommender systems, and outline the rationale for their approach.
Bing Search utilizes a variety of signals – including trustworthiness indicators from trusted fact checkers and research organizations – as one of several means to help determine the authority score of a given website and rank it accordingly in search results. 

Microsoft also maintains additional partnerships with fact checkers and research organizations covering the EU/EEA to strengthen the company’s capacity to understand global disinformation threats and to inform interventions in Bing Search that protect users against related risks. These partnerships are part of a broader effort to empower Microsoft users to better understand the information they consume across our platforms and products.

The above mechanisms and the Bing algorithm’s emphasis on promoting high-authority content are applied equally to Bing’s generative AI features to help ensure that users are protected from misleading information across Bing surfaces. Ancillary and supplemental search features, such as search suggestions, can be adjusted and/or deactivated through the user’s search settings.