Microsoft Bing

Report September 2025

Microsoft Ireland Operations Limited (MIOL) – the provider of Bing’s services in the European Union (EU) – welcomes the opportunity to file this report on our compliance with the commitments and measures of the strengthened 2022 EU Code of Practice on Disinformation that we subscribed to in our Subscription Document dated 15 January 2025. This report covers the period from 1 January to 30 June 2025 (the “Reporting Period”).

Bing Search is an online search engine with the primary objective of connecting users to the most relevant search results from the web. Users come to Bing with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Therefore, addressing misinformation or disinformation in organic search results often requires a different approach than may be appropriate for other types of online services, as over-moderation of content in search could have a significant negative impact on the right to access information, freedom of expression, and media plurality. 

Bing carefully balances these competing fundamental rights and interests as it works to ensure that its algorithms return the most high-quality content available that is relevant to the user’s queries, working to avoid causing harm to users without unduly limiting their ability to access answers to the questions they seek. In some cases, different features may require different interventions based on functionality and user expectations. 

While Bing’s remediation efforts may on occasion involve removal of content from search results (where legal or policy considerations warrant removal), in many cases Bing has found that actions such as targeted ranking interventions, additional digital literacy features such as Answers pointing to high-authority sources, or content provenance indicators are more effective. Bing regularly reviews the efficacy of its measures to identify additional areas for improvement, and works with internal and external subject matter experts in key policy areas to identify new threat vectors or improved mechanisms to help prevent users from being unexpectedly exposed to harmful content in search results that they did not expressly seek to find.

Bing offers numerous generative AI experiences for users. For example, users may see generative search results on the main search engine results page for informational and complex queries. Generative search results are clearly contained and marked with an icon and the sentence: “This summary was generated by AI from multiple online sources. Find the source links used for this summary under ‘Based on sources’.” Users continue to see traditional search results immediately below any generative results.

Bing also offers a fully generative search experience, previously known as Bing Generative Search and rebranded as Copilot Search in April 2025. Copilot Search combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs). It understands the search query, reviews millions of sources of information, dynamically matches content, and generates search results in a new AI-generated layout to fulfil the intent of the user’s query more effectively.
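Conceptually, this kind of grounded generative search resembles a retrieval-augmented generation pipeline: retrieve candidate web sources for the query, then generate a summary constrained to those sources, while keeping traditional results available. The sketch below is purely illustrative and is not Bing’s implementation; the index and llm objects, the helper calls, and the authority field are hypothetical assumptions.

```python
# Purely illustrative sketch of a retrieval-augmented generative search flow.
# The index/llm objects and all helper calls are hypothetical, not Bing's API.
from dataclasses import dataclass

@dataclass
class WebResult:
    url: str
    title: str
    snippet: str
    authority_score: float  # higher values indicate more authoritative sources

def generative_search(query: str, index, llm) -> dict:
    """Produce an AI summary grounded in retrieved web results, plus organic results."""
    # 1. Understand/rewrite the user's query (hypothetical model call).
    rewritten = llm.rewrite(query)
    # 2. Retrieve candidate sources from the search index.
    results: list[WebResult] = index.retrieve(rewritten, top_k=20)
    # 3. Prefer high-authority sources when selecting grounding material.
    grounding = sorted(results, key=lambda r: r.authority_score, reverse=True)[:5]
    # 4. Generate a summary constrained to the retrieved sources, with citations.
    summary = llm.summarize(query=query, sources=grounding)
    # 5. Traditional results remain available alongside the generated answer.
    return {"summary": summary, "sources": grounding, "organic_results": results}
```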

Bing also offers Bing Image Creator and Bing Video Creator. These experiences, powered by the very latest DALL∙E models from our partners at OpenAI, allow a user to create images and videos simply by using their own words to describe the picture they want to see. 

Bing follows the “Trustworthy Search Principles” (found at How Bing delivers search results - Microsoft Support) to guide the product design, experience, algorithms, and mitigation measures that Bing adopts to ensure users’ expectations are met while addressing potential risks or harms arising from use of the service, including across Bing’s GenAI experiences. 

As confirmed by Bing’s Year Two and Year Three Digital Services Act (DSA) Systemic Risk Assessments, the residual risks most relevant to misinformation and disinformation (i.e. those relating to Civic Discourse and Electoral Processes, Public Health and Public Security) are categorised as “Low”. Of note, during the Reporting Period, Bing participated in the Rapid Response Systems activated for the elections in Germany, Romania, Portugal and Poland, and received no notifications during this period.

Bing supports the objectives of the European Code of Practice on Disinformation (the “Code”) and we are committed to actively working with Signatories and the European Commission in the context of this Code to defend against disinformation on the Bing service.

Unless stated otherwise, data provided in this report covers a reporting period of 1 January 2025 to 30 June 2025 (the “Reporting Period”).


Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.2, Measure 22.3, Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft has been working with The Lenfest Institute to develop AI solutions for local news. Those solutions will be readily available for sharing across the industry, and the first case studies will be released in September. Through the Online News Association, The Poynter Institute, Thomson Reuters Foundation and Impress UK, Microsoft and its partners have trained more than 10,000 journalists on AI policy setting and ethical use cases for AI in news.

Microsoft continued its work with the human rights-focused nonprofit WITNESS to enhance journalists’ and fact-checkers’ capacity to address AI threats to elections. As part of this collaboration, WITNESS created resources to build literacy around AI detection and how this technology complements core information literacy approaches; these resources were debuted to global audiences at the International Journalism Festival in Perugia, Italy, in April 2025. The resource can be found here: Things to know before using AI detection tools - Library

Microsoft also supported and collaborated on the development and release of The Newsroom Toolkit from the Poynter Institute’s MediaWise. The toolkit is a resource for journalists and media professionals seeking to integrate AI literacy into their reporting and organizational practices. Poynter reported that the release of the toolkit was met with high interest and enthusiasm from industry professionals, with 431 individuals from 50 countries, including European Union countries, attending the launch webinar, and 1,857 toolkit downloads in the first three months after release.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing regularly evaluates opportunities to improve its product and to educate users on the trustworthiness and limitations of AI. Building on our existing work with Minecraft, Microsoft will be rolling out educational materials for users around AI and AI literacy to continue to support our users’ need for ongoing engagement and education. In addition, Microsoft will release updated education materials to engage learners. More information is available here: https://techcommunity.microsoft.com/blog/educationblog/empowering-learners-for-the-age-of-ai-new-information-literacy-features-coming-t/4443052
Measure 22.2
Relevant Signatories will give users the option of having signals relating to the trustworthiness of media sources into the recommender systems or feed such signals into their recommender systems.
QRE 22.2.1
Relevant Signatories will report on whether and, if relevant, how they feed signals related to the trustworthiness of media sources into their recommender systems, and outline the rationale for their approach.
Bing Search utilizes a variety of signals – including trustworthiness indicators from trusted fact checkers and research organizations – as one of several means to help determine the authority score of a given website and rank it accordingly in search results. 
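As a purely illustrative sketch of how a source-trustworthiness signal can be blended with other ranking features into an authority score (this is not Bing’s actual ranking code; the feature names, weights, and scaling below are hypothetical assumptions):

```python
# Purely illustrative: combining a fact-checker/research-based trustworthiness
# signal with other ranking features. Feature names and weights are hypothetical.
def authority_score(features: dict[str, float], trust_indicator: float) -> float:
    """Blend ranking features with a source-trustworthiness signal (all in [0, 1])."""
    weights = {
        "page_quality": 0.4,       # on-page quality heuristics
        "link_reputation": 0.3,    # reputation inferred from the link graph
        "topical_relevance": 0.3,  # match between page topic and query intent
    }
    base = sum(weights[name] * features.get(name, 0.0) for name in weights)
    # Trustworthiness is one signal among several: it nudges the score
    # up or down rather than overriding the other factors.
    return base * (0.9 + 0.2 * trust_indicator)
```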

Microsoft also maintains additional partnerships with fact checkers and research organizations covering the EU/EEA to strengthen the company’s capacity to understand global disinformation threats and to inform interventions in Bing Search that protect users against related risks. These partnerships are part of a broader effort to empower Microsoft users to better understand the information they consume across our platforms and products.

The above mechanisms and the Bing algorithm’s emphasis on promoting high-authority content are applied equally to Bing generative AI features to help ensure that users are protected from misleading information across Bing surfaces. Ancillary and supplemental search features, such as search suggestions, can be adjusted and/or deactivated through the user’s search settings.