Microsoft Bing

Report March 2025

Submitted
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment:
Measure 15.1, Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing regularly reviews and evaluates its policies and practices for existing and new Bing features and adjusts them as needed. Bing will continue to invest in its Responsible AI program.
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
Microsoft takes its commitment to responsible AI seriously and has a robust Responsible AI program. In addition to the safeguards noted earlier in this report, and discussed in detail at How Bing Delivers Search Results, Microsoft has implemented a number of measures and policies to help counter attempts to manipulate AI systems that generate content.

Bing’s generative AI experiences were developed in accordance with Microsoft’s AI Principles and Microsoft’s Responsible AI Standard, and in partnership with responsible AI experts across the company, including Microsoft’s Office of Responsible AI, engineering teams, Microsoft Research, and the AI Ethics and Effects in Engineering and Research (AETHER) committee. All Microsoft processes, programs, or tools utilizing AI, including Bing’s generative AI experiences, must adhere to Microsoft’s Responsible AI Standard and undergo impact assessments to help ensure responsible use of AI-influenced algorithms and processes for any new product features. More details on Microsoft’s Responsible AI Standard, impact assessments, and resources on Responsible AI are available at Microsoft’s Responsible AI Hub. Bing also conducts detailed annual risk assessments that evaluate the risks posed by its systems (including generative AI features) and assess current and potential risk mitigation measures.

In addition to the measures discussed at QREs 14.1.1 and 14.1.2 (including pre- and post-launch testing, the use of classifiers and metaprompting, defensive search interventions, reporting functionality, and increased operations and incident response), Microsoft has incorporated the following safeguards and policies for countering prohibited manipulative practices for AI systems.

To help facilitate safe use of Bing’s generative AI experiences, Microsoft published the Copilot AI Experiences Terms (applicable to Copilot in Bing through its retirement in October 2024) and Bing’s Image Creator Terms of Use (including a user Code of Conduct) and implemented other mechanisms to help prevent and address misuse of these features. These terms prohibit users from “engaging in activity that is fraudulent, false, or misleading” and from “attempting to create or share content that could mislead or deceive others, including for example creation of disinformation, content enabling fraud, or deceptive impersonation.” Users who violate these terms may be suspended from the service. In addition, Bing’s generative AI experiences may block certain text prompts that violate or are likely to violate the Code of Conduct, and repeated attempts to produce prohibited content or other violations of the Code of Conduct may result in service or account suspension.

In addition, Microsoft maintains social listening pipelines through which insights and user feedback (including efforts to “jailbreak” generative AI experiences) are collected from the open Internet. These insights and user feedback are reviewed by human analysts, analyzed daily, and shared across the Bing product teams and with product leadership to identify new areas of concern and implement additional mitigations as needed. Microsoft has also set up a robust user reporting and appeal process to review and respond to user concerns about harmful or misleading content.
Bing’s generative AI experiences also provide several touchpoints for meaningful AI disclosures, where users are notified that they are interacting with an AI system and are presented with opportunities to learn more about these features and generative AI, such as through in-product disclaimers (as discussed in How Bing Delivers Search Results), educational FAQs, and blog posts. Empowering users with this knowledge can help them avoid over-relying on AI and learn about the system’s strengths and limitations.

In addition to the measures discussed above, Microsoft has worked to deliver an experience that encourages responsible use of Bing’s generative AI features and limits the generation of harmful or unsafe images. When these systems detect that a prompt could generate a potentially harmful image, they block the prompt and warn the user.

Microsoft’s Responsible AI systems will continue to improve, and Microsoft regularly incorporates user and third-party feedback reported via the Bing and Copilot feedback buttons and its user reporting tools.

See also QRE 20.1.1.