Microsoft Bing

Report March 2025

Submitted
Commitment 20
Relevant Signatories commit to empower users with tools to assess the provenance and edit history or authenticity or accuracy of digital content.
We signed up to the following measures of this commitment
Measure 20.1, Measure 20.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft has continued to improve content provenance measures on its AI image generation features, including continuing to pilot Content Integrity Tools that allow users to add Content Credentials to their own authentic content (discussed further below).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft expects to continue its work on content provenance tools and on ways to help counter harmful AI-generated content.
Measure 20.1
Relevant Signatories will develop technology solutions to help users check authenticity or identify the provenance or source of digital content, such as new tools or protocols or new open technical standards for content provenance (for instance, C2PA).
QRE 20.1.1
Relevant Signatories will provide details of the progress made developing provenance tools or standards, milestones reached in the implementation and any barriers to progress.
Microsoft and key members of the Bing Search team are involved in the Partnership on AI (“PAI”) to identify possible countermeasures against deepfakes and have participated in the drafting and refinement of PAI’s proposed Synthetic Media Code of Conduct. The proposed Code of Conduct provides guidelines for the ethical and responsible development, creation, and sharing of synthetic media (such as AI-generated artwork).

Microsoft is deeply focused on the potential risk that deepfakes and other abusive AI-generated content could be used to proliferate election-related misinformation, deceive the public, and potentially undermine trust in online content and in our elections. For those reasons, we were a founding member of the Coalition for Content Provenance and Authenticity (C2PA). The C2PA is a coalition of technology companies, media, and others created to address the prevalence of misleading information online by developing technical standards to certify the source and history of media content. The C2PA specification defines techniques for adding “Content Credentials” to online media: metadata about the media’s provenance and authenticity. In turn, that information provides consumers with a way to verify the history and trustworthiness of the media. Content Credentials are already added to all generative AI images created with our most popular consumer-facing AI image generation tools, including Image Creator, Microsoft Designer, and Copilot.
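
As an illustration of how a consumer or downstream service might inspect such Content Credentials, the sketch below shells out to the open-source c2patool command-line utility maintained by the C2PA/Content Authenticity Initiative community. The assumption that invoking the tool with only a file path prints the manifest store as JSON may vary by tool version, and the helper function here is illustrative; it is not a description of Microsoft's own tooling.

    import json
    import subprocess
    import sys

    def read_content_credentials(media_path: str) -> dict | None:
        """Return the C2PA manifest store attached to a media file, if any.

        Assumes the open-source `c2patool` CLI is installed and that invoking
        it with just a file path prints the manifest store as JSON (behaviour
        may differ by tool version).
        """
        result = subprocess.run(
            ["c2patool", media_path],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            # No Content Credentials found, or the file could not be read.
            return None
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return None

    if __name__ == "__main__":
        manifests = read_content_credentials(sys.argv[1])
        if manifests is None:
            print("No Content Credentials found.")
        else:
            # The active manifest typically records the claim generator
            # (e.g. the tool that produced the image) and its assertions.
            print(json.dumps(manifests, indent=2))
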

In July 2023, Microsoft agreed to make a number of voluntary commitments related to furthering safe and trustworthy AI systems, including a commitment to deploy new state-of-the-art provenance tools to help the public identify AI-generated audio-visual content and understand its provenance. See more at Our commitments to advance safe, secure, and trustworthy AI - Microsoft On the Issues.

In addition, Microsoft has continued piloting Content Integrity Tools, which allow users to add Content Credentials to their own authentic content. The pilot program was designed primarily to support the 2024 election cycle and to gather feedback about Content Credentials-enabled tools. During the reporting period, the tools were available to political campaigns in the EU, as well as to elections authorities and select news media organizations in the EU and globally. The pilot also includes a collaboration with fellow Tech Accord signatory TruePic, announced in April 2024, which leverages TruePic’s mobile camera SDK to enable campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App”, which makes it easy to capture images with C2PA-enabled signing directly on the device, launched for both Android and Apple devices and can be used by participants in the Content Integrity Tools pilot program.
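
To make the signing step concrete, the following sketch shows one way a capture workflow could attach a Content Credential to a newly captured image using the same open-source c2patool CLI referenced above. The manifest fields, command-line flags, and the omission of signing-certificate configuration are simplifying assumptions for illustration only; this is not a description of the TruePic SDK or of the Content Integrity Capture App's internals.

    import json
    import subprocess
    import tempfile

    # Illustrative manifest definition in the style accepted by c2patool;
    # field names and values here are assumptions for demonstration purposes.
    MANIFEST_DEFINITION = {
        "claim_generator": "ExampleCaptureApp/1.0",
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {"actions": [{"action": "c2pa.created"}]},
            }
        ],
    }

    def sign_capture(source_path: str, output_path: str) -> None:
        """Attach a Content Credential to a freshly captured image.

        Assumes the `c2patool` CLI is installed and configured with a signing
        certificate; exact flag names may vary by tool version.
        """
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
            json.dump(MANIFEST_DEFINITION, fh)
            manifest_path = fh.name
        subprocess.run(
            ["c2patool", source_path, "-m", manifest_path, "-o", output_path],
            check=True,
        )

    if __name__ == "__main__":
        sign_capture("capture.jpg", "capture_signed.jpg")
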