Microsoft

Report March 2025

Microsoft welcomes the opportunity to file this fifth report on our compliance with the commitments of the strengthened 2022 EU Code of Practice on Disinformation, covering the second half of 2024. At Microsoft, we are committed to instilling trust and security across our products and services and across the broader web. We recognize that information integrity is a key element in empowering users to access the information they need and to express themselves freely. We also recognize that there is no one-size-fits-all approach to this work. What is needed instead is a whole-of-society strategy that acknowledges that not all services or platforms are the same and that a variety of efforts can be effective in improving the information environment and empowering the public.

One opportunity is to continue employing AI as a resource to assist and streamline the important work of detecting and assessing cyber-enabled foreign influence operations. At the same time, the harmful use of AI poses challenges in the information integrity space: malicious threat actors continue to build their capacity to create highly deceptive images and videos efficiently. Countering these changing tactics requires continuous improvement and response. Microsoft’s services are fully committed to utilizing best-in-class tools and technology to help mitigate the risks of those services being misused.

Microsoft is taking a cross-product, whole-of-company approach to ensure the responsible implementation of AI. This starts with our Responsible AI Principles. Building on those principles, in June 2022 Microsoft released our Responsible AI Standard v.2 and Information Integrity Principles to help set baseline standards and guidance across product teams. Recognizing the important role that government, academia, and civil society play in the responsible deployment of AI, we also created a roadmap for the governance of AI across the world, as well as a vision for the responsible advancement of AI both inside Microsoft and throughout the world, including specifically in Europe. For more information on Microsoft’s commitment to Responsible AI and our ongoing internal and external efforts, we encourage you to review our Responsible AI hub, which offers a range of information, tools, and resources related to the ethical and responsible use of AI technologies. It includes detailed information about Microsoft’s internal Responsible AI processes and tools, which can be used to responsibly develop and deploy AI products, including our first annual Responsible AI Transparency Report. In addition, Microsoft recently released a white paper focused on policy steps that can be taken to reduce the harms of abusive AI-generated content.

Serving as a leader in AI research, we are committed to proactively publicizing our threat detection efforts for the benefit of the AI community, regulators, and broader society. To that end, we have adopted six focus areas to combat the harmful use of deceptive AI:
  1. A strong safety architecture
  2. Durable media provenance and watermarking
  3. Safeguarding our services from abusive content and conduct
  4. Robust collaboration across industry and with governments and civil society
  5. Modernized legislation to protect people from the abuse of technology
  6. Public awareness and education

Additionally, we will continue to build upon these approaches to Responsible AI. For example, recognizing both the enormous potential for generative and other forms of AI to transform the world of work in positive ways and the potential risks AI presents in that context, LinkedIn published its framework of Responsible AI Principles, which is inspired by and aligned with Microsoft’s Responsible AI Principles. LinkedIn provides more details on these principles in our response to Commitment 15.

Since our last report, Microsoft has continued to work with EU Member States and EU institutions to protect elections from cyber-enabled influence operations by malicious threat actors. As part of that work, Microsoft and LinkedIn, along with 25 other companies, continued efforts to meet the commitments of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections (Tech Accord). We believe the success of the Tech Accord and our work together contributed to the limited impact of deceptive AI-generated election content throughout the 2024 elections across the European Union.

Meeting the Tech Accord’s commitments made it more difficult for malicious threat actors to use legitimate tools to create deceptive AI-generated election content, while simultaneously simplifying the process for users to identify authentic content. To meet its Tech Accord commitments, Microsoft moved forward with several important initiatives that are detailed further in this report. For example:
  • Microsoft is harnessing the data science and technical capabilities of our AI for Good Lab and Microsoft Threat Analysis Center (MTAC) teams to better assess whether abusive content, including content created and disseminated by malicious threat actors, is synthetic. Microsoft AI for Good has been improving our image and video detection models, which assess whether media was generated by AI. The models are trained on approximately 200,000 examples of AI-generated and real content. AI for Good continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, the team calls on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and to respond appropriately, through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and governments.
  • As part of our commitments related to public awareness and engagement, Microsoft ran a campaign titled Check. Recheck. Vote., a series of public messages, and stood up an AI and Elections website focused on engaging voters about the risks of deceptive AI and where to find authoritative election information. This campaign ran across the EU, UK, and US in the lead-up to major elections. Globally, the campaign reached hundreds of millions of people, with millions interacting with the content and connecting with official election information.
  • We developed a dedicated web portal, Microsoft-2024 Elections, through which political candidates and election authorities can report a concern about a deepfake of themselves or of the election process that would violate our policy on deceptive AI-generated content.
  • In advance of elections across the EU, we kicked off a global effort to engage campaigns and election authorities to deepen understanding of the possible risks of deceptive AI in elections and to empower campaigns and election officials to speak directly to their voters about the steps they can take to build resilience and increase confidence in elections. In 2024, we delivered nearly 200 training sessions for political stakeholders in 25 countries, reaching over 4,300 participants. This includes almost fifty separate training events with over 500 participants across the EEA, including in France prior to the parliamentary elections.
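The detection workflow described in the first bullet above can be sketched in miniature. The sketch below is a purely illustrative stand-in: the real detectors are deep networks trained on roughly 200,000 labeled examples, whereas this is a toy perceptron over two hypothetical hand-picked features, shown only to illustrate the workflow of training on labeled AI versus real media and then scoring new items.

```python
# Conceptual stand-in for the image/video detection models described above.
# The feature names are hypothetical; nothing here reflects the actual models.

def train(samples, labels, epochs=200, lr=0.1):
    """Fit perceptron weights; label 1 = AI-generated, 0 = real."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            for i, xi in enumerate(x):          # standard perceptron update
                w[i] += lr * (y - pred) * xi
            b += lr * (y - pred)
    return w, b

def classify(w, b, x):
    """Score a feature vector for one media item: 1 = flagged as synthetic."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features: [noise_residual_energy, frequency_artifact_score]
ai_media   = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.85]]
real_media = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]]
w, b = train(ai_media + real_media, [1, 1, 1, 0, 0, 0])
print(classify(w, b, ai_media[0]), classify(w, b, real_media[0]))  # prints: 1 0
```

In practice the hard part is the labeled data, which is why the report notes continued investment in sample datasets representing the latest generative AI technology: detectors must be retrained as generation techniques change.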

Microsoft is committed to advancing information integrity and believes that content credentials are an important driver of it. We were a founding member of the Coalition for Content Provenance and Authenticity (C2PA). To achieve transparency, support information integrity, and empower our users, we are leveraging C2PA’s “content credentials” open standard across several products. For example, since 15 May 2024, content containing the “Content Integrity” technology has been automatically labeled on LinkedIn, with users beginning to see the “Cr” icon on images and videos that contain C2PA metadata.

During the reporting period, Microsoft continued piloting Content Integrity Tools, which allow users to add content credentials to their own authentic content. Designed as a pilot program primarily to support the 2024 election cycle and gather feedback about Content Credentials-enabled tools, during this Reporting Period the tools were available to political campaigns in the EU, as well as to election authorities and select news media organizations in the EU and globally. These tools included a partnership and collaboration with fellow Tech Accord signatory TruePic. Announced in April 2024, this collaboration leveraged TruePic’s mobile camera SDK, enabling campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App,” which makes it easy to capture images with C2PA-enabled signing directly, launched for both Android and iOS and can be used by participants in the Content Integrity Tools pilot program.
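At its core, a C2PA content credential is a cryptographically signed manifest embedded in the media file. The sketch below is a simplified, unsigned stand-in whose field names loosely follow the C2PA claim model (title, claim generator, assertions such as "c2pa.actions"); it is not an implementation of the standard or of any Microsoft tool, and exists only to illustrate the kind of information a credential carries and what a viewer checks before surfacing a "Cr" indicator.

```python
# Simplified, illustrative model of a C2PA-style manifest. Real manifests
# are embedded as signed binary (JUMBF) data; this dictionary is a sketch.

def build_manifest(title, generator, actions):
    """Assemble a minimal manifest-like structure for one asset."""
    return {
        "title": title,
        "claim_generator": generator,   # the tool that produced the asset
        "assertions": [
            # "c2pa.actions" records how the asset was created or edited
            {"label": "c2pa.actions",
             "data": {"actions": [{"action": a} for a in actions]}},
        ],
        "signature": None,              # real manifests carry a signature here
    }

def has_provenance_info(manifest):
    """Check whether the manifest declares how the content was produced:
    the kind of information a viewer surfaces behind a "Cr" indicator."""
    return any(a.get("label") == "c2pa.actions"
               for a in manifest.get("assertions", []))

manifest = build_manifest("campaign_photo.jpg",
                          "Content Integrity Capture App",
                          ["c2pa.created"])
print(has_provenance_info(manifest))  # prints: True
```

Because the real manifest is signed, any edit to the asset or its metadata after capture invalidates the signature, which is what makes credentials useful for distinguishing authentic campaign media from tampered copies.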

Beyond our commitment to combat deceptive use of AI during the electoral process, we implemented additional actions safeguarding candidates, election campaigns, election authorities, and voters:
  • Microsoft’s Campaign Success Team supported political parties and campaigns around the world to navigate the world of AI, combat the spread of cyber influence campaigns, and protect the authenticity of their own content and images.
  • Microsoft’s Election Communications Hub continued to support democratic governments around the world as they build secure and resilient election processes.
  • Microsoft established a Virtual Situation Room, bringing together resources from across the company to monitor, support, and protect elections in France and the UK.
  • Bing Search implemented a multifaceted approach to election integrity and integrated specialised answers and information panels for the elections across the European Union, with a link to official sources of information, which included voting information relevant to each EU Member State.

Microsoft continued its work with other trusted third parties as part of a larger effort to empower Microsoft users to access the trusted information they are seeking. Microsoft also announced $2M in societal resilience grants with OpenAI, and several organizations benefited from the grants during this reporting period. Additionally, WITNESS received a grant to improve journalists’ ability to counter AI threats to elections. Training sessions were conducted ahead of the 2024 elections in Ghana, Georgia, and Venezuela, reaching 250 global participants. Microsoft's collaboration with WITNESS also includes co-leading the Deepfakes Rapid Response Force.

  • Microsoft continues to provide pro bono advertising space across Microsoft surfaces to disseminate media literacy campaigns, averaging 50 million impressions per month. Beginning in March 2024 and continuing through Fall 2024, Microsoft ran a new “Be Informed, Not Misled” campaign from the News Literacy Project. Microsoft also continues its partnership with the Trust Project, boosting the Trust Project’s campaign to build audience literacy in evaluating the credibility of the content they encounter.
  • In May 2024, Microsoft, in collaboration with OpenAI, launched the Societal Resilience Grants to support various organizations in promoting AI literacy, ethical AI use, and societal resilience against AI-related challenges. The grants were awarded to the Older Adults Technology Services from AARP, International IDEA, Partnership on AI, Coalition for Content Provenance and Authenticity (C2PA), and WITNESS. These initiatives have reached national election bodies in 26 countries, 500,000 older adults, and 250 global journalists, demonstrating a comprehensive approach to addressing AI threats and fostering responsible AI practices.

These initiatives underscore Microsoft's commitment to fostering a resilient and informed society in the age of AI. The grants build on an existing effort by Microsoft to support media, AI, and information literacy globally. We have continued our work with leading news and media literacy nonprofits, including the News Literacy Project (NLP), a collaboration led by The Trust Project on the Trust Indicators, and Verified, to develop campaigns built on industry research and best practices. Microsoft provided funding for the research and development of public awareness and education campaigns and supported partners with threat intelligence insights, technical expertise, and increased visibility through in-kind ad space on Microsoft platforms. Microsoft also worked to reach young learners with dynamic and entertaining content that builds knowledge and skills.

Microsoft has subscribed to the Code of Practice with the following services:
  • Bing Search is an online search engine with the primary objective of connecting users to the most relevant search results from the web. Users come to Bing with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites that are responsive to their search terms. Addressing misinformation or disinformation in organic search results therefore often requires a different approach than may be appropriate for other types of online services, as over-moderation of content in search could significantly harm the right to access information, freedom of expression, and media plurality. Bing must carefully balance these fundamental rights and interests as it works to ensure that its algorithms return the highest-quality content relevant to users’ queries, working to avoid harm to users without unduly limiting their ability to access answers to the questions they seek. In some cases, different features may require different interventions based on functionality and user expectations. While Bing’s remediation efforts may on occasion involve removal of content from search results (where legal or policy considerations warrant removal), in many cases Bing has found that actions such as targeted ranking interventions, or additional digital literacy features such as Answers pointing to high-authority sources and content provenance indicators, are more effective. Bing regularly reviews the efficacy of its measures to identify areas for improvement and works with internal and external subject matter experts in key policy areas to identify new threat vectors and improved mechanisms to help prevent users from being unexpectedly exposed to harmful content in search results that they did not expressly seek out. During the Reporting Period, the nature of Bing’s generative AI experiences evolved.
In October 2024, Microsoft launched a separate, standalone consumer service known as Microsoft Copilot at copilot.microsoft.com, which offers conversational experiences powered by generative AI, and the Copilot in Bing (formerly known as Bing Chat) generative AI experience was phased out. Bing continues to offer generative AI experiences, such as Bing Image Creator and Bing Generative Search, which was launched this Reporting Period. Bing Generative Search utilizes AI to deliver a unique experience by not only optimizing search results but also presenting information in a user-friendly, cohesive layout. Results also include citations and links that enable users to explore further and evaluate websites for themselves. For both of these AI-powered experiences, Bing has partnered closely with Microsoft’s Responsible AI team to proactively address AI-related risks and continues to evolve these features based on user and external stakeholder feedback.
  • LinkedIn is a real identity online social networking service for professionals to connect and interact with other professionals, grow their professional network and brand, and seek career development opportunities. LinkedIn is part of its members’ professional identity and has a specific purpose. Activity on the platform and content members share can be seen by current and future employers, colleagues, potential business partners and recruitment firms, among others. Given this audience, members by and large tend to limit their activity to professional areas of interest and expect the content they see to be professional in nature. LinkedIn is committed to keeping its platform safe, trusted, and professional and respects the laws that apply to its services. On joining LinkedIn, members agree to abide by LinkedIn’s User Agreement and its Professional Community Policies, which expressly forbid members from posting information that is false or misleading.
  • Microsoft Advertising is our proprietary advertising platform, which serves the vast majority of ads displayed on Bing Search and provides advertising to most other Microsoft services that display ads, as well as many third-party services. Microsoft Advertising works both with advertisers, who provide it with advertising content, and publishers, such as Bing Search, who display these advertisements on their services. Microsoft Advertising employs a distinct set of policies and enforcement measures with respect to each of these two categories of business partners to prevent the spread of disinformation, including through discouraging and reducing the dissemination and monetization of disinformation through advertising.

As a company, we continued our efforts during the reporting period to empower users to better understand the information they consume across our platforms and products. For example, Bing compiled a specialized dataset of European Parliament election-related queries in different EU languages for use by the research community and to support transparency; researchers can apply for access using a dedicated application form. Over the course of the next reporting period, we will continue to make this information transparent and public. Specifically, we will continue to focus on the following areas:
  • Further de-funding the mechanisms malicious threat actors are using to push their narratives and propaganda and regularly evaluating and improving user and advertiser policies as needed.
  • Ensuring Microsoft and LinkedIn AI products are developed consistent with Microsoft’s Responsible AI Standards and LinkedIn's Responsible AI Principles, as relevant, and that risks associated with AI systems are mitigated to provide safe, trustworthy, and ethical experiences for users and, further, ensuring that our information integrity principles are integrated into AI systems included in Microsoft products.
  • Continuing to monitor foreign information influence operations and actioning such intelligence appropriately through defensive search and other techniques. This includes working with the trusted third parties that inform Microsoft’s work to detect and disrupt these influence operations, as well as adding trusted third parties in additional languages to ensure global coverage for our information integrity work.
  • Strengthening our efforts and expanding our funding in the areas of media literacy and critical thinking, aiming to include vulnerable groups and to provide greater language access. As part of our focus areas and commitments under the Tech Accord, we will expand our partnerships to increase AI literacy efforts and build greater understanding of provenance and other trustworthiness indicators.
  • Supporting good faith research into disinformation and broader disinformation trends and tactics.
  • Continuing to share learnings pertaining to generative AI and Responsible AI practices as products and services evolve and new threats emerge. In addition, Microsoft will continue to regularly evaluate, implement, and share best practices for addressing disinformation trends as we navigate the technological changes posed by the malicious use of AI.
  • Developing new partnerships to address EU-specific risks and continuing to explore further ways to help users evaluate content on our services.
  • Enhancing existing research tooling to provide richer data reporting and continuing to deliver relevant data and research to support research into the spread of disinformation.
  • Educating users on generative AI features, including their risks and limitations, and providing the broader public and research community with information on our approach to Responsible AI.
  • Implementing and regularly evaluating measures to support safe and democratic elections in the EU and to direct users to high authority sources of information about elections.

Unless stated otherwise, data provided under this report covers a reporting period of 1 July 2024 to 31 December 2024 (“Reporting Period”).
