Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
During the reporting period Microsoft continued piloting Content Integrity Tools, which allow users to add Content Credentials to their own authentic content. The pilot program was designed primarily to support the 2024 election cycle and to gather feedback about Content Credentials-enabled tools. During the reporting period, the tools were available to political campaigns in the EU, as well as to elections authorities and select news media organizations in the EU and globally. These tools included a partnership and collaboration with fellow Tech Accord signatory TruePic.
Announced in April 2024, this collaboration leveraged TruePic’s mobile camera SDK, enabling campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App” (an app that makes it easy to capture images with C2PA-enabled signing directly) launched for both Android and iOS and can be used by participants in the Content Integrity Tools pilot program.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Bing Search is an online search engine, the primary purpose of which is to provide a searchable index of webpages available on the internet to help users find the content they are looking for. Bing Search does not host the content or control the operation, content, or design of indexed websites. Users come to Bing Search with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Bing Search does not allow users to post and share content or otherwise enable content to go “viral” through user-to-user exchanges of information on Bing.
As such, addressing misinformation in organic search results often requires a different approach than may be appropriate for other types of online services. The majority of the TTPs (namely, TTPs 1-9 and 11-12) are more pertinent to social media or account-driven services in that they specifically relate to user accounts, subscribers/followers, inauthentic coordination, influencers, or targeting users of a service, account hijacking, etc., and thus are not relevant to search engines.
The highest potential for abuse in web search arises under TTP 10, which involves “use of deceptive practices to deceive/manipulate platform algorithms, such as by exploiting data voids, spam tactics, or keyword stuffing.” Therefore, relevant Bing Search policies and practices that help combat manipulative behaviors primarily address TTP Number 10.
Although as a search engine Bing does not have any control over third party websites appearing in search results, Bing’s ranking algorithms, spam policies, and other safeguards described below can also address and mitigate the risks arising from malicious websites that use other TTPs attempting to manipulate our search engine rankings. For example, pages employing social media schemes (e.g., fake followers – TTP 3), using inauthentic domains (TTP 4), or keyword stuffing (TTP 9) are considered abusive practices that are addressed in Bing’s ranking system and Webmaster Guidelines. In addition, in connection with generative AI features, Microsoft has implemented measures intended to address TTP No. 7 (related to deceptive deepfakes), which are discussed in more detail below.
Bing’s primary mechanism for combatting manipulative behaviors in search results is via its ranking algorithms and systems designed to identify and combat attempts to abuse search engine optimization techniques (i.e., spam). Bing Search describes the main parameters of its ranking systems in depth in How Bing Delivers Search Results. Abusive techniques and examples of prohibited SEO activities are described in more detail in the Bing Webmaster Guidelines.
As described in these documents, Bing’s ranking algorithms are designed to identify and prioritize high quality, highly authoritative content available online that is relevant to the user’s query and to prevent abusive search engine optimization techniques (spam).
One of the key ranking techniques Bing uses to prevent low quality or deceptive websites from ranking high in search results is the “quality and credibility” (QC) score. Determining the QC of a website includes evaluating the clarity of purpose of the site, its usability, and its presentation. QC also includes an evaluation of the page’s “authority”, which comprises factors such as:
- Reputation: What types of other websites link to the site? A well-known news site is considered to have a higher reputation than a brand-new blog.
- Level of discourse: Is the purpose of the content solely to cause harm to individuals or groups of people? For example, a site that promotes violence or resorts to name-calling or bullying will be considered to have a low level of discourse, and therefore lower authority, than a balanced news article.
- Level of distortion: How well does the site differentiate fact from opinion? A site that is clearly labeled as satire or parody will have more authority than one that tries to obscure its intent.
- Origination and transparency of ownership: Is the site reporting first-hand information, or does it summarize or republish content from others? If the site doesn’t publish original content, does it attribute the source? A first-hand account published on a personal blog could have more authority than unsourced content.
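The authority factors above can be pictured as inputs to a composite score. The sketch below is purely illustrative: the signal names mirror the factors described in this report, but the 0–1 scales, equal weights, and the `authority_score` function itself are assumptions for demonstration, not Bing’s actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class AuthoritySignals:
    # Each signal is normalized to [0, 1]; names mirror the factors
    # above, but the scales are hypothetical.
    reputation: float    # quality of inbound links / site history
    discourse: float     # balanced content vs. harassment or name-calling
    distortion: float    # clear fact/opinion separation, labeled satire
    transparency: float  # first-hand reporting and source attribution

def authority_score(s: AuthoritySignals) -> float:
    """Combine the signals into one illustrative authority score.

    A real ranking system would learn these weights from data; equal
    weighting here is only for demonstration.
    """
    return 0.25 * (s.reputation + s.discourse + s.distortion + s.transparency)

# A well-sourced news site scores higher than an anonymous unsourced blog.
news = AuthoritySignals(reputation=0.9, discourse=0.8, distortion=0.9, transparency=0.9)
blog = AuthoritySignals(reputation=0.2, discourse=0.5, distortion=0.4, transparency=0.1)
assert authority_score(news) > authority_score(blog)
```

In practice such a score would be one signal among many (relevance, freshness, spam penalties) rather than a standalone ranking.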
In addition to its ranking algorithms, Bing Search’s general abuse/spam policies prohibit certain practices intended to manipulate or deceive the Bing Search algorithms, including those that could be employed by malicious actors in the spread of disinformation. Pursuant to the Webmaster Guidelines, Bing may take action on websites employing spam tactics (such as social media schemes, keyword stuffing, malicious behavior, cloaking, link schemes, or misleading structured data markups) or that otherwise violate the Webmaster Guidelines, including by applying ranking penalties (such as demoting a website) or delisting a website from the index.
Note that it is not feasible to distinguish between general spam tactics and spam tactics employed by malicious actors specifically for the purpose of spreading disinformation. Therefore, Bing Search has not presented data on the amount of spam detected and actioned under its policies since these figures are indicative of actions taken toward spam overall and presently cannot be used to provide an accurate assessment of whether it pertains to spam used in connection with disinformation campaigns or spam used for another purpose (e.g., phishing).
Generative AI Features
During the Reporting Period, the nature of Bing generative AI experiences evolved. In October 2024, Microsoft launched a separate, standalone consumer service known as Microsoft Copilot at copilot.microsoft.com, which offers conversational experiences powered by generative AI, and the Copilot in Bing (formerly known as Bing Chat) generative AI experience was phased out. Bing continues to offer generative AI experiences, such as Bing Image Creator and Bing Generative Search, which was launched this Reporting Period. Bing Generative Search utilizes AI to deliver a unique experience by not only optimizing search results but presenting information in a user-friendly, cohesive layout. Results also include citations and links that enable users to explore further and evaluate websites for themselves. For AI-powered experiences, Bing has partnered closely with Microsoft’s Responsible AI team to proactively address AI-related risks and continues to evolve these features based on user and external stakeholder feedback. Bing generative AI experiences continue to rely on the same infrastructure and mitigations previously discussed in Microsoft’s last report.
Bing Generative Search’s primary functionality is, like traditional Bing Search, to provide users with links to third-party content responsive to their search queries. As such, the ranking algorithms and spam/abuse policies described above continue to be Bing’s primary defense against manipulation and abuse, supplemented by interventions designed specifically to address manipulation in generative AI features. For answers involving creative content, Microsoft has worked continuously to improve and adjust safety mitigations, policies, and user experiences within Bing’s generative AI experiences to minimize the risk that they may be used for manipulative purposes. Additional information on how Microsoft approaches responsible AI in Bing’s generative AI experiences is available in How Bing Delivers Search Results.
TTP 10 remains the most relevant TTP to Bing’s generative AI experiences, as users cannot post or share content directly on the Bing service. In addition, Microsoft undertakes specific mitigations to address TTP 7 given the risks that users may attempt to use generative AI to create deepfakes or manipulated media to spread disinformation. Although Bing does not have the ability to monitor third party platforms for publication of content created through Bing’s services, Bing has implemented safeguards to help to minimize the risk that bad actors can use Bing generative AI experiences to create mis/disinformation.
Microsoft’s Copilot AI Experiences Terms (applicable to Copilot in Bing through October 2024) and Bing’s Image Creator Terms of Use (together referred to here as the “Supplemental Terms”) advise users on prohibited conduct and content. These Supplemental Terms primarily address TTPs No. 7 and 10 by restricting attempts to create or spread mis/disinformation or deceptive images using Bing’s generative AI experiences. Users that violate the Supplemental Terms and Code of Conduct may be suspended from the service. In addition, Bing’s generative AI experiences work to prevent generation of problematic text or images by blocking user prompts that (i) violate the Code of Conduct or (ii) are likely to lead to creation of material that violates the Code of Conduct. Repeated attempts to produce prohibited content or other violations of the Code of Conduct may also result in service or account suspension.
For further information as to how Bing Search and Bing’s generative AI experiences implement these policies see QRE 14.1.2.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
As discussed under QRE 14.1.1, TTP No. 10 tends to be the primary mechanism for manipulation and abuse in the context of search engines and is addressed through Bing’s ranking systems and abuse policies (for both traditional search and Bing’s generative AI experiences).
Blocking content in organic search results based solely on the truth or falsity of the content can raise significant concerns relating to fundamental rights of freedom of expression and the freedom to receive and impart information. Instead of blocking access to content to address these TTPs, Bing Search focuses on ranking its organic search results so that trusted, authoritative news and information appears first and provides tools to help its users evaluate the trustworthiness of certain sites and ensure they are not misled or harmed by the content that appears in search results. Bing presumes the user seeks high quality, authoritative content unless the user clearly indicates an intent to research low quality content. Bing Search takes actions to promote high authority, high quality content and thereby reduce the impact of misinformation appearing in Bing Search results. This includes Bing Search’s continued improvement of its ranking algorithms to ensure that authoritative, relevant content is returned at the top of search results, regular review and actioning of disinformation threat intelligence, partnership with third party information intelligence and media literacy organizations, contributing to and supporting the research community, and enforcement of clear policies concerning the use of manipulative tactics on Bing Search, among other initiatives described elsewhere in this report.
Although the Bing Search algorithm endeavors to prioritize relevance, quality, and credibility, in some cases Bing Search identifies threats arising from emerging or evolving world events and/or activities by external actors that attempt to undermine the efficacy of its algorithms. When this happens, Bing Search employs “defensive search” strategies and interventions to counteract threats and TTPs in accordance with its trustworthy search principles (which are discussed in further detail in How Bing Delivers Search Results).
“Defensive search interventions” may include algorithmic interventions (such as authority signal boost in ranking or demotions of a website), restricting autosuggest or related search terms to avoid directing users to problematic queries, prioritizing additional features promoting high authority information (e.g., Answers or Public Service Announcements), and in limited cases manual interventions for individual reported issues or broader areas more prone to misinformation or disinformation. Bing actively monitors manipulation trends in identified high-risk areas and deploys mitigation methods as needed to ensure users are provided with high quality, high authority search results.
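Conceptually, a defensive search intervention maps a risky query to one or more of the responses listed above. The sketch below is only an illustration of that routing idea: the intervention names follow this report’s description, but the hard-coded topic list and the selection logic are invented placeholders, since a real system relies on threat intelligence and learned classifiers.

```python
from enum import Enum, auto

class Intervention(Enum):
    # Illustrative intervention types drawn from the description above.
    AUTHORITY_BOOST = auto()       # boost high-authority results in ranking
    AUTOSUGGEST_SUPPRESS = auto()  # do not suggest/complete the risky query
    PSA_PANEL = auto()             # show a public-service announcement
    NONE = auto()

# Hypothetical high-risk topic list; a real system would draw on threat
# intelligence feeds and classifiers, not a hard-coded set.
HIGH_RISK_TOPICS = {"election fraud", "vaccine microchip"}

def choose_interventions(query: str) -> list[Intervention]:
    """Map a query to defensive interventions (illustrative only)."""
    q = query.lower()
    if any(topic in q for topic in HIGH_RISK_TOPICS):
        return [Intervention.AUTHORITY_BOOST,
                Intervention.AUTOSUGGEST_SUPPRESS,
                Intervention.PSA_PANEL]
    return [Intervention.NONE]
```

The point of the sketch is the separation of concerns: detection (the risk check) is decoupled from response (the intervention list), which is how several mitigations can be layered on one query.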
In addition to defensive search, Bing Search regularly monitors for other violations of its Webmaster Guidelines, including attempts to manipulate the Bing Search algorithm through prohibited practices such as cloaking, link spamming, keyword stuffing, and phishing. Bing Search dedicates meaningful resources to maintaining the integrity of the platform, promoting high authority, relevant results, and reducing spam (including spam aimed at distributing low authority information and manipulative content). Bing Search utilizes a combination of human intervention and AI-driven analysis to regularly review, detect, and address spam tactics occurring on Bing Search. When Bing Search detects websites deploying manipulative techniques or engaging in spam tactics, those websites may incur ranking penalties or be removed from the Bing Search index altogether.
Microsoft also works to identify and track nation-state information operations targeting democracies across the world and works with a number of trusted third-party partners for early indicators of narratives, hashtags, or information operations that can be leveraged to inform early detection and defensive search strategies. Through Microsoft’s Democracy Forward team and the Microsoft Threat Assessment Center (MTAC), Microsoft also offers mediums for election authorities, including in the EU and EEA Member States, to have lines of communication with Microsoft to identify possible foreign information operations targeting elections.
The above measures also apply to Bing’s generative AI experiences. Responses to user prompts are “grounded” in high authority content from the web (except in certain creative use cases), based on the same ranking algorithms and moderation infrastructure used by Bing’s traditional web search, and, as such, benefit from Bing’s longstanding safety infrastructure described above. Nonetheless, Microsoft recognizes that generative AI technology may also raise novel risks and possibilities of harm that are not present in traditional web search, and has supplemented its existing threat identification and mitigation processes with additional risk assessments and mitigation processes based on Microsoft’s Responsible AI program.
Microsoft’s Responsible AI program is designed to identify potential harms, measure their propensity to occur, and build mitigations to address them. Guided by its Responsible AI Standard, Microsoft identifies, measures, and mitigates potential harms and misuse of new generative AI experiences while securing the transformative and beneficial uses that these tools provide. To that end, Microsoft has implemented a range of safety mitigations to help address, among other things, impermissible content, behaviours, and other TTPs that could potentially be used to create or spread misinformation.
Below are several examples of Microsoft’s iterative approach to identify, measure, and mitigate potential harms, including the spread of misinformation.
- Pre-launch and ongoing testing. Before launching Bing’s generative AI experiences, Microsoft conducted “red team” testing. A multidisciplinary team of experts evaluated how well the system responded when pressed to produce harmful responses, surface potential avenues for misuse, and identify capabilities and limitations. Post-release, generative AI experiences are integrated into Microsoft engineering organizations’ existing production measurement and testing infrastructure. More information on Microsoft’s approach to red-team testing is available at
Microsoft AI Red Team building future of safer AI | Microsoft Security Blog.
- Classifiers, Metaprompting, and Filtering Interventions. Microsoft has created special mitigations in the form of “classifiers” and “metaprompting” to help reduce the risk of certain harms and misuse of generative AI features. Classifiers flag different types of potentially harmful content in search queries, chat prompts, or generated responses. Microsoft uses AI-based classifiers and content filters that apply to all search results and relevant features; it has also designed additional prompt classifiers and content filters specifically to address possible harms raised by new generative AI features. Flags lead to potential mitigations, such as not returning generated content to the user, diverting the user to a different topic, or redirecting the user to traditional search. Metaprompting involves giving instructions to the model to guide its behavior so that the system behaves in accordance with Microsoft’s AI Principles and user expectations. Microsoft has also implemented additional filtering and classifiers to prevent chat responses from returning what Bing considers “low authority” content as part of an answer and to help address impermissible content, behaviours, and other TTPs (e.g., TTP No. 7) that could potentially be used to create or spread misinformation.
- Content Provenance Tools. Microsoft also makes it clear that images created in Bing Image Creator (and Copilot in Bing prior to its phase out) are AI-generated by including content provenance information in each image. These content provenance features use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. The invisible digital watermark feature shows the source, time, and date of original creation, and this information cannot be altered. Providing clear indications of image provenance helps reduce the risk of deepfakes (e.g., TTP No. 7) and helps users identify when an image was generated with the assistance of Microsoft generative AI tools. Microsoft has partnered with other industry leaders to create the Coalition for Content Provenance and Authenticity (C2PA) standards body to help develop and apply content provenance standards across the industry.
- Expanded and Prominent Reporting Functionality. Bing’s generative AI experiences allow users to submit feedback and report their concerns, which are then reviewed by Microsoft’s operations teams. Microsoft has made it easy for users to report problematic content they encounter while using generative AI features in Bing by including a “Feedback” portal on the footer of every Bing page, with direct links to its “Report a Concern” tool.
- Regular Improvements Based on Real World Usage. Microsoft continues to make changes to Bing generative AI experiences regularly to improve product performance, update existing mitigations, and implement new mitigations in response to our learnings based on real-world usage of the product.
- Operations and incident response. Bing also uses Microsoft’s ongoing monitoring and operational processes to address when Bing’s generative AI features receive signals or a report indicating possible misuse or violations of the terms of use.
- Cooperation with Industry Partners. The third-party content that grounds Bing’s generative AI experiences relies on the same ranking algorithms and defensive interventions that power traditional Bing search, including reliance on signals of page authority that Bing receives from its third-party partners and fact-checks using the ClaimReview protocol.
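The ClaimReview protocol mentioned above is the public schema.org markup that fact-checking organizations embed in their articles so that search engines can read a fact-check verdict programmatically. The field names below come from the schema.org ClaimReview vocabulary; the claim text, organization name, and URLs are invented placeholders.

```python
import json

# A minimal ClaimReview record in schema.org JSON-LD form. The property
# names (@context, @type, claimReviewed, reviewRating, ...) are from the
# public schema.org vocabulary; all concrete values are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/reviews/123",
    "claimReviewed": "Example claim circulating online",
    "itemReviewed": {
        "@type": "Claim",
        "datePublished": "2024-06-01",
    },
    "author": {
        "@type": "Organization",
        "name": "Example Fact-Check Organization",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "False",
    },
}

# Serialized as JSON-LD, this is what a fact-checker would embed in a
# <script type="application/ld+json"> tag for search engines to consume.
print(json.dumps(claim_review, indent=2))
```

A search engine that parses this markup can surface the human-readable verdict (`alternateName`) alongside the reviewed claim without interpreting the article text itself.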
Our approach to identifying, measuring, and mitigating harms will continue to evolve as we learn more, and we continue to make improvements based on feedback from users, civil society groups, and other third-party stakeholders.
Microsoft also maintains a web page –
Microsoft-2024 Elections – where political candidates and election authorities can report alleged deepfakes of themselves or the election process on Microsoft platforms to Microsoft.
See also response to QRE 14.1.1.
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
The Webmaster Guidelines – and related defensive search and spam interventions – are global policies that are enforced globally by Bing Search, including in EU Member States. Websites that appear in Bing search results (in traditional search or in generative AI chat) are not hosted by Bing Search and, as such, Bing Search has limited information about the hosting location of these third-party websites. When addressing spam activity, Bing Search takes action at the global level (which necessarily carries through to Copilot AI features reliant on the Bing search index) to benefit Bing Search users in all countries (including EU Member States). Bing Search’s defensive search interventions are also applied at a global level (thereby encompassing all EU Member States) and automatically applied to queries searched in all EU languages. Metrics on defensive search interventions are provided in SLI 14.2.1.
See also responses to QRE 14.1.1-2.
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
This Section addresses TTP No. 10 (“Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers”), which is the TTP primarily applicable to Bing Search (including Bing generative search experiences).
SLI 14.2.1: In response to this SLI, Bing Search is providing data on defensive search interventions employed to counteract threats and TTPs on the Bing platform. This response includes the following data categories:
· “New DSI” reflects the net new number of queries treated with defensive search interventions during the Reporting Period (across all of Bing) since the preceding reporting period. Although Member State reporting is requested, because Bing Search takes defensive search actions globally (rather than on a per-country basis), each defensive search action necessarily applies in every EU Member State, and it is not feasible to provide Member State reporting for globally actioned measures. See SLI 14.2.2 for more detailed Member State reporting.
· “AutoSuggest DSI” reflects the number of search query suggestions that were suppressed for queries entered by users in the EEA (including traditional web search and Copilot in Bing) during the Reporting Period.
SLI 14.2.2: Bing cannot provide data on interaction or engagement, as Bing does not allow users to “like” or “share” content within Bing and this SLI metric appears oriented to social or sharing platforms. Bing also cannot provide “before and after” data due to the preventative nature of search interventions and query-driven nature of web search. Nonetheless, below Bing Search has provided user impressions for queries that were treated with “defensive search” interventions across all of Bing Search during the Reporting Period.
· “Unique Queries DSI” reflects the total number of unique queries searched by users in the EEA that were treated with defensive search interventions during the Reporting Period.
· “DSI Query Impressions” reflects the number of impressions for unique queries treated with defensive search interventions that appeared to users in the EEA during the Reporting Period.
SLI 14.2.3 – This SLI is not applicable to search engines, as Bing Search is not an online platform that allows for user-hosted content or public sharing of user-generated content with other users. User accounts in the manner contemplated under this provision are not available in search (i.e., registered user accounts are not capable of creating or amplifying content as one may through a social media network).
SLI 14.2.4 – This SLI is also not applicable to search engines, for the reasons above.
| Country | TTP 10 – Nr of actions taken by type – New DSI | TTP 10 – Nr of actions taken by type – Autosuggest DSI |
| --- | --- | --- |
| Austria | | 1,163,867 |
| Belgium | | 1,997,745 |
| Bulgaria | | 1,475 |
| Croatia | | 335 |
| Cyprus | | 203 |
| Czech Republic | | 3,658 |
| Denmark | | 672,541 |
| Estonia | | 231 |
| Finland | | 498,591 |
| France | | 8,892,324 |
| Germany | | 12,369,375 |
| Greece | | 1,158 |
| Hungary | | 3,249 |
| Iceland | | 506,151 |
| Ireland | | 4,268,446 |
| Italy | | 484 |
| Latvia | | 397 |
| Lithuania | | 90 |
| Luxembourg | | 119 |
| Malta | | 3,242,718 |
| Netherlands | | 2,606,075 |
| Poland | | 739,093 |
| Portugal | | 2,755 |
| Romania | | 874 |
| Slovakia | | 228 |
| Slovenia | | 3,732,912 |
| Spain | | 1,508,028 |
| Sweden | | 28 |
| Liechtenstein | | 0 |
| Norway | | 1,473,136 |
| Total EU | | 42,213,122 |
| Total EEA | | 43,686,286 |
| Total Global | 11.4 million | - |
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
The relevant Taskforce Subgroup has considered the list of TTPs adopted in the second half of 2022 (and reported on in Microsoft’s previous reports) as being fit for purpose for the current reporting cycle. The list can be consulted below.
However, as noted in QREs 14.1.1 and 14.1.2, many of these TTPs are inapplicable or irrelevant to search engines. Bing reiterates the need for flexibility amongst signatories offering different types of services to address the TTPs that are most relevant to their platforms.
The following TTPs pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible:
- 1. Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)
- 2. Use of fake / inauthentic reactions (e.g. likes, up votes, comments)
- 3. Use of fake followers or subscribers
- 4. Creation of inauthentic pages, groups, chat groups, fora, or domains
- 5. Account hijacking or impersonation
The following TTPs pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views. Relevant TTPs include:
- 6. Deliberately targeting vulnerable recipients (e.g. via personalized advertising, location spoofing or obfuscation)
- 7. Deploy deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)
- 8. Use “hack and leak” operation (which may or may not include doctored content)
- 9. Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platforms algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers)
- 10. Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers
- 11. Non-transparent compensated messages or promotions by influencers
- 12. Coordinated mass reporting of non-violative opposing content or accounts
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing consistently reviews and evaluates its policies and practices related to existing and new Bing features and adjusts as needed. Bing will continue to invest in its Responsible AI program.
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
Microsoft takes its commitment to responsible AI seriously and has a robust Responsible AI program. In addition to the safeguards noted earlier in this report and discussed thoroughly in How Bing Delivers Search Results, Microsoft has implemented a number of measures and policies to help counter attempts to manipulate AI systems that generate content.
Bing’s generative AI experiences were developed in accordance with Microsoft’s AI Principles and Microsoft’s Responsible AI Standard, and in partnership with responsible AI experts across the company, including Microsoft’s Office of Responsible AI, engineering teams, Microsoft Research, and the AI Ethics and Effects in Engineering and Research (AETHER) committee. All Microsoft processes, programs, or tools utilizing AI, including Bing’s generative AI experiences, must adhere to Microsoft’s Responsible AI Standard and undertake impact assessments to help ensure responsible use of AI-influenced algorithms and processes for any new product features. More details on Microsoft’s Responsible AI Standard, impact assessments, and resources on Responsible AI are located at Microsoft’s Responsible AI Hub. Bing also conducts detailed annual risk assessments that evaluate risks posed by its systems (including generative AI features) and evaluates current and potential risk mitigation measures.
In addition to the measures discussed at QREs 14.1.1 and 14.1.2 (including pre- and post-launch testing, the use of classifiers and metaprompting, defensive search interventions, reporting functionality, and increased operations and incident response), Microsoft has incorporated the following safeguards and policies for countering prohibited manipulative practices for AI systems.
To help facilitate safe use of Bing’s generative AI experiences, Microsoft published the Copilot AI Experiences Terms (applicable to Copilot in Bing through its retirement in October 2024) and Bing’s Image Creator Terms of Use (including a user Code of Conduct) and implemented other mechanisms to help prevent and address misuse of these features. The Supplemental Terms prohibit users from “engaging in activity that is fraudulent, false, or misleading” and “attempting to create or share content that could mislead or deceive others, including for example creation of disinformation, content enabling fraud, or deceptive impersonation.” Users that violate these terms may be suspended from the service. In addition, Bing’s generative AI experiences may block certain text prompts that violate or are likely to violate the Code of Conduct, and repeated attempts to produce prohibited content or other violations of the Code of Conduct may result in service or account suspension. Microsoft also maintains social listening pipelines through which insights and user feedback (including efforts to “jailbreak” generative AI experiences) are collected from the open Internet. These insights and user feedback are manually reviewed by humans, analyzed daily, and shared across the Bing product teams and with product leadership to identify new areas of concern and implement additional mitigations as needed. Microsoft has also set up a robust user reporting and appeal process to review and respond to user concerns about harmful or misleading content.
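The enforcement flow described above (blocking violating prompts and suspending repeat offenders) can be sketched as a simple gate. All names, patterns, and thresholds below are illustrative assumptions, not Bing's actual implementation, which relies on trained classifiers and human review rather than keyword matching:

```python
# Illustrative sketch of a prompt gate with escalating enforcement.
# Patterns, threshold, and function names are hypothetical; a production
# system would use trained classifiers, not substring matching.
PROHIBITED_PATTERNS = ["deceptive impersonation", "disinformation campaign"]
SUSPENSION_THRESHOLD = 3  # repeated violations escalate to suspension

def review_prompt(prompt: str, violation_counts: dict, user_id: str) -> str:
    """Return 'allow', 'block', or 'suspend' for a user's text prompt."""
    text = prompt.lower()
    if any(pattern in text for pattern in PROHIBITED_PATTERNS):
        violation_counts[user_id] = violation_counts.get(user_id, 0) + 1
        if violation_counts[user_id] >= SUSPENSION_THRESHOLD:
            return "suspend"  # Code of Conduct: repeat violations -> suspension
        return "block"        # block the individual prompt and warn the user
    return "allow"
```

The key design point mirrored here is that enforcement is graduated: a single violation blocks only the prompt, while a pattern of violations escalates to account-level action.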
Bing’s generative AI experiences also provide several touchpoints for meaningful AI disclosures, where users are notified that they are interacting with an AI system and are presented with opportunities to learn more about these features and generative AI, such as through in-product disclaimers (as discussed in How Bing Delivers Search Results), educational FAQs, and blog posts. Empowering users with this knowledge can help them avoid over-relying on AI and learn about the system’s strengths and limitations.
In addition to the measures discussed above, Microsoft has worked to deliver an experience that encourages responsible use of Bing’s generative AI features and limits the generation of harmful or unsafe images. When these systems detect that a prompt could generate a potentially harmful image, they block the prompt and warn the user.
Microsoft’s Responsible AI systems will continue to improve, and Microsoft regularly incorporates user and third-party feedback reported via Bing and Copilot Feedback buttons and its user reporting tools.
See also QRE 20.1.1.
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
As a search engine, Bing Search operates differently from social media websites and other online platforms that host content. Bing Search does not host user-generated content and does not use algorithms to detect, moderate or sanction user-provided content except for limited circumstances outside the scope of this Code of Practice (e.g., the use of PhotoDNA software to detect and report child sexually explicit imagery uploaded to visual search). As to third-party websites indexed by Bing Search, Bing Search does not use algorithms to detect, monitor or sanction such websites, except for limited circumstances outside the scope of this Code of Practice (e.g., the use of PhotoDNA software to detect and report child sexually explicit imagery). Bing users have many legitimate reasons for seeking out content in search that may be harmful or offensive in other contexts, and so Bing Search works to provide as comprehensive and useful a collection of results as possible and does not proactively intervene to limit access to legal content. In some limited cases Bing Search may take action to remove or limit access to third-party links where quality, safety, user demand, relevant laws, and/or public policy concerns exist, but these interventions are reactive; Bing generally does not engage in proactive algorithmic interventions to remove content.
Bing’s generative AI features include additional enhanced safety features, such as classifiers, filters, and a bespoke metaprompt, that further limit the likelihood of harmful content appearing in generative AI features. Microsoft has engaged in extensive Responsible AI reviews of generative AI features in order to ensure outputs are not biased or discriminatory. It has also implemented additional filtering and classifiers to prevent generative AI experiences from returning what Bing considers “low authority” content as part of an answer and to help address impermissible content and behaviors. Microsoft is also continually working to ensure that its generative features do not over-block outputs, so that users are able to access the information they seek, and it measures and monitors conversation metrics to improve these interventions, balancing harm prevention with providing users useful information.
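The layering of metaprompt, classifiers, and low-authority filtering might be sketched as below. The preamble text, the authority check, and the pipeline shape are assumptions for illustration only, not Bing's actual architecture:

```python
# Hypothetical sketch of layered safeguards in a generative answer pipeline.
# The metaprompt text and the authority check are illustrative stand-ins.
SAFETY_PREAMBLE = "Do not produce misleading, harmful, or deceptive content. "

def is_low_authority(source: str) -> bool:
    # Stand-in for an authority classifier; real systems score sources
    # with ranking signals rather than checking a static set.
    return source in {"knownhoax.example"}

def answer_pipeline(user_prompt: str, retrieved_sources: list) -> dict:
    """Apply the metaprompt layer, then filter low-authority sources."""
    prompt = SAFETY_PREAMBLE + user_prompt                               # metaprompt layer
    sources = [s for s in retrieved_sources if not is_low_authority(s)]  # filter layer
    return {"prompt": prompt, "sources": sources}
```

Each layer is independent: the metaprompt constrains the model's behavior up front, while the source filter removes low-authority material before it can be grounded into an answer.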
Lastly, Microsoft has endeavored to provide transparency about how it designed and tested its generative AI features with responsible AI in mind via blog posts, FAQs, presentations, and responsible AI documentation, for example How Bing Delivers Search Results. In May 2024, Microsoft also released its first-ever Responsible AI Transparency Report, which provides additional detail on Microsoft’s Responsible AI programs, including insights into how Microsoft builds applications that use generative AI and how it makes decisions about and oversees the deployment of those applications, among other things.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Bing participated in the Elections Working Group and established additional intake channels to facilitate cross-platform information sharing in relation to the French, Romanian, and Croatian elections.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
We look forward to continuing to work on this commitment with the other signatories as we develop further cross-platform information sharing.
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
Bing Search, through Microsoft, is an active participant in and contributor to the Task-force’s Crisis Response subgroup, in which it proactively provides analysis and data related to influence operations, foreign interference in the information space, and relevant incidents that emerge on its service.
Microsoft’s internal threat detection and research teams, including the Microsoft Threat Analysis Center (MTAC), Microsoft Threat Intelligence Center (MSTIC), Microsoft Research (MSR), and AI for Good, collect and analyse data on actors of disinformation, misinformation and information manipulation across platforms. These teams work with external organisations and companies to share and ingest data that helps Microsoft product and service teams respond effectively to issues and threats.
Microsoft also works to identify and track nation-state information operations targeting democracies across the world and works with trusted third-party partners for early indicators of narratives, hashtags, or information operations that can be leveraged to inform early detection and defensive search strategies for Bing. Through Microsoft’s Democracy Forward team and MTAC, Microsoft also offers mediums for election authorities, including in the EEA member states, to have lines of communication with Microsoft to identify possible foreign information operations targeting elections.
See also QRE 14.1.2.
SLI 16.1.1
Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).
See SLI 14.1.2 for defensive search interventions data, which is based in part on information and threat intelligence gathered through information sharing with third parties, as well as the internal Microsoft and Bing resources noted in QREs 16.1.1 and 14.1.2. Given the multipronged approach Microsoft and Bing take to monitoring and actioning influence operations and sources of misinformation and disinformation, and the multiple internal and external sources relied upon, it is challenging to report precisely whether a single instance of information sharing results in a corresponding defensive search intervention or other action.
Measure 16.2
Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.
QRE 16.2.1
As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.
We look forward to working on this commitment with the other signatories as we develop further cross-platform information sharing and best practices for measuring such information consistently.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing Search regularly reviews and evaluates its user tools, policies, and practices and updates them as needed. While Bing’s existing programs are already designed to address these issues, Bing regularly evaluates its measures, endeavors to improve them, and works to respond quickly to new threats or issues as they arise.
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
Bing Search offers a number of tools to help users understand the context and trustworthiness of search results. Even in circumstances where a user is expressly seeking low authority content (or if there is a data void so little to no high authority content exists for a query), Bing Search provides tools to users that can help improve their digital literacy and avoid harms resulting from engaging with misleading or inaccurate content. For example, Bing Search may include answers or public service announcements at the top of search results pointing users to high authority information on a searched topic such as key global elections or warnings on particular URLs known to contain harmful information (such as unaccredited online pharmacies and sites containing malware).
Where circumstances warrant (such as public health crises or major elections), Bing Search may provide information hubs for users to easily access a centralized repository of high authority information.
Bing Search also provides users with informative panels and direct answers to certain search queries, available in a multitude of global languages.
Bing Search’s “Knowledge Cards” feature also gives users a single view of authoritative information on a specific topic. An example is shown on page 95 of our PDF report.
Bing Search provides users with public service announcements (PSAs). PSAs are user messages that appear as answer boxes at the top of a list of search results for certain triggering queries, providing information on potential risks associated with that query and/or pointing to support resources. PSAs are triggered by queries on specific topics, such as child pornography, attempts to purchase illegal pharmaceuticals, suicide, etc.
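Mechanically, PSA triggering can be thought of as a mapping from sensitive query topics to answer-box messages shown above the result list. The topics and message texts below are invented for illustration; Bing's actual triggering logic is not public:

```python
# Hypothetical topic-to-PSA mapping; real triggering is more sophisticated
# than substring matching, and these messages are illustrative only.
PSA_TOPICS = {
    "buy prescription drugs online": (
        "Warning: unaccredited online pharmacies may sell unsafe products."
    ),
    "election results": (
        "For authoritative election information, consult your official "
        "election authority."
    ),
}

def psa_for_query(query: str):
    """Return a PSA message if the query touches a sensitive topic, else None."""
    q = query.lower()
    for topic, message in PSA_TOPICS.items():
        if topic in q:
            return message  # rendered as an answer box above search results
    return None
```

The important property is that the PSA supplements rather than replaces results: the query still returns its normal result list, with the advisory rendered on top.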
Bing search also partners with trusted election authorities to empower voters with authoritative election information on Bing. Bing leverages partnerships with EU election authorities to help direct users to trusted and/or official sources of information concerning elections and voting information.
For example, Bing launched specialized How to Vote Answers in advance of the French snap elections held June-July 2024. Examples of how these special answers appear to users are shown on page 96 of our PDF report.
In addition to the features available for core search experiences, Bing generative features notify users that they are interacting with an AI system and are presented with opportunities to learn more about these features and generative AI, such as through in-product disclaimers. Bing Generative Search results displays a disclosure shown on page 97 of our PDF report.
Microsoft also offers meaningful resources for users interested in learning more about generative AI features and tools through blog posts, articles, information hubs, and support pages. In addition to teaching AI basics and how-tos, these resources reiterate the importance of checking AI-generated materials and understanding the strengths and limitations of AI. See, e.g., Microsoft AI help & learning.
As part of Microsoft’s Tech Accord commitments, Microsoft has created trainings for political parties, candidates, and election officials to improve their understanding of what deepfakes are and how they can protect against their use in elections. At the time of writing, Microsoft has completed 50 separate training sessions across EEA countries reaching over 500 participants. Countries include Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Norway, Portugal, Slovakia, Spain, and Sweden.
Microsoft is committed to providing resources, educational materials, and guides so that users can develop literacy when interacting with AI systems and will continue to explore ways to further educate the public on important generative AI topics.
SLI 17.1.1
Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.
Knowledge Cards (“KC”) – Represents viewership of Knowledge Cards (of all types/topics) during the Reporting Period.
Transparency Hub Viewership (“TH”) – Represents the total views of the Microsoft Transparency Report Hub during the Reporting Period.
Public Service Announcement (“PSA”) – Represents views of public service announcement panels (of all types/topics) rendered in Bing to EU users during the Reporting Period.
Country | Interactions/engagement with the tool - KC | Total count of the tool’s impressions - TH | Total count of the tool’s impressions - PSA
Austria | 108,761,673 | 57 | 70,220
Belgium | 200,650,612 | 58 | 133,860
Bulgaria | 38,569,264 | 21 | 27,380
Croatia | 27,201,309 | 8 | 21,260
Cyprus | 8,784,806 | 11 | 5,280
Czech Republic | 138,432,602 | 54 | 66,940
Denmark | 80,485,036 | 58 | 62,580
Estonia | 15,330,027 | 10 | 11,760
Finland | 86,792,314 | 46 | 49,120
France | 1,063,056,890 | 144 | 497,580
Germany | 1,108,464,294 | 324 | 604,440
Greece | 56,864,591 | 19 | 36,900
Hungary | 70,877,762 | 32 | 44,360
Ireland | 110,462,651 | 17 | 195,720
Italy | 686,807,990 | 81 | 183,140
Latvia | 17,582,909 | 11 | 223,340
Lithuania | 28,626,206 | 14 | 23,560
Luxembourg | 11,744,763 | 8 | 5,400
Malta | 8,206,968 | 0 | 9,080
Netherlands | 309,694,218 | 313 | 251,780
Poland | 463,903,783 | 112 | 202,920
Portugal | 151,513,785 | 26 | 82,700
Romania | 83,347,298 | 30 | 59,440
Slovakia | 39,365,393 | 21 | 26,460
Slovenia | 18,367,447 | 11 | 13,860
Spain | 816,296,463 | 136 | 206,520
Sweden | 173,528,035 | 143 | 138,620
Iceland | 6,685,380 | 24 | 5,480
Liechtenstein | 729,055 | 6 | 200
Norway | 94,912,507 | 50 | 76,540
Total EU | 5,923,719,089 | 1,765 | 3,155,880
Total EEA | 6,026,046,031 | 1,845 | 3,073,660
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
Microsoft works with leading media and information literacy partners globally to support the development and promotion of media literacy campaigns that benefit users across Microsoft services and the broader ecosystem.
In May 2024, Microsoft, in collaboration with OpenAI, launched the Societal Resilience Grants to support various organizations in promoting AI literacy, ethical AI use, and societal resilience against AI-related challenges. The grants were awarded to the Older Adults Technology Services from AARP, International IDEA, Partnership on AI, Coalition for Content Provenance and Authenticity (C2PA), and WITNESS. These initiatives have reached national election bodies in 26 countries, 500,000 older adults, and 250 global journalists, demonstrating a comprehensive approach to addressing AI threats and fostering responsible AI practices.
As noted in QRE 17.1.1, as part of Microsoft’s Tech Accord commitments, Microsoft has created trainings for political parties, candidates, and election officials to improve their understanding of what deepfakes are and how they can protect against their use in elections; at the time of writing, Microsoft has completed 50 separate training sessions across EEA countries, reaching over 500 participants.
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
Microsoft continues to work with multiple organisations to develop and promote media literacy campaigns, including 2024 campaigns from the News Literacy Project and The Trust Project to promote information literacy resources on Microsoft platforms. Since its last report, Microsoft has continued to grow partnerships that strengthen the company’s capacity and ability to combat information operations globally. Microsoft is continuing to work with existing and new partners to create, disseminate, and report on expanded literacy campaigns in EEA markets, such as delivering additional deepfake awareness trainings. For example, International IDEA is on track to deliver five AI and Elections trainings between November 2024 and May 2025. These trainings will reach global election officials, civil society, and journalists, providing participants with an enhanced understanding of AI, its ethical implications, and its uses and benefits in electoral management. Microsoft’s deepfake content will be integrated into the curriculum.
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Bing regularly reviews and implements mitigations, safeguards, and safe design considerations to help proactively address, prevent, and mitigate harms arising from potential misuse of generative AI search experiences, including viral propagation of content, and provides updates to public-facing transparency documents such as How Bing Delivers Search Results. However, Bing features do not allow users to post or share content within Bing, so virality is not possible on the platform itself.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing regularly reviews and evaluates its policies and practices related to its features and adjusts and updates them as needed. Bing continues to explore additional research opportunities and partnerships related to the spread of harmful misinformation and/or disinformation.
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
Unlike the social media services toward which this Commitment appears primarily oriented, search engines do not typically cause or facilitate the viral propagation of disinformation, as they do not allow users to post or share content directly on the service. Please see How Bing Delivers Search Results and the Microsoft Bing Webmaster Guidelines for an overview of how Bing Search designs its algorithms to deliver high-authority, highly relevant content while minimizing the negative impact of spam and less credible information sources. Bing Search’s ranking algorithms and related policies are designed to address deceptive tactics that attempt to manipulate those algorithms and are discussed in more detail at QREs 14.1.1 and 14.1.2. Bing Search features such as news carousels, as well as the other features and policies discussed throughout this report, further help minimize the risk of viral propagation of misinformation through Bing Search.
Bing’s suggestion features offer possible search queries to users to facilitate a more efficient search experience. While search suggestions are not directly tied to virality of content, Bing Search also undertakes measures to help ensure it does not inadvertently lead users to misleading or other harmful content through suggestions. Specifically, Bing Search uses a combination of proactive and reactive algorithmic and manual interventions to prevent the display of search suggestions that could lead to low authority content.
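The combination of proactive and reactive interventions on suggestions can be sketched as two filters, one algorithmic and one human-curated. The set contents and names below are illustrative assumptions only:

```python
# Illustrative suggestion filtering: a proactive algorithmic blocklist plus
# a reactive, manually curated removal list (e.g., populated from user
# feedback reports). Both sets are hypothetical examples.
LOW_AUTHORITY_SUGGESTIONS = {"miracle cure doctors hide"}  # proactive/algorithmic
MANUAL_REMOVALS = set()  # reactive: filled by human review of feedback

def filter_suggestions(candidates: list) -> list:
    """Drop candidate suggestions flagged by either intervention."""
    return [
        s for s in candidates
        if s not in LOW_AUTHORITY_SUGGESTIONS and s not in MANUAL_REMOVALS
    ]
```

The split matters operationally: the algorithmic list scales to unseen queries, while the manual list lets reviewers react quickly to specific suggestions reported through the feedback tool.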
Bing Search also provides a tool for users to provide feedback on suggestions they encounter. The feedback tool is shown on page 108 of our PDF report.
Clicking “Feedback on these suggestions” allows users to provide specific feedback on individual suggestions on several bases, as seen on page 108 of our PDF report.
There is generally no risk of viral spread of generated content through Bing and Bing’s generative AI experiences because Bing does not allow users to directly post or otherwise share content on the platform. Bing also takes steps to prevent the service from being used to create content or images that might be shared on other platforms through a multipronged approach. This approach includes terms of use and a code of conduct, classifiers, filters, bespoke metaprompts, and robust reporting mechanisms designed to mitigate the risk of potential misuse of the platform. Supplemental Terms addressing AI powered search experiences in Bing, for example, prohibit users from using the service to generate fraudulent or misleading information, including the creation of disinformation. Bing’s ranking and relevance systems for search, which are an essential component to answering user questions, work to ensure that high authority content is returned first in search results in traditional search and in chat. Where Bing’s systems flag that a user’s prompt or generated output may result in low authority or misleading information, the system will take steps to mitigate that possible harm through solutions, such as not returning generated content to the user, diverting the user to a different topic, or redirecting the user to traditional search. Users who encounter problematic content can report concerns via Feedback or Report a Concern tools.
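The mitigation choices listed above (withholding generated content, diverting the topic, or redirecting to traditional search) amount to mapping a risk flag on a prompt or output to an action. The flag names and action labels here are hypothetical, not Bing's internal taxonomy:

```python
# Hypothetical mapping from a risk flag raised on a prompt/output pair to
# the mitigation actions described in the report text. Labels are invented.
MITIGATIONS = {
    "low_authority": "redirect_to_traditional_search",
    "misleading": "withhold_generated_content",
    "off_policy_topic": "divert_to_different_topic",
}

def mitigate(risk_flag: str) -> str:
    """Return the mitigation action for a flag; unflagged content passes through."""
    return MITIGATIONS.get(risk_flag, "return_generated_answer")
```

A table-driven dispatch like this keeps policy decisions auditable: adding or changing a mitigation is a data change rather than new control flow.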
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
Bing Search does not have data relevant to this SLI. Users come to Bing Search with a specific research topic in mind and expect Bing Search to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Bing Search does not have a news feed of user content, does not allow users to post and share content within Bing, and does not otherwise enable content to go “viral” on Bing. See response to SLI 14.2.1 for relevant metrics.
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
Bing Search regularly reviews and considers safe design practices and research and conducts user studies as part of its product and new feature development processes. Bing Search employees have actively partnered with Microsoft Research and third-party research organizations to contribute to novel research and internal studies concerning safe design practices, responsible AI, and disinformation.
Microsoft also funds and works with Princeton University on the creation of a hub for researchers to access data from social media companies to improve the identification and tracking of cyber-enabled information operations. This accelerator will be available to researchers around the world, including in Europe.
Microsoft Research and Microsoft’s AI for Good Lab regularly undertake and publish research that addresses or can be used in understanding online misinformation and disinformation. Microsoft researchers are currently engaged in research leveraging search data to explore how medical hoaxes went viral during the COVID-19 pandemic and research concerning the detection of bias in mainstream news in connection with elections.
Microsoft maintains an internal research team—the Microsoft Threat Analysis Center (MTAC)—that conducts research on information influence operations and publishes both internal and public reports on its findings. MTAC maintains global hubs and conducts intelligence analysis in over 13 languages. Additionally, Microsoft funds and works with external organizations to ingest data and research that they conduct into Microsoft products, including Bing Search.
Bing Search looks forward to continued opportunities to contribute to and collaborate with the research community on future research and is in active discussions with third party organizations and the research community on best practices and mitigations for core web search and new generative AI experiences.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing regularly updates its policies and terms and conditions to account for product changes, user feedback, and evolving legal considerations.
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
Bing’s search engine ranking algorithms are not a traditional “recommender system” in that Bing Search only provides content to users as a result of their express request, rather than pushing content to users who were not expressly seeking it. That said, the main parameters of Bing Search’s ranking algorithms are published in the “How Bing Ranks Search Results” section of
How Bing Delivers Search Results, which is available to Bing Search users in the EU. Bing Search also provides information on how it ranks and returns search suggestions in the Enhanced Search Experiences section of
How Bing Delivers Search Results. Bing’s ranking algorithms apply equally to traditional search results and to the generative AI features available in Bing.
Please also see QREs 14.1.1 and 22.2.1.
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
Bing Search allows users to turn off search suggestions (including auto-suggest and related search suggestions) (“AS/RS”) on its user settings page, as shown on page 115 of our PDF report.
In the Bing image experience, users can turn off personalized search suggestions through the Settings pane. Bing anticipates providing reporting on utilization of this new measure in forthcoming reports.
Users may also access, view, and delete their previous search queries in their Microsoft Account Privacy dashboard or clear their search history in Bing Search settings, which in turn will remove that content from any personalized search suggestions.
Bing is currently building out expanded data retention and reporting functionalities related to this Commitment.
No of times users engaged with the search suggestion feature |
Total EEA |
81,409,838 |
Commitment 20
Relevant Signatories commit to empower users with tools to assess the provenance and edit history or authenticity or accuracy of digital content.
We signed up to the following measures of this commitment
Measure 20.1 Measure 20.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft has continued to improve content provenance measures on its AI image generation features, including continuing to pilot Content Integrity Tools that allowed users to add content credentials to their own authentic content (discussed further below).
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft expects to continue its important work in content provenance tools and ways to help counter harmful AI-generated content.
Measure 20.1
Relevant Signatories will develop technology solutions to help users check authenticity or identify the provenance or source of digital content, such as new tools or protocols or new open technical standards for content provenance (for instance, C2PA).
QRE 20.1.1
Relevant Signatories will provide details of the progress made developing provenance tools or standards, milestones reached in the implementation and any barriers to progress.
Microsoft and key members of the Bing Search team are also involved in the Partnership on AI (“PAI”) effort to identify possible countermeasures against deepfakes and have participated in the drafting and refinement of PAI’s proposed Synthetic Media Code of Conduct. The proposed Code of Conduct provides guidelines for the ethical and responsible development, creation, and sharing of synthetic media (such as AI-generated artwork).
Microsoft is deeply focused on the potential risk that deepfakes and other abusive AI-generated content could be used to proliferate election-related misinformation, deceive the public, and potentially undermine trust in online content and our elections. For those reasons, we were a founding member of the Coalition for Content Provenance and Authenticity (C2PA). The C2PA is a coalition of technology companies, media, and others created to address the prevalence of misleading information online by developing technical standards to certify the source and history of media content. The C2PA specification defines techniques for adding “Content Credentials” to online media: metadata about the media’s provenance and authenticity. In turn, that information provides consumers with a way to verify the history and trustworthiness of the media. Content Credentials are already added to all generative AI images created with our most popular consumer-facing AI image generation tools, including Image Creator, Microsoft Designer, and Copilot.
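The signed-manifest idea behind Content Credentials can be sketched in simplified form. The sketch below is illustrative only (it uses an HMAC over a JSON manifest rather than the C2PA binary format and its certificate-based signatures, and all names are invented), but it shows why any alteration of the asset invalidates the credential.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustrative; real C2PA signing uses X.509 certificate chains

def create_manifest(asset_bytes: bytes, claims: dict) -> dict:
    """Build a simplified Content Credentials-style manifest for an asset."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,  # e.g. generator name, capture device, edit history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the asset still matches its hash."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return unsigned["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

image = b"\x89PNG...original pixels"
m = create_manifest(image, {"generator": "Image Creator", "ai_generated": True})
assert verify_manifest(image, m)                   # untouched asset verifies
assert not verify_manifest(image + b"edited", m)   # any edit breaks the credential
```

Because the manifest binds the asset hash to the provenance claims under a signature, a consumer can detect both tampering with the media and tampering with the claims themselves.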
In addition, Microsoft has continued piloting Content Integrity Tools, which allow users to add Content Credentials to their own authentic content. Designed as a pilot program primarily to support the 2024 election cycle and to gather feedback about Content Credentials-enabled tools, the tools were available during the reporting period to political campaigns in the EU, as well as to elections authorities and select news media organizations in the EU and globally. These tools included a partnership and collaboration with fellow Tech Accord signatory TruePic.
Announced in April 2024, this collaboration leverages TruePic’s mobile camera SDK to enable campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App,” which makes it easy to capture images with C2PA-enabled signing, launched for both Android and iOS and can be used by participants in the Content Integrity Tools pilot program.
Measure 20.2
Relevant Signatories will take steps to join/support global initiatives and standards bodies (for instance, C2PA) focused on the development of provenance tools.
QRE 20.2.1
Relevant Signatories will provide details of global initiatives and standards bodies focused on the development of provenance tools (for instance, C2PA) that signatories have joined, or the support given to relevant organisations, providing links to organisation websites where possible.
The Tech Accord’s commitments make it more difficult for bad actors to use legitimate tools to create deepfakes and easier for users to identify authentic content. The Accord focuses on the work of companies that generate AI content as well as those that distribute it, and calls on them to strengthen the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. For its part, Microsoft has
taken steps to meet the commitments in the Tech Accord by further implementing content provenance, establishing reporting channels, and improving detection capabilities. For example, Microsoft launched a new web page –
Microsoft-2024 Elections – where political candidates and election authorities can report a concern about deceptive AI targeting themselves or their election.
Microsoft has worked to harness the data science and technical capabilities of our AI for Good Lab and MTAC teams to better assess whether abusive content—including that created and disseminated by foreign actors—is synthetic or not. Microsoft’s AI for Good Lab has been developing detection models (image, video) to assess whether media was generated or manipulated by AI. The model is trained on approximately 200,000 examples of AI-generated and real content. The AI for Good Lab continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, Microsoft calls on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and respond appropriately, through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and government.
Commitment 21
Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.
We signed up to the following measures of this commitment
Measure 21.1 Measure 21.2 Measure 21.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing continues to evaluate additional tools and resources that support the spirit of this commitment.
Measure 21.2
Relevant Signatories will, in light of scientific evidence and the specificities of their services, and of user privacy preferences, undertake and/or support research and testing on warnings or updates targeted to users that have interacted with content that was later actioned upon for violation of policies mentioned in this section. They will disclose and discuss findings within the permanent Task-force in view of identifying relevant follow up actions.
QRE 21.2.1
Relevant Signatories will report on the research or testing efforts that they supported and undertook as part of this commitment and on the findings of research or testing undertaken as part of this commitment. Wherever possible, they will make their findings available to the general public.
Measure 21.3
Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.
QRE 21.3.1
Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.
Bing Search regularly consults research and evidence, including from internal Microsoft research and data science teams, related to safe design practices, labeling, and user experience and considers such research as part of its product design and testing. Bing Search also conducts internal research and user studies for product features, such as by analyzing impressions, engagement, or clicks of various features. Bing Search also has a “feedback” button easily accessible from any page of Bing. Bing Search reviews and may make improvements based on user feedback. Bing Search also regularly consults with third party organizations to hear feedback about product design and related safety considerations.
As to fact check labels, Bing Search participated in the W3C organization that helped to design and promote Schema.org and ClaimReview and regularly meets with stakeholders to discuss common issues, including whether updates to these common schemas are necessary.
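A ClaimReview fact-check label is published by a fact-checking organization as Schema.org JSON-LD embedded in its article, which search engines can then read to render a label. A minimal example of the markup (all URLs, names, and values are illustrative) might look like:

```python
import json

# Minimal ClaimReview markup following the Schema.org vocabulary.
# Every concrete value here is a made-up illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/review/123",   # hypothetical review URL
    "claimReviewed": "Example claim being fact-checked",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork",
                       "url": "https://example.com/original-post"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",  # the human-readable verdict shown in labels
    },
}
markup = json.dumps(claim_review, indent=2)
```

A consuming search engine would extract the `claimReviewed` text, the reviewing organization, and the `reviewRating` verdict to display alongside the linked result.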
Microsoft’s Responsible AI team and product teams have worked to develop labeling and warning systems – as well as robust support and educational resources – to help ensure users are informed that AI-powered answers can have inaccuracies and to encourage users to consult the source links provided. Microsoft’s Responsible AI team is staffed by a cross-disciplinary team of experts in AI, who consult regularly with external experts in the field to ensure our labels and warnings are designed in accordance with best practices.
Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.1 Measure 22.2 Measure 22.3 Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing regularly evaluates opportunities to improve its product and educate users on the trustworthiness and limitations of AI.
Measure 22.2
Relevant Signatories will give users the option of having signals relating to the trustworthiness of media sources into the recommender systems or feed such signals into their recommender systems.
QRE 22.2.1
Relevant Signatories will report on whether and, if relevant, how they feed signals related to the trustworthiness of media sources into their recommender systems, and outline the rationale for their approach.
Bing Search utilizes a variety of signals – including from trusted third parties – as one of several means to help determine the authority score of a given website and rank it accordingly in search results.
Bing Search also relies upon signals to help ensure that its search systems and features, such as auto-suggest and related search functions, direct users to high authority, trustworthy results and do not inadvertently suggest low authority or misleading content.
The above mechanisms and the Bing algorithm’s emphasis on promoting high authority content are applied equally to the new Bing generative AI features to help ensure that users are protected from inadvertently being exposed to harmful or low authority information in the new Bing experience.
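Conceptually, feeding trustworthiness signals into ranking can be sketched as a weighted adjustment applied on top of topical relevance. The signal names, weights, and scoring below are purely illustrative and are not Bing’s actual algorithm.

```python
# Illustrative only: combine third-party trust signals into an authority
# adjustment layered on topical relevance. Not Bing's actual ranking model.
AUTHORITY_WEIGHTS = {"third_party_trust": 0.5, "transparency": 0.3, "track_record": 0.2}

def authority_score(signals: dict) -> float:
    """Weighted average of normalized (0..1) trust signals."""
    return sum(AUTHORITY_WEIGHTS[name] * signals.get(name, 0.0)
               for name in AUTHORITY_WEIGHTS)

def rank(results: list, authority_boost: float = 0.4) -> list:
    """Order results by relevance plus an authority adjustment."""
    return sorted(
        results,
        key=lambda r: r["relevance"] + authority_boost * authority_score(r["signals"]),
        reverse=True,
    )

results = [
    {"url": "https://low-authority.example", "relevance": 0.80,
     "signals": {"third_party_trust": 0.1, "transparency": 0.2, "track_record": 0.1}},
    {"url": "https://news.example", "relevance": 0.72,
     "signals": {"third_party_trust": 0.9, "transparency": 0.8, "track_record": 0.9}},
]
ranked = rank(results)  # the high-authority source outranks the slightly more "relevant" one
```

The design point is that authority acts as a correction term: a marginally more relevant but low-authority page can be outranked by a trustworthy source, which is the behavior this measure describes.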
Measure 22.3
Relevant Signatories will make details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
QRE 22.3.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
Measure 22.7
Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.
QRE 22.7.1
Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.
In addition to broader measures described in this report, Bing Search has taken special care to address low authority information and misinformation in relation to elections, the Russian invasion of Ukraine, the Israel-Hamas conflict, and EU elections as detailed below and further in the Crisis Reporting appendices.
Microsoft is also an active participant in the elections working group established by the Crisis Response Working Group. In addition, Microsoft works with election authorities responsible for running elections to promote trusted information regarding elections and monitors for foreign information operations targeting elections. Bing has launched special features such as info panels and specialized answers directing users to high authority content concerning elections and voting (an example for France’s “snap” parliamentary election can be seen on page 134 of our PDF report).
Additional detail is provided in the Crisis Reporting appendix.
In response to Russia’s invasion of Ukraine in 2022, Bing Search has closely monitored low authority information trends and is working to promote authoritative content related to the conflict.
· Bing Search has taken steps to algorithmically boost authority signals and has downgraded less authoritative information (see SLI 22.7.1). These interventions are translated automatically into other languages supported by Bing Search and integrated into Bing’s generative AI experiences.
· Bing Search works with Microsoft’s Democracy Forward team, the Microsoft Threat Analysis Center (MTAC), and the Microsoft Threat Intelligence Center (MSTIC) to ensure access to signals regarding Russian cyber and information operations targeting Ukraine to inform potential algorithmic interventions for both traditional and generative AI search tools.
· Bing Search regularly partners with independent research organizations and nonprofit organizations to maintain threat intelligence and inform potential algorithmic interventions both for traditional and generative AI search tools.
· Bing Search also takes action to remove autosuggest and related search terms that have been found likely to lead users to low authority content. These measures have helped ensure that Bing Search is promoting authoritative news sources, timelines, and other factual information at the top of algorithmic search results and in Bing generative AI experiences.
· Bing Search has also complied with EU sanctions orders requiring the removal of certain Russian media sources, such as Russia Today and Sputnik.
SLI 22.7.1
Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).
Bing has revised the SLI action descriptions below for accuracy with respect to the metrics provided pursuant to this Section. Key metrics are provided below.
Although defensive search actions are taken at a global level (and therefore applied in every Member State), Bing has endeavored to provide the additional following data for this SLI:
- “Defensive Interventions (RU/UA)” refers to the total number of queries entered by users that were addressed with defensive search interventions related to the Ukraine/Russia crisis during the Reporting Period.
- “Impressions (RU/UA)” reflects the number of user impressions for queries searched by users where defensive search interventions related to the Ukraine/Russia crisis were applied during the Reporting Period.
Country |
Defensive Interventions (RU/UA) |
Impressions - (RU/UA) |
Austria |
3,453 |
47,947 |
Belgium |
5,014 |
65,142 |
Bulgaria |
11 |
22 |
Croatia |
2 |
2 |
Cyprus |
0 |
0 |
Czech Republic |
23 |
28 |
Denmark |
1,660 |
12,233 |
Estonia |
5 |
5 |
Finland |
1,496 |
10,619 |
France |
11,724 |
286,787 |
Germany |
14,319 |
601,938 |
Greece |
6 |
7 |
Hungary |
4 |
4 |
Ireland |
3,587 |
33,749 |
Italy |
6,832 |
122,340 |
Latvia |
11 |
14 |
Lithuania |
3 |
3 |
Luxembourg |
0 |
0 |
Malta |
2 |
2 |
Netherlands |
6,122 |
53,143 |
Poland |
6,028 |
91,154 |
Portugal |
3,410 |
26,600 |
Romania |
4 |
5 |
Slovakia |
5 |
5 |
Slovenia |
0 |
0 |
Spain |
11,237 |
313,141 |
Sweden |
4,696 |
35,766 |
Iceland |
0 |
0 |
Liechtenstein |
0 |
0 |
Norway |
2,988 |
31,233 |
Total EU |
79,654 |
1,700,656 |
Total EEA |
82,642 |
1,731,889 |
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing monitors user feedback and regularly evolves its product reporting process and forms in response to user feedback, new legal obligations, or product developments.
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
As a search engine that does not host or display user generated content, Bing Search does not have a reporting function for user generated content.
Bing Search’s
Report a Concern Form permits users to report third-party websites for a variety of reasons including disclosure of private information, spam and malicious pages, and illegal materials.
Bing Search’s “Feedback” tool, which is accessible on the lower right corner on a search results page, allows users to provide feedback on search results (including a screenshot of the results page) to Bing Search. Depending on the nature of the feedback, Bing Search may take appropriate action, such as to engage in algorithmic interventions to ensure high authority content appears above low authority content in search results, remove links that violate local law or Bing policies, add answers, warnings or other media literacy interventions on certain topics, or remove autosuggest terms.
As discussed in QRE 14.1.2, these tools have also been updated to make it easy for users to report problematic content they encounter in Bing’s generative AI experiences by including the same “Feedback” button with direct links to the respective service’s “Report a Concern” tool on the footer of each page.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
See QRE 23.1.1. Bing Search generally does not experience issues with mass flagging of content or abuse of its reporting features. This concern appears more applicable to other types of services (e.g., social media and online media websites) or content outside the scope of this regulation that is more prone to mass flagging, such as copyright infringement. Bing Search engages in human review of reports submitted through its reporting functionality and evaluates each report consistent with its policies and procedures.
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.2 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Bing released a specialized dataset of European Parliament election related queries in different EU languages for use by the research community and to support transparency. Researchers can apply using the form found
here.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Bing is actively exploring additional mechanisms to meet this commitment and welcomes feedback from the research community and Commission on the types of data that would be most useful to the research community. Bing is working to provide additional open datasets and resources that may be used by the research community.
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
Bing Search and Microsoft are dedicated to supporting the research community and regularly provide information and data to the research community in a variety of ways.
Bing Search already provides researchers and the public with access to
MS MARCO, a collection of datasets focused on deep learning in search that are derived from Bing Search queries and related data. Research organizations can gain access to the MS MARCO datasets instantaneously via the
MS MARCO homepage. The MS MARCO dataset has been cited in numerous research papers since its release and has been utilized for a range of research issues, including in connection with misinformation and disinformation. Because the dataset is provided open source, the extent to which it has been used for disinformation related research purposes cannot easily be ascertained.
In 2020, Bing Search also shared
a search dataset for Coronavirus Intent comprised of queries from all over the world that had an intent related to the Coronavirus or Covid-19 (e.g., searches for “Coronavirus updates Seattle” or “Shelter in place”) for use by researchers and the public. This data, which is divisible by country, is particularly relevant to misinformation research on public health issues and the COVID-19 pandemic, as it provides insights into how users sought information related to the coronavirus during the pandemic. The dataset was also posted to
Azure Open datasets for Machine Learning,
Tensorflow.org, and
Kaggle. See additional information on the dataset at
Extracting Covid-19 insights from Bing Search data | Bing Search Blog.
In 2024, Microsoft publicly released a new information-rich dataset, the MS MARCO Web Search dataset, leveraging Bing search data. This dataset closely mimics real-world web document and query distributions, provides rich information for various kinds of downstream tasks, and encourages research in several areas. It also contains rich information from the web pages, such as visual representations rendered by web browsers, raw HTML structure, clean text, semantic annotations, and language and topic tags labeled by industry document understanding systems. MS MARCO Web Search further contains 10 million unique queries in 93 languages, with millions of relevant labeled query-document pairs collected from the search log of the Microsoft Bing search engine to serve as the query set.
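Labeled query-document pairs of this kind are commonly distributed as tab-separated files. A hedged sketch of consuming such data follows; the column layout is illustrative and is not the exact MS MARCO Web Search file format.

```python
import csv
import io

# Illustrative TSV of query-document relevance pairs; the real MS MARCO
# Web Search release defines its own files and columns.
sample = io.StringIO(
    "qid\tquery\tdocid\tlabel\n"
    "1\teuropean parliament elections\tD100\t1\n"
    "1\teuropean parliament elections\tD200\t0\n"
)
pairs = list(csv.DictReader(sample, delimiter="\t"))

# Collect the documents labeled relevant for downstream evaluation.
relevant = [row["docid"] for row in pairs if row["label"] == "1"]
```

In practice a researcher would read such files from the released dataset on disk rather than from an in-memory string, then feed the pairs into retrieval or misinformation-analysis experiments.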
Additionally, researchers who are registered webmasters may utilize Bing Search’s
Keyword Tools and
Backlinks Webmaster Tools to provide insights into search usage and keywords. Bing is also working on ways to provide deeper research access to the tool across the research community and hopes to provide updates in its next report.
Bing Search also offers use of
Bing APIs to the public, which include Bing Image Search, Bing News Search, Bing Video Search, Bing Visual Search, Bing Web Search, Bing Entity Search, Bing Autosuggest, and Bing Spell Check. Bing Search provides free access to these APIs for up to 1,000 transactions per month, which may be leveraged by the research community.
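For illustration, a researcher's call against the Bing Web Search API can be sketched as below. The endpoint and header name follow Microsoft's published API reference; the subscription key is a placeholder and the helper function is our own invention, and no network request is sent here.

```python
from urllib.parse import urlencode

# Bing Web Search API v7 endpoint, per Microsoft's API reference.
ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

def build_search_request(query: str, market: str = "en-GB", count: int = 10):
    """Return (url, headers) for a Bing Web Search API call (not sent)."""
    params = {"q": query, "mkt": market, "count": count}
    headers = {"Ocp-Apim-Subscription-Key": "<YOUR-KEY>"}  # placeholder key
    return f"{ENDPOINT}?{urlencode(params)}", headers

url, headers = build_search_request("claimreview fact check")
print(url)
```

Within the free tier described above, a researcher could issue up to 1,000 such requests per month per API.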
In addition to the above datasets, Microsoft Research maintains a public portal of codes, APIs, software development kits, and datasets that are available to the Research Community at
Researcher tools: code & datasets - Microsoft Research. These public research tools can be accessed by researchers and downloaded instantaneously without formal applications or login credentials.
Bing launched a
Qualified Researcher Program to enable EU researchers to easily request access for publicly accessible Bing data from a singular landing page. However, because these datasets are already available open-source (see below), we expect some researchers may elect to obtain datasets via the above means to avoid the burden of an application and credentialing process.
Bing compiled a specialized dataset of European Parliament election related queries in different EU languages for use by the research community and to support transparency; researchers can apply using the form found
here. Additionally, Bing has engaged with European researchers to discuss the types of data that will be most useful to the research community.
Microsoft is also a leader in research in Responsible AI and provides
a range of tools and resources dedicated to promoting responsible usage of artificial intelligence to allow practitioners and researchers to maximize the benefits of AI systems while mitigating harms.
Lastly, given the open nature of the Bing Search index and public nature of search results, researchers can utilize Bing Search or Bing’s generative AI experiences to run specific queries and analyze results (unlike social media which may require private accounts or connections between users to access certain materials).
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
Bing Search will publish information as it continues to build further data research infrastructure pertinent to these commitments.
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
Because the above-mentioned tools discussed in QRE 26.1.2 predate the Code of Practice and were provided open source without tracking mechanisms, Microsoft is working on developing improved usage tracking for these publicly accessible researcher tools and datasets.
Measure 26.2
Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected official, news outlets and government accounts subject to an application process which is not overly cumbersome.
QRE 26.2.1
Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.
Unlike social media platforms, Bing Search does not have private user accounts or other personal data provided by users as contemplated by Measure 26.2. However, Bing does enable researcher access to data on the platform through a number of mechanisms, as described in QRE 26.1.1 and the research partnerships described in QRE 18.3.1. Researchers can also submit real-time queries in Bing Search and Copilot.
QRE 26.2.2
Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.
See QRE 26.2.1 and 26.1.1.
QRE 26.2.3
Relevant Signatories will describe the application process in place to in order to gain the access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.
Currently, there is not an application process to access the MS MARCO, ORCAS, MS MARCO Web Search, or Bing Coronavirus Query datasets in their original download locations, as Microsoft intended to allow open and easy access to the public and research community without arduous credentialing or account creation processes. Users may freely access the datasets instantaneously through the
MS MARCO and
ORCAS websites and
MS-MARCO-Web-Search and
Bing Coronavirus Query pages on GitHub. No application or credentialing is required; however, this open-source model makes usage tracking more challenging and requires investment in additional tooling.
For the Bing Qualified Researcher Program, eligible EU researchers may request access for publicly accessible Bing data and APIs, including the resources mentioned above, through an application form. If their request meets the criteria highlighted on the application page, data will be made available for the approved research purpose. More details are available at
Bing Qualified Researcher Program - Microsoft Support.
For other research data, researchers may be provided datasets and information as part of research partnerships with Microsoft Research. Researchers may contact Microsoft Research to discuss research opportunities.
And of course, the Bing Search service, including Bing’s generative AI experiences, is also public and may be used for a variety of research purposes without login or credentials.
Microsoft is continuing to explore possibilities to streamline data access consistent with this provision and in accordance with Microsoft Research’s longstanding data sharing and collaboration with the research community. Bing also regularly explores additional research partnerships.
SLI 26.2.1
Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).
Bing does not gate access for most of its datasets and therefore this metric is inapplicable, as any individual may freely access the tools. Bing is working on approaches for better tracking of usage of publicly released datasets and APIs.
Bing tracks applications to its Qualified Researcher Program and will provide additional reporting in its next report.
“MSMARCO” under “Other Metrics” provides the total global number of downloads of the MS MARCO dataset.
Measure 26.3
Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.
QRE 26.3.1
Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.
Users can report issues accessing MS MARCO and ORCAS datasets to
[email protected]. Microsoft endeavors to restore access and address any issues with dataset access expeditiously.
Commitment 27
Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.
We signed up to the following measures of this commitment
Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft was an active participant in the EDMO Working Group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 27.1
Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.
QRE 27.1.1
Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.
Microsoft supports the development of an independent third-party body, in line with the Digital Services Act, and the upcoming associated Delegated Regulation on data access provided for in the Digital Services Act. Microsoft has also been a member of the Working Group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms. The Working Group started its work on 10 May 2023 under the coordination of the European Digital Media Observatory (EDMO). Its main task has been to develop an organizational model for a new independent intermediary body that will facilitate data sharing between digital platforms and independent, external researchers.
Measure 27.2
Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.
QRE 27.2.1
Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.
As the development of the independent third-party body has not yet been finalized, there was no funding allocated to the implementation of Measure 27.2 during the period covered by this report.
Measure 27.3
Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.
QRE 27.3.1
Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.
As the development of the independent third-party body has not yet been finalized, no data was shared with this body for the purposes of research as outlined under Measure 27.3 during the period covered by this report.
SLI 27.3.1
Relevant Signatories will disclose how many of the research projects vetted by the independent third-party body they have initiated cooperation with or have otherwise provided access to the data they requested.
As the development of the independent third-party body has not yet been finalized, no research projects were vetted by this body, as set out under Measure 27.3, during the period covered by this report.
Measure 27.4
Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.
QRE 27.4.1
Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.
Microsoft is working with leading academics and researchers to help us better detect, understand, and mitigate the risks to elections posed by deceptive media generated by AI. For instance, Bing worked with Princeton University to address the question of how to build scalable measurement techniques to evaluate deceptive AI in images.
More broadly, Microsoft is a leader in research in Responsible AI and provides a range of tools and resources dedicated to promoting responsible usage of artificial intelligence to allow practitioners and researchers to maximize the benefits of AI systems while mitigating harms. For example, as part of its Responsible AI Toolbox, Microsoft provides a mitigations library, which enables practitioners to experiment with different techniques to address the failure of AI systems (which could include the production of inaccurate outputs). We also provide the Responsible AI tracker, which uses visualizations to show the effectiveness of the different techniques for more informed decision-making. These tools are available to the public and research community for free.
These are just a few of the examples of partnerships Microsoft forged with third parties to combat the creation and dissemination of deceptive AI-generated content targeted at our elections. Microsoft teams regularly engage with external stakeholders on these issues to inform our internal policies, practices, and standards, to improve our products, and to understand emerging threats.
Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft and IE University’s Center for the Governance of Change in Spain collaborate on
AI4Democracy to create knowledge and promote action for a responsible use of artificial intelligence to defend and strengthen democracy. Launched in November 2023, AI4Democracy is ongoing and the continuation of the Tech4Democracy program, an initiative led by IE University in partnership with the United States Department of State and with the strategic support of Microsoft.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Ongoing review of researcher feedback and requests may result in additional measures and resources.
In addition, Microsoft Research regularly explores potential partnerships with third party research institutions and is actively in discussions with several research institutions on potential misinformation and disinformation related research that may leverage Bing Search data. Microsoft’s internal research divisions also regularly initiate and support research relevant to misinformation and disinformation.
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
Bing Search facilitates research, engages with the research community, and provides data to the research community in a variety of ways, as described below and in QRE 26.1-2 and 18.3.1.
More broadly, Microsoft dedicates significant resources to supporting, promoting, and developing research on emerging issues including responsible AI, safe design, search and information retrieval, language learning models, and algorithms.
Microsoft Research and other research groups within the company, such as the
AI for Good Research Lab, employ robust teams of researchers and data scientists and regularly utilize Bing Search datasets and web search as part of important research efforts, including research focused on misinformation and/or disinformation.
Microsoft Research and the AI for Good Lab regularly explore potential partnerships with third party research institutions and are actively in discussions with research institutions on potential misinformation and disinformation related research that may leverage Bing Search data. Microsoft also works with Princeton University to increase researcher access to data on cyber enabled influence operations.
Microsoft is currently undertaking additional research and education on how users interact with content provenance tools and on the use of content provenance tools for AI, including through its grant with the C2PA.
Microsoft Research has also undertaken research related to information integrity and elections in the age of generative AI.
Lastly, Bing Search regularly partners with third-party nonprofits, research organizations, and NGOs to review and evaluate emerging trends, techniques, tactics, and threat intelligence in misinformation, disinformation, and related topics.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
See QRE 26.2.3 and 26.1.1.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
We look forward to partnering with other relevant signatories on this project and will provide further reporting as the annual consultation is established.
Measure 28.4
As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.
QRE 28.4.1
Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.
There were no relevant developments during the period covered by this report.
Empowering fact-checkers
Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Bing entered agreements with independent organizations to improve language coverage across EEA Member States and languages.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.
Bing Search supports the schema.org ClaimReview fact-check protocol as part of its search ingestion, as discussed further in QRE 21.1.1.
In addition to organic fact checks and fact check content leveraging ClaimReview tags that may surface in search results, articles from news and fact checking organizations may appear as part of specialized Bing Answers. In addition, news and fact-check articles can appear in Bing News carousels, which are often presented at the top of search results pages, depending on the nature of the user query. Microsoft maintains agreements with news publishers to surface high authority content, including articles from well-regarded fact checking organizations and journalist-driven fact-checks, high in relevant search results.
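A minimal sketch of the schema.org ClaimReview markup that this ingestion pipeline consumes may help illustrate the protocol. The claim text, publisher, URL, and rating below are invented placeholders, and the required-property set is an assumption based on what search engines generally expect rather than Bing's documented ingestion rules.

```python
import json

# A minimal ClaimReview JSON-LD block of the kind fact-checkers embed in
# their pages. All specific values here are invented placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/sample",  # placeholder URL
    "claimReviewed": "Example claim text being checked",
    "datePublished": "2024-06-01",
    "author": {"@type": "Organization", "name": "Example Fact-Check Org"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # human-readable verdict
    },
}

# Sanity-check the properties a search engine would typically look for.
REQUIRED = {"@context", "@type", "url", "claimReviewed", "author", "reviewRating"}
assert REQUIRED <= claim_review.keys()
print(json.dumps(claim_review, indent=2)[:60])
```

Embedded in a page's `<script type="application/ld+json">` block, markup of this shape is what allows crawlers to associate a fact-check verdict with the reviewed claim.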
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
See QRE 30.1.1 and QRE 21.1.1. The trusted third-party partners Bing search leverages provide coverage in several EU languages, such as Bulgarian, Croatian, Czech, Dutch, English, Finnish, French, German, Greek, Hungarian, Polish, Portuguese, Romanian, Slovak, Spanish, and Swedish. In addition to the EU languages enumerated above, contracted fact-checking data also includes coverage of Catalan and Serbian languages.
QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
See QREs 30.1.1-2.
As noted above, any authorized fact-checking organization can leverage the ClaimReview protocol to provide fact-checks to Bing Search. Bing Search would welcome additional usage of the ClaimReview protocol in EU Member States and actively partners with third parties, including news organizations, fact-checking organizations, and nonprofits in the EU, to inform defensive search interventions, threat intelligence, and issue monitoring. Bing has dedicated internal teams that leverage this information to inform product mitigations and defensive search interventions.
SLI 30.1.1
Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.
As described in QRE 30.1.1, Microsoft has a number of news agreements that include journalism, news, and fact-checking coverage and that provide remuneration to fact-checkers and news organizations for news surfaced on Bing. These agreements, the nature of which is confidential, cover a range of languages and markets, including EEA Member States. While certain agreements include fact-checking coverage, because these arrangements are not strictly for fact-checking services, we do not reflect these agreements in this SLI.
In addition, as set out in QRE 30.1.2 and SLI 31.1.1, any fact-checking organisation can leverage the ClaimReview protocol to embed fact-check tags into their website (thereby adding fact-check tags or flags into indexed results), and there is no limitation in terms of languages and Member States covered. Because ClaimReview is an open protocol available for all websites and search engines to use, Bing does not have agreements with individual fact-checking organisations to tag articles in ClaimReview.
Nr of agreements with fact-checking organisations | -
Measure 30.2
Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.
QRE 30.2.1
Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.
Microsoft provides fair compensation and has engaged in arms-length negotiations with news and fact-checking organizations to secure fact-checking coverage in the EU through news partnership arrangements that support Bing news product features, such as specialized answers and news carousels. These partners operate independently, and Microsoft’s agreements respect their editorial independence.
QRE 30.2.2
Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.
As noted above, Bing Search ingests ClaimReview tags embedded in fact-check content posted on websites that are indexed in the Bing Search index.
Webmasters for fact-checking organizations have self-help tools available as part of Bing Search’s
Webmaster Tools, which allow them to review website analytics and search effectiveness (including insights into the keywords or search queries used in Bing Search to reach their website) for websites containing ClaimReview tags. This dashboard provides website operators with a range of data and analytics that fact-checking organizations can use to assess how users found their fact-checked content, their website traffic patterns, and the effectiveness of their fact-check tags.
See QREs 30.1.1-2 for additional information on Bing Search’s ClaimReview fact check program.
Bing has also engaged in conversations with members of the fact-checking community and signatories to solicit feedback on search considerations and fact-checking.
QRE 30.2.3
European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.
This QRE is not applicable to Bing Search, as it is not a fact-checking organization.
Measure 30.3
Relevant Signatories will contribute to cross-border cooperation between fact-checkers.
QRE 30.3.1
Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.
Bing meets with its fact-checking partner to discuss process improvements.
Measure 30.4
To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.
QRE 30.4.1
Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.
There were no relevant developments during the period covered by this report.
Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 31.2
Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.
QRE 31.2.1
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
Articles from news and fact-checking organizations may appear as part of specialized Bing Answers. In addition, news and fact-check articles can appear in Bing News carousels, which are often presented at the top of search results pages, depending on the nature of the user query. Microsoft maintains agreements with news publishers to surface high-authority content, including articles from well-regarded fact-checking organizations and journalist-driven fact-checks, high in relevant search results.
During the Reporting Period, Bing maintained a fact-checking agreement to provide coverage in the following EU languages: Bulgarian, Croatian, Czech, Dutch, English, Finnish, French, German, Greek, Hungarian, Polish, Portuguese, Romanian, Slovak, Spanish, and Swedish; the fact-checking agreement also includes coverage of Catalan and Serbian languages, among others.
In addition, Bing uses threat intelligence to inform the Bing algorithm and the defensive search measures used for Bing Search and Bing’s generative search features. Bing works with trusted third-party partners for leads on potential threats (including in EEA Member State languages) to inform defensive search strategies for Bing. Bing also utilises the ClaimReview open protocol to ingest fact checks into search results.
Bing has also increased its coverage of EEA languages, which informs interventions across the themes and sources it monitors.
In addition, Bing Search uses ClaimReview tags embedded in websites with fact-checked content to help inform its algorithms (i.e., by leading users to more authoritative sources of information) and to provide useful context and indications of trustworthiness to its users in search results.
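As an illustration of the ClaimReview protocol referenced above, the sketch below builds a minimal schema.org ClaimReview object of the kind fact-checking sites typically embed as JSON-LD in an article page. All names, URLs, and rating values here are hypothetical examples, not content from any actual publisher.

```python
import json

# Minimal illustrative ClaimReview markup (schema.org vocabulary).
# Every URL, organization name, and rating below is a hypothetical example.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.eu/checks/flooded-city",
    "datePublished": "2024-06-01",
    "claimReviewed": "A viral photo shows city X under water.",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "itemReviewed": {
        "@type": "Claim",
        # Where the claim appeared (e.g. a social media post):
        "appearance": {"@type": "CreativeWork",
                       "url": "https://example.com/post/123"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",        # 1 = false on this outlet's 1-5 scale
        "bestRating": "5",
        "alternateName": "False",  # human-readable verdict
    },
}

# Publishers serialize this dict and embed it in a
# <script type="application/ld+json"> block on the fact-check page.
jsonld = json.dumps(claim_review, indent=2)
```

Search engines that support the protocol can then parse this markup to identify the reviewed claim, the reviewing organization, and the verdict.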
See QREs 21.1.1 and 30.2.1-2.
SLI 31.1.1 (for Measures 31.1 and 31.2)
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
Fact Check URLs (“FC URL”) – The number of distinct URLs containing a ClaimReview tag (i.e., fact-check content) that appeared on the first page of Bing search results to users located in EU Member States.
Fact Check Impressions (“FCI”) – The number of times the above-mentioned URLs appeared on the first page of Bing search results to users located in EU Member States.
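The relationship between the two metrics can be sketched as follows. This is a hypothetical illustration assuming a simplified first-page impression log of `(member_state, url, has_claimreview)` tuples; the function name and log schema are not from the report.

```python
from collections import defaultdict

# Hypothetical first-page impression log: (member_state, url, has_claimreview).
impressions = [
    ("AT", "https://factcheck.example/a", True),
    ("AT", "https://factcheck.example/a", True),   # same URL shown twice
    ("AT", "https://news.example/b", False),        # not fact-check content
    ("BE", "https://factcheck.example/c", True),
]

def sli_31_1(log):
    """Per Member State: (FC URL = distinct fact-check URLs shown,
    FCI = total impressions of those URLs)."""
    fc_urls = defaultdict(set)   # state -> distinct ClaimReview-tagged URLs
    fci = defaultdict(int)       # state -> impression count for those URLs
    for state, url, is_fact_check in log:
        if is_fact_check:
            fc_urls[state].add(url)
            fci[state] += 1
    return {state: (len(fc_urls[state]), fci[state]) for state in fci}

print(sli_31_1(impressions))   # {'AT': (1, 2), 'BE': (1, 1)}
```

Note that FCI counts every appearance, so it is always at least as large as FC URL, which matches the pattern in the table below.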
Country | Nr of fact-check articles published | Reach of fact-check articles (FCI) | Nr of content pieces reviewed by fact-checkers | Other (FC URLs)
Austria | N/A | 34,509 | N/A | 4,698
Belgium | N/A | 60,454 | N/A | 5,999
Bulgaria | N/A | 2 | N/A | 2
Croatia | N/A | 1 | N/A | 1
Cyprus | N/A | 1 | N/A | 1
Czech Republic | N/A | 1 | N/A | 1
Denmark | N/A | 13,443 | N/A | 2,519
Estonia | N/A | 1 | N/A | 1
Finland | N/A | 9,591 | N/A | 1,629
France | N/A | 212,276 | N/A | 9,081
Germany | N/A | 3,968,924 | N/A | 17,342
Greece | N/A | 1 | N/A | 1
Hungary | N/A | 3 | N/A | 3
Ireland | N/A | 38,151 | N/A | 5,806
Italy | N/A | 68,653 | N/A | 5,939
Latvia | N/A | 1 | N/A | 1
Lithuania | N/A | 3 | N/A | 3
Luxembourg | N/A | 1 | N/A | 1
Malta | N/A | 0 | N/A | 0
Netherlands | N/A | 97,193 | N/A | 9,292
Poland | N/A | 59,184 | N/A | 4,690
Portugal | N/A | 38,493 | N/A | 4,212
Romania | N/A | 22 | N/A | 18
Slovakia | N/A | 1 | N/A | 1
Slovenia | N/A | 0 | N/A | 0
Spain | N/A | 161,600 | N/A | 7,793
Sweden | N/A | 39,686 | N/A | 5,756
Iceland | N/A | 6 | N/A | 6
Liechtenstein | N/A | 1 | N/A | 1
Norway | N/A | 23,942 | N/A | 4,456
Total EU | N/A | 4,802,195 | N/A | 84,790
Total EEA | N/A | 4,826,144 | N/A | 89,253
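The published totals are consistent with the per-country FCI rows: the EU total sums the 27 Member States and the EEA total adds Iceland, Liechtenstein, and Norway. A quick aggregation sketch, taking Poland's FCI as 59,184 (the value implied by the published EU total):

```python
# FCI figures from the table above; Poland's value (59,184) is inferred
# from the published EU total.
fci_eu = {
    "Austria": 34_509, "Belgium": 60_454, "Bulgaria": 2, "Croatia": 1,
    "Cyprus": 1, "Czech Republic": 1, "Denmark": 13_443, "Estonia": 1,
    "Finland": 9_591, "France": 212_276, "Germany": 3_968_924, "Greece": 1,
    "Hungary": 3, "Ireland": 38_151, "Italy": 68_653, "Latvia": 1,
    "Lithuania": 3, "Luxembourg": 1, "Malta": 0, "Netherlands": 97_193,
    "Poland": 59_184, "Portugal": 38_493, "Romania": 22, "Slovakia": 1,
    "Slovenia": 0, "Spain": 161_600, "Sweden": 39_686,
}
fci_eea_only = {"Iceland": 6, "Liechtenstein": 1, "Norway": 23_942}

total_eu = sum(fci_eu.values())
total_eea = total_eu + sum(fci_eea_only.values())
print(total_eu, total_eea)   # 4802195 4826144
```

The same reconciliation holds for the FC URL column (84,790 EU; 89,253 EEA).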
Measure 31.3
Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.
QRE 31.3.1
Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.
There were no discussions in the relevant Subgroup of the Permanent Task-force on the development of the repository of fact-checking content during the period covered by this report.
Measure 31.4
Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.
QRE 31.4.1
Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.
There were no discussions in the relevant Subgroup of the Permanent Task-force on the development of the repository of fact-checking content during the period covered by this report.
Commitment 32
Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.
We signed up to the following measures of this commitment
Measure 32.1 Measure 32.2 Measure 32.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 32.3
Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.
QRE 32.3.1
Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.
Bing Search currently uses the Code’s Task-force, in particular the Crisis Response and Empowerment of Fact-checkers subgroups, as a channel of communication with the fact-checking community represented by the signatories to the Code. We continue to explore ways in which we can further support information exchange with the fact-checking community.