Microsoft Advertising

Report March 2026

Submitted

Microsoft Advertising is a proprietary advertising platform that works both with advertisers, who provide it with advertising content, and with publishers, who display these advertisements on their services.

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
  • first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
  • second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
  • third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

Microsoft Advertising prohibits misinformation and disinformation on its network. To enforce this policy, Microsoft Advertising may use a combination of internal signals and trusted third-party data or information sources to reject, block, or remove ads or sites that contain disinformation or that direct traffic to disinformation content. Enforcement actions may include blocking at the domain level where landing pages or sites violate this policy. Our primary policy is available here.

During the reporting period, Microsoft Advertising enhanced its detection capabilities through continued integration with services provided by the Microsoft Threat Analysis Center (MTAC), including signals related to Foreign Information Manipulation and Interference (FIMI) domains. MTAC operates as part of Microsoft’s broader threat intelligence structure, alongside the Microsoft Threat Intelligence Center (MSTIC) and the Digital Crimes Unit, and focuses on identifying foreign malign influence operations and state‑aligned information campaigns. MTAC applies a geopolitical influence‑operations lens to analyse propaganda, coordinated information operations, and state‑backed manipulation campaigns, including cross‑platform narrative efforts and manipulated or deceptive content targeting public opinion.

Microsoft Advertising regularly consumes domain- and web-property-level intelligence informed by MTAC analysis to detect ads and publishers associated with disinformation, misinformation, impersonation, or influence operations. Detection methods are continuously refined to address evolving tactics. Microsoft Advertising prioritises early, preventive enforcement, stopping non-compliant advertising content prior to delivery. As a result, significant enforcement actions occur at or before demand creation, reflecting platform improvements that reduce user exposure to harmful or misleading content at the earliest possible stage.

SLI 1.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

Microsoft Advertising reports enforcement based on ads and domains blocked prior to delivery.

  • Blocked Ads represent individual advertisements detected and prevented from serving globally due to non-compliance with misinformation and disinformation policies.
  • Blocked Domains represent web domains proactively blocked from the advertising network, including based on MTAC domain intelligence, across all regions.

Because enforcement now occurs upstream before ads reach the impression or page-view stage, exposure-based metrics (e.g., blocked impressions or page views) are no longer reliable or consistently available. This methodology reflects current system design and provides an accurate representation of preventive enforcement activity.

Country Blocked Ads Blocked Domains
Global 512,442 202

SLI 1.1.2

Please insert the relevant data

Following the methodology developed by the Task-force Subgroup on Ad Scrutiny, this SLI estimates the financial impact of demonetisation by applying an agreed-upon CPM-based conversion factor to publisher page-view data. Microsoft Advertising operates a pay-per-click (PPC) advertising model, under which advertisers are charged when users click on ads rather than based on impressions. To align with the Task-force methodology, Microsoft Advertising therefore estimates monetisation impact using blended CPM values provided by Ebiquity Plc., calculated as (Page Views / 1,000) × CPM. Luxembourg and Liechtenstein are not included in the Ebiquity dataset; Belgium and Austria CPM values are applied as proxies based on economic and market comparability.

As described above, Microsoft Advertising prevents serving ads on web domains that spread disinformation, including those associated with Foreign Information Manipulation and Interference (FIMI), blocking monetisation on those properties at the earliest point of identification. Publisher page-view data associated with the set of 202 domains blocked during 2025 was used as the basis for this calculation. While prior reports relied on exposure-based metrics that could be reconstructed across the reporting period, the 2025 application of this methodology relies on publisher-side page-view data that was not historically retained for this specific purpose and only became consistently available from 19 August 2025 onward. Accordingly, the amounts below are based on observed data for that period and prorated to support full-year comparability.
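The Task-force conversion described above can be sketched as follows. The page-view and CPM figures in the example are hypothetical, for illustration only, and are not drawn from the Ebiquity dataset:

```python
def demonetised_value_eur(page_views: int, cpm_eur: float) -> float:
    """Task-force Ad Scrutiny methodology: (Page Views / 1,000) x CPM."""
    return round(page_views / 1000 * cpm_eur, 2)

# Hypothetical inputs: 150,000 page views observed on blocked domains,
# at an assumed blended CPM of EUR 2.50.
print(demonetised_value_eur(150_000, 2.50))  # 375.0
```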

Country Euro value of ads demonetised
Austria 592.20
Belgium 1169.06
Bulgaria 9.89
Croatia 22.61
Cyprus 8.09
Czech Republic 28.68
Denmark 359.91
Estonia 101.66
Finland 296.52
France 2110.38
Germany 7339.71
Greece 18.00
Hungary 440.94
Ireland 1018.51
Italy 14838.64
Latvia 6.05
Lithuania 4.03
Luxembourg 0.35
Malta 2.89
Netherlands 1316.72
Poland 40.29
Portugal 50.47
Romania 297.01
Slovakia 68.91
Slovenia 8.14
Spain 681.92
Sweden 996.86
Iceland 3.02
Liechtenstein 1.01
Norway 419.07
Total EU 31828.44
Total EEA 32251.54
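The Total EU and Total EEA figures follow directly from the per-country values above; a minimal Python sketch, with the values copied verbatim from the table:

```python
# Euro value of ads demonetised per country, copied from the table above.
EU_VALUES = {
    "Austria": 592.20, "Belgium": 1169.06, "Bulgaria": 9.89,
    "Croatia": 22.61, "Cyprus": 8.09, "Czech Republic": 28.68,
    "Denmark": 359.91, "Estonia": 101.66, "Finland": 296.52,
    "France": 2110.38, "Germany": 7339.71, "Greece": 18.00,
    "Hungary": 440.94, "Ireland": 1018.51, "Italy": 14838.64,
    "Latvia": 6.05, "Lithuania": 4.03, "Luxembourg": 0.35,
    "Malta": 2.89, "Netherlands": 1316.72, "Poland": 40.29,
    "Portugal": 50.47, "Romania": 297.01, "Slovakia": 68.91,
    "Slovenia": 8.14, "Spain": 681.92, "Sweden": 996.86,
}
# The three non-EU EEA countries reported above.
EEA_EXTRA = {"Iceland": 3.02, "Liechtenstein": 1.01, "Norway": 419.07}

total_eu = round(sum(EU_VALUES.values()), 2)
total_eea = round(total_eu + sum(EEA_EXTRA.values()), 2)
print(total_eu, total_eea)  # 31828.44 32251.54
```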

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

Microsoft Advertising works with selected, trusted publishing partners and requires them to comply with strict brand‑safety policies designed to prevent the monetisation of misleading, deceptive, harmful, or insensitive content. These publishers also benefit from the safeguards Microsoft Advertising applies to advertisers, helping ensure that ads delivered through the platform are high‑integrity and non‑deceptive.

Microsoft Advertising’s publisher policies include a comprehensive list of prohibited content that ads may not serve against. This includes, among other things, disinformation, sensitive political content (such as extreme, aggressive, or misleading interpretations of news, events, or individuals), unmoderated user‑generated content, and other unsavoury content (including content that disparages individuals or organisations). Publishers are required to maintain prohibited‑term lists and, where applicable, provide information about their content‑management practices. Publishers must also comply with restrictions on harmful business practices, such as the distribution of malware.

Microsoft Advertising reviews publisher properties and domains for compliance with these policies, including restrictions on prohibited content. This review also considers advertiser feedback and includes a process for investigating advertiser complaints. Properties or domains that violate policy are not approved for live ad traffic, or are removed from the network if already live, until the issue is remedied.

As stated in Microsoft Advertising’s policies, the platform may use a combination of internal signals and trusted third‑party data to reject, block, or remove ads or sites that contain disinformation or direct traffic to disinformation content. In 2025, Microsoft Advertising further evolved these practices by integrating Microsoft Threat Analysis Center (MTAC) services to support the detection of Foreign Information Manipulation and Interference (FIMI) domains.

SLI 1.2.1

Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.

As reported in SLI 1.1.1, Microsoft Advertising applies domain-level enforcement globally rather than at the Member State level. Accordingly, we report a single global aggregate of 202 domains barred during the reporting period.

Microsoft Advertising did not bar advertiser accounts during this period, as enforcement actions are taken by blocking non-compliant web domains across the network, irrespective of the advertising account promoting them.

No policy reviews or policy updates took place during the reporting period, and no new policies were introduced; the corresponding counts are therefore zero.

Country Nr of policy reviews Nr of updates to policies Nr of accounts barred Nr of domains barred
Global 0 0 0 202

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

Microsoft Advertising provides customers with campaign reporting and tools to monitor and control ad placement across its network. Transparency controls are available through the campaign user interface and customer support, and include:
  • Ad delivery reports (domain level): Reports showing the website or domain where ads are served.
  • Site exclusions: Ability to exclude specific websites or domains from campaigns to prevent ads from serving on those properties.
  • Negative keyword exclusions: Ability to exclude keywords to prevent ads from serving against certain search queries.
  • Syndication Publisher Network opt-out: Ability to prevent ad delivery on the extended publisher network, in which case ads serve only on Microsoft-owned and operated properties.

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
  • To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
  • Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
  • Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

Microsoft Advertising provides customers with campaign reporting and tools to monitor and control ad placement across its network. Transparency controls are available through the campaign user interface and customer support, and include:
  • Ad delivery reports (domain level): Reports showing the website or domain where ads are served.
  • Site exclusions: Ability to exclude specific websites or domains from campaigns to prevent ads from serving on those properties.
  • Negative keyword exclusions: Ability to exclude keywords to prevent ads from serving against certain search queries.
  • Syndication Publisher Network opt-out: Ability to prevent ad delivery on the extended publisher network, in which case ads serve only on Microsoft-owned and operated properties.

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

This QRE is not applicable, as Microsoft Advertising does not buy advertising.

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

This QRE is not applicable, as Microsoft Advertising is not a brand safety tool provider.

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

This QRE is not applicable, as Microsoft Advertising is not a ratings service.

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

As described in QRE 2.2.1, Microsoft Advertising maintains network-wide policies, implemented in December 2022, that prohibit the monetisation of disinformation and the placement of advertising on or adjacent to disinformation content. These policies prohibit ads or sites that contain, promote, or direct traffic to disinformation. Enforcement may rely on internal signals and trusted third-party data sources to reject, block, or remove ads or sites, including blocking at the domain level where landing pages or sites violate policy.

Please see here for our main policy page.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

Microsoft Advertising assesses the impact of its enforcement actions by reporting on blocked ads, customers suspended as a result of risk detection, and impressions generated prior to enforcement.

“Blocked Ads” are the individual advertisements (or ad creatives) loaded in the Microsoft Advertising campaign system that we blocked. Because these ads are blocked globally, we are reporting the same number across all Member States for the reporting period.

“Unique Domains” means the web domains or URLs that the Blocked Ads would have directed customers to. Because these ad domains are blocked globally, we are reporting the same number across all Member States for the reporting period.

“Customer suspensions” means the suspension of advertiser access to Microsoft Advertising services for willful or repeated violations of policies relating to phishing, malware, or payment instrument fraud. Suspended advertisers cannot serve ads until the violation is remedied.

“Blocked Impressions from Suspended Customers” means ad impressions generated by a customer prior to suspension. Where customers are suspended before ads are served, no impressions are recorded.

Country Blocked Ads Unique Domains Customer Suspensions Blocked Impressions from Suspended Customers
Austria 512,442 9,984 363 0
Belgium 512,442 9,984 719 7,033
Bulgaria 512,442 9,984 153 1,638
Croatia 512,442 9,984 63 0
Cyprus 512,442 9,984 177 905,374
Czech Republic 512,442 9,984 333 78,516
Denmark 512,442 9,984 978 414,732
Estonia 512,442 9,984 415 0
Finland 512,442 9,984 498 37,802
France 512,442 9,984 6,540 808,487
Germany 512,442 9,984 7,710 2,612,554
Greece 512,442 9,984 113 30,278
Hungary 512,442 9,984 110 0
Ireland 512,442 9,984 674 66,002
Italy 512,442 9,984 1,643 1,237,368
Latvia 512,442 9,984 290 3,831,145
Lithuania 512,442 9,984 118 935,425
Luxembourg 512,442 9,984 36 0
Malta 512,442 9,984 65 2,449
Netherlands 512,442 9,984 1,358 2,470,021
Poland 512,442 9,984 1,100 1,611,797
Portugal 512,442 9,984 302 182,225
Romania 512,442 9,984 603 130,474
Slovakia 512,442 9,984 140 0
Slovenia 512,442 9,984 92 0
Spain 512,442 9,984 2,338 138,633,067
Sweden 512,442 9,984 701 575,859
Iceland 512,442 9,984 112 0
Liechtenstein 512,442 9,984 4 0
Norway 512,442 9,984 1,166 514,617
Total EU
Total EEA
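Because the Blocked Ads and Unique Domains columns repeat a single global figure on every row, only the per-country columns can be meaningfully totalled; a minimal sketch totalling the Customer Suspensions column, with counts copied from the table above:

```python
# Customer Suspensions per country, copied from the table above.
SUSPENSIONS_EU = {
    "Austria": 363, "Belgium": 719, "Bulgaria": 153, "Croatia": 63,
    "Cyprus": 177, "Czech Republic": 333, "Denmark": 978, "Estonia": 415,
    "Finland": 498, "France": 6540, "Germany": 7710, "Greece": 113,
    "Hungary": 110, "Ireland": 674, "Italy": 1643, "Latvia": 290,
    "Lithuania": 118, "Luxembourg": 36, "Malta": 65, "Netherlands": 1358,
    "Poland": 1100, "Portugal": 302, "Romania": 603, "Slovakia": 140,
    "Slovenia": 92, "Spain": 2338, "Sweden": 701,
}
# The three non-EU EEA countries reported above.
SUSPENSIONS_EEA_EXTRA = {"Iceland": 112, "Liechtenstein": 4, "Norway": 1166}

total_eu = sum(SUSPENSIONS_EU.values())
total_eea = total_eu + sum(SUSPENSIONS_EEA_EXTRA.values())
print(total_eu, total_eea)  # 27632 28914
```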

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

Microsoft Advertising employs dedicated operational support and engineering resources to enforce its advertising policies detailed below, combining automated and manual enforcement methods to prevent or take down advertisements that violate its policies. Every ad loaded into the Microsoft Advertising system is subject to these enforcement methods, which leverage machine-learning models, automated screening, the expertise of operations team, and dedicated user safety experts. In addition, Microsoft Advertising conducts manual reviews of advertisements flagged to its customer support team and removes advertisements that violate its policies.

As set out in Microsoft Advertising’s Disinformation policies, the service may use a combination of internal signals and trusted third-party data sources to reject, block, or remove ads or sites that contain disinformation or direct traffic to disinformation content, including blocking at the domain level where landing pages or sites violate policy. During the reporting period, Microsoft Advertising continued to evolve its enforcement methods by applying Microsoft Threat Analysis Center (MTAC) services, including signals related to Foreign Information Manipulation and Interference (FIMI), and actively blocks domains identified through these services as spreading disinformation.

In addition to its Disinformation policies, Microsoft Advertising’s Misleading Content Policies prohibit advertising content that is misleading, deceptive, fraudulent, or otherwise harmful to users, including ads containing unsubstantiated claims or false endorsements or affiliations. Microsoft Advertising also applies Relevance and Quality Policies to manage the quality and accuracy of ads served across its network, including by deterring tactics that misrepresent the origin or intent of advertised content.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

Please see QRE 2.2.1. Microsoft blocks sites or domains that the Microsoft Threat Analysis Center (MTAC) identifies as spreading Disinformation.

Microsoft also rejects all ads associated with such domains and instructs its publishing partners to block ads from showing on such domains.

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

During the reporting period, Microsoft Advertising removed ads that had already been shown to consumers. Prohibited ads are the same as the blocked ads reported above. Both removals and prohibitions are applied globally.

Country Nr of ads removed Nr of ads prohibited
Global 117,499 512,442

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

Microsoft Advertising informs advertiser customers of policy violations and editorial decisions through multiple channels to ensure timely notice and transparency.

  • Prompts in the campaign User Interface (UI)
  • Email notification (for example, for account suspension)
  • Notifications from the assigned account representatives, as applicable.

Advertisers are provided with the opportunity to appeal editorial decisions through Microsoft Advertising’s established conflict-resolution process. Appeals may be submitted by the advertiser following notification of a policy violation and are reviewed to assess whether the ad complies with applicable advertising policies. The appeal process allows advertisers to request reconsideration of disapproved ads, suspended campaigns, or other enforcement actions and is designed to provide a consistent and structured mechanism for resolving disputes related to editorial outcomes.

Appeals are evaluated by trained review teams in accordance with Microsoft Advertising policies and procedures. Where an appeal is upheld, the relevant enforcement action is reversed or modified as appropriate. Where an appeal is not upheld, the original decision remains in effect. This appeals framework applies uniformly across markets and does not involve election-specific review programs or workflows.

Microsoft Advertising provides publicly available guidance describing the advertiser appeal process here: How do I challenge a disapproval?

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

During the reporting period, Microsoft Advertising transitioned from reporting aggregate appeal events to reporting appeals at the individual customer level. This methodological change was implemented to support Member State-level reporting by enabling appeals to be attributed to the customer account associated with the original editorial decision, rather than counted only in aggregate across the platform.

Country Nr of appeals Nr of appeals that led to a change of the initial decision
Austria 47 27
Belgium 193 49
Bulgaria 32 10
Croatia 17 5
Cyprus 69 28
Czech Republic 127 55
Denmark 292 45
Estonia 161 29
Finland 97 13
France 818 215
Germany 973 386
Greece 24 13
Hungary 28 12
Ireland 54 24
Italy 295 109
Latvia 32 10
Lithuania 30 12
Luxembourg 5 3
Malta 19 7
Netherlands 261 107
Poland 294 119
Portugal 41 15
Romania 275 36
Slovakia 59 22
Slovenia 9 5
Spain 358 113
Sweden 72 40
Iceland 1 0
Liechtenstein 3 0
Norway 96 29
Total EU 4682 1509
Total EEA 4782 1538
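The proportion this SLI asks for, together with EU and EEA totals, can be derived from the per-country counts above; a minimal sketch, with the counts copied from the table:

```python
# (appeals, decisions changed) per country, copied from the table above.
APPEALS_EU = {
    "Austria": (47, 27), "Belgium": (193, 49), "Bulgaria": (32, 10),
    "Croatia": (17, 5), "Cyprus": (69, 28), "Czech Republic": (127, 55),
    "Denmark": (292, 45), "Estonia": (161, 29), "Finland": (97, 13),
    "France": (818, 215), "Germany": (973, 386), "Greece": (24, 13),
    "Hungary": (28, 12), "Ireland": (54, 24), "Italy": (295, 109),
    "Latvia": (32, 10), "Lithuania": (30, 12), "Luxembourg": (5, 3),
    "Malta": (19, 7), "Netherlands": (261, 107), "Poland": (294, 119),
    "Portugal": (41, 15), "Romania": (275, 36), "Slovakia": (59, 22),
    "Slovenia": (9, 5), "Spain": (358, 113), "Sweden": (72, 40),
}
# The three non-EU EEA countries reported above.
APPEALS_EEA_EXTRA = {"Iceland": (1, 0), "Liechtenstein": (3, 0), "Norway": (96, 29)}

def totals(table):
    appeals = sum(a for a, _ in table.values())
    changed = sum(c for _, c in table.values())
    return appeals, changed

eu_appeals, eu_changed = totals(APPEALS_EU)
extra_appeals, extra_changed = totals(APPEALS_EEA_EXTRA)
eea_appeals = eu_appeals + extra_appeals
eea_changed = eu_changed + extra_changed

# Proportion of appeals that led to a change of the initial decision.
print(eu_appeals, eu_changed, round(eu_changed / eu_appeals, 3))  # 4682 1509 0.322
```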

Political Advertising

Commitment 5

Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.

We signed up to the following measures of this commitment

Measure 5.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 5.1

Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.

QRE 5.1.1

Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.

Microsoft Advertising policies prohibit ads for election-related content, political candidates, parties, ballot measures, and political fundraising globally, including fundraising for political candidates, parties, political action committees (PACs), and ballot measures. These restrictions apply across Microsoft-owned and third-party services that rely on Microsoft Advertising to serve ads.

Microsoft Advertising also prohibits certain issue-based advertising, including ads that exploit political agendas, sensitive political or religious issues, or “hot-button” topics, as well as ads that promote extreme political or religious agendas or are associated with hate, criminal, or terrorist activities, regardless of the advertiser’s stated intent.

In September 2025, Microsoft Advertising updated its policies to address Regulation (EU) 2024/900, reflecting new requirements applicable to ads served in the EU.

See here: Political content and Religious content.

Commitment 7

Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.

We signed up to the following measures of this commitment

Measure 7.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 7.3

Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.

QRE 7.3.1

Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.

Microsoft Advertising policies prohibit ads for election-related content, political candidates, parties, ballot measures, and political fundraising globally, including fundraising for political candidates, parties, political action committees (PACs), and ballot measures. These restrictions apply across Microsoft-owned and third-party services that rely on Microsoft Advertising to serve ads.

Microsoft Advertising also prohibits certain issue-based advertising, including ads that exploit political agendas, sensitive political or religious issues, or “hot-button” topics, as well as ads that promote extreme political or religious agendas or are associated with hate, criminal, or terrorist activities, regardless of the advertiser’s stated intent.

In September 2025, Microsoft Advertising updated its policies to address Regulation (EU) 2024/900, reflecting new requirements applicable to ads served in the EU.

See here: Political content and Religious content.

QRE 7.3.2

Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.

Microsoft Advertising does not offer its advertising services to customers or partners seeking to promote political content. For example, political parties are informed by customer support that political advertising campaigns are not permitted on the Microsoft Advertising network.

To support compliance with Regulation (EU) 2024/900, Microsoft Advertising updated its political advertising policies and introduced a declaration requirement for campaigns targeting the European Union. Advertisers must declare whether a campaign is intended for political advertising before creating or importing a campaign or creative. This requirement applies to all campaigns as of 10 October 2025. Campaigns or creatives declared as intended for political advertising are not permitted to proceed under Microsoft Advertising policies. See here: Political content.

Microsoft Advertising employs dedicated operational and engineering resources to enforce restrictions on political advertising using both proactive and reactive measures. Proactively, Microsoft Advertising applies automated processes to identify and block political ads from serving across its advertising network, including restrictions based on specific terms and domains. For example, Microsoft Advertising maintains lists of terms associated with known political parties, candidates, and ballot measures and blocks ads that would otherwise serve in response to searches for those terms.

Reactively, if Microsoft Advertising becomes aware that an ad suspected of violating policy is being served (for example, through a report submitted to customer support), the ad is promptly reviewed. Ads found to violate Microsoft Advertising policies are removed. Users may report ads that may violate Microsoft Advertising policies through publisher-specific reporting tools or via this form: Report a Concern | Microsoft Advertising.

These actions apply across all websites that use Microsoft Advertising to serve ads, including Microsoft-owned and operated properties (such as Bing) and third-party websites.

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

Microsoft Advertising has actively engaged in and contributed to all the Task-force Plenary meetings as well as to the meetings of all Sub-groups and Working Groups relevant to its subscription that were active during this reporting cycle. 

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

European National Elections (Germany, Romania, Portugal, Poland, Czech Republic, Ireland, Netherlands)


As an online advertising network, Microsoft Advertising recognises that election periods are commonly associated with heightened attempts to exploit advertising ecosystems to influence public discourse or monetise misleading narratives. In general, such risks may include attempts to place misleading or deceptive advertising content, to disguise political or issue-based messaging as commercial advertising, or to direct users to external websites that host election-related disinformation.

However, Microsoft Advertising does not support political or issue-based advertising, including election-related advertising. As a result, Microsoft Advertising does not provide a paid advertising vector through which election-specific disinformation campaigns could be promoted or amplified during the European national elections listed above.

Microsoft Advertising nonetheless anticipates that malicious actors may attempt to test or circumvent advertising restrictions during election periods, for example by masking political messaging as commercial content, using indirect or intermediary landing pages, or rapidly iterating creatives or domains to evade detection. Such attempted misuse is consistent with broader trends observed across the digital advertising ecosystem during periods of heightened public attention, including elections.

Given Microsoft Advertising’s policy posture, these risks are primarily limited to attempted or indirect misuse, rather than sustained or systemic election-related advertising activity.

Mitigations in place

Microsoft Advertising’s standing policies prohibit political and issue-based advertising globally, including advertising intended to influence electoral outcomes, voting behaviour, or public opinion on political or social issues. These prohibitions apply consistently during election periods and materially limit the risk of election-related disinformation being disseminated or monetised through Microsoft Advertising.

In addition, Microsoft Advertising applies its existing enforcement mechanisms for misleading, deceptive, or harmful advertising content, including automated and manual review processes designed to identify attempts to circumvent policy restrictions. These baseline controls are continuously applied and are considered sufficient to address attempted misuse during election periods without the need for election-specific mitigations.

No additional election-specific mitigation measures were introduced or planned for the European national elections listed above, as Microsoft Advertising’s existing policies and enforcement framework are designed to operate effectively during periods of heightened risk.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 50.1.1

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 50.1.2

Updated to incorporate advertiser declaration requirements under Regulation (EU) 2024/900.

Rationale - 50.1.3

Microsoft Advertising updated its political content policy to incorporate requirements under Regulation (EU) 2024/900 on the transparency and targeting of political advertising, which became applicable in October 2025. These updates introduced a declaration requirement for advertisers targeting the EU to confirm whether campaigns or creatives are intended for political advertising.

The change was procedural in nature and did not alter Microsoft Advertising’s longstanding prohibition on political or issue-based advertising. Accordingly, the update was not related to European national elections or election-specific advertising activity.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Description of intervention - 50.2.2

Microsoft Advertising applies proactive ad-placement scrutiny to prevent the submission or serving of political advertising, including election-related advertising, consistent with its global prohibition on political and issue-based ads. During election periods, this scrutiny includes maintaining keyword- and phrase-level blocks associated with election candidates and political parties to prevent attempted submissions of political advertising through the Microsoft Advertising platform.

These controls are applied uniformly across the advertising network and are designed to prevent advertisers from submitting ads related to election candidates or parties, including attempts to disguise political messaging as commercial advertising. This approach relies on global political blocks that apply across all markets, together with ongoing enhancements to automated detection of political content, and does not involve election-specific advertising programs or placement workflows. Where detection gaps are identified, Microsoft Advertising retains the ability to implement market-specific keyword and phrase blocks as needed.

Indication of impact - 50.2.3

During the reporting period, Microsoft Advertising maintained keyword- and phrase-level blocks related to election candidates and political parties in several European markets. No new election-related blocks were added in 2025.

The following election-related blocks were in place during the reporting period:
  • Germany: 54 election candidates and parties blocked (existing blocks; no updates in 2025)
  • Portugal: 17 election candidates and parties blocked (existing blocks; no updates in 2025)
  • Romania: 19 election candidates and parties blocked (existing blocks; no updates in 2025)
  • Ireland: 78 election candidates and parties blocked (existing blocks; no updates in 2025)
No election-related blocks were in place for Poland, the Czech Republic, or the Netherlands during the reporting period.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Description of intervention - 50.3.2

Not applicable. Microsoft Advertising does not support political advertising.

Indication of impact - 50.3.3

Not applicable.

Crisis 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

War of aggression by Russia on Ukraine


As an online advertising network, Microsoft Advertising may be subject to attempted misuse of its services in ways that could contribute to the dissemination or monetisation of disinformation. These risks primarily arise in two forms: the placement of misleading or deceptive advertising content, and the potential funnelling of advertising revenue to websites or domains that spread disinformation.

Threat actors may seek to exploit advertising systems by promoting ads containing false or misleading claims, by masking political or issue-based messaging as commercial content, or by directing users to external sites that host disinformation. In some cases, such activity may form part of broader, coordinated influence operations, including those associated with foreign information manipulation efforts.

Microsoft Advertising also anticipates ongoing efforts by malicious actors to evade detection through tactics such as frequent changes to domains, use of intermediary landing pages, keyword obfuscation, or rapid iteration of ad creatives. These risks may be heightened during elections, major geopolitical events, or periods of heightened public attention, when incentives to influence public discourse or monetise misleading narratives increase.

Consistent with trends observed across the advertising ecosystem, Microsoft Advertising recognises that such threats are not static and continues to evolve its policies, enforcement mechanisms, and detection capabilities to address emerging tactics and risks.


Israel–Hamas conflict


As an online advertising network, Microsoft Advertising may be subject to attempted misuse of its services in ways that could contribute to the dissemination or monetisation of disinformation. These risks primarily arise in two forms: the placement of misleading or deceptive advertising content, and the potential funnelling of advertising revenue to websites or domains that spread disinformation.

Threat actors may seek to exploit advertising systems by promoting ads containing false or misleading claims, by masking political or issue-based messaging as commercial content, or by directing users to external sites that host disinformation. In some cases, such activity may form part of broader, coordinated influence operations, including those associated with foreign information manipulation efforts.

Microsoft Advertising also anticipates ongoing efforts by malicious actors to evade detection through tactics such as frequent changes to domains, use of intermediary landing pages, keyword obfuscation, or rapid iteration of ad creatives. These risks may be heightened during elections, major geopolitical events, or periods of heightened public attention, when incentives to influence public discourse or monetise misleading narratives increase.

Consistent with trends observed across the advertising ecosystem, Microsoft Advertising recognises that such threats are not static and continues to evolve its policies, enforcement mechanisms, and detection capabilities to address emerging tactics and risks.

Mitigations in place

War of aggression by Russia on Ukraine


As set out in QRE 1.1.1, Microsoft Advertising continues to evolve its disinformation-detection methods. In 2025, Microsoft Advertising expanded detection by applying Microsoft Threat Analysis Center (MTAC) services to support identification of domains associated with Foreign Information Manipulation and Influence (FIMI). MTAC operates within Microsoft’s coordinated threat-intelligence structure, alongside the Microsoft Threat Intelligence Center (MSTIC) and the Digital Crimes Unit (DCU). MTAC is Microsoft’s dedicated centre for detecting, assessing, and disrupting global digital threats, with a particular focus on foreign malign influence operations targeting customers, public institutions, and democratic processes, and it supports Microsoft, governments, and select commercial customers.

While MSTIC focuses on technical cyber threats and malicious actors, MTAC adds a geopolitical and influence-operations lens, analysing propaganda, coordinated information operations, and state-backed manipulation campaigns. MTAC combines human intelligence, language and regional expertise, influence-operations analysis, and technical telemetry to support the identification of domains and web properties associated with disinformation. Microsoft Advertising consumes regular domain and web-property intelligence feeds derived from this work to assist in detecting non-compliant ads and publishers across its network.

Microsoft Advertising also restricts advertising related to sensitive or high-profile events under its Critical Events policy. This policy allows Microsoft Advertising to remove or limit advertising in response to such events to prevent commercial exploitation and to protect user safety.

In addition, Microsoft Advertising has further optimised its detection methods to identify evolving tactics closely associated with disinformation, including misinformation and impersonation content. Microsoft Advertising’s approach prioritises early, preventive enforcement, with the objective of stopping non-compliant advertising content prior to delivery. As a result, a significant portion of enforcement actions now occur at or before demand creation, rather than after impressions or page views have occurred, reflecting platform improvements implemented during the reporting period to reduce user exposure to harmful or misleading content at the earliest possible stage.


Israel–Hamas conflict


As set out in QRE 1.1.1, Microsoft Advertising continues to evolve its disinformation-detection methods. In 2025, Microsoft Advertising expanded detection by applying Microsoft Threat Analysis Center (MTAC) services to support identification of domains associated with Foreign Information Manipulation and Influence (FIMI). MTAC operates within Microsoft’s coordinated threat-intelligence structure, alongside the Microsoft Threat Intelligence Center (MSTIC) and the Digital Crimes Unit (DCU). MTAC is Microsoft’s dedicated centre for detecting, assessing, and disrupting global digital threats, with a particular focus on foreign malign influence operations targeting customers, public institutions, and democratic processes, and it supports Microsoft, governments, and select commercial customers.

While MSTIC focuses on technical cyber threats and malicious actors, MTAC adds a geopolitical and influence-operations lens, analysing propaganda, coordinated information operations, and state-backed manipulation campaigns. MTAC combines human intelligence, language and regional expertise, influence-operations analysis, and technical telemetry to support the identification of domains and web properties associated with disinformation. Microsoft Advertising consumes regular domain and web-property intelligence feeds derived from this work to assist in detecting non-compliant ads and publishers across its network.

Microsoft Advertising also restricts advertising related to sensitive or high-profile events under its Critical Events policy. This policy allows Microsoft Advertising to remove or limit advertising in response to such events to prevent commercial exploitation and to protect user safety.

In addition, Microsoft Advertising has further optimised its detection methods to identify evolving tactics closely associated with disinformation, including misinformation and impersonation content. Microsoft Advertising’s approach prioritises early, preventive enforcement, with the objective of stopping non-compliant advertising content prior to delivery. As a result, a significant portion of enforcement actions now occur at or before demand creation, rather than after impressions or page views have occurred, reflecting platform improvements implemented during the reporting period to reduce user exposure to harmful or misleading content at the earliest possible stage.

Policies and Terms and Conditions

Outline any changes to your policies

Policy - 51.1.1

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.2

Microsoft Advertising did not introduce new policies or additional measures specific to this crisis during the reporting period.

Rationale - 51.1.3

No new policies or measures were introduced because the existing policy framework and enforcement measures continue to operate effectively. Microsoft Advertising continues to prevent serving advertising related to the Ukrainian crisis pursuant to its Critical Events policy, which allows Microsoft Advertising to remove or limit advertising in response to a sensitive or high-profile news event to prevent commercial exploitation and to ensure user safety. In addition, Microsoft Advertising’s Information integrity and misleading content policies prohibit advertising that is misleading, deceptive, fraudulent, or otherwise harmful to its users, including advertisements that spread disinformation. Microsoft Advertising also requires its syndication partners (i.e., partners that display Microsoft Advertising advertisements on their services) to comply with strict brand safety policies to prevent advertising revenue from flowing to websites engaging in misleading, deceptive, harmful, or insensitive behaviours. Microsoft Advertising publisher policies include a comprehensive list of prohibited content against which ads may not serve, including, but not limited to, sensitive political content (e.g., extreme, aggressive, or misleading interpretations of news, events, or individuals), and unsavoury content (such as content disparaging individuals or organisations). Partner properties that violate these policies are removed from our network until the partner remedies the issue. Separately, Microsoft Advertising continues to enforce its prohibition on advertising from Russia Today (RT) and Sputnik across our advertising network and does not place ads on their sites.

Policy - 51.1.4

Changes (such as newly introduced policies, edits, adaptation in scope or implementation) - 51.1.5

Microsoft Advertising did not introduce new policies or additional measures specific to this crisis during the reporting period.

Rationale - 51.1.6

No new policies or measures were introduced because the existing policy framework and enforcement measures continue to operate effectively. Microsoft Advertising continues to prevent serving advertising related to the Israel–Hamas conflict pursuant to its Critical Events policy, which allows Microsoft Advertising to remove or limit advertising in response to a sensitive or high-profile news event to prevent commercial exploitation and to ensure user safety. In addition, Microsoft Advertising’s Information integrity and misleading content policies prohibit advertising that is misleading, deceptive, fraudulent, or otherwise harmful to its users, including advertisements that spread disinformation. Microsoft Advertising also requires its syndication partners (i.e., partners that display Microsoft Advertising advertisements on their services) to comply with strict brand safety policies to prevent advertising revenue from flowing to websites engaging in misleading, deceptive, harmful, or insensitive behaviours. Microsoft Advertising publisher policies include a comprehensive list of prohibited content against which ads may not serve, including, but not limited to, sensitive political content (e.g., extreme, aggressive, or misleading interpretations of news, events, or individuals), and unsavoury content (such as content disparaging individuals or organisations). Partner properties that violate these policies are removed from our network until the partner remedies the issue.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Description of intervention - 51.2.2

War of aggression by Russia on Ukraine


Microsoft continues to prevent the serving of advertising related to the war of aggression by Russia on Ukraine pursuant to its Critical Events policy. Under this policy, Microsoft Advertising may remove or limit advertising in response to a sensitive or high-profile news event to prevent the commercial exploitation of such events and to ensure user safety. In addition, Microsoft Advertising’s Information integrity and misleading content policies prohibit advertising that is misleading, deceptive, fraudulent, or otherwise harmful to its users, including advertisements that spread disinformation. Microsoft Advertising also requires its syndication partners (i.e., partners that display Microsoft Advertising advertisements on their services) to comply with strict brand safety policies to prevent advertising revenue from flowing to websites engaging in misleading, deceptive, harmful, or insensitive behaviours. Microsoft Advertising publisher policies include a comprehensive list of prohibited content against which ads may not serve, including, but not limited to, sensitive political content (e.g., extreme, aggressive, or misleading interpretations of news, events, or individuals), and unsavoury content (such as content disparaging individuals or organisations). Partner properties that violate these policies are removed from our network until the partner remedies the issue.


Israel–Hamas conflict


Microsoft continues to prevent the serving of advertising related to the Israel–Hamas conflict pursuant to its Critical Events policy. Under this policy, Microsoft Advertising may remove or limit advertising in response to a sensitive or high-profile news event to prevent the commercial exploitation of such events and to ensure user safety. In addition, Microsoft Advertising’s Information integrity and misleading content policies prohibit advertising that is misleading, deceptive, fraudulent, or otherwise harmful to its users, including advertisements that spread disinformation. Microsoft Advertising also requires its syndication partners (i.e., partners that display Microsoft Advertising advertisements on their services) to comply with strict brand safety policies to prevent advertising revenue from flowing to websites engaging in misleading, deceptive, harmful, or insensitive behaviours. Microsoft Advertising publisher policies include a comprehensive list of prohibited content against which ads may not serve, including, but not limited to, sensitive political content (e.g., extreme, aggressive, or misleading interpretations of news, events, or individuals), and unsavoury content (such as content disparaging individuals or organisations). Partner properties that violate these policies are removed from our network until the partner remedies the issue.

Indication of impact - 51.2.3

War of aggression by Russia on Ukraine


During the reporting period, Microsoft Advertising maintained blocks on ads associated with 736 search queries in all markets and did not block any additional web domains across its network, leaving the total number of blocked domains at 2,791.

Microsoft Advertising maintained the suspension of all 1,483 Russian-based advertisers imposed prior to the reporting period and did not onboard any new Russian-based advertisers during the reporting period.


Israel–Hamas conflict


During the reporting period, Microsoft Advertising maintained blocks on ads associated with 36 search queries in all markets and did not block any additional web domains across its network, leaving the total number of blocked domains at 3.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Description of intervention - 51.3.2

Not applicable. As described in Section 3 of the report, Microsoft Advertising does not support political advertising.

Indication of impact - 51.3.3

Not applicable.