LinkedIn

Report March 2025

Submitted

Your organisation description

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here


Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

LinkedIn plans to continue to assess its policies and services and to update them as warranted.


Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.



QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

LinkedIn prohibits misinformation and disinformation on its platform, whether in the form of organic content or in the form of advertising content. LinkedIn’s Professional Community Policies, which apply to all content on LinkedIn’s platform, expressly prohibit false and misleading content, including misinformation and disinformation:

  • Do not share false or misleading content. Do not share content that is false, misleading, or intended to deceive. Do not share content to interfere with or improperly influence an election or civic process. Do not share content that directly contradicts guidance from leading global health organisations and public health authorities, including false information about the safety or efficacy of vaccines or medical treatments. Do not share content or endorse someone or something in exchange for personal benefit (including personal or family relationships, monetary payment, free products or services, or other value), unless you have included a clear and conspicuous notice of the personal benefit you receive and have otherwise complied with our Advertising Policies.
  
LinkedIn provides specific examples of false and misleading content that violates its policy via a Help Center article on False or Misleading Content.  

LinkedIn’s Advertising Policies incorporate the Professional Community Policies provision and similarly prohibit misinformation and disinformation. In addition, LinkedIn’s Advertising Policies prohibit fraudulent and deceptive ads and require that claims in an ad have factual support:  

  • Fraud and Deception: Ads must not be fraudulent or deceptive. Your product or service must accurately match the content of your ad. Any claims in your ad must have factual support. Do not make deceptive or inaccurate claims about competitive products or services. Do not imply you or your product are affiliated with or endorsed by others without their permission. Additionally, make sure to disclose any pertinent partnerships when sharing advertising content on LinkedIn. Do not advertise prices or offers that are inaccurate - any advertised discount, offer or price must be easily discoverable from the link in your ad.  

Of note, unlike some other platforms, LinkedIn does not allow members to monetise or run ads against their content, nor does it offer a member ad revenue share program. Thus, members publishing disinformation on LinkedIn are not able to monetise that disinformation or collect advertising revenue via LinkedIn. LinkedIn has instead reported the number of ads it restricted on its platform during the period.  

SLI 1.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

The table below reports metrics concerning ads LinkedIn restricted under the misinformation policies in QRE 1.1.1.   

The metrics include:   

1. the number of ads LinkedIn restricted under the misinformation policies in QRE 1.1.1 between 1 July – 31 December 2024, broken out by EEA Member State;  
2. the number of impressions those ads received before they were restricted.

The metrics are assigned to EEA Member State based on the primary country targeting of the ad.   

No ads were restricted under the misinformation policies in QRE 1.1.1 between 1 July - 31 December 2024.   
The following factors may contribute to the number of ads reported by LinkedIn being lower than other platforms:  
- LinkedIn is primarily a business-to-business advertising platform – that is, businesses marketing their products and services to other businesses and members in a professional capacity.  
- Relatedly, because of the business-to-business nature of LinkedIn’s advertising platform, ads on LinkedIn may cost more than ads in other settings, impacting the ads run on LinkedIn. 

SLI 1.1.2


Following the methodology developed by the Task-force Subgroup on Ad Scrutiny, this SLI considers the impressions to ads or sources that were blocked and applies an agreed-upon conversion factor to those impressions.

As reported above, LinkedIn restricted 0 ads between 1 July – 31 December 2024 under its misinformation policies in QRE 1.1.1. 

We calculated the approximate financial value in the table by using a “blended CPM” value and the following equation: 
 
(Impressions / 1000) × Blended CPM, where CPM means “Cost Per Mille” (cost per thousand impressions).
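The equation above can be sketched as a small calculation. The CPM figure used in the example is a hypothetical placeholder, since the report does not disclose LinkedIn’s actual blended CPM value:

```python
def estimated_ad_value(impressions: int, blended_cpm: float) -> float:
    """Approximate financial value of ad impressions using the report's
    equation: (Impressions / 1000) x Blended CPM.

    `blended_cpm` is the agreed-upon conversion factor (cost per thousand
    impressions); the value used below is illustrative only.
    """
    return (impressions / 1000) * blended_cpm

# Hypothetical example: 50,000 impressions at a blended CPM of 8.00
print(estimated_ad_value(50_000, 8.00))  # 400.0

# With zero restricted ads (as reported for this period), the value is zero
print(estimated_ad_value(0, 8.00))  # 0.0
```

Because zero ads were restricted in the reporting period, the impressions input is zero and the resulting financial value is likewise zero.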

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

LinkedIn does not offer a member ad revenue share program and does not allow members to monetise content they post to LinkedIn by running ads against it.   

LinkedIn displays ads in two environments: (1) on the LinkedIn platform, which accounts for the vast majority of ads; and (2) on the LinkedIn Audience Network, which allows LinkedIn advertisers to extend their reach to professionals on a curated network of approximately twenty-five thousand third-party publishers selected by LinkedIn (for example, Nasdaq.com, CNN.com, Vogue.com, Realtor.com).   

With respect to the first category – ads displayed on the LinkedIn platform – as noted in response to QRE 1.1.1, unlike other platforms, LinkedIn does not offer a content monetisation or an ad revenue share program to members. Thus, no member content is monetised or demonetised, and there is no ability for a member publishing disinformation to collect any advertising revenue share from LinkedIn.   

With respect to the second category – ads displayed on the LinkedIn Audience Network – LinkedIn takes a number of steps to help ensure LinkedIn advertisers’ ads appear in a trusted environment and that publishers that systematically provide harmful disinformation are not included in the LinkedIn Audience Network.  

  • First, the LinkedIn Audience Network is a curated network of third-party sites and apps selected by LinkedIn. LinkedIn does not allow any blog, application, or website to join the LinkedIn Audience Network and display ads; rather, LinkedIn selects the publishers that are included in the network.
  • Second, LinkedIn has integrated with partners, such as Integral Ad Science and DoubleVerify, to help monitor the quality and brand safety of the publishers in the LinkedIn Audience Network and filter out publisher inventory that falls short of standards, such as brand safety floors.
  • Third, LinkedIn regularly reviews the publishers included in the LinkedIn Audience Network to ensure they meet LinkedIn standards and are serving LinkedIn advertisers.

To date, LinkedIn has periodically removed publishers from the LinkedIn Audience Network, but has not had to remove any publisher as a result of publishing disinformation. 

SLI 1.2.1

Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.

As stated in response to QRE 1.2.1, LinkedIn does not allow members to monetise content they post to LinkedIn by running ads against it and has not had to remove any publisher from the LinkedIn Audience Network for publishing disinformation. 

Accordingly, the metrics for this SLI for the period 1 July – 31 December 2024 are zero.  

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

LinkedIn provides a range of information and tools to give advertisers transparency and control regarding the placement of their advertising. For example, for ads on the LinkedIn platform, LinkedIn publishes a Feed Brand Safety score for advertisers and the public. The Feed Brand Safety score measures the number of ad impressions on the LinkedIn platform that appeared adjacent to – that is, immediately above or below within the LinkedIn feed – content removed for violating LinkedIn’s Professional Community Policies, including disinformation. From 1 July through 31 December 2024, the Feed Brand Safety score was 99%+ safe. More information about LinkedIn’s Feed Brand Safety Score is available here.

In addition, LinkedIn publishes for advertisers and the public a semiannual transparency report, which discloses the amount of violating member content, including misinformation, that LinkedIn removed from the platform during the period. For the period from 1 January to 30 June 2024, for example, LinkedIn removed 30,497 pieces of misinformation from the platform. LinkedIn’s most recent transparency report is available here. 

For ads on the LinkedIn Audience Network, as discussed in QRE 1.2.1, LinkedIn provides tools to assist advertisers in controlling where their ads appear within the network. For example, advertisers can set up category-level blocking based on the Interactive Advertising Bureau’s (IAB) publisher category taxonomy to prevent their ads from running on certain types of publishers within the network. Similarly, advertisers can review the list of publishers within the network and create custom allow lists and block lists to ensure their ads are placed on apps and sites that meet an advertiser’s specific standards. 
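The allow-list, block-list, and category-blocking controls described above can be illustrated with a small filtering function. The function name, parameters, and precedence rules (explicit block wins, then the allow list restricts placement, then category blocking applies) are illustrative assumptions, not a description of LinkedIn’s internal implementation:

```python
def publisher_eligible(publisher, publisher_category, blocked_categories,
                       allow_list=None, block_list=None):
    """Hypothetical sketch of advertiser placement controls on an ad network.

    publisher          -- domain of the candidate publisher site/app
    publisher_category -- dict mapping publisher -> IAB-style category label
    blocked_categories -- set of category labels the advertiser excludes
    allow_list         -- if set, only these publishers may serve the ad
    block_list         -- publishers the advertiser always excludes
    """
    # An explicit block always wins
    if block_list and publisher in block_list:
        return False
    # An allow list restricts serving to the listed publishers only
    if allow_list is not None:
        return publisher in allow_list
    # Otherwise fall back to category-level blocking
    return publisher_category.get(publisher) not in blocked_categories

# Hypothetical data: categories and an advertiser excluding "sensational" sites
cats = {"nasdaq.com": "finance", "example-tabloid.com": "sensational"}
print(publisher_eligible("nasdaq.com", cats, {"sensational"}))           # True
print(publisher_eligible("example-tabloid.com", cats, {"sensational"}))  # False
```

In this sketch, a custom allow list narrows placement to a vetted subset of the network, while a block list or blocked category removes individual publishers regardless of other settings.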

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

This Measure is not relevant or pertinent for LinkedIn as it does not buy advertising on behalf of others, inclusive of advertisers, and agencies. 

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

As indicated in response to QRE 1.2.1, LinkedIn does not offer a content monetisation or an ad revenue share program to members. Thus, no member content is monetised or demonetised, and there is no ability for a member publishing disinformation on LinkedIn to collect advertising revenue share. As a result, LinkedIn has not undertaken independent third-party audits relative to monetisation and disinformation.  

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

Not applicable.  

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

LinkedIn has integrated a number of brand safety tools and services to help advertisers understand and control the placement of their advertising and help avoid the placement of advertising next to disinformation content and/or in places or sources that repeatedly publish disinformation.

First, LinkedIn endeavours to limit the disinformation that may appear on its platform in the first place. As set out in response to QREs 17.1.1 / 18.1.3 / 18.2.1 / 23.2.1, LinkedIn has implemented automated and manual systems and processes to detect and remove content that violates our policies, including disinformation, and to take action on violative content when it is reported to us. Further, LinkedIn limits and controls the publishers that are included in the LinkedIn Audience Network, as discussed in response to QRE 1.2.1.

Second, LinkedIn has partnered with third parties, such as Integral Ad Science and DoubleVerify, to evaluate and filter advertising inventory on LinkedIn Audience Network publisher sites that falls short of standards, such as brand safety floors. These partners help evaluate and filter third-party publisher advertising inventory before a bid is placed, and decrease instances when an ad may run on an unsafe or low-quality page.  

In addition, LinkedIn has implemented a Brand Safety Hub within LinkedIn Campaign Manager. As part of the hub, advertisers can control what publisher apps and sites their ads appear on within the LinkedIn Audience Network. For example, advertisers can create custom block lists and allow lists of publisher sites within the LinkedIn Audience Network that meet an advertiser’s specific standards. Similarly, advertisers can apply third-party brand safety tools to their campaigns, including DoubleVerify brand suitability profiles.

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

This QRE is not relevant or pertinent for LinkedIn as it does not buy advertising on behalf of others, inclusive of advertisers, and agencies. 

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

This QRE is not relevant or pertinent as LinkedIn is not a brand safety tool provider. 

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

This QRE is not relevant or pertinent as LinkedIn is not a ratings service. 

SLI 1.6.1

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

Not applicable

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

LinkedIn plans to continue to assess its policies and services and to update them as warranted. 

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

As noted in response to QRE 1.1.1, LinkedIn prohibits misinformation and disinformation on its platform, whether in the form of organic content or in the form of advertising content. LinkedIn’s Professional Community Policies, which apply to all content on the platform, expressly prohibit false and misleading content, including misinformation and disinformation:

-       Do not share false or misleading content. Do not share content that is false, misleading, or intended to deceive. Do not share content to interfere with or improperly influence an election or other civic process. Do not share content that directly contradicts guidance from leading global health organisations and public health authorities, including false information about the safety or efficacy of vaccines or medical treatments. Do not share content or endorse someone or something in exchange for personal benefit (including personal or family relationships, monetary payment, free products or services, or other value), unless you have included a clear and conspicuous notice of the personal benefit you receive and have otherwise complied with our Advertising Policies.

 LinkedIn provides specific examples of false and misleading content that violates its policy via a Help Center article on False or Misleading Content.  
 
LinkedIn’s Advertising Policies incorporate the Professional Community Policies provision and similarly prohibit misinformation and disinformation. In addition, LinkedIn’s Advertising Policies separately prohibit fraudulent and deceptive ads and require that claims in an ad have factual support:  

-        Fraud and Deception: Ads must not be fraudulent or deceptive. Your product or service must accurately match the content of your ad. Any claims in your ad must have factual support. Do not make deceptive or inaccurate claims about competitive products or services. Do not imply you or your product are affiliated with or endorsed by others without their permission. Additionally, make sure to disclose any pertinent partnerships when sharing advertising content on LinkedIn. Do not advertise prices or offers that are inaccurate – any advertised discount, offer or price must be easily discoverable from the link in your ad.  

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

The table below reports the number of ads LinkedIn restricted under the misinformation policies in QRE 2.1.1 above between 1 July – 31 December 2024, broken out by EEA Member State.   

The metrics are assigned to EEA Member State based on the primary country targeting of the ad.   

No ads were restricted between 1 July – 31 December 2024. 

The following factors may contribute to the number of ads reported by LinkedIn being low:  
- LinkedIn is primarily a business-to-business advertising platform – that is, businesses marketing their products and services to other businesses and members in a professional capacity.  
- Because of the business-to-business nature of LinkedIn’s advertising platform, ads on LinkedIn may cost more than ads placed in other settings, impacting the ads run on LinkedIn.

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

LinkedIn works with numerous partners to facilitate the flow of information to tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors. 

LinkedIn maintains an internal Trust and Safety team composed of threat investigators and intelligence analysts to address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modeling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, including regular information sharing on threats with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers, and coordinated inauthentic behaviours (CIBs) are also removed by our Trust and Safety team.

LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These threat exchanges take various forms, such as: (1) regular discussion amongst industry peers about high-level trends and campaigns; and (2) one-on-one engagement with individual peer companies to discuss TTPs and IOCs. 

LinkedIn stands ready to receive and investigate any leads we receive from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we regularly release information about disinformation on our platform in publicly available transparency reports and blog posts. 

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

All advertising that runs on LinkedIn’s platform is subject to LinkedIn’s Advertising Policies. LinkedIn has implemented both automated and manual systems to help ensure that advertising on the platform complies with its Advertising Policies, and that ads that do not comply with its policies are removed.

When an advertiser submits an advertising campaign, the campaign is evaluated by LinkedIn’s automated systems. If those systems determine a campaign may violate LinkedIn’s policies, the campaign is rejected or forwarded to LinkedIn’s advertising review team for manual review.

The advertising review team is trained in LinkedIn’s Advertising Policies and dedicated to advertising review. LinkedIn also employs a dedicated team of trainers, who not only support the onboarding of new ad reviewers, but also provide ongoing educational opportunities for reviewers.  

LinkedIn similarly employs quality assurance analysts, who provide one-on-one coaching, as well as regular monthly forums to discuss reviewers’ most frequent challenges. For complex issues, reviewers have direct access to global advertising policy managers through regular office hours and dedicated escalation pathways.

LinkedIn members may also report ads that they believe violate LinkedIn’s advertising policies, and when members report ads, LinkedIn’s advertising review team reviews them. To report an ad, members can click the three-dot icon in the upper right-hand corner of any ad and select the “Hide or report this ad” option. Members are then directed to select a reporting reason, with “Misinformation” provided as an option.

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

The table below reports metrics concerning ads LinkedIn restricted under the misinformation policies in QRE 2.1.1. The metrics include: (1) the number of ads LinkedIn restricted under the misinformation policies in QRE 2.1.1 between 1 July - 31 December 2024, broken out by EEA Member State; (2) the number of impressions those ads received before they were restricted. The metrics are assigned to EEA Member States based on the primary country targeting of the ad.   

No ads were restricted under the misinformation policies in QRE 2.1.1 between 1 July - 31 December 2024. 

The following factors may contribute to the number of ads reported by LinkedIn being lower than other platforms:  

- LinkedIn is primarily a business-to-business advertising platform – that is, businesses marketing their products and services to other businesses and members in a professional capacity.  
- Because of the business-to-business nature of LinkedIn’s advertising platform, ads on LinkedIn may cost more than ads on other platforms, impacting the ads run on LinkedIn.  

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

When LinkedIn rejects or restricts an ad for violation of its policies, as described in QRE 2.3.1, LinkedIn sends the advertiser an email notification. The email notification outlines the rejection reason and advertising policy that the ad has violated.

The notification also instructs advertisers on how to address the violation, including by revising the ad in LinkedIn Campaign Manager, or by contacting their sales representative or LinkedIn customer support if they require clarification or believe there has been a mistake.

Because advertisers can address rejections a number of ways – by revising and resubmitting the advertisement, by creating a new advertisement that complies with LinkedIn’s policies, or by contacting their LinkedIn sales representative or customer support – LinkedIn does not report “appeal” and “appeal grant” metrics for ad rejections. LinkedIn has provided metrics on the number of ad restrictions as part of SLI 2.3.1 above.

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

LinkedIn does not report “appeal” and “appeal grant” metrics for ad rejections as outlined in our response to QRE 2.4.1. LinkedIn has provided metrics on the number of ad restrictions as part of SLI 2.3.1 above. 

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable 

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

As discussed as part of QRE 2.2.1, LinkedIn works with numerous partners to facilitate the flow of information to tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors. 

LinkedIn maintains an internal Trust and Safety team composed of threat investigators and intelligence analysts to address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modelling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, including regular information sharing on threats with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers, and coordinated inauthentic behaviours (CIBs) are also removed by our Trust and Safety team.

LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These take various forms, such as: 1) regular multilateral discussions with industry peers about high-level trends and campaigns; and 2) one-on-one engagements with individual peer companies about TTPs and IOCs. This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists us in identifying and removing them.

LinkedIn always stands ready to receive and investigate leads from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we regularly release information about disinformation on our platform in publicly available transparency reports and blog posts.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

Please see the response to QRE 3.1.1. 

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

Please see the response to QRE 3.1.1. In addition, as discussed in response to QRE 1.6.1, LinkedIn partners with companies including Integral Ad Science and DoubleVerify to help evaluate and filter advertising inventory on LinkedIn Audience Network publisher sites that falls short of standards, such as brand safety floors. 

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1 Measure 4.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 4.1

Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

QRE 4.1.1

Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

LinkedIn’s Advertising Policies do not allow political advertising, and LinkedIn has not allowed political advertising since 2018. 

Among other things, LinkedIn Advertising policies prohibit “ads advocating for or against a particular candidate, party, or ballot proposition or otherwise intended to influence an election outcome” and “ads fundraising for or by political candidates, parties, political action committees or similar organisations, or ballot propositions.” In addition, LinkedIn’s Advertising Policies prohibit certain types of advertisements that might be considered issue based. For example, “ads exploiting a sensitive political issue even if the advertiser has no explicit political agenda” are also prohibited.

QRE 4.1.2

After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.

Microsoft looks forward to the full entry into application of the Regulation on Transparency and Targeting of Political Advertising and the associated upcoming common guidance to be issued in accordance with Art. 8.2 of the Regulation. 

Commitment 5

Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.

We signed up to the following measures of this commitment

Measure 5.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable as LinkedIn currently prohibits all political advertising, as outlined under QRE 5.1.1.

Measure 5.1

Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.

QRE 5.1.1

Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.

LinkedIn’s Advertising Policies do not allow political advertising, and LinkedIn has not allowed political advertising since 2018. 

Among other things, LinkedIn Advertising policies prohibit “ads advocating for or against a particular candidate, party, or ballot proposition or otherwise intended to influence an election outcome” and “ads fundraising for or by political candidates, parties, political action committees or similar organisations, or ballot propositions.” In addition, LinkedIn’s Advertising Policies prohibit certain types of advertisements that might be considered issue based. For example, “ads exploiting a sensitive political issue even if the advertiser has no explicit political agenda” are also prohibited. 

Commitment 7

Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.

We signed up to the following measures of this commitment

Measure 7.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable 

Measure 7.3

Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.

QRE 7.3.1

Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.

As set out in QRE 5.1.1, LinkedIn’s Advertising Policies prohibit political advertising. Before submitting a campaign, customers must agree that their ad complies with LinkedIn’s policies.

As detailed in QRE 2.3.1, LinkedIn has implemented both automated and manual systems to help ensure that advertising on the platform complies with its Advertising Policies, and that ads that do not comply with its policies are removed. These enforcement systems apply equally to prohibited political advertising, as well as other violations of LinkedIn’s Advertising Policies. 

In addition to LinkedIn’s pre-emptive enforcement, LinkedIn members may also report ads that they believe violate LinkedIn’s Advertising Policies; when members report ads, LinkedIn’s advertising review team reviews them. To report an ad, members can click on the three-dot icon in the upper right-hand corner of every ad and select the “Hide or report this ad” option.

QRE 7.3.2

Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.

As set out in QRE 5.1.1, LinkedIn’s Advertising Policies prohibit political advertising. Ads that do not comply with LinkedIn’s Advertising Policies are removed. 

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

LinkedIn’s User Agreement (in particular Section 8, LinkedIn “Dos and Don’ts”) and our Professional Community Policies – which every member accepts when joining LinkedIn – detail the manipulative behaviours and practices that are prohibited on our platform. Fake accounts, misinformation, and inauthentic content are not allowed, and we take active steps to remove them.

LinkedIn provides additional specific examples of false and misleading content that violates its policy via a Help Center article on False or Misleading Content.  

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

LinkedIn works with numerous partners to facilitate the flow of information to tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors.   

LinkedIn maintains an internal Trust and Safety team composed of threat investigators and intelligence analysts to address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modelling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, including regular information sharing on threats with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers as needed, and coordinated inauthentic behaviours (CIBs) are also removed by our Trust and Safety team.

LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These take various forms, such as: 1) regular multilateral discussions with industry peers about high-level trends and campaigns; and 2) one-on-one engagements with individual peer companies about TTPs and IOCs. This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists us in identifying and removing them.

LinkedIn always stands ready to receive and investigate leads from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we also regularly release information about policy-violating content on our platform in publicly available transparency reports and blog posts, including, for example, How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe. The LinkedIn Community Report also describes actions we take on content that violates our Professional Community Policies and User Agreement. It is published twice per year and covers the global detection of fake accounts, spam and scams, content violations and copyright infringements. The most recent reporting period covered 1 January to 30 June 2024. LinkedIn Ireland Unlimited Company – the provider of LinkedIn’s services in the EU – has been designated by the European Commission as a very large online platform and, therefore, pursuant to its obligations under Article 42 of the Digital Services Act, publishes Transparency Reports covering the EU every 6 months, with the most recent report published in February 2025.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

LinkedIn’s Professional Community Policies prohibit misinformation, and such content is removed from the LinkedIn platform. Where LinkedIn removes content pursuant to our false and misleading content policies, LinkedIn notifies members of the action taken. Members who repeatedly post misinformation are permanently restricted. State-sponsored attempts to post misinformation, if any, are removed.

Further, LinkedIn’s professional focus shapes the type of content we see on the platform. People tend to say things differently when their colleagues and employer are watching. Accordingly, our members do not tend to use LinkedIn to engage in the mass dissemination of misinformation, and bad actors generally need to create fake accounts to peddle misinformation.

To ensure their content reaches a large audience, bad actors need to either connect with real members or post content that real members will like—both of which are hard to achieve on LinkedIn given our professional focus. The mass dissemination of false information, as well as artificial traffic and engagement, therefore, requires the mass creation of fake accounts, which we have various defences to prevent and limit.   

To keep pace with the ever-changing threat landscape, our team continually invests in new technologies for combating inauthentic behaviour on the platform. We are investing in artificial intelligence technologies such as advanced network algorithms that detect communities of fake accounts through similarities in their content and behaviour, computer vision and natural language processing algorithms for detecting AI-generated elements in fake profiles, anomaly detection of risky behaviours, and deep learning models for detecting sequences of activity that are associated with abusive automation. As noted in our most recent global Transparency Report, in the period 1 January to 30 June 2024, LinkedIn blocked or removed approximately 86 million fake accounts. Our automated defences blocked 94.6% of the fake accounts we stopped during that period, with the remaining 5.4% stopped by our manual investigations and restrictions; 99.7% of the fake accounts were stopped proactively, before a member report.

LinkedIn has also committed to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. Aligned with that commitment, LinkedIn continues to invest in the AI-based detection technologies described above, including computer vision and natural language processing algorithms for detecting AI-generated elements in fake profiles, such as deep fakes. LinkedIn has also adopted the Coalition for Content Provenance and Authenticity’s industry-leading “Content Credentials” technology (C2PA) to surface metadata labelling, including whether content was created using AI, on content carrying C2PA metadata. Furthermore, LinkedIn acts vigilantly to maintain the integrity of all accounts and to ward off fake account activity by:
-        Establishing metrics for when election-related conversations, violations, or operational capacity breach a threshold and require additional support
-        Maintaining a dedicated Anti-Abuse team to research emerging trends and key risks and develop tools to address them
-        Using AI to detect inauthentic activity and communities of fake accounts
-        Using automated systems to detect and block automated activity
-        Imposing limits on certain categories of activity commonly engaged in by bad actors   
-        Conducting manual investigation and restriction of accounts engaged in automated activity   
-        Using third party fact checkers during the human content review process
-        Conducting hash matching for known instances of deepfake content
-        Maintaining 24/7 escalation paths to address any emerging issues
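The hash-matching measure listed above can be illustrated with a minimal sketch. This is not LinkedIn's implementation: the blocklist, function name, and media bytes here are hypothetical, and the example uses exact cryptographic hashing, whereas production systems typically also employ perceptual hashing to catch re-encoded copies.

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of media previously confirmed
# as violating (e.g., known instances of deepfake content).
KNOWN_VIOLATING_HASHES = {
    hashlib.sha256(b"previously-removed-deepfake-bytes").hexdigest(),
}

def matches_known_violating_media(media_bytes: bytes) -> bool:
    """Return True if an upload is byte-identical to a known violating item."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_VIOLATING_HASHES
```

Because exact hashing only catches byte-identical re-uploads, such a check would complement, not replace, the other detection layers listed above.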

LinkedIn has reported available metrics at SLI 14.2.1 in respect of the following TTPs:  

-        TTP 1: Creation of inauthentic accounts or botnets (which may include automated, partially automated, or nonautomated accounts)  
-        TTP 2: Use of fake / inauthentic reactions (e.g. likes, up votes, comments)  
-        TTP 3: Use of fake followers or subscribers  
-        TTP 4: Creation of inauthentic pages, groups, chat groups, fora, or domains  

LinkedIn has also reported metrics for SLI 14.2.2 in respect of TTP 1 and TTP 4.   

LinkedIn has focused its efforts on TTPs 1-4 because, as a real-identity professional network, abuse on LinkedIn is generally conducted through fake accounts. Our real members know that the content they post is viewed by their colleagues, managers, and potential business partners, and therefore they generally do not knowingly post misinformation.

With respect to the remaining TTPs, LinkedIn is unable to reasonably ascertain the intent or provenance of such content. As discussed above, disinformation is not prevalent on LinkedIn due to the professional context of the platform. Distribution of such content through fake accounts is further hampered due to the need to create connections between the fake account and the real member. In the rare instances that such misinformation is spread through fake accounts, due to the adversarial nature of this activity, publicly disclosing details regarding the threat actor's TTPs would hurt our ability to fight against this activity. For example, reporting that vulnerable recipients were not targeted may incentivize the targeting of such recipients.  

LinkedIn has evaluated, and will continue to evaluate, what additional metrics it could include in future reporting in light of how LinkedIn’s services function and are used.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

The table below addresses:
  • TTP 1: “Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts).” SLI 14.2.1 reports the number of fake accounts that LinkedIn prevented from being created or restricted between 1 July - 31 December 2024, broken out by EEA Member State. The fake accounts reported are attributed to EEA Member States based on the IP address used during registration of the account. ‘Number of instances of identified TTPs’ and ‘Number of actions taken by type’ are identical given LinkedIn blocked the registration attempt or restricted the account in all instances.
  • TTP 2: “Use of fake / inauthentic reactions (e.g. likes, up votes, comments).” The table reports the number of fake accounts reported in TTP 1 SLI 14.2.1 that reacted to, commented on, or shared (collectively, “engaged with”) a feed post between 1 July – 31 December 2024. 
    • The numbers of fake accounts reported below are a subset of the fake accounts reported in TTP 1 SLI 14.2.1 that engaged with a feed post between 1 July – 31 December 2024. For example, of the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1), 1,148 of those accounts engaged with a feed post between 1 July – 31 December 2024.
  • TTP 3: “Use of fake followers or subscribers.” The table reports the number of fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page between 1 July – 31 December 2024.    
    • The numbers of fake accounts reported below are a subset of the fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page between 1 July – 31 December 2024. For example, of the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1), 9,198 of those accounts followed a LinkedIn profile or page between 1 July – 31 December 2024 (as reported below).  
  • TTP 4: “Creation of inauthentic pages, groups, chat groups, fora, or domains.” SLI 14.2.1 reports the number of LinkedIn pages or groups that the fake accounts reported in TTP 1 SLI 14.2.1 created between 1 July – 31 December 2024.
    • The numbers of LinkedIn pages or groups created reported below are based on the population of fake accounts reported in TTP 1 SLI 14.2.1. For example, the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1) created 33 LinkedIn pages or groups between 1 July – 31 December 2024 (as reported below). 
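The subset relationship among these TTP metrics can be sketched in miniature. This is illustrative only; the account identifiers and activity sets below are hypothetical and do not reflect LinkedIn's data or pipeline:

```python
# TTP 1 population: fake accounts prevented or restricted in the period.
fake_accounts = {"acct-1", "acct-2", "acct-3", "acct-4"}

# Hypothetical activity logs for the same period.
engaged_with_feed_post = {"acct-2", "acct-9"}    # reacted, commented, or shared
followed_profile_or_page = {"acct-1", "acct-2"}  # followed a profile or page

# TTP 2 and TTP 3 count only the overlap with the TTP 1 population,
# which is why they are always subsets of the TTP 1 figure.
ttp2 = len(fake_accounts & engaged_with_feed_post)
ttp3 = len(fake_accounts & followed_profile_or_page)
```

In this toy example, ttp2 counts one account and ttp3 counts two, each strictly smaller than the TTP 1 population of four.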

 Please note that the metrics provided below are total numbers and do not imply that these fake accounts were engaging in posting misinformation or disinformation. 
  
We have not reported metrics associated with a given TTP where, for example, there is no meaningful metric to report (e.g., post-action metrics, given LinkedIn removes detected misinformation and fake accounts from its platform) or where LinkedIn does not have a reasonable means to compute the requested metrics.

The table below reports the following columns, by country, for 1 July – 31 December 2024:
  • TTP 1 – Nr of instances of identified TTPs: the number of fake accounts LinkedIn prevented or restricted.
  • TTP 1 – Nr of actions taken by type: the number of fake accounts LinkedIn prevented or restricted (identical to the preceding column).
  • TTP 2 – Nr of instances of identified TTPs: the number of fake accounts reported in TTP 1 SLI 14.2.1 that engaged with a feed post.
  • TTP 3 – Nr of instances of identified TTPs: the number of fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page.
  • TTP 4 – Nr of instances of identified TTPs: the number of LinkedIn pages or groups created by the fake accounts reported in TTP 1 SLI 14.2.1.

Country  TTP 1 instances  TTP 1 actions  TTP 2  TTP 3  TTP 4
Austria 194,153 194,153 1,148 9,198 33
Belgium 344,346 344,346 1,399 10,504 35
Bulgaria 140,495 140,495 791 5,564 18
Croatia 53,395 53,395 373 3,539 12
Cyprus 61,105 61,105 312 1,631 16
Czech Republic 185,280 185,280 740 11,763 50
Denmark 150,598 150,598 648 6,065 8
Estonia 452,316 452,316 166 2,523 7
Finland 530,752 530,752 744 5,352 10
France 2,712,034 2,712,034 14,984 137,403 321
Germany 1,923,995 1,923,995 15,410 142,964 195
Greece 271,628 271,628 1,230 11,514 21
Hungary 102,782 102,782 475 4,856 12
Iceland 444,743 444,743 1,064 11,910 26
Ireland 8,365,534 8,365,534 6,200 84,608 141
Italy 468,370 468,370 312 2,667 13
Latvia 155,975 155,975 678 4,464 10
Lithuania 40,195 40,195 261 1,567 3
Luxembourg 31,539 31,539 122 851 6
Malta 1,172,414 1,172,414 4,451 39,036 94
Netherlands 597,637 597,637 4,104 55,905 104
Poland 180,022 180,022 1.66 12,295 79
Portugal 683,248 683,248 1,328 10,745 38
Romania 84,337 84,337 309 3,110 20
Slovakia 80,358 80,358 166 1,298 6
Slovenia 2,046,114 2,046,114 6,420 65,638 191
Spain 518,628 518,628 1,378 11,253 28
Sweden 16,166 16,166 49 390 0
Liechtenstein 1,080 1,080 4 52 0
Norway 104,916 104,916 583 3,859 9
Total EU 21,991,993 21,991,993 66,909 658,223 1,497
Total EEA 22,114,155 22,114,155 67,545 662,524 1,506

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

The table below addresses:
TTP 1: “Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts).” 
  • SLI 14.2.2 reports two metrics. First, the number of EEA accounts that connected to or followed the fake accounts in SLI 14.2.1 between 1 July – 31 December 2024. For example, the 194,153 fake accounts reported for Austria had a total of 3,299 EEA accounts connect to or follow them between 1 July and 31 December 2024. Whether an account qualifies as an EEA account is based on the IP address used during registration of the account. Second, the number of fake accounts in SLI 14.2.1 that posted a feed post between 1 July – 31 December 2024. For example, of the 8,365,534 fake accounts prevented or restricted for Ireland, 789 posted a feed post between 1 July and 31 December 2024.
TTP 4: “Creation of inauthentic pages, groups, chat groups, fora, or domains.” 
  • SLI 14.2.2 reports the number of accounts in the EEA that joined or followed the pages or groups reported in TTP 4 SLI 14.2.1 between 1 July – 31 December 2024. For example, the 33 pages and groups reported for Austria in TTP 4 SLI 14.2.1 had a total of 107 EEA accounts join or follow them between 1 July – 31 December 2024. Whether an account qualifies as an EEA account is based on the IP address used during registration of the account.

Please note that the metrics provided below are total numbers and do not imply that these fake accounts were engaging in posting misinformation or disinformation.  
  
We have not reported metrics associated with this TTP where, for example, there is no meaningful metric to report (e.g., metrics for after the TTP in question, given LinkedIn removes detected misinformation and fake accounts from our platform) or LinkedIn does not have a reasonable means to compute the requested metrics.
  

Country TTP 1 - Views/ impressions before action - The number of EEA accounts that connected to or followed the fake accounts between 1 July – 31 December 2024 TTP 1 - Views/ impressions before action - The number of fake accounts that posted a feed post between 1 July – 31 December 2024 TTP 4 - Views/ impressions before action - The number of accounts in the EEA that joined or followed the pages and groups reported in TTP 4 SLI 14.2.1 between 1 July – 31 December 2024
Austria 3,299 710 107
Belgium 5,081 1,147 177
Bulgaria 2,033 549 31
Croatia 1,465 236 78
Cyprus 977 218 23
Czech Republic 3,811 629 50
Denmark 2,732 558 42
Estonia 525 129 14
Finland 1,701 506 35
France 58,419 11,784 2,484
Germany 35,323 9,523 792
Greece 5,422 1,009 211
Hungary 1,606 401 114
Ireland 3,951 789 47
Italy 21,848 4,535 2,225
Latvia 1,087 230 35
Lithuania 1,927 304 50
Luxembourg 655 134 21
Malta 587 101 35
Netherlands 18,922 3,448 420
Poland 13,476 2,715 349
Portugal 6,676 1,448 884
Romania 5,942 920 161
Slovakia 1,207 266 33
Slovenia 654 142 18
Spain 68,945 6,128 1,499
Sweden 6,511 1,020 147
Iceland 191 36 5
Liechtenstein 24 6 2
Norway 1,797 413 24
Total EU 274,782 49,579 10,082
Total EEA 276,794 50,034 10,113

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

The relevant Taskforce Subgroup has considered the list of TTPs adopted in the second half of 2022 (and reported on in Microsoft’s previous reports) as being fit for purpose for the current reporting cycle. LinkedIn reiterates the need for flexibility amongst different types of services to address TTPs that are most relevant to their platforms.  

This list can be consulted below:

The following TTPs pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible:    

·     1. Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)     
·     2. Use of fake / inauthentic reactions (e.g. likes, up votes, comments)   
·     3. Use of fake followers or subscribers    
·     4. Creation of inauthentic pages, groups, chat groups, fora, or domains    
·     5. Account hijacking or impersonation   
    
The following TTPs pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views. Relevant TTPs include:     

·     6. Deliberately targeting vulnerable recipients (e.g. via personalized advertising, location spoofing or obfuscation)    
·     7. Deploy deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”…)
·     8. Use “hack and leak” operation (which may or may not include doctored content)    
·     9. Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platforms algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers)    
·     10. Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers   
·     11. Non-transparent compensated messages or promotions by influencers
·     12. Coordinated mass reporting of non-violative opposing content or accounts    
Further, as noted above, the relevant Taskforce Subgroup has considered whether the SLIs for each of these TTPs are fit for purpose and classified each SLI as Theoretically fit for purpose, Not fit for purpose, Partially fit for purpose, or Optional/Alternative. 

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Additional transparency on use of personal data for generative AI.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

LinkedIn will continue to assess its policies and services and to update them as warranted. 

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

During the reporting period, LinkedIn continued to support and launch products and features that disseminate, and enable LinkedIn members to disseminate, AI-generated textual content. LinkedIn also continues to integrate generative AI-powered features into existing products. To mitigate the potential safety risks posed by such features, LinkedIn has in place, and continues to augment, policies and procedures to ensure that our AI systems, including any new features, are consistent with LinkedIn’s Responsible AI Principles and applicable law.

1. Privacy and Security – LinkedIn has an existing process for assessing the privacy and security of new products and initiatives, which has been augmented to recognize particular risks arising from the use of generative AI. With respect to generative AI, additional considerations include being thoughtful about the personal data used in prompt engineering and ensuring that members maintain full control of their profiles. 

2. Safety – LinkedIn has an existing process for assessing the safety of new products and initiatives, which has been augmented to recognize particular risks arising from generative AI. New features are carefully ramped to members, and rate limits are introduced to reduce the likelihood of abuse. Limiting access allows us to watch for issues that may arise. We aim to proactively identify how prompts could be misused so that we can mitigate potential abuse. We engage in proactive content moderation (all AI-generated content is held to the same professional bar as other content on the LinkedIn platform) by applying content moderation filters to both the member inputs for prompts and the output. We also engage in reactive content moderation by providing member tools to report policy-violating issues with the content. Additional features have been added to these tools to address generative AI-specific issues such as ‘hallucinations.’ Additionally, all generative AI-powered features with outputs that are directly visible to LinkedIn users go through (1) manual and automated “red teaming” to test the generative AI-powered feature and to identify and mitigate any vulnerabilities, and (2) quality assurance assessments on response quality, accuracy, and hallucinations, with the goal of remediating discovered inaccuracies.  
3. Fairness and Inclusion – LinkedIn has a cross functional team that designs policy and process to proactively mitigate the risk that AI tools, including generative AI tools, perpetuate societal biases or facilitate discrimination. To promote fairness and inclusion, we target two key areas - content subject and communities. With respect to content subjects, prompts are engineered to reduce the risk of biased content, blocklists are leveraged to replace harmful terms with neutral terms, and member feedback is monitored to learn and improve. With respect to communities, in addition to a focus on problematic content like stereotypes, we are working to expand the member communities that are served by our generative AI tools. Additionally, LinkedIn continues to invest in methodologies and techniques to more broadly ensure algorithmic fairness.  
4. Transparency – LinkedIn is committed to being transparent with members. With respect to generative AI products and features, our goal is to educate members about the technology and our use of it such that they can make their own decisions about how to engage with it. For example, with Collaborative Articles we identify the use of AI in the relevant UI and we provide additional detail in a linked Help Center article. Additionally, LinkedIn labels content containing industry-leading “Content Credentials” technology developed by the Coalition for Content Provenance and Authenticity (“C2PA”), including AI-generated content containing C2PA metadata. Content Credentials on LinkedIn show as a “Cr” icon on images and videos that contain C2PA metadata, particularly on highly visible surfaces such as the feed. By clicking the icon, LinkedIn members can trace the origin of the AI-created media, including the source and history of the content, and whether it was created or edited by AI. Additionally, LinkedIn provides members with information on how their personal data is used for generative AI in the LinkedIn Help Center, including how personal data is used for content generating AI model training. As of December 31, 2024, LinkedIn did not train content generating AI models on data from members located in the EU, EEA, UK, Switzerland, Canada, Hong Kong, or mainland China. 
5. Accountability – In addition to the privacy, security, and safety processes discussed above, for AI tools we have additional assessments of training data and model cards so we can more appropriately assess risks and develop mitigations for the AI models that support our AI products and initiatives.  

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

With respect to the algorithms used for detection, moderation, and sanctioning of impermissible conduct and content, please see:  
·       QRE 15.1.1 (policies for countering prohibited manipulative   practices in AI systems);   
·       QRE 18.1.3 (design of recommender systems and related AI);   
·       QRE 18.2.1 (policies and procedures to limit spread of harmful false or misleading information);   
·       QRE 22.2.1 (actions taken to assist members in identifying trustworthy content); and   
·       QRE 23.2.1 (actions taken to ensure integrity of reporting and appeals process).  

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We look forward to continuing to work on this commitment with the other signatories as we develop further cross platform information sharing.  

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

LinkedIn, through Microsoft, is an active participant in and contributor to the Task-force’s Crisis Response subgroup, in which it proactively provides analysis and data related to influence operations, foreign interference in information space, and relevant incidents that emerge on its service. Microsoft’s internal threat detection and research teams, including the Microsoft Threat Analysis Center (MTAC), Microsoft Threat Intelligence Center (MSTIC), Microsoft Research (MSR), and AI For Good, collect and analyse data on actors of disinformation, misinformation and information manipulation across platforms.   

Moreover, LinkedIn works with numerous partners to facilitate the flow of information to tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors.   

LinkedIn maintains an internal Trust and Safety team composed of threat investigators and intelligence analysts to address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modelling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, including regular information sharing on threats with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers as needed, and coordinated inauthentic behaviours (CIBs) are also removed by our Threat Prevention and Defense team.  

LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These threat exchanges take various forms, such as: 1) regular discussions amongst industry peers about high-level trends and campaigns; and 2) one-on-one engagements with individual peer companies to discuss TTPs and IOCs. This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve those goals, which assists us in their identification and removal.    

LinkedIn always stands ready to receive and investigate any leads we receive from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we also regularly release information about policy-violating content on our platform in publicly-available transparency reports and blog posts. 

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

We look forward to providing reports where appropriate in future reporting periods.

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We look forward to working on this commitment with the other signatories as we develop further cross-platform information sharing.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

As the world around us changes, LinkedIn continues to evolve and adapt our systems and practices for combating misinformation and other inauthentic behaviour on our platform, including to respond to the unique challenges presented by world events. 

LinkedIn’s Professional Community Policies, which all members agree to abide by on joining LinkedIn,  prohibit misinformation. As described in more detail in our response to QRE 18.1.1, LinkedIn uses a combination of automated and manual activity to keep content that violates our policies off of LinkedIn.

LinkedIn also aims to educate its members about civic discourse, electoral processes, and public security through its global team of news editors. These editors provide trustworthy and authoritative content to LinkedIn’s member-base, and its content moderation teams closely monitor associated platform conversations in a number of languages. 

In addition to these broader measures, LinkedIn has taken special care to counter low-authority information in relation to Russia’s war of aggression against Ukraine, the Israel-Hamas conflict, and the European elections, as detailed in the relevant chapters.

For example, during pre-election cycles, LinkedIn relies on trusted and reputable publisher sources for featured shares, focusing on the policy impact on businesses and professionals around the EU. LinkedIn also curates links to topical landing pages from trusted publishers to provide members with easy and reliable entry points to more detailed coverage. LinkedIn does not compete with trusted publishers for speed or depth of coverage, but instead aims to connect their existing coverage to LinkedIn members and their needs. During important events in the European elections, this team provides manually curated and localised storylines.

We also work to identify and remove misinformation and inauthentic behaviour from our platform. As we continue to improve, we are committed to helping our members make informed decisions about content they find on LinkedIn, so we work with Microsoft to provide tools that assist our members in identifying trustworthy, relevant, authentic, and diverse content.

LinkedIn’s Professional Community Policies clearly detail the objectionable and harmful content that is not allowed on LinkedIn. Misinformation and inauthentic content is not allowed, and our automated defenses take proactive steps to remove them. LinkedIn’s blog provides information regarding our efforts, including How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe.

LinkedIn members can report content that violates our Professional Community Policies, including misinformation and inauthentic content. Our Trust and Safety teams work every day to identify and restrict such activity, and if reported content violates the Professional Community Policies, it will be actioned in accordance with our policies.

LinkedIn members can identify misinformation and inauthentic behaviour by utilising the News Literacy Project, The Trust Project, and Verified, all of which develop information literacy campaigns built on industry research and best practices. The News Literacy Project campaign developed a quiz that, in less than five minutes, tests a person’s ability to identify why the information they are seeing is false or inaccurate. The Trust Project campaign developed the research-backed 8 Trust Indicators, which aim to improve consumers’ ability to identify reliable, ethical journalism. Finally, Verified delivers lifesaving information and fact-based advice to build digital literacy that helps communities protect themselves from misinformation. LinkedIn has also published an article in our Help Center compiling these useful resources on misinformation and inauthentic behaviour.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

The table below reports the number of visitors to LinkedIn’s Help Center article compiling useful resources on misinformation during the period 1 July – 31 December 2024. 

Country Total count of the tool's impressions - Number of visits during the period 1 July - 31 December 2024
Austria 35
Belgium 15
Bulgaria 39
Croatia 19
Cyprus 4
Czech Republic 28
Denmark 80
Estonia 18
Finland 81
France 290
Germany 794
Greece 24
Hungary 19
Ireland 63
Italy 3,988
Latvia 354
Lithuania 185
Luxembourg 21
Malta 10
Netherlands 632
Poland 166
Portugal 32
Romania 56
Slovakia 15
Slovenia 7
Spain 109
Sweden 99
Iceland 14
Liechtenstein 1
Norway 45
Total EU 7,184
Total EEA 7,244

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

Microsoft works with leading media and information literacy partners globally to support the development and promotion of media literacy campaigns.

Microsoft has continued its partnerships with third-party organisations, including the News Literacy Project and The Trust Project, to amplify media literacy campaigns, while holding introductory calls with new organisations to extend campaigns’ reach to new markets. Beginning in March 2024 and continuing through Autumn 2024, Microsoft ran a new “Be Informed, Not Misled” campaign from the News Literacy Project. This campaign averages millions of impressions monthly.
 
Microsoft has also launched a media literacy initiative focused on raising awareness of the deceptive use of AI in elections and on how to identify AI-generated content: Combating the deceptive use of AI in elections.

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

Microsoft continues to work with multiple organisations to develop and promote media literacy campaigns, including 2024 campaigns from the News Literacy Project and The Trust Project to promote information literacy resources on Microsoft platforms. 

For the next reporting period, Microsoft is continuing to work with existing and new partners to create, disseminate, and report on expanded literacy campaigns in EEA markets. Please also see response to QRE 17.1.1.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

LinkedIn will continue to assess its policies and services and to update them as warranted.

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

With respect to AI design, please see QRE 18.1.3.

With respect to additional tools, procedures, or features, please see: 

·       QRE 17.1.1 (editorial practices to provide members with trustworthy news); 
·       QRE 18.2.1 (policies and procedures to limit spread of harmful false or misleading information); 
·       QRE 21.1.1 (action taken when information is identified as misinformation); 
·       QRE 22.1.1 (features and systems related to fake and inauthentic profiles); 
·       QRE 22.2.1 (actions taken to assist members in identifying trustworthy content); and 

·       QRE 23.2.1 (actions taken to ensure integrity of reporting and appeals process). 

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

The main parameters of the LinkedIn feed recommender systems are as follows: 

- Identity: We seek to contextualise content based on who a member is by looking at their profile, for example: Who are you? Where do you work? What are your skills? Who are your connections? Where is your profile location?
- Content: We aim to match appropriate content to each member by evaluating, for example: How many times was the feed update viewed? How many times was it reacted to? What is the content about? How old is it? Is the update sharing knowledge or professional advice? Is the update from someone the member is connected to or follows? What language is it written in? Is the conversation constructive and professional? Will engagement on the update lead to future high-quality content? What companies, people, or topics are mentioned in the update?
- Member Activity: Finally, we look at how a member engages with content and examine, for example: What have you reacted to and shared in the past? Who do you interact with most frequently or recently? Where do you spend the most time in your feed? Which hashtags, people or companies do you follow? Who are your connections? What types of topics are you interested in? What other members follow you? What actions have other members taken on your posts? How long has it been since the foregoing actions took place? 

Combining these and other related signals, the LinkedIn feed recommender systems rank the content for the member, with the goal of showing the member high-quality content that the member will enjoy consuming and that can lead to further creation on the platform. To do this, the feed optimises for content that a member is most likely to find highly valuable and, in turn, act on (e.g., react to, comment on, or reshare). 
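To make the ranking step above concrete, it can be sketched roughly as follows. This is a deliberately simplified illustration: the signal names, weights, and linear scoring are assumptions chosen for exposition, not LinkedIn's actual models, which combine many more signals via machine learning.

```python
# Hypothetical, simplified sketch of relevance-based feed ranking.
# Signal names and weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class FeedUpdate:
    post_id: str
    connection_degree: int   # identity signal: 1 = direct connection, 2 = second degree, ...
    topic_affinity: float    # content signal: 0..1, how well topics match member interests
    age_hours: float         # content signal: time since the update was posted
    engagement_rate: float   # activity signal: 0..1, reactions/comments per view so far

def relevance_score(update: FeedUpdate) -> float:
    """Combine identity, content, and member-activity signals into one score."""
    closeness = 1.0 / update.connection_degree        # closer connections score higher
    freshness = 1.0 / (1.0 + update.age_hours / 24)   # newer updates score higher
    return (0.4 * update.topic_affinity + 0.3 * closeness
            + 0.2 * freshness + 0.1 * update.engagement_rate)

def rank_feed(updates: list[FeedUpdate]) -> list[FeedUpdate]:
    """Order candidate updates by descending relevance."""
    return sorted(updates, key=relevance_score, reverse=True)
```

In this sketch, a fresh post from a direct connection on a topic the member follows outranks an old, low-affinity post from a distant connection; the real systems learn such weightings rather than fixing them by hand.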

QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

At LinkedIn, our guiding principle is “Members First.” It ensures we honour our responsibility to protect our members and maintain their trust in every decision we make, and puts their interests first. A key area where we apply this value in engineering is within our design process. We call this “responsible design,” which means that everything we build is intended to work as part of a unified system that delivers the best member experience, provides the right protections for our members and customers, and mitigates any unintended consequences in our products. 

One of the core pillars of “responsible design” is “responsible AI,” which follows the LinkedIn Responsible AI Principles, which are inspired by and aligned with Microsoft’s Responsible AI Principles. The LinkedIn Responsible AI Principles are to advance economic opportunity, uphold trust, promote fairness and inclusion, provide transparency, and embrace accountability.  In addition to the LinkedIn Responsible AI Principles, responsible AI is also about intent and impact. “Intent” involves evaluating training data, designing systems, and reviewing model performance before the model is ever deployed to production to make sure that our principles are reflected at every step in the process. It includes actively changing our products and algorithms to empower every member. “Impact” covers detecting and monitoring the ways that people interact with products and features after they are deployed. We do this by measuring whether they provide significant value and empower individuals to reach their goals. Intent and impact are a cyclical process of refinement that go hand-in-hand towards the broader goal of responsible design. 

With respect to safety, we seek to keep content that violates our Professional Community Policies off LinkedIn. This is done through a combination of automated and manual review. Our first layer of protection uses AI to proactively filter out violative content and deliver relevant experiences for our members. We use content (like certain keywords or images) that has previously been identified as violating our content policies to help inform our AI models so that we can better identify and restrict similar content from being posted in the future. The second layer of protection uses AI to flag content that is likely to be violative for human review; this occurs when the algorithm is not confident enough to warrant automatic removal. The third layer is member-led: members report content, and our team of reviewers evaluates the content and removes it if it is found to violate our policies.
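As a rough sketch, the three layers described above behave like the following triage function. The thresholds and the notion of a single "violation score" are illustrative assumptions, not a description of LinkedIn's actual system.

```python
# Illustrative sketch of a layered content-moderation pipeline.
# Threshold values are placeholders chosen for exposition.
AUTO_REMOVE_THRESHOLD = 0.95   # classifier confident enough to act alone
HUMAN_REVIEW_THRESHOLD = 0.60  # likely violative; route to a human reviewer

def triage(violation_score: float, member_reported: bool = False) -> str:
    """Decide the disposition of a piece of content.

    violation_score: 0..1 output of a hypothetical policy-violation classifier.
    member_reported: True when a member flagged the content (the third layer).
    """
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"      # layer one: proactive automated filtering
    if violation_score >= HUMAN_REVIEW_THRESHOLD or member_reported:
        return "human_review"     # layers two and three: a reviewer decides
    return "allow"
```

Note that in this sketch a member report always reaches a human reviewer, regardless of the classifier's confidence, mirroring the member-led third layer described above.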

Our Data Science team also prioritises quantifying this process to monitor how many content violations are successfully prevented, so that we can continuously refine our processes and improve detection and prevention of violative content. 

Please also see: 

·       QRE 17.1.1 (editorial practices to provide members with trustworthy news); 
·       QRE 18.2.1 (policies and procedures to limit spread of harmful false or misleading information); 
·       QRE 21.1.1 (action taken when information is identified as misinformation); 
·       QRE 22.1.1 (features and systems related to fake and inauthentic profiles); 
·       QRE 22.2.1 (actions taken to help members identify trustworthy content); 
·       QRE 23.2.1 (actions taken to ensure integrity of the reporting and appeals process). 


Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

LinkedIn is an online professional network. On LinkedIn, the world’s professionals come together to find jobs, stay informed, learn new skills, and build productive relationships. The content that our members share becomes part of their professional identity and can be seen by their boss, colleagues, and potential business partners. Accordingly, the content on LinkedIn is professional in nature. 

To help keep LinkedIn safe, trusted, and professional, our Professional Community Policies clearly detail the range of objectionable and harmful content that is not allowed on LinkedIn. Fake accounts, misinformation, and inauthentic content are not allowed, and we take active steps to remove such content from our platform. 

LinkedIn removes “specific claims, presented as fact, that are demonstrably false or substantially misleading and likely to cause harm.” This approach applies globally and is used for purposes of content moderation and for publicly reporting figures on misinformation. Specific examples of what might constitute misinformation can be found here in our Help Center. As part of our User Agreement, our Professional Community Policies are accepted by every member when joining LinkedIn and are easily available to every member.

LinkedIn creates value and preserves trust by fostering a safe, trusted, and professional platform, while honouring members’ professional expression and speech. LinkedIn enables healthy on-platform conversations by facilitating the removal of misinformation that threatens its members’ safety. And when content doesn’t conclusively violate LinkedIn policies, LinkedIn gives the speaker the benefit of the doubt and favours speech (i.e., leaves the content up on platform). 

Additionally, as described in greater detail below, human review plays a significant role in our content moderation process, and members who post content, as well as members who report content, can appeal our content moderation decisions. 

Our content policies are clear, and we apply them equally to all members. Within our Professional Community Policies, we provide granular information and examples of what is and is not allowed on LinkedIn.

Furthermore, LinkedIn has automated defences to identify and prevent abuse, including inauthentic behaviour, such as spam, phishing and scams, duplicate accounts, fake accounts, and misinformation. Our Trust and Safety teams work every day to identify and restrict inauthentic activity. We’re regularly rolling out scalable technologies like machine learning models to keep our platform safe. 

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

The table below reports metrics concerning content LinkedIn removed from its platform as Misinformation, pursuant to the policy outlined in QRE 18.2.1 above. The metrics include: 

-       the number of pieces of content removed as Misinformation between 1 July – 31 December 2024, broken out by EEA Member State; 
-       the number of those content removals that were appealed by the content author; 
-       the number of those appeals that were granted; and
-       the median time from appeal to appeal decision for those appeals.

The metrics are assigned to EEA Member State based on the IP address of the content author.

Country The number of pieces of content removed as Misinformation between 1 July – 31 December 2024 The number of removals that were appealed by the content author The number of appeals that were granted The median time from appeal to appeal decision in hours
Austria 177 2 0 1.5
Belgium 445 3 1
Bulgaria 36 0 0
Croatia 54 3 0
Cyprus 13 1 1
Czech Republic 88 1 0
Denmark 291 2 0
Estonia 9 0 0
Finland 52 1 0
France 3,452 14 1
Germany 1,639 40 2
Greece 164 2 0
Hungary 40 1 0
Ireland 136 0 0
Italy 1,264 15 2
Latvia 7 0 0
Lithuania 24 2 0
Luxembourg 62 0 0
Malta 11 1 0
Netherlands 3,308 38 5
Poland 128 2 0
Portugal 189 5 1
Romania 151 3 0
Slovakia 8 0 0
Slovenia 8 0 0
Spain 640 6 1
Sweden 209 1 0
Iceland 6 0 0
Liechtenstein 0 0 0
Norway 99 2 0
Total EU 12,605 142 14
Total EEA 12,710 144 14

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

Microsoft maintains an internal research team — the Microsoft Threat Analysis Center (MTAC) — that conducts research on information influence operations and publishes both internal and public reports on its findings.

Microsoft also works with Princeton University on the creation of a hub for researchers to access data from social media companies to improve the identification and tracking of cyber-enabled information operations. This accelerator will be available to researchers around the world, including in Europe. 

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

LinkedIn has published a variety of articles to explain to users how our recommender systems work, including: “Mythbusting the Feed: How the Algorithm Works”; “Mythbusting the Feed: Helping our members better understand LinkedIn”; “Keeping your feed relevant and productive”; “LinkedIn Safety Series: Using AI to Protect Member Data”; “Guide: Features to Help You Control Your Feed and Conversations”; “Our approach to building transparent and explainable AI systems”; and “Suggested Posts in Feed”. During earlier reporting periods, LinkedIn collated and expanded upon existing resources to further explain the main parameters of LinkedIn recommender systems and the options provided to users to influence and control these recommender systems. 

Additionally, LinkedIn addresses automated processing and relevancy in the LinkedIn User Agreement, which includes a link to the above-referenced Help Centre article in Section 3.6, the section focused on recommendations and automated processing. During an earlier reporting period, LinkedIn launched a new setting for members to control the default for how their LinkedIn feed is presented to them. Members can now change their preferred feed view from “most relevant first” to “most recent first”. “Most relevant first” means that LinkedIn will use data from the member’s profile and LinkedIn activity data to rank feed content based on the member’s interests. “Most recent first” means that LinkedIn will not use the member’s profile and LinkedIn activity data to rank feed content and will instead show updates from the member’s network in reverse chronological order.

As reported in an earlier report, in August 2023, LinkedIn launched two new experiences in the EU. Additional detail is included below:

·       LinkedIn launched a revised and expanded experience to enable Members to change how their Feed experience is presented to them.  The choice is presented in the Feed (on desktop, mobile app, and mobile web) and it also points members to the setting referenced above where members can change the default sort of their Feed. Members can toggle between the following two choices: “most relevant first” or “most recent first.” The default sort option is “most relevant first.” If the Member toggles to “most recent first,” that choice will only persist for the current feed view on that particular device.  
·       LinkedIn also launched a new setting within a Member’s Account Preferences settings so Members can change the default sort option from “most relevant first” to “most recent first.” Changing that setting will persist across sessions and devices. Members can learn more about this experience and the setting in our Help Center.
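The difference between the two sort options above can be sketched as follows. This is a minimal illustration; the field names (`posted_at`, `relevance`) are hypothetical placeholders, not LinkedIn's actual data model.

```python
# Minimal sketch of the two feed sort modes described above.
# "posted_at" and "relevance" are hypothetical placeholder fields.
def sort_feed(updates: list[dict], mode: str = "most_relevant_first") -> list[dict]:
    if mode == "most_recent_first":
        # No profile or activity data is used for ranking;
        # updates appear in reverse chronological order.
        return sorted(updates, key=lambda u: u["posted_at"], reverse=True)
    # Default "most relevant first": rank by a relevance score
    # derived from profile and activity data.
    return sorted(updates, key=lambda u: u["relevance"], reverse=True)
```

In the real product, toggling in the feed changes the sort for the current view only, while the account-level setting persists the chosen default across sessions and devices.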

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

Members who do not wish to have their LinkedIn feed sorted by relevance can change the default feed sort via a setting. 

The table below reports: (1) the number of EEA members who used the “preferred feed view” setting between 1 July – 31 December 2024; and (2) the number of times those members used the “preferred feed view” setting between 1 July – 31 December 2024. 

The metrics are assigned to EEA Member State based on the self-reported profile location of the member. 

Country The number of EEA members who used the “preferred feed view” setting between 1 July – 31 December 2024 The number of times the members used the “preferred feed view” setting between 1 July – 31 December 2024
Austria 2,024 3,021
Belgium 3,108 4,719
Bulgaria 492 793
Croatia 502 903
Cyprus 256 391
Czech Republic 1,179 1,742
Denmark 2,495 3,760
Estonia 294 444
Finland 2,749 4,137
France 21,112 33,303
Germany 21,565 32,823
Greece 1,305 2,031
Hungary 777 1,148
Ireland 2,757 4,212
Italy 7,160 10,794
Latvia 256 423
Lithuania 387 606
Luxembourg 460 686
Malta 195 298
Netherlands 13,698 21,132
Poland 3,817 5,657
Portugal 2,781 4,241
Romania 1,357 2,269
Slovakia 368 575
Slovenia 268 389
Spain 10,006 14,665
Sweden 4,628 7,014
Iceland 41 57
Liechtenstein 24 44
Norway 1,141 1,878
Total EU 105,996 162,176
Total EEA 107,202 164,155

Commitment 20

Relevant Signatories commit to empower users with tools to assess the provenance and edit history or authenticity or accuracy of digital content.

We signed up to the following measures of this commitment

Measure 20.1 Measure 20.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 20.1

Relevant Signatories will develop technology solutions to help users check authenticity or identify the provenance or source of digital content, such as new tools or protocols or new open technical standards for content provenance (for instance, C2PA).

QRE 20.1.1

Relevant Signatories will provide details of the progress made developing provenance tools or standards, milestones reached in the implementation and any barriers to progress.

Microsoft is a founding and active member of the Coalition for Content Provenance and Authenticity (C2PA) and is currently a co-chair. 

On 15 May 2024, LinkedIn announced that, starting that day, content containing the Coalition for Content Provenance and Authenticity’s industry-leading “Content Credentials” technology (C2PA) would be automatically labelled on LinkedIn. Since LinkedIn began ramping Content Credentials, users have begun to see the “Cr” icon on images and videos that contain C2PA metadata. By clicking on the icon, users can trace the origin of AI-created media, including the source and history of the content and whether it was created or edited by AI. The first place users will see the Content Credentials icon is their LinkedIn feed, and LinkedIn is working to expand coverage to additional surfaces, including ads. By providing a verifiable trail of where content originates and whether it was edited, C2PA helps keep digital information reliable, protects against unauthorised use, and creates a transparent, secure digital environment for creators, publishers, and members. LinkedIn has also published an article in its Help Center that provides more information on C2PA and Content Credentials.  

Measure 20.2

Relevant Signatories will take steps to join/support global initiatives and standards bodies (for instance, C2PA) focused on the development of provenance tools.

QRE 20.2.1

Relevant Signatories will provide details of global initiatives and standards bodies focused on the development of provenance tools (for instance, C2PA) that signatories have joined, or the support given to relevant organisations, providing links to organisation websites where possible.

Microsoft is a founding member of the Coalition for Content Provenance and Authenticity (C2PA). The C2PA Coalition aims to address the prevalence of disinformation, misinformation, and online content fraud through developing technical standards for certifying the source and history or provenance of media content.

As detailed in the response to QRE 20.1.1., LinkedIn has also adopted the C2PA’s industry-leading “Content Credentials” technology to include metadata labelling, including data about whether content is created using AI, on content containing the C2PA technology.

Further information on C2PA is available on its website here.

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 21.1

Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.

QRE 21.1.1

Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.

As detailed above in QRE 1.1.1 and QRE 17.1.1, LinkedIn prohibits misinformation and disinformation on its platform, whether in the form of organic content or in the form of advertising content. LinkedIn’s Professional Community Policies, which apply to all content on LinkedIn’s platform, expressly prohibit the sharing of false or misleading content, including misinformation and disinformation. 

Where content is identified as misinformation (whether as a result of a report or through proactive detection), we do not label it; rather, we remove it from LinkedIn. This includes situations where LinkedIn personnel leverage the conclusions of fact-checkers to determine whether the content at issue violates LinkedIn’s Professional Community Policies.

Please also see our response to QRE 17.1.1 which details how our internal team of experienced news editors provides trustworthy news about current events from verified sources and other steps we take to tackle disinformation. 

SLI 21.1.1

Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.

As detailed in our response to QRE 21.1.1, LinkedIn removes, rather than labels, content that violates our policy on false and misleading content.

Accordingly, the metrics for this SLI for the period 1 July – 31 December 2024 are zero. 

SLI 21.1.2

When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.

As detailed in response to QRE 21.1.1, LinkedIn removes, rather than labels, content that violates our policy on false and misleading content. Accordingly, the metrics for this SLI for the period 1 July – 31 December 2024 are zero.

Measure 21.2

Relevant Signatories will, in light of scientific evidence and the specificities of their services, and of user privacy preferences, undertake and/or support research and testing on warnings or updates targeted to users that have interacted with content that was later actioned upon for violation of policies mentioned in this section. They will disclose and discuss findings within the permanent Task-force in view of identifying relevant follow up actions.

QRE 21.2.1

Relevant Signatories will report on the research or testing efforts that they supported and undertook as part of this commitment and on the findings of research or testing undertaken as part of this commitment. Wherever possible, they will make their findings available to the general public.

To date, LinkedIn has not undertaken or supported separate research and testing on the potential efficacy of warnings or updates targeted at users who have interacted with content that was later actioned upon for violation of our Professional Community Policies. 

Given that LinkedIn currently removes, rather than labels, content that violates our policy on false and misleading content, LinkedIn may be unable to provide meaningful context to users as to the specific content that they had viewed which was later actioned. 

To the extent others have conducted such research and/or testing, LinkedIn is happy to discuss findings within the relevant Task-force Subgroups in view of identifying relevant follow-up actions.

Measure 21.3

Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.

QRE 21.3.1

Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.

As detailed in response to QRE 21.1.1, LinkedIn removes, rather than labels, content that violates our policy on false and misleading content.

Commitment 22

Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.

We signed up to the following measures of this commitment

Measure 22.1 Measure 22.2 Measure 22.3 Measure 22.7

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 22.1

Relevant Signatories will make it possible for users of their services to access indicators of trustworthiness (such as trust marks focused on the integrity of the source and the methodology behind such indicators) developed by independent third-parties, in collaboration with the news media, including associations of journalists and media freedom organisations, as well as fact-checkers and other relevant entities, that can support users in making informed choices.

QRE 22.1.1

Relevant Signatories will report on how they enable users of their services to benefit from such indicators or trust marks.

In October 2022, LinkedIn began offering an “About this profile” feature that shows members when a profile was created, when the member’s contact information and profile photo were last updated, and whether the member has verifications associated with their profile (like a workplace or identity verification). LinkedIn also has rolled out a range of free verifications, which allow our members to verify certain information about themselves, like their association with a particular company or educational institution, or their identity through one of LinkedIn’s verification partners (e.g., in the EEA, LinkedIn’s identity verification partner is Persona). 

The above features can be strong user empowerment tools. Specifically, they can provide our members valuable authenticity signals to help them make more informed decisions about what content and individuals they engage with online.

SLI 22.1.1

Relevant Signatories will report on Member State level percentage of users that have enabled the trustworthiness indicator.

The table below reports metrics concerning EEA member use of the “About this profile” feature described above in QRE 22.1.1. The metrics include: (1) the number of members who used the “About this profile” feature between 1 July – 31 December 2024; and (2) the aggregate number of times those members used the feature between 1 July – 31 December 2024. 

The metrics are assigned to EEA Member State based on the self-reported profile location of the member. 

Country The number of members who used the “About this profile” feature between 1 July – 31 December 2024 The aggregate number of times those members used the feature between 1 July – 31 December 2024
Austria 201,047 497,213
Belgium 416,378 1,008,087
Bulgaria 67,997 173,071
Croatia 49,369 107,802
Cyprus 34,730 102,045
Czech Republic 153,427 381,771
Denmark 331,225 805,215
Estonia 29,112 79,910
Finland 146,719 330,688
France 2,773,641 7,030,529
Germany 1,754,780 4,557,455
Greece 162,018 420,517
Hungary 98,215 228,240
Ireland 242,133 620,914
Italy 1,186,530 2,655,054
Latvia 31,247 74,767
Lithuania 57,795 162,218
Luxembourg 47,455 131,560
Malta 24,417 62,099
Netherlands 1,236,806 3,065,899
Poland 508,088 1,319,937
Portugal 326,189 770,593
Romania 188,243 461,845
Slovakia 46,635 114,738
Slovenia 28,765 63,266
Spain 1,253,059 3,172,615
Sweden 437,507 1,037,882
Iceland 7,395 15,305
Liechtenstein 2,915 7,234
Norway 168,496 369,054
Total EU 11,833,527 29,435,930
Total EEA 12,012,333 29,827,523

Measure 22.2

Relevant Signatories will give users the option of having signals relating to the trustworthiness of media sources into the recommender systems or feed such signals into their recommender systems.

QRE 22.2.1

Relevant Signatories will report on whether and, if relevant, how they feed signals related to the trustworthiness of media sources into their recommender systems, and outline the rationale for their approach.

LinkedIn does not prioritise any news sources in our feed, but in crisis situations (e.g., Ukraine), we will use our manually curated Trusted Storylines to point members to reputable sources of information.

LinkedIn’s focus, in addition to pointing members to trustworthy content, has been to prohibit members from sharing harmful content on the platform. Because LinkedIn is a real-identity online professional networking platform, content posted by members is seen by that member’s colleagues, employer, and potential business partners. Consequently, members do not tend to post reputationally harmful content like misinformation, and such content does not gain traction on LinkedIn for the same reasons. Nonetheless, where misinformation is removed from LinkedIn, it is ineligible to be included in our recommender systems.

Measure 22.3

Relevant Signatories will make details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

QRE 22.3.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

In addition to the LinkedIn User Agreement, LinkedIn has established and published (a) the LinkedIn Professional Community Policies to set out and elaborate on LinkedIn’s requirements and expectations for its member base; and (b) help center content that collates and expands upon existing resources to further explain the main parameters of LinkedIn’s recommender systems and the options provided to users to influence and control these recommender systems.

The Professional Community Policies and help center content are published on our platform and available in all languages that LinkedIn currently supports, including the following official EU and EEA languages: English, Czech, Danish, Dutch, Finnish, French, German, Greek, Hungarian, Italian, Norwegian, Polish, Portuguese, Romanian, Spanish and Swedish. Additionally, we have extended this language coverage in accordance with the Digital Services Act. 

LinkedIn seeks to reflect the best version of professional life through a community where we treat each other with respect and help one another succeed. 

The Professional Community Policies have three main elements: (1) Be Safe, (2) Be Trustworthy and (3) Be Professional. Additionally, the Professional Community Policies set out how members can report content that may violate our policies, and explain that a violation of our Professional Community Policies can result in action taken against that member’s account or content.
(1)    Be Safe: do not post harassing content; do not threaten, incite, or promote violence; do not share material depicting the exploitation of children; do not promote, sell or attempt to purchase illegal or dangerous goods or services; do not share content promoting dangerous organisations or individuals.
(2)    Be Trustworthy: do not share false or misleading content; do not create a fake profile or falsify information about yourself; do not scam, defraud, or deceive others.
(3)    Be Professional: do not be hateful, do not engage in sexual innuendos or unwanted advances; do not share harmful or shocking material; do not spam members or the platform.

Measure 22.7

Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.

QRE 22.7.1

Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.

As the world around us changes, LinkedIn continues to evolve and adapt our systems and practices for combating misinformation and other inauthentic behaviour on our platform, including to respond to the unique challenges presented by world events. 

LinkedIn’s Professional Community Policies, which all members agree to abide by on joining LinkedIn, prohibit misinformation. As described in more detail in our response to QRE 18.1.1, LinkedIn uses a combination of automated and manual activity to keep content that violates our policies off of LinkedIn.

LinkedIn also aims to educate its members about civic discourse, electoral processes, and public security through its global team of news editors. These editors provide trustworthy and authoritative content to LinkedIn’s member-base, and its content moderation teams closely monitor associated platform conversations in a number of languages. 

In addition to these broader measures, LinkedIn has taken special care to counter low-authority information in relation to the Russian invasion of Ukraine, the Israel-Hamas conflict and the European elections, as detailed in the Crisis Reporting appendices.

For example, during pre-election cycles, LinkedIn relies on trusted and reputable publisher sources for featured shares, focusing on the policy impact on businesses and professionals around the EU. LinkedIn also curates links to topical landing pages from trusted publishers to provide members with easy and reliable entry points to more detailed coverage. LinkedIn does not compete with trusted publishers for speed or depth of coverage, but instead aims to connect their existing coverage to LinkedIn members and their needs. During important events in European elections, LinkedIn’s editorial team provides manually curated and localised storylines.

SLI 22.7.1

Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).

LinkedIn has no applicable metrics to report during this reporting period.

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

If LinkedIn users locate content they believe violates our Professional Community Policies, we encourage them to report it using the in-product reporting mechanism, represented by the three dots in the upper right-hand corner of the content itself on LinkedIn.

Misinformation is specifically called out as one of the reporting options.
 
The reporting feature is available through, and is largely identical across, LinkedIn’s website and mobile app, although reporting reasons and their visual presentation may vary slightly for certain types of content. In most instances, the reporting process is located just one click away from the content being reported and, depending on whether content is reported in the LinkedIn app or on desktop, takes four or five clicks to complete.
 
Reported content generally is reviewed by trained content reviewers. In addition, LinkedIn uses automation to flag potentially violative content to our content moderation teams. If reported or flagged content violates the Professional Community Policies, it will be actioned in accordance with our policies. 
 
When members use the above reporting process, they will receive an email acknowledging receipt of the report. The email includes a link to the report status page, which we update when we make a decision, including providing the opportunity to appeal. Logged-out users receive updates on their report by email and are also provided with the opportunity to appeal.

Members also receive an email notifying them in the event their content is actioned in accordance with our policies. The email includes a link to a notice page with additional details and resources. If the member believes that their content complies with our Professional Community Policies, they can ask us to revisit our decision by submitting an appeal via the link on the notice page.
 
Further, LinkedIn has a dedicated process for those entities that have been awarded Trusted Flagger status in accordance with Article 22 of the Digital Services Act.

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

As noted in our response to QRE 23.1.1, content flagged as misinformation (whether reported or automatically detected) is removed from LinkedIn. LinkedIn has a quality assurance team dedicated to ensuring the quality of our content review processes and decisions. For example, the quality assurance team performs routine quality checks on content moderation decisions that have previously been made. This also allows us to improve our processes and further strengthen our platform as a trusted source of information.

Furthermore, as a real-identity professional network, LinkedIn acts vigilantly to maintain the integrity of all accounts and to ward off bot and false account activity. LinkedIn enforces the policies in its User Agreement prohibiting the use of “bots or other unauthorized automated methods to access the Services, add or download contacts, send or redirect messages, create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement” through:

·       Maintaining a dedicated Anti-Abuse team to research emerging trends and key risks and develop tools to address them 
·       Using AI to detect inauthentic activity and communities of fake accounts  
·       Using automated systems to detect and block automated activity  
·       Imposing limits on certain categories of activity commonly engaged in by bad actors  
·       Conducting manual investigation and restriction of accounts engaged in automated activity  
·       Maintaining 24/7 escalation paths to address any emerging issues. 

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

When a post, comment, reply, or article is reported and found to go against our Professional Community Policies, we take appropriate action to remove it and/or restrict accounts, depending on the severity of the violation.

The author whose content we’ve actioned or account we’ve restricted will generally be notified when we take action. Notices are typically sent by email and contain a link to a notice page containing certain additional information (e.g., about the content at issue, the policy violated, the action LinkedIn has taken, redress info and, in most instances, a link to allow the individual to appeal LinkedIn's decision). If the author believes LinkedIn has made a mistake in actioning their content or restricting their account, the member can ask LinkedIn to take a second look by clicking the link to submit an appeal. In order to submit the appeal, the member must confirm that they have read the relevant LinkedIn policy (a link is provided to the relevant policy, for example, LinkedIn’s policy on false and misleading information) and confirm that having reviewed the content at issue, they believe it complies with the policy. LinkedIn reviews those appeals and notifies the member of its appeal decision. If the appeal is successful, we put the content back up on LinkedIn.

Appeals made by members are treated the same regardless of whether they use LinkedIn’s premium services. 

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

The table below reports metrics concerning content LinkedIn removed from its platform as Misinformation, pursuant to the policy outlined in QRE 18.2.1. The metrics include: 

-          (1) the number of pieces of content removed as Misinformation between 1 July – 31 December 2024, broken out by EEA Member State; 
-          (2) the number of those content removals that were appealed by the content author; 
-          (3) the number of those appeals that were granted; and
-          (4) the median time from appeal to appeal decision for those appeals.

The metrics are assigned to EEA Member State based on the self-reported profile location of the content author.

Country Nr of enforcement actions Nr of actions appealed Nr of appeals granted Median time from appeal to appeal decision
Austria 177 0 0
Belgium 445 3 1
Bulgaria 36 0 0
Croatia 54 3 0
Cyprus 13 1 1
Czech Republic 88 1 0
Denmark 291 2 0
Estonia 9 0 0
Finland 52 1 0
France 3,452 14 1
Germany 1,639 40 2
Greece 164 2 0
Hungary 40 1 0
Ireland 136 0 0
Italy 1,264 15 2
Latvia 7 0 0
Lithuania 24 2 0
Luxembourg 62 0 0
Malta 11 1 0
Netherlands 3,308 38 5
Poland 128 2 0
Portugal 189 5 1
Romania 151 3 0
Slovakia 8 0 0
Slovenia 8 0 0
Spain 640 6 1
Sweden 209 1 0
Iceland 6 0 0
Liechtenstein 0 0 0
Norway 99 2 0
Total EU 12,605 142 14 1.5 hours
Total EEA 12,710 144 14

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Beta program to allow bona fide researchers to access public data for research on the impact of misinformation and other online harms impacting the Union.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

LinkedIn supports the aims of the research community and regularly provides information and data to the research community in a variety of ways. 

To date, we have made non-personal, aggregated data publicly available (data on gender equity in the workplace, data on green skills and jobs, data on industry and job skills, and data on engagement with labor markets and employment trends). Our goal with this action is to enable researchers to understand the rapidly changing world of work through access to and use of LinkedIn data. Because much of our data is publicly available, the extent to which such data has been used for disinformation-related research purposes cannot easily be ascertained.

Additionally, LinkedIn is expanding its API access to public data for disinformation-related research purposes. Information about the LinkedIn APIs is available to the public, and researcher access is provided here.

Finally, Microsoft is also a leader in research in Responsible AI and provides a range of tools and resources dedicated to promoting responsible usage of artificial intelligence to allow practitioners and researchers to maximize the benefits of AI systems while mitigating harms. For example, as part of its Responsible AI Toolbox, Microsoft provides a Responsible AI Mitigations Library, which enables practitioners to more easily experiment with different techniques for addressing failure (which could include inaccurate outputs), and the Responsible AI Tracker, which uses visualizations to show the effectiveness of the different techniques for more informed decision-making.  These tools are available to the public and research community for free. 

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

LinkedIn has published information on its [Beta] Researcher Access Program, which provides researcher access to public data.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

LinkedIn will publish information as it continues to build its data research program pertinent to these commitments. 

Nr of users of public access
0 applications were approved under our Beta Art. 40 process in the period covered by this report. Note: an unknown number of researchers use our broadly available services to conduct research.

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected official, news outlets and government accounts subject to an application process which is not overly cumbersome.

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

See QRE 26.1.1

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

See QRE 26.1.1 

QRE 26.2.3

Relevant Signatories will describe the application process in place to in order to gain the access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Access to Data: For access to LinkedIn APIs, a researcher needs to submit an application, meet the criteria for approval, and provide additional information necessary for us to assess their project. APIs including non-public data may be made available for research purposes based on special requests and the ability of the researcher to protect personal data pursuant to the GDPR and relevant intellectual property rights. Upon approval, the researcher’s application will be provisioned with the relevant APIs. In addition, access is available to anyone who visits the relevant LinkedIn site.

For access to other data, researchers may be provided with datasets and information as part of research inquiries and research partnerships with LinkedIn. Researchers may contact LinkedIn to discuss research opportunities. 

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

LinkedIn has provided metrics relating to the number of applications to its Beta Researcher Access Program in the period covered by this report. Consideration of a number of these applications remains ongoing. 

No of applications received No of applications rejected No of applications accepted
Data 52 47 0

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

LinkedIn provides a comprehensive Help Center for assistance with such matters. LinkedIn endeavors to restore access and address any issues expeditiously.

Commitment 27

Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.

We signed up to the following measures of this commitment

Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Microsoft was an active participant in the EDMO Working Group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 27.1

Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.

QRE 27.1.1

Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.

Microsoft has been a member of the Working Group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms. The Working Group started its work on 10 May 2023 under the coordination of the European Digital Media Observatory (EDMO). Its main task has been to develop an organizational model for a new independent intermediary body that will facilitate data sharing between digital platforms and independent, external researchers. 

Measure 27.2

Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.

QRE 27.2.1

Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.

As the development of the independent third-party body has not yet been finalized, there was no funding allocated to the implementation of Measure 27.2 during the period covered by this report. 

Measure 27.3

Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.

QRE 27.3.1

Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.

As the development of the independent third-party body has not yet been finalized, no data was shared with this body for the purposes of research as outlined under Measure 27.3 during the period covered by this report. 

SLI 27.3.1

Relevant Signatories will disclose how many of the research projects vetted by the independent third-party body they have initiated cooperation with or have otherwise provided access to the data they requested.

As the development of the independent third-party body has not yet been finalized, no research projects were vetted by this body, as set out under Measure 27.3, during the period covered by this report.

Measure 27.4

Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.

QRE 27.4.1

Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.

Microsoft is a leader in research in Responsible AI and provides a range of tools and resources dedicated to promoting responsible usage of artificial intelligence to allow practitioners and researchers to maximize the benefits of AI systems while mitigating harms. For example, as part of its Responsible AI Toolbox, Microsoft provides a mitigations library, which enables practitioners to experiment with different techniques to address the failure of AI systems (which could include the production of inaccurate outputs). We also provide the Responsible AI tracker, which uses visualizations to show the effectiveness of the different techniques for more informed decision-making. These tools are available to the public and research community for free.

These are just a few examples of the partnerships Microsoft has forged with third parties to combat the creation and dissemination of deceptive AI-generated content targeting our elections. Microsoft teams regularly engage with external stakeholders on these issues to inform our internal policies, practices, and standards, to improve our products, and to understand emerging threats.

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Published details on our [Beta] Researcher Access Program.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Ongoing review of researcher feedback and needs may result in additional measures and resources being made available.
 
Update our program in light of regulatory guidance that may be provided pursuant to Art. 40 of the DSA, including the upcoming delegated act on access to online platform data for vetted researchers.   

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

LinkedIn facilitates research, engages with the research community, and provides data to the research community in a variety of ways, as described below and in QRE 26.1-2.

Historically, LinkedIn has worked with external stakeholders, including research institutes and academia, to understand the rapidly changing world of work through access to and use of LinkedIn data. Additionally, LinkedIn employs academics who combine industry knowledge with academic expertise to solve complex business problems spanning all areas of engineering, with an initial focus on artificial intelligence (including work related to large recommender systems and deep learning algorithms) and data science. 

While the foregoing work remains critical to our mission, we are working to expand access to data for research purposes consistent with the goals of the CoP as well as the applicable requirements of the DSA and look forward to providing further information on this in future reports. 

Additionally, LinkedIn regularly explores potential partnerships with non-governmental and research institutions and is actively in discussions with one research institution to conduct a data and recommender system pilot project leveraging LinkedIn data. LinkedIn hopes to publicly announce this partnership in its next report. 

Finally, LinkedIn has in place the needed teams and tools to make data available to researchers in a variety of ways, including via Excel or XML files, GitHub repositories, sandboxed laptops, and APIs. 

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

Please see QRE 26.1.1 and QRE 26.2.3. 

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

We look forward to partnering with other relevant signatories on this project and will provide further reporting as the annual consultation is established. 

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

There were no relevant developments during the period covered by this report. 

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.

LinkedIn has entered into a fact-checking arrangement with an external, independent global news agency. This relationship helps our internal content reviewers determine whether user-generated content violates LinkedIn’s policy on false and misleading content.

QRE 30.1.2

Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).

Reuters, a global news organization with 2,500 journalists in about 200 locations worldwide and one of the largest news agencies in the world.

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

LinkedIn has implemented internal processes that empower its hundreds of global internal content reviewers to obtain a fact-check from its external fact-checking partner. Fact-checker conclusions are reviewed by internal content reviewers to determine whether the content at issue violates LinkedIn’s policy on false and misleading content; if so, the content is removed from the platform.

SLI 30.1.1

Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.

LinkedIn receives fact-checking services for content in English, Spanish, Portuguese, French, German, Italian, Croatian, Czech, Danish, Dutch, Finnish, Greek, Hungarian, Polish, Swedish, Bulgarian, Latvian, Lithuanian, Maltese, Romanian and Slovak.

LinkedIn sends content to external fact-checkers regardless of the location of the member posting the content, the viewers of the content, or the topic at issue. Content that violates LinkedIn’s policy on false and misleading content is removed.

Nr of agreements with fact-checking organisations
EU: 1

Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

LinkedIn has engaged in arm’s-length negotiations with a large global news organisation that follows the highest ethical standards in news reporting, including those related to accuracy, independence, integrity, and freedom from bias. Our agreements give the fact-checkers complete discretion in reaching their fact-checking conclusions, and LinkedIn personnel leverage these conclusions to determine whether the content at issue violates LinkedIn’s policy on false and misleading content; if so, the content is removed from the platform.  

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

LinkedIn meets with its fact-checking partner to discuss process improvements.

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

This QRE is not relevant or pertinent as LinkedIn is not a fact-checking organisation.

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

LinkedIn meets with its fact-checking partner to discuss process improvements.

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

There were no relevant developments during the period covered by this report.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable 

Measure 31.2

Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.

QRE 31.2.1

Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.

LinkedIn leverages its fact-checking partner to review user-generated content that may violate its Professional Community Policies, which prohibit misinformation. Content that violates LinkedIn’s Professional Community Policies is removed from the platform.

SLI 31.1.1 (for Measures 31.1 and 31.2)

Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.

The figure for the number of content pieces reviewed by fact-checkers represents the number of pieces sent to our external fact-checkers during the period 1 July – 31 December 2024. See also SLI 21.1.2.

Total Global:
Nr of fact-checked articles published: 0
Reach of fact-checked articles: N/A
Nr of content pieces reviewed by fact-checkers: 106

Measure 31.3

Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.

QRE 31.3.1

Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.

There were no discussions in the relevant Subgroup of the Permanent Task-force on the development of the repository of fact-checking content during the period covered by this report. 

Measure 31.4

Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.

QRE 31.4.1

Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.

There were no discussions in the relevant Subgroup of the Permanent Task-force on the development of the repository of fact-checking content during the period covered by this report. 

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.1 Measure 32.2 Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Not applicable

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Not applicable

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

LinkedIn currently uses the Code’s Task-force, in particular the Crisis Response and Empowerment of Fact-checkers subgroups, as a channel of communication with the fact-checking community represented by the signatories to the Code.