LinkedIn

Report March 2025

Submitted
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No

If yes, list these implementation measures here
Not applicable 
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
LinkedIn’s User Agreement (in particular section 8, LinkedIn “Dos and Don’ts”) and our Professional Community Policies - which every member accepts when joining LinkedIn - detail the manipulative behaviours and practices that are prohibited on our platform. Fake accounts, misinformation, and inauthentic content are not allowed, and we take active steps to remove them from our platform.

LinkedIn provides additional specific examples of false and misleading content that violate this policy in a Help Center article on False or Misleading Content.

QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
LinkedIn works with numerous partners to share information and tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors.

LinkedIn maintains an internal Trust and Safety team of threat investigators and intelligence analysts who address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modeling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, and regularly shares threat information with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and of how they evolve their TTPs, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers as needed, and coordinated inauthentic behaviours (CIB) are also removed by our Trust and Safety team.

LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These take various forms, such as: 1) regular discussions amongst industry peers about high-level trends and campaigns; and 2) one-on-one engagement with individual peer companies to discuss TTPs and IOCs. This exchange of information deepens our understanding of how sophisticated and well-funded threat actors evolve their TTPs, which assists us in identifying and removing them.

LinkedIn stands ready to receive and investigate leads from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we regularly release information about policy-violating content on our platform in publicly available transparency reports and blog posts, including, for example, How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe.

The LinkedIn Community Report also describes actions we take on content that violates our Professional Community Policies and User Agreement. It is published twice per year and covers the global detection of fake accounts, spam and scams, content violations and copyright infringements. The most recent reporting period covered 1 January to 30 June 2024. LinkedIn Ireland Unlimited Company – the provider of LinkedIn’s services in the EU – has been designated by the European Commission as a very large online platform and, therefore, pursuant to its obligations under Article 42 of the Digital Services Act, publishes transparency reports covering the EU every six months, with the most recent report published in February 2025.