LinkedIn works with numerous partners to facilitate the flow of information to tackle purveyors of disinformation, including disinformation spread by state-sponsored and institutional actors.
LinkedIn maintains an internal Trust and Safety team composed of threat investigators and intelligence analysts to address disinformation. This team works with peers and other stakeholders, including our Artificial Intelligence modeling team, to identify and remove nation-state actors and coordinated inauthentic campaigns. LinkedIn conducts investigations into election-related influence operations and nation-state targeting, and regularly shares threat information with industry peers and law enforcement. LinkedIn works with peer companies and other stakeholders to receive and share indicators related to fake accounts created by state-sponsored actors, such as confirmed Tactics, Techniques, and Procedures (TTPs) and Indicators of Compromise (IOCs). This exchange of information leads to a better understanding of the incentives of sophisticated and well-funded threat actors and how they evolve their TTPs to achieve their goals, which assists LinkedIn in identifying and removing them. Any associated disinformation content is verified by our internal or external fact-checkers as needed, and coordinated inauthentic behaviours (CIBs) are also removed by our Trust and Safety team.
LinkedIn, along with its parent company, Microsoft, is heavily involved in threat exchanges. These threat exchanges take various forms, such as: 1) regular discussions amongst industry peers about high-level trends and campaigns; and 2) one-on-one engagements with individual peer companies to discuss TTPs and IOCs.
LinkedIn stands ready to investigate any leads received from peers and other external stakeholders. In addition to one-on-one engagement with peers, we also consume intelligence from vendors and investigate any TTPs and IOCs made available in peer disclosures. In turn, we regularly release information about policy-violating content on our platform in publicly available transparency reports and blog posts, including, for example,
How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe. The LinkedIn Community Report also describes actions we take on content that violates our Professional Community Policies and User Agreement. It is published twice per year and covers the global detection of fake accounts, spam and scams, content violations and copyright infringements. The most recent reporting period covered 1 January to 30 June 2024. LinkedIn Ireland Unlimited Company – the provider of LinkedIn’s services in the EU – has been designated by the European Commission as a very large online platform and, therefore, pursuant to its obligations under Article 42 of the Digital Services Act, publishes Transparency Reports covering the EU every six months, with the most recent report published in February 2025.