LinkedIn

Report March 2026

Submitted
LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) – the provider of LinkedIn’s services in the European Union (EU) – welcomes the opportunity to file this report on our compliance with the commitments and measures of the Code of Conduct on Disinformation (the “Code”) that we subscribed to in our Subscription Document dated 15 January 2025. This report covers the period from 1 July to 31 December 2025 (the “Reporting Period”).  

LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Its mission is to connect the world’s professionals to make them more productive and successful. LinkedIn is a networking tool that enables members to establish their professional identities online, connect with other professionals, and build relationships for the purpose of collaborating, learning, and staying informed about industry information and trends. As such, the design and function of the platform are central to its overall risk profile, which they shape in a few key ways: 

  • LinkedIn is a real-identity platform, where members must use their real or preferred professional names, and the content they post is visible, for example, to their colleagues, employers, potential future employers, and business partners. Given this audience, members largely limit their activity to professional areas of interest and expect the content they see to be professional in nature. 
  • LinkedIn operates under standards of professionalism, which are reflected both in content policies and enforcement, as well as in content prioritisation and amplification. LinkedIn’s policies bolster a safe, trusted, and professional platform, and LinkedIn strictly enforces them. LinkedIn strives to broadly distribute high-quality content that advances professional conversations on the platform. 
  • LinkedIn’s Digital Safety function helps ensure a “safety by design” approach throughout the product development lifecycle by partnering with the relevant product and engineering organisations to conduct continuous assessments of risks and address threats prior to product launch. 
  • LinkedIn services are tailored toward professionals and businesses, and LinkedIn’s Professional Community Policies clearly detail what is expected of every member as they post, share and comment on the platform, including that disinformation is not permitted on LinkedIn. 

LinkedIn is committed to keeping its platform and services safe, trusted, and professional and to providing transparency to its members, the public, and to regulators. Members come to LinkedIn to find a job, stay informed, connect with other professionals, and learn new skills. As a real-identity online networking service for professionals to connect and interact with other professionals, LinkedIn has a unique risk profile when compared with many social media platforms. With this in mind, LinkedIn continues to invest in numerous Trust domains to proactively enhance the safety, security, privacy, and quality of the LinkedIn user experience. Further, as confirmed by LinkedIn’s Systemic Risk Assessments conducted to date, the residual risks most relevant to misinformation and disinformation (i.e. those relating to Civic Discourse and Electoral Process, Public Health and Public Security) are categorised as “Low” or “Minimal”. 

Unless stated otherwise, data provided under this report covers a reporting period of 1 July 2025 to 31 December 2025 (“Reporting Period”).  

Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable 
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
LinkedIn will continue to assess its policies and services and to update them as warranted. 
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
LinkedIn is an online professional network. On LinkedIn, the world’s professionals come together to find jobs, stay informed, learn new skills, and build productive relationships. The content that our members share becomes part of their professional identity and can be seen by their boss, colleagues, and potential business partners. Accordingly, the content on LinkedIn is professional in nature.  
 
To help keep LinkedIn safe, trusted, and professional, our Professional Community Policies clearly detail the range of objectionable and harmful content that is not allowed on LinkedIn. Fake accounts, misinformation, and inauthentic content are not allowed, and we take active steps to remove them from our platform.  
 
LinkedIn removes “specific claims, presented as fact, that are demonstrably false or substantially misleading and likely to cause harm.” This approach applies globally and is used for purposes of content moderation and for publicly reporting figures on misinformation. Specific examples of what might constitute misinformation can be found here in our Help Center. As part of our User Agreement, our Professional Community Policies are accepted by every member when joining LinkedIn and are easily available to every member. 
 
LinkedIn creates value and preserves trust by fostering a safe, trusted, and professional platform, while honouring members’ professional expression and speech. LinkedIn enables healthy on-platform conversations by facilitating the removal of misinformation that threatens its members’ safety. And when content doesn’t conclusively violate LinkedIn policies, LinkedIn gives the speaker the benefit of the doubt and favours speech (i.e., leaves the content up on platform).  
 
Additionally, members who post content and those who report content can appeal our content moderation decisions.  
 
Our content policies are clear, and we apply them equally to all members. Within our Professional Community Policies we provide granular information and examples of what is and what is not allowed on LinkedIn. 
 
Furthermore, LinkedIn has automated defences to identify and prevent abuse, including inauthentic behaviour, such as spam, phishing and scams, duplicate accounts, fake accounts, and misinformation. Our Trust and Safety teams work every day to identify and restrict inauthentic activity. We’re regularly rolling out scalable technologies like machine learning models to keep our platform safe.  
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
Methodology of data measurement: 
The table below reports metrics concerning content LinkedIn removed from its platform as Misinformation, pursuant to the policy outlined in QRE 18.2.1 above. The metrics include:  

  • the number of pieces of content removed as Misinformation between 1 July – 31 December 2025, broken out by EEA Member State;  
  • the number of those content removals that were appealed by the content author;  
  • the number of those appeals that were granted; 
  • the median time from appeal submission to appeal decision for those appeals. 

The metrics are assigned to an EEA Member State based on the IP address of the content author. 
Member State       Content removed as Misinformation (1 July – 31 December 2025)    Removals appealed by the content author    Appeals granted    Median time to appeal decision (hours)
Austria                         85
Belgium                        132
Bulgaria
Croatia
Cyprus
Czech Republic                  14
Denmark                        247                   54                 26
Estonia
Finland                         10
France                         634                   31                 13
Germany                        300                   21
Greece                          46
Hungary                         15
Ireland                        110                   11
Italy                          421                   27                 14
Latvia
Lithuania
Luxembourg                      14
Malta                           20
Netherlands                    684                   36                 19
Poland                          43
Portugal                        73
Romania                         29
Slovakia
Slovenia                        12
Spain                          179
Sweden                          85
Iceland
Liechtenstein
Norway                          51
Total EU                     3,193                  220                106                 12
Total EEA                    3,254                  228                110                 12