LinkedIn

Report September 2025

Submitted
LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) – the provider of LinkedIn’s services in the European Union (EU) – welcomes the opportunity to file this report on our compliance with the commitments and measures of the strengthened 2022 EU Code of Practice[1] on Disinformation that we subscribed to in our Subscription Document dated 15 January 2025. This report covers the period from 1 January to 30 June 2025 (the “Reporting Period”). 

LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Its mission is to connect the world’s professionals to make them more productive and successful. LinkedIn is a networking tool that enables members to establish their professional identities online, connect with other professionals, and build relationships for the purpose of collaborating, learning, and staying informed about industry information and trends. As such, the platform’s design and function are central to its overall risk profile, shaping it in a few key ways:

  • LinkedIn is a real-identity platform, where members must use their real or preferred professional names, and the content they post is visible, for example, to their colleagues, employers, potential future employers, and business partners. Given this audience, members generally limit their activity to professional areas of interest and expect the content they see to be professional in nature.
  • LinkedIn operates under standards of professionalism, which are reflected both in content policies and enforcement and in content prioritisation and amplification. LinkedIn’s policies bolster a safe, trusted, and professional platform, and LinkedIn strictly enforces them. LinkedIn strives to broadly distribute high-quality content that advances professional conversations on the platform.
  • LinkedIn services are tailored toward professionals and businesses, and LinkedIn’s Professional Community Policies clearly detail what is expected of every member as they post, share, and comment on the platform, including that disinformation is not permitted on LinkedIn.

LinkedIn is committed to keeping its platform and services safe, trusted, and professional and to providing transparency to its members, the public, and to regulators. Members come to LinkedIn to find a job, stay informed, connect with other professionals, and learn new skills. As a real-identity online networking service for professionals to connect and interact with other professionals, LinkedIn has a unique risk profile when compared with many social media platforms. With this in mind, LinkedIn invests heavily in numerous Trust and Safety domains to proactively enhance the safety, security, privacy, and quality of the LinkedIn user experience. Further, as confirmed by LinkedIn’s Systemic Risk Assessments conducted to date, the residual risks most relevant to misinformation and disinformation (i.e. those relating to Civic Discourse and Electoral Process, Public Health and Public Security) are categorised as “Low.”

LinkedIn Ireland supports the objectives of the European Code of Practice on Disinformation (the “Code”) and we are committed to actively working with Signatories and the European Commission in the context of this Code to defend against disinformation on the LinkedIn service.

Unless stated otherwise, data provided under this report covers a reporting period of 1 January 2025 to 30 June 2025 (“Reporting Period”). 

[1] We have referred to the code as the Code of Practice on Disinformation because this report covers the period before the conversion to a code of conduct took effect.


Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment:
Measure 18.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
LinkedIn will continue to assess its policies and services and to update them as warranted.
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
LinkedIn is an online professional network. On LinkedIn, the world’s professionals come together to find jobs, stay informed, learn new skills, and build productive relationships. The content that our members share becomes part of their professional identity and can be seen by their boss, colleagues, and potential business partners. Accordingly, the content on LinkedIn is professional in nature. 

To help keep LinkedIn safe, trusted, and professional, our Professional Community Policies clearly detail the range of objectionable and harmful content that is not allowed on LinkedIn. Fake accounts, misinformation, and inauthentic content are not allowed, and we take active steps to remove them from our platform. 

LinkedIn removes “specific claims, presented as fact, that are demonstrably false or substantially misleading and likely to cause harm.” This approach applies globally and is used both for content moderation and for publicly reporting figures on misinformation. Specific examples of what might constitute misinformation can be found in our Help Center. As part of our User Agreement, our Professional Community Policies are accepted by every member when joining LinkedIn and are easily available to every member.

LinkedIn creates value and preserves trust by fostering a safe, trusted, and professional platform, while honouring members’ professional expression and speech. LinkedIn enables healthy on-platform conversations by facilitating the removal of misinformation that threatens its members’ safety. When content does not conclusively violate LinkedIn policies, LinkedIn gives the speaker the benefit of the doubt and favours speech (i.e., leaves the content up on the platform). 

Additionally, as described in greater detail below, human review plays a significant role in our content moderation process. Both members who post content and members who report content can appeal our content moderation decisions. 

Our content policies are clear, and we apply them equally to all members. Within our Professional Community Policies, we provide granular information and examples of what is and is not allowed on LinkedIn.

Furthermore, LinkedIn has automated defences to identify and prevent abuse, including inauthentic behaviour, such as spam, phishing and scams, duplicate accounts, fake accounts, and misinformation. Our Trust and Safety teams work every day to identify and restrict inauthentic activity. We’re regularly rolling out scalable technologies like machine learning models to keep our platform safe.
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
The table below reports metrics concerning content LinkedIn removed from its platform as Misinformation, pursuant to the policy outlined in QRE 18.2.1 above. The metrics include: 

  • the number of pieces of content removed as Misinformation between 1 January – 30 June 2025, broken out by EEA Member State; 
  • the number of those content removals that were appealed by the content author; 
  • the number of those appeals that were granted;
  • the median time from appeal to appeal decision for those appeals.

The metrics are assigned to an EEA Member State based on the IP address of the content author.
| Country | Pieces of content removed as Misinformation (1 January – 30 June 2025) | Removals appealed by the content author | Appeals granted | Median time from appeal to appeal decision (hours) |
|---|---|---|---|---|
| Austria | 200 | 0 | 0 | 1.0 |
| Belgium | 438 | 6 | 1 | |
| Bulgaria | 36 | 0 | 0 | |
| Croatia | 74 | 2 | 1 | |
| Cyprus | 20 | 0 | 0 | |
| Czech Republic | 70 | 1 | 0 | |
| Denmark | 344 | 0 | 0 | |
| Estonia | 13 | 0 | 0 | |
| Finland | 36 | 0 | 0 | |
| France | 3,686 | 11 | 2 | |
| Germany | 1,646 | 15 | 0 | |
| Greece | 190 | 1 | 0 | |
| Hungary | 42 | 0 | 0 | |
| Ireland | 168 | 0 | 0 | |
| Italy | 1,462 | 10 | 3 | |
| Latvia | 11 | 0 | 0 | |
| Lithuania | 15 | 1 | 0 | |
| Luxembourg | 53 | 0 | 0 | |
| Malta | 9 | 1 | 0 | |
| Netherlands | 2,586 | 21 | 5 | |
| Poland | 144 | 1 | 1 | |
| Portugal | 185 | 0 | 0 | |
| Romania | 174 | 0 | 0 | |
| Slovakia | 7 | 0 | 0 | |
| Slovenia | 14 | 0 | 0 | |
| Spain | 738 | 4 | 0 | |
| Sweden | 220 | 0 | 0 | |
| Iceland | 5 | 0 | 0 | |
| Liechtenstein | 4 | 0 | 0 | |
| Norway | 85 | 5 | 0 | |
| Total EU | 12,581 | 74 | 13 | |
| Total EEA | 12,675 | 79 | 13 | |