LinkedIn

Report September 2025

Submitted
LinkedIn Ireland Unlimited Company (“LinkedIn Ireland”) – the provider of LinkedIn’s services in the European Union (EU) – welcomes the opportunity to file this report on our compliance with the commitments and measures of the strengthened 2022 EU Code of Practice on Disinformation[1] that we subscribed to in our Subscription Document dated 15 January 2025. This report covers the period from 1 January to 30 June 2025 (the “Reporting Period”).

LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Its mission is to connect the world’s professionals to make them more productive and successful. LinkedIn is a networking tool that enables members to establish their professional identities online, connect with other professionals, and build relationships for the purpose of collaborating, learning, and staying informed about industry information and trends. The design and function of the platform are therefore central to its overall risk profile, shaping it in a few key ways:

  • LinkedIn is a real-identity platform, where members must use their real or preferred professional names, and the content they post is visible, for example, to their colleagues, employers, potential future employers, and business partners. Given this audience, members by and large tend to limit their activity to professional areas of interest and expect the content they see to be professional in nature.
  • LinkedIn operates under standards of professionalism, which are reflected both in content policies and enforcement and in content prioritization and amplification. LinkedIn’s policies bolster a safe, trusted, and professional platform, and LinkedIn strictly enforces them. LinkedIn strives to broadly distribute high-quality content that advances professional conversations on the platform.
  • LinkedIn services are tailored toward professionals and businesses, and LinkedIn’s Professional Community Policies clearly detail what is expected of every member as they post, share, and comment on the platform, including that disinformation is not permitted on LinkedIn.

LinkedIn is committed to keeping its platform and services safe, trusted, and professional and to providing transparency to its members, the public, and regulators. Members come to LinkedIn to find a job, stay informed, connect with other professionals, and learn new skills. As a real-identity online networking service for professionals to connect and interact with other professionals, LinkedIn has a unique risk profile when compared with many social media platforms. With this in mind, LinkedIn invests heavily in numerous Trust and Safety domains to proactively enhance the safety, security, privacy, and quality of the LinkedIn user experience. Further, as confirmed by LinkedIn’s Systemic Risk Assessments conducted to date, the residual risks most relevant to misinformation and disinformation (i.e., those relating to Civic Discourse and Electoral Process, Public Health, and Public Security) are categorized as “Low.”

LinkedIn Ireland supports the objectives of the European Code of Practice on Disinformation (the “Code”) and we are committed to actively working with Signatories and the European Commission in the context of this Code to defend against disinformation on the LinkedIn service.

Unless stated otherwise, data provided in this report covers the Reporting Period of 1 January 2025 to 30 June 2025.

[1] We have referred to the code as the Code of Practice on Disinformation, as the report covers the period prior to the conversion to a code of conduct taking effect.


Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Additional transparency on use of personal data for generative AI. 
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
LinkedIn will continue to assess its policies and services and to update them as warranted.
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
During the Reporting Period, LinkedIn continued to support and launch products and features that disseminate, and enable LinkedIn members to disseminate, AI-generated textual content. LinkedIn also continues to integrate generative AI-powered features into existing products. To mitigate the potential safety risks posed by such features, LinkedIn has in place, and continues to augment, policies and procedures to ensure that our AI systems, including any new features, are consistent with LinkedIn’s Responsible AI Principles and applicable law.

  1. Privacy and Security – LinkedIn has an existing process for assessing the privacy and security of new products and initiatives, which has been augmented to recognize particular risks arising from the use of generative AI. With respect to generative AI, additional considerations include being thoughtful about the personal data used in prompt engineering and ensuring that members maintain full control of their profiles. 
  2. Safety – LinkedIn has an existing process for assessing the safety of new products and initiatives, which has been augmented to recognize particular risks arising from generative AI. New features are carefully ramped to members, and rate limits are introduced to reduce the likelihood of abuse; limiting access allows us to watch for issues that may arise. We aim to proactively identify how prompts could be misused and then mitigate potential abuse. We engage in proactive content moderation by applying content moderation filters to both the member inputs for prompts and the outputs; all AI-generated content is held to the same professional bar as other content on the LinkedIn platform. We also engage in reactive content moderation by providing member tools to report policy-violating issues with the content. Additional features have been added to these tools to address generative AI-specific issues such as ‘hallucinations.’ Additionally, all generative AI-powered features whose outputs are directly visible to LinkedIn users go through (1) manual and automated “red teaming,” to test the generative AI-powered feature and to identify and mitigate any vulnerabilities, and (2) quality assurance assessments of response quality, accuracy, and hallucinations, with the goal of remediating discovered inaccuracies.
  3. Fairness and Inclusion – LinkedIn has a cross-functional team that designs policies and processes to proactively mitigate the risk that AI tools, including generative AI tools, perpetuate societal biases or facilitate discrimination. To promote fairness and inclusion, we target two key areas: content subjects and communities. With respect to content subjects, prompts are engineered to reduce the risk of biased content, blocklists are leveraged to replace harmful terms with neutral terms, and member feedback is monitored to learn and improve. With respect to communities, in addition to a focus on problematic content like stereotypes, we are working to expand the member communities that are served by our generative AI tools. Additionally, LinkedIn continues to invest in methodologies and techniques to more broadly ensure algorithmic fairness.
  4. Transparency – LinkedIn is committed to being transparent with members. With respect to generative AI products and features, our goal is to educate members about the technology and our use of it so that they can make their own decisions about how to engage with it. Additionally, LinkedIn labels content using the industry-leading “Content Credentials” technology developed by the Coalition for Content Provenance and Authenticity (“C2PA”), including AI-generated content containing C2PA metadata. Content Credentials appear on LinkedIn as a “Cr” icon on images and videos that contain C2PA metadata, particularly on highly visible surfaces such as the feed. By clicking the icon, LinkedIn members can trace the origin of the AI-created media, including the source and history of the content and whether it was created or edited by AI. Additionally, LinkedIn provides members with information on how their personal data is used for generative AI in the LinkedIn Help Center, including how personal data is used for content-generating AI model training. As of June 30, 2025, LinkedIn did not train content-generating AI models on data from members located in the EU, EEA, UK, Switzerland, Canada, Hong Kong, or mainland China.
  5. Accountability – In addition to the privacy, security, and safety processes discussed above, for AI tools we have additional assessments of training data and model cards so we can more appropriately assess risks and develop mitigations for the AI models that support our AI products and initiatives.
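For illustration only, the dual-filter approach described in the Safety item above — screening both the member’s prompt before generation and the model’s output after it against content policies — can be sketched roughly as follows. All names, terms, and logic here are hypothetical placeholders, not LinkedIn’s actual implementation:

```python
# Hypothetical sketch: the same policy filter is applied to the member's
# prompt (input) and to the generated text (output). A real system would
# use far richer classifiers than this toy blocklist check.

BLOCKLIST = {"scam-term", "harmful-term"}  # placeholder policy terms


def violates_policy(text: str) -> bool:
    """Toy policy check: flag text containing a blocklisted term."""
    words = set(text.lower().split())
    return bool(words & BLOCKLIST)


def moderate_generation(prompt, generate):
    """Filter the prompt before generation and the output after it.

    Returns the generated text, or None if either side violates policy.
    """
    if violates_policy(prompt):
        return None  # reject the request outright
    output = generate(prompt)
    if violates_policy(output):
        return None  # suppress policy-violating output
    return output


# Example with a stub generator standing in for the model:
result = moderate_generation("summarize my profile", lambda p: "a short summary")
assert result == "a short summary"
assert moderate_generation("write a scam-term pitch", lambda p: "") is None
```

The key design point this illustrates is symmetry: holding AI-generated output to the same professional bar as member-authored content means the output passes through the same policy gate as the input, rather than being trusted because it came from the model.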