LinkedIn

Report March 2025

Submitted
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1, Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Additional transparency on use of personal data for generative AI.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
LinkedIn will continue to assess its policies and services and to update them as warranted. 
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
During the reporting period, LinkedIn continued to support and launch products and features that disseminate, and enable LinkedIn members to disseminate, AI-generated textual content. LinkedIn also continues to integrate generative AI-powered features into existing products. To mitigate the potential safety risks posed by such features, LinkedIn has in place, and continues to augment, policies and procedures to ensure that our AI systems, including any new features, are consistent with LinkedIn’s Responsible AI Principles and applicable law.

1. Privacy and Security – LinkedIn has an existing process for assessing the privacy and security of new products and initiatives, which has been augmented to recognize particular risks arising from the use of generative AI. With respect to generative AI, additional considerations include being thoughtful about the personal data used in prompt engineering and ensuring that members maintain full control of their profiles (see the first illustrative sketch after this list).

2. Safety – LinkedIn has an existing process for assessing the safety of new products and initiatives, which has been augmented to recognize particular risks posed by generative AI. New features are carefully ramped to members, and rate limits are introduced to reduce the likelihood of abuse; limiting access allows us to watch for issues as they arise. We aim to proactively identify how prompts could be misused so that we can mitigate potential abuse. We engage in proactive content moderation (all AI-generated content is held to the same professional bar as other content on the LinkedIn platform) by applying content moderation filters to both the member’s prompt input and the model’s output (see the illustrative sketch after this list). We also engage in reactive content moderation by providing member tools to report policy-violating content. Additional features have been added to these tools to address generative AI-specific issues such as ‘hallucinations.’ Additionally, all generative AI-powered features whose outputs are directly visible to LinkedIn users go through (1) manual and automated “red teaming” to test the feature and to identify and mitigate any vulnerabilities, and (2) quality assurance assessments of response quality, accuracy, and hallucinations, with the goal of remediating discovered inaccuracies.
3. Fairness and Inclusion – LinkedIn has a cross-functional team that designs policies and processes to proactively mitigate the risk that AI tools, including generative AI tools, perpetuate societal biases or facilitate discrimination. To promote fairness and inclusion, we target two key areas: content subjects and communities. With respect to content subjects, prompts are engineered to reduce the risk of biased content, blocklists are leveraged to replace harmful terms with neutral terms (see the illustrative sketch after this list), and member feedback is monitored to learn and improve. With respect to communities, in addition to a focus on problematic content such as stereotypes, we are working to expand the member communities served by our generative AI tools. Additionally, LinkedIn continues to invest in methodologies and techniques to more broadly ensure algorithmic fairness.
4. Transparency – LinkedIn is committed to being transparent with members. With respect to generative AI products and features, our goal is to educate members about the technology and our use of it so that they can make their own decisions about how to engage with it. For example, with Collaborative Articles we identify the use of AI in the relevant UI and provide additional detail in a linked Help Center article. Additionally, LinkedIn labels content containing industry-leading “Content Credentials” technology developed by the Coalition for Content Provenance and Authenticity (“C2PA”), including AI-generated content containing C2PA metadata. Content Credentials on LinkedIn show as a “Cr” icon on images and videos that contain C2PA metadata, particularly on highly visible surfaces such as the feed. By clicking the icon, LinkedIn members can trace the origin of the AI-created media, including the source and history of the content and whether it was created or edited by AI (see the illustrative sketch after this list). Additionally, LinkedIn provides members with information in the LinkedIn Help Center on how their personal data is used for generative AI, including how personal data is used to train content-generating AI models. As of December 31, 2024, LinkedIn did not train content-generating AI models on data from members located in the EU, EEA, UK, Switzerland, Canada, Hong Kong, or mainland China.
5. Accountability – In addition to the privacy, security, and safety processes discussed above, for AI tools we conduct additional assessments of training data and maintain model cards so that we can more appropriately assess risks and develop mitigations for the AI models that support our AI products and initiatives (see the illustrative sketch after this list).
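
Illustrative sketch for item 1 (Privacy and Security). LinkedIn does not publish its prompt pipeline; the regex patterns and placeholder tokens below are hypothetical examples of minimizing personal data before a prompt reaches a generative model.

import re

# Hypothetical example patterns; a production system would use far more
# robust personal-data detection than these two regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize_prompt(prompt: str) -> str:
    """Replace personal identifiers with neutral placeholders."""
    prompt = EMAIL_RE.sub("[email]", prompt)
    return PHONE_RE.sub("[phone]", prompt)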
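
Illustrative sketch for item 2 (Safety). The classifier, model, window size, and request cap below are hypothetical stand-ins; the sketch only shows the shape of the control described above: throttle requests, filter the member's prompt, and filter the model's output.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # hypothetical rate-limit window
MAX_REQUESTS = 20       # hypothetical per-member cap
_recent = defaultdict(deque)

def within_rate_limit(member_id: str) -> bool:
    """Sliding-window rate limit to reduce the likelihood of abuse."""
    now = time.time()
    q = _recent[member_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False
    q.append(now)
    return True

def generate_safely(member_id, prompt, model, is_safe):
    """Apply moderation filters to both the prompt and the output."""
    if not within_rate_limit(member_id):
        return None                 # throttled to limit abuse
    if not is_safe(prompt):         # proactive filter on member input
        return None
    output = model(prompt)
    if not is_safe(output):         # proactive filter on model output
        return None
    return output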
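
Illustrative sketch for item 3 (Fairness and Inclusion). The blocklist entries are placeholders; the sketch only shows the mechanism of swapping harmful terms for neutral ones before generation.

# Hypothetical blocklist; real entries would be curated by the
# cross-functional fairness team described above.
BLOCKLIST = {
    "harmful_term_a": "neutral_term_a",
    "harmful_term_b": "neutral_term_b",
}

def apply_blocklist(text: str) -> str:
    """Replace blocklisted terms with neutral terms."""
    for harmful, neutral in BLOCKLIST.items():
        text = text.replace(harmful, neutral)
    return text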
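
Illustrative sketch for item 4 (Transparency). The fields below are a simplified stand-in for information a C2PA manifest can carry; actual C2PA parsing and signature verification are more involved and are not shown.

from dataclasses import dataclass
from typing import Optional

@dataclass
class C2paSummary:
    issuer: str                     # who signed the Content Credentials
    created_or_edited_by_ai: bool   # whether a generative AI tool was used
    edit_history: list              # simplified source/history of the content

def cr_icon_panel(summary: Optional[C2paSummary]) -> Optional[dict]:
    """What clicking the "Cr" icon could surface; None means no icon."""
    if summary is None:
        return None                 # no C2PA metadata, so no icon shown
    return {
        "source": summary.issuer,
        "history": summary.edit_history,
        "created_or_edited_by_ai": summary.created_or_edited_by_ai,
    }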
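
Illustrative sketch for item 5 (Accountability). These fields are generic examples of what a model card and training-data assessment might record; they are not LinkedIn's actual schema.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data_sources: list                    # inputs assessed for risk
    known_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)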