LinkedIn

Report March 2025

Submitted
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1, Measure 18.2, Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
LinkedIn will continue to assess its policies and services and to update them as warranted.
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
With respect to AI design, please see QRE 18.1.3.

With respect to additional tools, procedures, or features, please see: 

· QRE 17.1.1 (editorial practices to provide members with trustworthy news);
· QRE 18.2.1 (policies and procedures to limit the spread of harmful false or misleading information);
· QRE 21.1.1 (action taken when information is identified as misinformation);
· QRE 22.1.1 (features and systems related to fake and inauthentic profiles);
· QRE 22.2.1 (actions taken to help members identify trustworthy content);
· QRE 23.2.1 (actions taken to ensure the integrity of the reporting and appeals process).
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
The main parameters of the LinkedIn feed recommender systems are as follows: 

- Identity: We seek to contextualise content based on who a member is by looking at their profile, for example: Who are you? Where do you work? What are your skills? Who are your connections? Where is your profile location?
- Content: We aim to match appropriate content to each member by evaluating, for example: How many times was the feed update viewed? How many times was it reacted to? What is the content about? How old is it? Is the update sharing knowledge or professional advice? Is the update from someone the member is connected to or follows? What language is it written in? Is the conversation constructive and professional? Will engagement on the update lead to future high-quality content? What companies, people, or topics are mentioned in the update?
- Member Activity: Finally, we look at how a member engages with content and examine, for example: What have you reacted to and shared in the past? Who do you interact with most frequently or recently? Where do you spend the most time in your feed? Which hashtags, people or companies do you follow? Who are your connections? What types of topics are you interested in? What other members follow you? What actions have other members taken on your posts? How long has it been since the foregoing actions took place?

Combining these and other related signals, the LinkedIn feed recommender systems rank content for the member, with the goal of showing high-quality content that the member will enjoy consuming and that can lead to further creation on the platform. To do this, the feed optimises for content that a member is most likely to find valuable, which in turn makes the member more likely to act on it (e.g., react, comment, or reshare).
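
To make the ranking step concrete, the following is a minimal illustrative sketch of how signals like those above could be combined into a single relevance score used to order a feed. All feature names, weights, and the simple linear form are invented for illustration; LinkedIn's production feed combines far more signals with learned models rather than hand-set weights.

```python
from dataclasses import dataclass

# Hypothetical, simplified signals; the real feed models combine many more
# identity, content, and member-activity features with learned weights.
@dataclass
class UpdateSignals:
    affinity: float          # how often the viewer interacts with the author
    content_quality: float   # e.g., constructive, professional conversation
    recency_hours: float     # age of the update
    topic_match: float       # overlap with topics the member follows

def score_update(s: UpdateSignals) -> float:
    """Toy linear ranking score; these weights are invented for illustration."""
    freshness = 1.0 / (1.0 + s.recency_hours / 24.0)  # decay with age
    return (0.4 * s.affinity
            + 0.3 * s.content_quality
            + 0.2 * s.topic_match
            + 0.1 * freshness)

# Rank candidate updates for a member's feed, highest score first.
candidates = [
    UpdateSignals(affinity=0.9, content_quality=0.7, recency_hours=3.0, topic_match=0.8),
    UpdateSignals(affinity=0.2, content_quality=0.9, recency_hours=48.0, topic_match=0.4),
]
ranked = sorted(candidates, key=score_update, reverse=True)
```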
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
At LinkedIn, our guiding principle is “Members First.” It puts members’ interests first and ensures that we honour our responsibility to protect our members and maintain their trust in every decision we make. A key area where we apply this value in engineering is our design process. We call this “responsible design”: everything we build is intended to work as part of a unified system that delivers the best member experience, provides the right protections for our members and customers, and mitigates unintended consequences in our products.

One of the core pillars of “responsible design” is “responsible AI,” which follows the LinkedIn Responsible AI Principles, inspired by and aligned with Microsoft’s Responsible AI Principles. The LinkedIn Responsible AI Principles are to advance economic opportunity, uphold trust, promote fairness and inclusion, provide transparency, and embrace accountability.

In addition to these principles, responsible AI is also about intent and impact. “Intent” involves evaluating training data, designing systems, and reviewing model performance before a model is ever deployed to production, to make sure that our principles are reflected at every step of the process. It includes actively changing our products and algorithms to empower every member. “Impact” covers detecting and monitoring the ways that people interact with products and features after they are deployed; we do this by measuring whether they provide significant value and empower individuals to reach their goals. Intent and impact form a cyclical process of refinement that works hand in hand towards the broader goal of responsible design.

With respect to safety, we seek to keep content that violates our Professional Community Policies off LinkedIn. This is done through a combination of automated and manual review. Our first layer of protection uses AI to proactively filter out bad content and deliver relevant experiences for our members: content (such as certain keywords or images) that has previously been identified as violating our content policies helps inform our AI models, so that we can better identify similar content and restrict it from being posted in the future. The second layer of protection uses AI to flag likely violative content for human review; this occurs when the algorithm is not confident enough to warrant automatic removal. The third layer is member-led: members report content, and our team of reviewers evaluates it and removes it if it is found to violate our policies.
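
A minimal sketch of how such confidence-based routing between the three layers could work is below. The thresholds and the single violation score are assumptions for illustration only; this report does not specify how LinkedIn's classifiers are built or calibrated.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "auto-remove"          # layer 1: high-confidence violation
    HUMAN_REVIEW = "human review"   # layer 2: flagged, not confident enough
    PUBLISH = "publish"             # allowed; layer 3 (member reports) still applies

# Hypothetical thresholds for illustration; real systems tune these per
# policy area and pair them with member reporting and appeals.
AUTO_REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def route(violation_score: float) -> Action:
    """Route content based on a classifier's estimated violation probability."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return Action.REMOVE
    if violation_score >= REVIEW_THRESHOLD:
        return Action.HUMAN_REVIEW
    return Action.PUBLISH
```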

Quantifying this process, to monitor how many content violations are successfully prevented, is another task that our Data Science team prioritises so that we can continuously refine our processes and improve the detection and prevention of violative content.
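
As one hedged illustration, a common trust-and-safety measure of this kind is a proactive detection rate: the share of removed violations caught before any member report. The definition below is an assumption for illustration, not a description of the metrics LinkedIn's Data Science team actually uses.

```python
def proactive_detection_rate(auto_removed: int, reviewer_removed: int,
                             member_reported_removed: int) -> float:
    """Share of removed violations caught before any member report.

    Illustrative only: "proactive" here means removals from the automated
    filter (layer 1) or AI-flagged human review (layer 2), as opposed to
    removals that began with a member report (layer 3).
    """
    total_removed = auto_removed + reviewer_removed + member_reported_removed
    if total_removed == 0:
        return 0.0
    return (auto_removed + reviewer_removed) / total_removed
```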

Please also see: 

· QRE 17.1.1 (editorial practices to provide members with trustworthy news);
· QRE 18.2.1 (policies and procedures to limit the spread of harmful false or misleading information);
· QRE 21.1.1 (action taken when information is identified as misinformation);
· QRE 22.1.1 (features and systems related to fake and inauthentic profiles);
· QRE 22.2.1 (actions taken to help members identify trustworthy content);
· QRE 23.2.1 (actions taken to ensure the integrity of the reporting and appeals process).