LinkedIn

Report March 2025

Submitted
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories' policies or terms of service.
We signed up to the following measures of this commitment:
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 23.1
Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
If LinkedIn users locate content they believe violates our Professional Community Policies, we encourage them to report it using the in-product reporting mechanism, represented by the three dots in the upper right-hand corner of the content itself on LinkedIn.

Misinformation is specifically called out as one of the reporting options.
 
The reporting feature is available through, and largely identical across, LinkedIn’s website and mobile app, although reporting reasons and their visual presentation may vary slightly for certain types of content. In most instances, the reporting process is located just one click away from the content being reported and, depending on whether content is reported in the LinkedIn app or on desktop, takes four or five clicks to complete.
 
Reported content is generally reviewed by trained content reviewers. In addition, LinkedIn uses automation to flag potentially violative content to our content moderation teams. If reported or flagged content violates the Professional Community Policies, it is actioned in accordance with our policies.
 
When members use the above reporting process, they will receive an email acknowledging receipt of the report. The email includes a link to the report status page, which we update when we make a decision, including providing the opportunity to appeal. Logged-out users receive updates on their report by email and are also provided with the opportunity to appeal.

Members also receive an email notifying them in the event their content is actioned in accordance with our policies. The email includes a link to a notice page with additional details and resources. If the member believes that their content complies with our Professional Community Policies, they can ask us to revisit our decision by submitting an appeal via the link on the notice page.
 
Further, LinkedIn has a dedicated process for entities that have been awarded Trusted Flagger status in accordance with Article 22 of the Digital Services Act.