If LinkedIn users locate content they believe violates our
Professional Community Policies, we encourage them to report it using the in-product reporting mechanism, represented by the three dots in the upper right-hand corner of the content on LinkedIn.
Misinformation is specifically called out as one of the reporting options.
The reporting feature is available through, and largely identical across, LinkedIn’s website and mobile app, although reporting reasons and their visual presentation may vary slightly for certain types of content. In most instances, the reporting process is located just one click away from the content being reported and, depending on whether content is reported in the LinkedIn app or on desktop, takes four to five clicks to complete.
Reported content is generally reviewed by trained content reviewers. In addition, LinkedIn uses automation to flag potentially violative content to our content moderation teams. If reported or flagged content violates the Professional Community Policies, it will be actioned in accordance with our policies.
When members use the above reporting process, they will receive an email acknowledging receipt of the report. The email includes a link to the report status page, which we update when we make a decision, including providing the opportunity to appeal. Logged-out users receive updates on their report by email and are also provided with the opportunity to appeal.
Members also receive an email notifying them in the event their content is actioned in accordance with our policies. The email includes a link to a notice page with additional details and resources. If the member believes that their content complies with our Professional Community Policies, they can ask us to revisit our decision by submitting an appeal via the link on the notice page.
Further, LinkedIn has a dedicated process for entities that have been awarded Trusted Flagger status in accordance with Article 22 of the Digital Services Act.