As the world around us changes, LinkedIn continues to evolve and adapt its systems and practices for combating misinformation and other inauthentic behaviour on our platform, including responding to the unique challenges presented by world events.
LinkedIn’s Professional Community Policies, which all members agree to abide by on joining LinkedIn, prohibit misinformation. As described in more detail in our response to QRE 18.1.1, LinkedIn uses a combination of automated and manual activity to keep content that violates our policies off LinkedIn.
LinkedIn also aims to educate its members about civic discourse, electoral processes, and public security through its global team of news editors. These editors provide trustworthy and authoritative content to LinkedIn’s member base, and its content moderation teams closely monitor associated platform conversations in a number of languages.
In addition to these broader measures, LinkedIn has taken special care to counter low-authority information in relation to the war of aggression by Russia on Ukraine, the Israel-Hamas Conflict, and the European Elections, as detailed in the relevant chapters.
For example, during pre-election cycles, LinkedIn relies on trusted and reputable publisher sources for featured shares, focusing on the policy impact on businesses and professionals across the EU. LinkedIn also curates links to topical landing pages from trusted publishers to provide members with easy and reliable entry points to more detailed coverage. LinkedIn does not compete with trusted publishers for speed or depth of coverage, but instead aims to connect their existing coverage with LinkedIn members and their needs. During important moments in the European Elections, this team provides manually curated and localised storylines.
We also work to identify and remove misinformation and inauthentic behaviour from our platform. As we continue to improve, we are committed to helping our members make informed decisions about content they find on LinkedIn, so we work with Microsoft to provide tools that assist our members in identifying trustworthy, relevant, authentic, and diverse content.
LinkedIn’s Professional Community Policies clearly detail the objectionable and harmful content that is not allowed on LinkedIn. Misinformation and inauthentic content are not allowed, and our automated defences take proactive steps to remove them. LinkedIn’s blog provides information regarding our efforts, including How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe.
LinkedIn members can report content that violates our Professional Community Policies, including misinformation and inauthentic content. Our Trust and Safety teams work every day to identify and restrict such activity, and if reported content violates the Professional Community Policies, it will be actioned in accordance with our policies.
LinkedIn members can identify misinformation and inauthentic behaviour by utilising the News Literacy Project, The Trust Project, and Verified, all of which develop information literacy campaigns built on industry research and best practices. The News Literacy Project campaign developed a quiz that, in less than five minutes, tests a person’s ability to identify why the information they are seeing is false or inaccurate. The Trust Project campaign developed the research-backed 8 Trust Indicators, which aim to improve consumers’ ability to identify reliable, ethical journalism. Finally, Verified delivers lifesaving information and fact-based advice to build digital literacy that helps communities protect themselves from misinformation. LinkedIn has also published an article in our Help Center compiling these useful resources on misinformation and inauthentic behaviour.