QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
At LinkedIn, our guiding principle is “Members First.” It ensures that we honour our responsibility to protect our members, that we maintain their trust in every decision we make, and that we put their interests first. A key area where we apply this value in engineering is our design process. We call this “responsible design,” which means that everything we build is intended to work as part of a unified system that delivers the best member experience, provides the right protections for our members and customers, and mitigates any unintended consequences in our products.
One of the core pillars of “responsible design” is “responsible AI,” which follows the LinkedIn Responsible AI Principles, inspired by and aligned with Microsoft’s Responsible AI Principles. The LinkedIn Responsible AI Principles are to advance economic opportunity, uphold trust, promote fairness and inclusion, provide transparency, and embrace accountability. Beyond these principles, responsible AI is also about intent and impact. “Intent” involves evaluating training data, designing systems, and reviewing model performance before the model is ever deployed to production, to make sure that our principles are reflected at every step in the process. It includes actively changing our products and algorithms to empower every member. “Impact” covers detecting and monitoring the ways that people interact with products and features after they are deployed. We do this by measuring whether they provide significant value and empower individuals to reach their goals. Intent and impact go hand in hand in a cyclical process of refinement towards the broader goal of responsible design.
With respect to safety, we seek to keep content that violates our Professional Community Policies off LinkedIn. We do this through a combination of automated and manual activity. Our first layer of protection uses AI to proactively filter out violating content and deliver relevant experiences for our members. Content (such as certain keywords or images) that has previously been identified as violating our content policies helps inform our AI models, so that we can better identify and restrict similar content from being posted in the future. The second layer of protection uses AI to flag likely violative content for human review when the algorithm is not confident enough to warrant automatic removal. The third layer is member-led: members report content, and our team of reviewers evaluates it and removes it if it is found to violate our policies.
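By way of illustration only, the sketch below shows how such a layered triage flow could be structured: an automated classifier removes content it scores as highly likely to violate policy, queues lower-confidence content for human review, and routes member reports directly to reviewers. The thresholds, the keyword-based scoring heuristic, and all function names are hypothetical assumptions for this sketch and do not describe LinkedIn’s actual models or implementation.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical confidence thresholds, for illustration only.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


class Decision(Enum):
    REMOVE = "remove"              # layer 1: proactive automated removal
    HUMAN_REVIEW = "human_review"  # layer 2: queued for human reviewers
    ALLOW = "allow"                # no action at posting time


@dataclass
class ModerationResult:
    decision: Decision
    score: float


def score_content(text: str, known_bad_terms: set[str]) -> float:
    """Stand-in for a trained classifier. Here, a trivial heuristic based on
    previously identified violating terms; a production system would use
    learned text and image models instead."""
    words = set(text.lower().split())
    hits = len(words & known_bad_terms)
    return min(1.0, hits / 3)


def triage(text: str, known_bad_terms: set[str]) -> ModerationResult:
    """Layers 1 and 2: route content based on the classifier's confidence."""
    score = score_content(text, known_bad_terms)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationResult(Decision.REMOVE, score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationResult(Decision.HUMAN_REVIEW, score)
    return ModerationResult(Decision.ALLOW, score)


def member_report(content_id: str, review_queue: list[str]) -> None:
    """Layer 3: content reported by members always goes to the human review queue."""
    review_queue.append(content_id)
```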
Our Data Science team also prioritises quantifying this process, monitoring how many content violations are successfully prevented, so that we can continuously refine our processes and improve the detection and prevention of violative content.
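As a purely illustrative example, and not LinkedIn’s published methodology, one simple way to quantify such prevention is the share of removed violating content that was caught proactively (by automated filtering or flagging for review) before any member report:

```python
def proactive_detection_rate(auto_removed: int,
                             reviewer_removed: int,
                             member_reported_removed: int) -> float:
    """Illustrative metric: fraction of removed violating content that was
    detected proactively rather than via a member report."""
    total_removed = auto_removed + reviewer_removed + member_reported_removed
    if total_removed == 0:
        return 0.0
    return (auto_removed + reviewer_removed) / total_removed
```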
Please also see:
· QRE 17.1.1 (editorial practices to provide members with trustworthy news);
· QRE 18.2.1 (policies and procedures to limit spread of harmful false or misleading information);
· QRE 21.1.1 (action taken when information is identified as misinformation);
· QRE 22.1.1 (features and systems related to fake and inauthentic profiles);
· QRE 22.2.1 (actions taken to assist members in identifying trustworthy content);
· QRE 23.2.1 (actions taken to ensure integrity of reporting and appeals process).