QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
LinkedIn supports the aims of the research community and regularly provides it with information and data in a variety of ways.
To date, we have made non-personal, aggregated data publicly available (data on gender equity in the workplace, data on green skills and jobs, data on industry and job skills, and data on engagement with labor markets and employment trends). Our goal with this action is to enable researchers to understand the rapidly changing world of work through access to and use of LinkedIn data. Because much of our data is publicly available, the extent to which it has been used for disinformation-related research cannot easily be ascertained.
Additionally, LinkedIn is expanding its API access to public data for disinformation-related research purposes. Information about the LinkedIn APIs is available to the public, and researcher access is provided here.
Finally, Microsoft is also a leader in Responsible AI research and provides a range of tools and resources dedicated to promoting responsible use of artificial intelligence, allowing practitioners and researchers to maximize the benefits of AI systems while mitigating harms. For example, as part of its Responsible AI Toolbox, Microsoft provides the Responsible AI Mitigations Library, which enables practitioners to experiment more easily with different techniques for addressing failures (which could include inaccurate outputs), and the Responsible AI Tracker, which uses visualizations to show the effectiveness of the different techniques, supporting more informed decision-making. These tools are available to the public and the research community free of charge.
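For illustration, the following is a minimal sketch of how a researcher might load a trained model into the Responsible AI Toolbox's Python components (the responsibleai and raiwidgets packages) to generate error-analysis and explanation insights. The dataset, model, and column names used here are hypothetical placeholders, and the Mitigations Library and Tracker mentioned above have their own separate workflows.

# Illustrative sketch only: loads a toy classifier into the Responsible AI
# Toolbox dashboard components (responsibleai / raiwidgets packages).
# Dataset, feature names, and model choice are hypothetical placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

data = load_breast_cancer(as_frame=True)
df = data.frame  # features plus the 'target' column
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(train_df.drop(columns=["target"]), train_df["target"])

# Collect error-analysis and model-explanation insights for the trained model.
insights = RAIInsights(model, train_df, test_df,
                       target_column="target", task_type="classification")
insights.error_analysis.add()
insights.explainer.add()
insights.compute()

# Launches the interactive Responsible AI dashboard in a notebook session.
ResponsibleAIDashboard(insights)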