We see harmful misinformation as different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our harmful misinformation policies, which is why we work with 14 fact-checking partners in Europe, covering 23 EEA languages.
While we use machine learning models to help detect potential misinformation, our approach is to have members of our content moderation team who receive specialised training on misinformation assess, confirm, and take action on harmful misinformation. These moderators have direct access to our fact-checking partners, who help assess the accuracy of content. Our fact-checking partners are involved in our moderation process in three ways:
(i) fact-checkers review videos that our moderators send to them and assess the accuracy of the content by providing a rating. Fact-checkers do so independently of us, and their review may include calling sources, consulting public data, authenticating videos and images, and more.
While content is being fact-checked, or when content can't be substantiated through fact-checking, we may reduce the content's distribution so that fewer people see it. Fact-checkers do not take action on the content directly; instead, the moderator takes their feedback on the accuracy of the content into account when deciding whether the content violates our CGs and what action to take.
(ii) fact-checkers contribute to our global database of previously fact-checked claims, which helps our misinformation moderators make decisions.
(iii) fact-checkers take part in a proactive detection programme, flagging new and evolving claims they see on our platform. This enables our moderators to quickly assess these claims and remove violations.
In addition, we use fact-checking feedback to provide additional context to users about certain content. As mentioned, when our fact-checking partners conclude that a fact-check is inconclusive or that content cannot be confirmed (which is especially common during unfolding events or crises), we apply a banner to the video to inform viewers that it contains unverified content, raising users' awareness of the content's credibility and discouraging sharing. The video may also become ineligible for recommendation into anyone's For You feed to limit the spread of potentially misleading information.