We have fact-checking coverage in 23 official EEA languages: Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish.
We also have fact-checking coverage in a number of other European languages, or languages that affect European users, including Georgian, Russian, Turkish, and Ukrainian; we can additionally request support in Azeri, Armenian, and Belarusian.
Our global fact-checking initiatives currently cover more than 60 languages and 130 markets across the world, improving the overall integrity of the service and benefiting European users.
To scale the feedback provided by our fact-checkers globally, we have implemented the measures listed below.
- Fact-checking repository. We have built a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions (a minimal sketch of this kind of claim lookup follows this list).
- Insights reports. Our fact-checking partners provide regular reports identifying misinformation trends observed on our platform and across the wider industry, including new or changing industry or market trends and events or topics that generated particular misinformation or disinformation.
- Proactive detection by our fact-checking partners. Our fact-checking partners are authorised to proactively identify content on our platform that may constitute harmful misinformation, and to flag prominent misinformation circulating online that may benefit from verification.
- Fact-checking guidelines. Where relevant, we create guidelines and trending-topic reminders for our moderators, informed by previous fact-checking assessments. This helps our moderation teams leverage the insights from our fact-checking partners and supports swift and accurate decisions on flagged content, regardless of the language in which the original claim was made.
- Election Speaker Series. To further promote election integrity, and to inform our approach to EU country-level and regionally relevant elections, we invite suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. Our recent Election Speaker Series included presentations from the following organisations:
- Albania: Internews Kosova (Kallxo)
- Belarus: Belarusian Investigative Center
- Germany: Deutsche Presse-Agentur (dpa)
- Greenland: Logically Facts
- Kosovo: Internews Kosova (Kallxo)
- Poland: Demagog
- Portugal: Poligrafo
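As a minimal illustration of the fact-checking repository mentioned above, the Python sketch below matches an incoming claim against previously fact-checked claims using simple string similarity. The schema, sample entry, partner name, and threshold are all assumptions for illustration; a production system would use far more robust matching.

```python
# Minimal sketch of a fact-check repository lookup (illustrative only; the
# schema and threshold here are assumptions, not a production design).
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class FactCheck:
    claim: str    # the claim as originally fact-checked
    verdict: str  # e.g. "false", "misleading"
    source: str   # fact-checking partner that assessed it

REPOSITORY = [
    FactCheck("drinking bleach cures flu", "false", "ExamplePartner"),
    # ...previously fact-checked claims...
]

def lookup(claim: str, threshold: float = 0.85) -> FactCheck | None:
    """Return the closest previously fact-checked claim, if similar enough."""
    best, best_score = None, 0.0
    for fc in REPOSITORY:
        score = SequenceMatcher(None, claim.lower(), fc.claim.lower()).ratio()
        if score > best_score:
            best, best_score = fc, score
    return best if best_score >= threshold else None

match = lookup("Drinking bleach cures the flu")
if match:
    print(f"Previously rated '{match.verdict}' by {match.source}")
```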
Moderation teams working in dedicated misinformation queues receive enhanced training on our misinformation policies and have access to the tools and measures described above, enabling them to make accurate content decisions across Europe and globally.
We place considerable emphasis on proactive detection to remove violative content and reduce our human moderators' exposure to potentially distressing content. Before content is posted to our platform, it is reviewed by automated moderation technologies that identify content or behaviour that may violate our policies or For You feed eligibility standards, or that may require age-restriction or other actions. While undergoing this review, the content is visible only to the uploader.
If our automated moderation technology identifies content that is a potential violation, it will either take action against the content or flag it for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut.
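As an illustration of this triage, the sketch below routes content by model confidence: automated removal only above an assumed high-confidence bar, a human-review queue in the middle, and publication otherwise. The thresholds and return values are hypothetical, not production values.

```python
# Illustrative triage: automated removal only for the most clear-cut
# violations, otherwise escalation to human review (thresholds assumed).
def triage(model_score: float, policy: str) -> str:
    AUTO_REMOVE = 0.98   # assumed bar for clear-cut violations
    HUMAN_REVIEW = 0.60  # assumed flagging threshold
    if model_score >= AUTO_REMOVE:
        return "remove"           # automated removal
    if model_score >= HUMAN_REVIEW:
        return f"queue:{policy}"  # route to the policy's human review queue
    return "publish"              # eligible to appear on the platform

print(triage(0.99, "misinformation"))  # -> remove
print(triage(0.75, "misinformation"))  # -> queue:misinformation
```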
Some of the methods and technologies that support these efforts include the following (an illustrative sketch of each appears after the list):
- Vision-based: Computer vision models can identify objects that violate our Community Guidelines—like weapons or hate symbols.
- Audio-based: Audio clips are reviewed for violations of our Community Guidelines, supported by a dedicated audio bank and "classifiers" that help us detect audio that matches, or is a modified version of, previously violating audio.
- Text-based: Detection models review written content like comments or hashtags, using foundational keyword lists to find variations of violative text. "Natural language processing"—a type of Artificial Intelligence (AI) that can interpret the context surrounding content—helps us identify violations that are context-dependent, such as words that can be used in a hateful way but may not violate our policies by themselves. We also work with various external experts, like our fact-checking partners, to inform our keyword lists.
- Similarity-based: "Similarity detection systems" enable us to not only catch identical or highly similar versions of violative content, but other types of content that share key contextual similarities and may require additional review.
- Activity-based: Technologies that look at how accounts are being operated help us disrupt deceptive activities like bot accounts, spam, or attempts to artificially inflate engagement through fake likes or follow attempts.
- LLM-based: We're starting to use a kind of AI called "large language models" (LLMs) to scale and improve content moderation. LLMs can comprehend human language and perform highly specific, complex tasks, which can make it possible to moderate content with a higher degree of precision, consistency, and speed than human moderation alone.
- Multi-modal LLM-based: "Multi-modal LLMs" can also perform complex, highly specific tasks related to other types of content, such as visual content. For example, we can use this technology to make misinformation moderation easier by extracting specific misinformation "claims" from videos for moderators to assess directly or route to our fact-checking partners.
- Content Credentials: We launched the ability to read Content Credentials, which attach metadata to content and which we can use to automatically label AI-generated content that originated on other major platforms.
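The sketches that follow illustrate, in simplified Python, the kinds of techniques named in the list above; none of them is our production implementation, and all model calls, labels, and thresholds are assumptions. First, vision-based screening, with a stub standing in for a real object-detection model:

```python
# Illustrative vision-based screening; the detector is a stub standing in
# for a real computer-vision model, and labels/thresholds are assumptions.
from typing import Callable

Detector = Callable[[bytes], dict[str, float]]

def stub_detector(image_bytes: bytes) -> dict[str, float]:
    # A real system would run an object-detection model here.
    return {"weapon": 0.02, "hate_symbol": 0.91}

def screen_image(image_bytes: bytes, detect: Detector,
                 threshold: float = 0.9) -> list[str]:
    """Return policy labels whose detection score crosses the threshold."""
    return [label for label, score in detect(image_bytes).items()
            if score >= threshold]

print(screen_image(b"<image bytes>", stub_detector))  # -> ['hate_symbol']
```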
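For audio-based matching, real systems use robust acoustic fingerprints; the coarse amplitude-quantisation fingerprint below is a stand-in that shows how slightly modified audio can still match a bank of known violations:

```python
# Illustrative audio-bank matching; the fingerprint is a deliberately
# simple stand-in for a robust acoustic fingerprint.
import hashlib

def fingerprint(samples: list[int], bucket: int = 1000) -> str:
    # Quantise amplitudes so small modifications map to the same fingerprint.
    coarse = bytes((s // bucket) % 256 for s in samples)
    return hashlib.sha256(coarse).hexdigest()

AUDIO_BANK = {fingerprint([12000, -8000, 4000, 0])}  # known violative audio

def is_known_violation(samples: list[int]) -> bool:
    return fingerprint(samples) in AUDIO_BANK

print(is_known_violation([12400, -7600, 4100, 300]))  # True: near-duplicate
```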
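For text-based detection, the sketch below shows how a foundational keyword list can catch simple variations such as leet-speak substitutions and inserted punctuation; the keyword, substitution table, and normalisation rules are illustrative:

```python
# Illustrative keyword matching with simple variation handling; the
# keyword list and substitution rules are assumptions.
import re

KEYWORDS = {"badword"}  # assumed entry from an expert-informed keyword list
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a"})

def normalise(text: str) -> str:
    text = text.lower().translate(LEET)
    return re.sub(r"[^a-z]", "", text)  # drop separators like b.a.d.w.o.r.d

def contains_keyword(text: str) -> bool:
    flat = normalise(text)
    return any(kw in flat for kw in KEYWORDS)

print(contains_keyword("b4d.w0rd"))  # True
```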
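For similarity-based detection, content can be compared in an embedding space; the vectors below are hard-coded where a learned model would normally produce them, and the cosine-similarity threshold is assumed:

```python
# Illustrative similarity detection via embedding distance; vectors and
# threshold are hard-coded assumptions.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

KNOWN_VIOLATIVE = [0.9, 0.1, 0.4]  # embedding of previously removed content
candidate = [0.88, 0.12, 0.41]     # near-duplicate re-upload

if cosine(candidate, KNOWN_VIOLATIVE) >= 0.97:  # assumed threshold
    print("flag for additional review")
```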
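For activity-based signals, one simple heuristic flags accounts whose engagement rate is implausibly high for a human; the window and limit below are hypothetical:

```python
# Illustrative activity-based signal: rate heuristic for likes and
# follows (window and limit are hypothetical).
from collections import deque
import time

class EngagementMonitor:
    def __init__(self, window_s: float = 60.0, max_actions: int = 100):
        self.window_s, self.max_actions = window_s, max_actions
        self.events: deque[float] = deque()

    def record(self, ts: float | None = None) -> bool:
        """Record one like/follow; return True if the account looks automated."""
        ts = time.time() if ts is None else ts
        self.events.append(ts)
        while self.events and self.events[0] < ts - self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_actions

monitor = EngagementMonitor()
suspicious = any(monitor.record(ts=i * 0.1) for i in range(200))  # 10 actions/s
print(suspicious)  # True: far beyond plausible human behaviour
```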
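For LLM-based moderation, a model can be prompted to return a structured policy decision; call_llm below is a stand-in for whatever model endpoint is used, and the prompt wording and output schema are assumptions:

```python
# Illustrative LLM-based moderation; `call_llm` is a stub, and the prompt
# and JSON schema are assumptions.
import json

PROMPT = """You are a content-moderation assistant.
Classify the post against this policy: {policy}
Reply as JSON: {{"violates": true/false, "reason": "..."}}
Post: {post}"""

def call_llm(prompt: str) -> str:
    # A production system would invoke a large language model here.
    return '{"violates": true, "reason": "medical misinformation"}'

def moderate(post: str, policy: str) -> dict:
    raw = call_llm(PROMPT.format(policy=policy, post=post))
    return json.loads(raw)  # structured output keeps decisions consistent

print(moderate("This common food cures all disease", "harmful misinformation"))
```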
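For multi-modal claim extraction, a multi-modal LLM would ingest video frames and audio and return checkable factual claims; the stub below approximates that step on a transcript so the routing of extracted claims is visible:

```python
# Illustrative claim extraction; the heuristic below stands in for a
# multi-modal LLM that would process frames and audio, not just text.
def extract_claims(video_transcript: str) -> list[str]:
    # Stand-in: pick out sentences that assert a checkable cure claim.
    return [s.strip() for s in video_transcript.split(".")
            if "cure" in s.lower()]

transcript = "Welcome back. This herb cures cancer. Like and follow."
for claim in extract_claims(transcript):
    print("route to fact-checking partner:", claim)
```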
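Finally, for Content Credentials, labelling can be driven by verified manifest metadata. The sketch below assumes a C2PA manifest has already been parsed and cryptographically verified into the simplified dictionary shape shown, and checks for the IPTC "trainedAlgorithmicMedia" digital source type:

```python
# Illustrative Content Credentials check; assumes the manifest has already
# been parsed and verified into this simplified dictionary shape.
def should_label_ai(manifest: dict) -> bool:
    """Label content whose verified manifest declares a generative-AI source."""
    assertions = manifest.get("assertions", [])
    return any("trainedAlgorithmicMedia" in a.get("digitalSourceType", "")
               for a in assertions)

manifest = {"assertions": [{
    "digitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
}]}
print(should_label_ai(manifest))  # True -> apply an AI-generated label
```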
Continuing to leverage the fact-checking output in this way enables us to further increase the positive impact of our fact-checking programme.