Threats observed during the electoral period: AI Forensics has been actively involved in election monitoring in 2024 and has published the following reports as outcomes of this work:
1. French Elections (Artificial Elections: Exposing the Use of Generative AI Imagery in the Political Campaigns of the 2024 French Elections): AI Forensics investigated how AI-generated images were used in French political campaigns during the 2024 European Parliament and legislative elections. In May and June 2024, we collected data from a variety of sources to build a comprehensive picture of the use of AI imagery, exploring official party websites and their social media accounts on platforms such as Facebook, Instagram, X (formerly Twitter), TikTok, YouTube, and LinkedIn.
Main threats:
The lack of transparency is alarming and raises several critical concerns. First, political parties and social media platforms are failing to adequately disclose the use of AI-generated imagery, which undermines public trust. Second, stricter content labelling is needed to protect the integrity of political campaigns and prevent the spread of misleading information. Finally, our findings underscore the necessity of reinforcing EU-wide policies on the use of generative AI in elections to safeguard democratic processes and maintain electoral integrity.
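To illustrate the kind of quantitative tallying such an audit involves, here is a minimal Python sketch. It assumes a hand-annotated set of campaign posts; the record format and field names are illustrative assumptions, not AI Forensics' actual pipeline, and the AI-imagery and disclosure flags are assumed to come from manual annotation.

```python
from collections import defaultdict

# Hypothetical record format; fields are illustrative. In practice the
# "ai_image" and "labelled" flags would come from manual annotation.
posts = [
    {"party": "Party A", "platform": "Facebook", "ai_image": True,  "labelled": False},
    {"party": "Party A", "platform": "X",        "ai_image": True,  "labelled": True},
    {"party": "Party B", "platform": "TikTok",   "ai_image": False, "labelled": False},
]

stats = defaultdict(lambda: {"ai": 0, "disclosed": 0, "total": 0})
for post in posts:
    key = (post["party"], post["platform"])
    stats[key]["total"] += 1
    if post["ai_image"]:
        stats[key]["ai"] += 1
        if post["labelled"]:
            stats[key]["disclosed"] += 1

for (party, platform), s in sorted(stats.items()):
    ai_share = s["ai"] / s["total"]
    disclosed = s["disclosed"] / s["ai"] if s["ai"] else 0.0
    print(f"{party} / {platform}: {ai_share:.0%} of posts use AI imagery, "
          f"{disclosed:.0%} of those disclose it")
```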
2. TikTok Search: Analyzing TikTok’s “Others searched for” Feature: This investigation examines TikTok’s impact on public discourse among young users in Germany, focusing on the influence of search suggestions. Studying TikTok’s “Others searched for” feature helps us understand its influence on political discourse, especially in the context of the 2024 elections. Conducted by AI Forensics in collaboration with the interface TikTok Audit Team, the study aimed to determine whether TikTok’s algorithm promotes misleading or sensational content. The feature suggests search terms to users, which can lead them to questionable information or politically biased content, posing significant risks to public discourse.
Main threats: The study highlights that TikTok’s “Others Searched For” feature can distort reality for young users, especially during critical electoral periods. This distortion can negatively affect public political discourse, making it imperative for social media platforms to implement more robust oversight of and transparency around their algorithms, including less prominent algorithmic features such as search suggestions. Our findings emphasize the need for improved measures to ensure that search suggestions do not perpetuate misinformation or political bias, thus contributing to a more informed and balanced media environment.
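For readers who want to see the shape of such an audit, the Python sketch below shows one way to score collected suggestions. The collection function is a hypothetical placeholder (TikTok exposes no public API for “Others searched for”, so real data would come from instrumented clients or data donations), and the keyword list is a toy heuristic, not the study’s classification method.

```python
def fetch_suggestions(seed_query: str, profile: str) -> list[str]:
    """Placeholder: the "Others searched for" terms shown to a test profile.

    Hypothetical; replace with real collection (instrumented clients,
    data donations, etc.).
    """
    raise NotImplementedError

# Toy keyword heuristic (German examples); a real study would rely on
# human coding or a trained classifier instead.
FLAGGED_TERMS = {"skandal", "verbot", "geheim"}

def sensational_rate(suggestions: list[str]) -> float:
    """Share of suggested queries containing a flagged keyword."""
    if not suggestions:
        return 0.0
    hits = sum(any(t in s.lower() for t in FLAGGED_TERMS) for s in suggestions)
    return hits / len(suggestions)
```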
3. Chatbot (s)elected moderation: Measuring the Moderation of Election-Related Content Across Chatbots, Languages and Electoral Contexts
This report evaluates and compares the effectiveness of the election-related moderation safeguards deployed by AI chatbots in different scenarios. In particular, we investigate the consistency with which electoral moderation is triggered, depending on (i) the chatbot, (ii) the language of the prompt, (iii) the electoral context, and (iv) the interface.
Main threats: The effectiveness of the moderation safeguards deployed by Copilot, ChatGPT, and Gemini varies widely. Gemini’s moderation was the most consistent, with a moderation rate of 98%. For the same sample, Copilot’s rate was around 50%, while the OpenAI web version of ChatGPT applies no additional election-related moderation. Moderation is strictest in English and highly inconsistent across languages: when prompting Copilot about the EU elections, the moderation rate was highest for English (90%), followed by Polish (80%), Italian (74%), and French (72%), and it falls below 30% for Romanian, Swedish, Greek, and Dutch, and even for German (28%), despite German being the EU’s second most spoken language. For a given language, asking analogous prompts about the EU and US elections can yield substantially different moderation rates, which confirms the inconsistency of the process. Moderation is also inconsistent between web and API versions: the electoral safeguards on the web version of Gemini have not been implemented in the API version of the same tool.
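A measurement of this kind reduces to replaying the same translated prompts against each (chatbot, language, electoral context, interface) cell and counting how often a moderated response comes back. The Python sketch below shows that core loop under stated assumptions: the refusal markers and the `ask` callables are illustrative stand-ins, not the report’s actual tooling or the chatbots’ real APIs.

```python
# Illustrative refusal/redirect markers; real markers would have to be
# derived from the chatbots' observed election-safeguard responses.
REFUSAL_MARKERS = ["can't help with questions about elections",
                   "please consult official sources"]

def is_moderated(response: str) -> bool:
    """Crude heuristic: a response carrying a refusal marker counts as moderated."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def moderation_rate(ask, prompts: list[str]) -> float:
    """Fraction of prompts whose responses trigger the moderation heuristic.

    `ask` is any callable prompt -> response, so identical prompt sets can
    be replayed against the web and API versions of each chatbot, and the
    resulting rates compared across languages and electoral contexts.
    """
    return sum(is_moderated(ask(p)) for p in prompts) / len(prompts)
```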
4. No Embargo in Sight: Meta lets pro-Russian propaganda flood the EU: This investigation sheds light on a significant loophole in the moderation of political advertisements on Meta platforms, highlighting systemic failures just as the European Union heads into crucial parliamentary elections. Our findings uncover a sprawling pro-Russian influence operation that exploits these moderation failures, risking the integrity of democratic processes in Europe.
Main threats:
Widespread non-compliance: less than 5% of undeclared political ads are caught by Meta’s moderation system.
Ineffective moderation: 60% of ads moderated by Meta do not adhere to Meta’s own guidelines concerning political advertising.
Significant reach: a specific pro-Russian propaganda campaign reached over 38 million users in France and Germany, with most ads not being identified as political in a timely manner.
Rapid adaptation: the influence operation has adeptly adjusted its messaging to major geopolitical events to further its narratives.
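As a hedged illustration of how such non-compliance can be probed from the outside, the Python sketch below queries Meta’s public Ad Library API (the ads_archive endpoint). Parameter names follow the public documentation as we understand it, the country-list encoding should be verified against that documentation, and the diffing idea sketched here is a simplification, not the report’s full methodology.

```python
import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"

def search_ads(token: str, terms: str, ad_type: str) -> list[dict]:
    """Fetch one page of Ad Library results for a search term.

    ad_type="ALL" returns ads regardless of how they were declared;
    ad_type="POLITICAL_AND_ISSUE_ADS" returns only declared political ads.
    """
    resp = requests.get(
        AD_ARCHIVE_URL,
        params={
            "access_token": token,
            "search_terms": terms,
            "ad_reached_countries": '["FR","DE"]',  # check encoding against the API docs
            "ad_type": ad_type,
            "fields": "id,page_name,ad_creative_bodies",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# One naive probe: ads matching a political search term that do not appear
# in the declared political set are candidates for undeclared political ads.
# (A real audit needs pagination, deduplication, and human review.)
```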