AI Forensics

Report March 2025

Submitted
As the Code evolves and Signatories strengthen their collaboration within a shared framework, AI Forensics remains committed to its two core areas: algorithmic auditing and active participation in key working groups. In 2024, as the Code of Practice transitions to a Code of Conduct, we continue our engagement in the Generative AI and Elections Monitoring subgroups within the Crisis Response framework. In the lead-up to the 2024 European elections, we conducted extensive research on the impact of emerging technologies on electoral integrity.
We look forward to further collaboration with other Signatories, the European Commission, ERGA, and EDMO, reinforcing accountability and transparency in the digital ecosystem.

Download PDF

Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Threats observed during the electoral period: [suggested character limit 2000 characters]. AI Forensics has been actively involved in election monitoring in 2024, with the following reports published as its outcomes:

1. French Elections (Artifi cial Elections: Exposing the Use of Generative AI Imagery in the Political Campaigns of the 2024 French Elections) AI Forensics investigated how AI-generated images were used in French political campaigns during the 2024 European Parliament and legislative elections. In May and June of 2024, we collected data from a variety of sources to get a comprehensive look at the use of AI imagery. We explored offi cial party websites and their social media accounts on platforms such as Facebook, Instagram, X (formerly Twitter), TikTok, YouTube, and LinkedIn.
Main threats:
The lack of transparency is alarming and highlights several critical concerns. Firstly, political parties and social media platforms are failing to adequately disclose the use of AI-generated imagery, which undermines public trust. Additionally, there is a pressing need for stricter content labelling to ensure the integrity of political campaigns and prevent the spread of misleading information. Finally, our fi ndings underscore the necessity of reinforcing EU-wide policies on the use of generative AI in elections to safeguard democratic processes and maintain electoral integrity.

2. TikTok Search: Analyzing TikTok´s “Others searched for” Feature: TikTok’s impact on public discourse among young users in Germany, focusing on the infl uence of search suggestions. This investigation on TikTok “Others searched for”; feature helps to understand its infl uence on political discourse, especially in the context of the 2024 elections. Conducted in collaboration with AI Forensics and interface TikTok Audit Team, this study aimed to determine if TikTok´s algorithm promotes misleading or sensational content. This feature suggests search terms to users, which could potentially lead them to questionable information or politically biased content, posing signifi cant risks to public discourse.
Main threats: The study highlights that TikTok's "Others Searched For" feature can distort reality for young users, especially during critical electoral periods. This distortion can negatively affect public political discourse, making it imperative for social media platforms to implement more robust oversight and transparency on their algorithms, including on less prominent algorithmic features such as search suggestions.. Our fi ndings emphasize the need for improved measures to ensure that search suggestions do not perpetuate misinformation or political bias, thus contributing to a more informed and balanced media environment.

3. Chatbot (s)elected moderation: Measuring the Moderation of Election-Related Content Across Chatbots, Languages and Electoral Contexts
This report evaluates and compares the effectiveness of these safeguards in different scenarios. In particular, we investigate the consistency with which electoral moderation is triggered, depending on (i) the chatbot, (ii) the language of the prompt, (iii) the electoral context, and (iv) the interface.
Main threats: The effectiveness of the moderation safeguards deployed by Copilot, ChatGPT, and Gemini is widely different. Gemini's moderation was the most consistent, with a moderation rate of 98%. For the same sample on Copilot, the rate was around 50%, while on the OpenAI web version of ChatGPT, there is no additional election-related moderation. Moderation is strictest in English and highly inconsistent across languages. When prompting Copilot about EU Elections, the moderation rate was the highest for English (90%), followed by Polish (80%), Italian (74%), and French (72%). It falls below 30% for Romanian, Swedish, Greek, or Dutch, and even for German (28%) despite it being the EU’s second most spoken language. For a given language, when asking the analogous prompts for both the EU and the US elections, the moderation rate can vary substantially. This confi rms the inconsistency of the process. Moderation is inconsistent between the web and API versions. The electoral safeguards on the web version of Gemini have not been implemented on the API version of the same tool.

4. No Embargo in Sight: Meta leds pro-Russian propaganda fl ood the EU: This investigation sheds light on a signifi cant loophole in the moderation of political advertisements on Meta platforms, highlighting systemic failures just as the European Union heads into crucial parliamentary elections. Our fi ndings uncover a sprawling pro-Russian infl uence operation that exploits these moderation failures, risking the integrity of democratic processes in Europe.
Main threats: Widespread Non-compliance: Less than 5% of undeclared political ads are caught by Meta's moderation system.Ineffective Moderation: 60% of ads moderated by Meta do not adhere to their own guidelines concerning political advertising. Signifi cant Reach: A specifi c pro-Russian propaganda campaign reached over 38 million users in France and Germany, with most ads not being identifi ed as political in a timely manner. Rapid Adaptation: The infl uence operation has adeptly adjusted its messaging to major geopolitical events to further its narratives.
Mitigations in place
Policies and Terms and Conditions
Our analysis on the French elections highlights several areas where policies and terms and conditions should respond to emerging threats related to generative AI in political campaigns:

1. Transparency Requirements: There is a critical need for greater transparency from political parties and social media platforms regarding the use of AI-generated imagery. Current policies must enforce clear disclosure when synthetic content is used in campaigns, ensuring the public is fully informed about AI-altered visuals. This should include a requirement for political actors to label AI-generated materials and for platforms to fl ag such content when shared on social media.
2. Stricter Content Labelling: To combat the spread of misleading or deceptive AI-generated content, platforms must enhance their content moderation policies. Automated tools and human oversight should work in tandem to identify and remove manipulated or misleading images that distort political discourse. Policies should also include stringent checks to ensure that AI-generated content used in political contexts complies with electoral laws and ethical standards.
3. Translating Codes of Conduct into regulatory obligations: The fi ndings underline the necessity of strengthening EU-wide policies on the use of generative AI in elections. Current frameworks, like the Code of Conduct for the 2024 European Parliamentary Elections, should be reinforced with mandatory regulations, penalties for violations, and robust enforcement mechanisms. This will safeguard democratic processes from the undue infl uence of misleading, AI-generated content and maintain electoral integrity across member states.
4. Amplification of Misinformation: Generative AI has been used to produce content that spreads misinformation, emotionally manipulates voters, and supports extremist ideologies. The ease and low cost of creating such content exacerbate the risk of misleading narratives dominating electoral campaigns.

Our report on TikTok´s “Others Searched for” Feature suggests several solutions to address the threats:

1. Stronger Oversight to prevent algorithmic harms: Social media platforms, especially TikTok, should strengthen their content moderation systems to prevent misleading or biased search suggestions. This includes actively identifying and removing dog whistles, misinformation, and content designed to manipulate users' political views
2. Transparency in Algorithms: Platforms must be more transparent about how their algorithms generate search suggestions. Clear policies are needed to explain how suggestions are ranked, especially during election periods, to ensure that users aren't steered toward specifi c political narratives or parties.
3. Reducing Political Bias: TikTok should implement safeguards to ensure that search suggestions do not disproportionately promote one political party or viewpoint. By doing so, they can help foster a more balanced media environment that avoids distorting electoral discourse.

Our report on “Chatbot (s)elected moderationbsuggests the following solutions to address the threats posed by chatbot moderation and misinformation in sensitive contexts such as elections:

1. Consistency in Moderation: Platforms must ensure that chatbot moderation mechanisms are applied uniformly across all languages and geographies, preventing gaps in protection for non-English users and elections in various regions.
2. Transparency of Moderation Systems: Platforms should publish clear documentation explaining the design, implementation, and functioning of their moderation systems, helping users and researchers understand how content is managed and ensuring safeguards are in place.
3. Accountability through External Scrutiny: Introducing research APIs that allow third parties to test and scrutinize chatbot moderation layers is essential for improving accountability. This would enable external experts to assess the effectiveness of the moderation mechanisms and identify potential biases or inconsistencies.
4. Improved Moderation for Sensitive Prompts: Platforms should develop robust safeguards for sensitive topics, such as elections, ensuring that chatbots do not spread harmful misinformation or propaganda. Enhanced moderation must be implemented systematically across all contexts.