AI Forensics

Report March 2025

As the Code evolves and Signatories strengthen their collaboration within a shared framework, AI Forensics remains committed to its two core areas: algorithmic auditing and active participation in key working groups. As the Code of Practice transitions into a Code of Conduct, we continue our engagement in the Generative AI and Elections Monitoring subgroups within the Crisis Response framework. In the lead-up to the 2024 European elections, we conducted extensive research on the impact of emerging technologies on electoral integrity.
We look forward to further collaboration with other Signatories, the European Commission, ERGA, and EDMO, reinforcing accountability and transparency in the digital ecosystem.


Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment:
Measure 28.1 Measure 28.2 Measure 28.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
AI Forensics continues to strengthen its multidisciplinary, socio-technical research approach with a dedicated team of 17 members. Our team actively participates in academic discussions, including Winter and Summer Schools at Amsterdam universities, and maintains a highly collaborative approach, working closely with research organizations, civil society, and media partners.
In 2024, AI Forensics led a collaborative effort with civil society organizations, scholars, and media to analyze algorithm-driven content dissemination across YouTube, TikTok, and Microsoft Copilot during the EU elections. This initiative produced critical reports exposing the role of recommendation systems in shaping the electoral landscape.
Our research on AI-generated imagery during the EU and French elections uncovered 51 instances of unlabeled AI-generated images, which often amplified anti-EU and anti-immigrant narratives. Additionally, in partnership with SNV, we assessed misleading TikTok search suggestions that distorted election-related information.
In collaboration with Nieuwsuur, we investigated AI chatbot responses to political campaign strategy prompts in the Netherlands. The follow-up report analyzed the effectiveness of content moderation across different chatbots, evaluating how electoral safeguards varied based on factors such as platform, language, electoral context, and interface.