Democracy Reporting International

Report March 2025

Executive summary 
Democracy Reporting International's (DRI) Digital Democracy Programme Unit focuses on identifying trends in online discourse and online harms during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring and formulates policy recommendations for various stakeholders in the technology and society ecosystem, including lawmakers, tech platforms, and civil society organisations.

Key Findings and Actions during the Reporting Period: 

1. Research into Murky Accounts: DRI’s Digital Democracy Programme Unit researched how very large online platforms (VLOPs) and very large online search engines (VLOSEs) address tactics such as inauthentic accounts, fake followers, and political impersonation. We published eight reports on "Murky Accounts": accounts of questionable affiliation that present themselves as official government, politician, or party accounts when, in fact, they are not. Murky accounts do not declare themselves as fan or parody pages, and can be interpreted as attempts to promote, amplify, and/or advertise political content.
We identified the systematic use of Murky Accounts in the 2024 European Parliament, French, and Romanian elections. We recommended that TikTok strengthen its policies to prevent fan account abuse, improve enforcement of its policies to identify and address impersonation, require verified badges for political accounts, and enforce consistent guidelines, including pre-election reviews.

2. Social Media Monitoring (SMM): DRI also conducted detailed analyses of online discourse during the EP Elections in eight member states, uncovering instances of toxic speech and disinformation threats targeting historically marginalised groups and the integrity of elections. Our techniques included keyword searches, sentiment analysis, and advanced computational methods to gain a nuanced understanding of online discourse during these electoral periods (an illustrative sketch of this kind of filtering and scoring follows this list).

3. AI System Analysis and Recommendations: DRI continued its monitoring of generative AI risks, particularly from chatbots powered by large language models (LLMs), through regular audits assessing their impact on elections. While some genAI systems (e.g., Gemini) implemented safeguards, others (e.g., Copilot, ChatGPT-4) still generated misleading electoral information, highlighting the need for consistent safeguards. We also tracked the use of AI-generated content during the 2024 EP Elections and formulated policy recommendations to address potential misuses. During the reporting period we also published a guide on auditing approaches for LLM risks and a report analysing chatbot alignment with human rights-based pluralism.

4. Policy Recommendations, Engagement and Advocacy: DRI actively participated in the Rapid Response System under the Code of Conduct on Disinformation, advocating for the robust implementation of the DSA’s risk mitigation framework and data access provisions. We worked directly with platforms to develop strategies for minimising online harms and pushed for greater transparency in content recommendation and moderation practices. Additionally, we engaged with EU stakeholders through roundtables, workshops, and conferences, fostering awareness and action on the DSA and broader digital governance issues. 
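The sketch below is a minimal, purely illustrative example of the kind of keyword filtering and lexicon-based sentiment scoring mentioned in item 2. The keywords, lexicon, and sample posts are hypothetical placeholders and do not reproduce DRI's actual monitoring pipeline or data.

```python
# Illustrative sketch only: a toy keyword filter and lexicon-based sentiment
# score over a list of posts. All keywords, lexicon entries, and sample posts
# below are hypothetical placeholders, not DRI's actual monitoring setup.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Hypothetical election-related keywords and a tiny sentiment lexicon.
KEYWORDS = {"election", "ballot", "polling", "fraud"}
LEXICON = {"corrupt": -1.0, "fraud": -1.0, "rigged": -1.0, "fair": 1.0, "secure": 1.0}


def tokens(text: str) -> list[str]:
    """Lowercase, lightly cleaned word tokens of a post."""
    return [t.strip(".,!?").lower() for t in text.split()]


def matches_keywords(post: Post) -> bool:
    """Keep only posts that mention at least one monitored keyword."""
    return bool(set(tokens(post.text)) & KEYWORDS)


def sentiment_score(post: Post) -> float:
    """Average lexicon polarity of the post's tokens (0.0 if none match)."""
    hits = [LEXICON[t] for t in tokens(post.text) if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0


if __name__ == "__main__":
    sample = [
        Post("user_a", "The election was rigged and the count is fraud!"),
        Post("user_b", "Polling stations were secure and the process looked fair."),
    ]
    for post in filter(matches_keywords, sample):
        print(f"{post.author}: sentiment={sentiment_score(post):+.2f}")
```

In practice, a lexicon this small only illustrates the mechanics; production monitoring typically combines curated keyword lists with model-based classifiers and human review.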


Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1, Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
Throughout 2024, DRI’s Digital Democracy Programme Unit continued its monitoring of generative AI, conducting regular audits to assess the risks LLM-powered chatbots pose to elections and voters. We also provided guidance on the AI Act, which recently entered into force, analysing its implications for elections and democracy.
 
We tracked the use of AI-generated content by political parties in the EU during the EP Elections, compiling our findings into a report and dashboard. These insights were later presented at the European Parliament to EU stakeholders and Microsoft, where we advocated for stronger policies to prevent potential misuse. 
 
During the reporting period, we also published a guide evaluating different auditing approaches for identifying and mitigating risks associated with large language models (LLMs), supporting researchers in conducting similar analyses. Additionally, we released a report assessing how well LLM responses align with our human rights-based definition of pluralism and their representation of different political perspectives. 
 
Building on our findings on chatbots’ ability to deliver accurate election-related information across multiple languages and regions, we developed policy recommendations examining the role of AI-generated content in the 2024 European elections. 
 
Below is a list of DRI reports published during the reporting period related to these efforts: 
 
 
Our research over the past year indicates that, following our risk notifications, some of the most widely used publicly available generative AI systems (e.g., Gemini) have implemented safeguards to mitigate misinformation, such as refusing to answer election-related questions, while others, including Copilot and ChatGPT-4/4o, have yet to adopt a consistent approach. As a result, widely used language models continue to generate false or partially inaccurate electoral information.
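As a purely illustrative sketch of how such an election-information audit can be run programmatically, the snippet below sends hypothetical multilingual voting questions to a chatbot through an OpenAI-compatible chat API and prints the answers for expert review. The model name, questions, and review step are assumptions for illustration and do not reproduce DRI's actual audit protocol.

```python
# Illustrative sketch only: querying a chatbot with election-related questions
# in several languages. Model name and prompts are hypothetical; a real audit
# would log answers and have country experts check them against official
# electoral information.

from openai import OpenAI  # assumes the `openai` package and an API key in OPENAI_API_KEY

client = OpenAI()

# Hypothetical multilingual prompts about electoral procedures.
QUESTIONS = [
    ("en", "How do I register to vote in the 2024 European Parliament elections?"),
    ("de", "Wie kann ich per Briefwahl an der Europawahl 2024 teilnehmen?"),
    ("ro", "Unde pot vota la alegerile europarlamentare din 2024?"),
]


def ask(question: str, model: str = "gpt-4o") -> str:
    """Send one question to the chat API and return the plain-text answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for lang, question in QUESTIONS:
        answer = ask(question)
        # In a real audit, answers would be stored and reviewed by experts,
        # not just printed to the console.
        print(f"[{lang}] {question}\n{answer}\n")
```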