
Report March 2025
Submitted
Executive summary
Democracy Reporting International's (DRI) Digital Democracy Programme Unit focuses on identifying trends in online discourse and online harms during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring and formulates policy recommendations for various stakeholders in the technology and society ecosystem, including lawmakers, tech platforms, and civil society organizations.
Key Findings and Actions during the Reporting Period:
1. Research into Murky Accounts: DRI’s Digital Democracy Programme Unit researched how VLOPs and VLOSEs address tactics like inauthentic accounts, fake followers, and political impersonation. We published eight reports on "Murky Accounts" —accounts of questionable affiliation that present themselves as official government, politician, or party accounts when, in fact, they are not. Murky accounts do not declare themselves as fan or parody pages, and can be interpreted as attempts to promote, amplify, and/or advertise political content.
We identified the systematic use of Murky Accounts in the 2024 European Parliament, French, and Romanian elections. We recommended that TikTok strengthen policies to prevent fan account abuse, improve enforcement of their policies to identify and address impersonation, require verified badges for political accounts, and enforce consistent guidelines, including pre-election reviews.
2. Social Media Monitoring (SMM): DRI also conducted detailed analyses of online discourse during the EP Elections in eight member states uncovering instances of toxic speech and disinformation threats targeting historically marginalised groups and the integrity of elections. Our techniques included keyword searches, sentiment analysis, and advanced computational methods to glean a nuanced understanding of online discourse during both electoral periods.
3. AI System Analysis and Recommendations: DRI continued its monitoring of generative AI risks, particularly from LLM-powered chatbots, through regular audits assessing their impact on elections. While some genAI systems (e.g., Gemini) implemented safeguards, others (e.g., Copilot, ChatGPT-4) still generated misleading electoral information, highlighting the need for consistent safeguards. We also track the use of AI-generated content during the 2024 EP Elections and formulated policy recommendations to address potential misuses. During the reporting period we also published a guide on auditing approaches for LLM risks and a report analysing chatbot alignment with human rights-based pluralism.
4. Policy Recommendations, Engagement and Advocacy: DRI actively participated in the Rapid Response System under the Code of Conduct on Disinformation, advocating for the robust implementation of the DSA’s risk mitigation framework and data access provisions. We worked directly with platforms to develop strategies for minimising online harms and pushed for greater transparency in content recommendation and moderation practices. Additionally, we engaged with EU stakeholders through roundtables, workshops, and conferences, fostering awareness and action on the DSA and broader digital governance issues.