Democracy Reporting International

Report March 2025

Executive summary 
Democracy Reporting International's (DRI) Digital Democracy Programme Unit focuses on identifying trends in online discourse and online harms during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring and formulates policy recommendations for various stakeholders in the technology and society ecosystem, including lawmakers, tech platforms, and civil society organizations. 

Key Findings and Actions during the Reporting Period: 

1. Research into Murky Accounts: DRI’s Digital Democracy Programme Unit researched how very large online platforms (VLOPs) and very large online search engines (VLOSEs) address tactics such as inauthentic accounts, fake followers, and political impersonation. We published eight reports on "Murky Accounts": accounts of questionable affiliation that present themselves as official government, politician, or party accounts when, in fact, they are not. Murky Accounts do not declare themselves as fan or parody pages and can be interpreted as attempts to promote, amplify, and/or advertise political content.  
We identified the systematic use of Murky Accounts in the 2024 European Parliament, French, and Romanian elections. We recommended that TikTok strengthen its policies to prevent fan account abuse, improve enforcement to identify and address impersonation, require verified badges for political accounts, and apply its guidelines consistently, including through pre-election reviews. 

2. Social Media Monitoring (SMM): DRI also conducted detailed analyses of online discourse during the EP Elections in eight member states, uncovering instances of toxic speech and disinformation threats targeting historically marginalised groups and the integrity of elections. Our techniques included keyword searches, sentiment analysis, and advanced computational methods to gain a nuanced understanding of online discourse throughout the electoral period. 

3. AI System Analysis and Recommendations: DRI continued its monitoring of generative AI risks, particularly from LLM-powered chatbots, through regular audits assessing their impact on elections. While some genAI systems (e.g., Gemini) implemented safeguards, others (e.g., Copilot, ChatGPT-4) still generated misleading electoral information, highlighting the need for consistent safeguards. We also tracked the use of AI-generated content during the 2024 EP Elections and formulated policy recommendations to address potential misuses. During the reporting period, we also published a guide on auditing approaches for LLM risks and a report analysing chatbot alignment with human rights-based pluralism. 

4. Policy Recommendations, Engagement and Advocacy: DRI actively participated in the Rapid Response System under the Code of Conduct on Disinformation, advocating for the robust implementation of the DSA’s risk mitigation framework and data access provisions. We worked directly with platforms to develop strategies for minimising online harms and pushed for greater transparency in content recommendation and moderation practices. Additionally, we engaged with EU stakeholders through roundtables, workshops, and conferences, fostering awareness and action on the DSA and broader digital governance issues. 


Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment:
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Murky accounts as a systemic threat to elections in the EU 
 
In 2024, DRI’s Digital Democracy Programme Unit conducted research on how VLOPs and VLOSEs address TTPs, including the creation of non-automated inauthentic accounts, the use of fake followers or subscribers, and the impersonation of political candidates and parties. 
 
As part of this effort, we published a total of eight reports analysing the phenomenon of Murky Accounts on TikTok in the context of the 2024 European Parliament Elections, the French snap elections, the Saxony and Thuringia regional elections in Germany, and the Romanian elections. We argued that these accounts pose a serious risk to civic discourse and EU elections by misleading voters, distorting perceptions of political support, and bypassing TikTok’s stricter policies on political accounts. A list of all Murky Accounts reports can be found in QRE 14.2.1. 
 
Across all reports, we recommended that TikTok strengthen its policies to prevent fan account abuse, introduce features to stop impersonation, require verified badges for political accounts in the EU, conduct pre-election reviews, and ensure consistent enforcement of its community guidelines. We met with TikTok representatives in Berlin on 12 August to discuss our findings and recommendations. 
 
Social media monitoring (SMM) of elections  
 
Through DRI’s SMM across Europe and beyond, we also identified trends in online discourse and detected instances of online harms, including disinformation, hate speech, and toxic content. The following is a list of DRI’s efforts in 2024 to detect impermissible online content, behaviours, and practices relevant to Commitment 14, as well as the policies we recommended to mitigate the spread of such content. Outside of the European context, DRI also conducts social media monitoring in South America, the Middle East, and North Africa: