Democracy Reporting International

Report March 2026


Executive summary 

Democracy Reporting International's (DRI) Digital Democracy Programme Unit monitors threats to information integrity during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring, audits AI-powered chatbots for their impact on political online discourse, and formulates policy recommendations for stakeholders across the technology and society ecosystem, including lawmakers, tech platforms, and civil society organisations. In 2025 we continued our active work as signatories of the Code, including co-chairing the Taskforce on Elections. 

Exposed widespread inauthentic behaviour on TikTok: DRI continued its research into murky accounts during the reporting period, focusing on elections in Germany, Poland, and Romania. In 2025, using authenticity indicators and account-level metrics, we identified and reported 482 inauthentic accounts through the Rapid Response System, documenting how these accounts impersonate political actors, amplify partisan messaging, and distort perceptions of political support. Our findings show that murky accounts remain a persistent problem on platforms, particularly during elections, and reinforce the need for stronger detection and enforcement against inauthentic behaviour. 

Advocated for data access for civil society researchers: DRI continued its work on platform transparency and accountability by examining the nature and limits of researcher access to platform data under Article 40 of the DSA. During the reporting period, we undertook litigation against X, published analyses and opinion pieces on barriers to meaningful access, engaged with more than 88 stakeholders through meetings and webinars to raise awareness, and shared practical lessons from using platform tools and pursuing legal remedies. This work aimed to clarify how existing access mechanisms function in practice and to support stronger, more consistent implementation of researcher data access. 

Generated evidence about social media’s impact on elections and political discourse:  DRI carried out social media monitoring across the Austrian, German, Polish, and Sri Lankan elections, identifying toxic narratives, disinformation risks, and harmful online speech affecting both historically marginalised groups and electoral integrity. Using methods such as keyword-based monitoring, sentiment analysis, and computational analysis of online content, we tracked how divisive narratives, discriminatory rhetoric, and polarising campaign strategies spread across platforms during these electoral periods. 
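The keyword-based monitoring and sentiment analysis mentioned above can be illustrated with a minimal sketch. The keyword list, sentiment lexicon, and scoring rules below are purely illustrative assumptions for explanation; they do not reflect DRI's actual monitoring pipeline or word lists.

```python
import re

# Illustrative keyword list and sentiment lexicon (assumptions, not DRI's lists)
ELECTION_KEYWORDS = {"election", "ballot", "fraud", "rigged", "vote"}
SENTIMENT_LEXICON = {"fraud": -2, "rigged": -3, "fair": 2, "trust": 1, "stolen": -3}

def tokenize(text: str) -> list[str]:
    """Lowercase and split a post into word tokens."""
    return re.findall(r"[a-zà-ÿ']+", text.lower())

def matches_keywords(text: str) -> bool:
    """Keyword-based monitoring: flag posts mentioning tracked election terms."""
    return bool(ELECTION_KEYWORDS & set(tokenize(text)))

def sentiment_score(text: str) -> int:
    """Lexicon-based sentiment: sum word valences; strongly negative scores
    can surface candidates for toxic or divisive content."""
    return sum(SENTIMENT_LEXICON.get(tok, 0) for tok in tokenize(text))

def monitor(posts: list[str]) -> list[tuple[str, int]]:
    """Return keyword-flagged posts with sentiment score, most negative first."""
    flagged = [(p, sentiment_score(p)) for p in posts if matches_keywords(p)]
    return sorted(flagged, key=lambda item: item[1])
```

In practice such a lexicon pass would only be a first filter, with flagged posts passed to human review or more sophisticated computational analysis.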

Raised awareness about AI-related election risks and advocated for the transparency of AI-generated content: DRI audited the most popular chatbots during the German federal election and identified both inaccurate electoral information and unlabelled generative AI content in political communication. DRI also engaged in policy discussions under the EU Code of Practice on Transparency of AI-Generated Content and provided input to the European Commission on transparency, labelling, and accountability standards for generative AI. 

Increased engagement and knowledge exchange on platform transparency and accountability:  Our work during the reporting period remained closely tied to the Code of Conduct framework through DRI’s co-chairing role in the Elections Working Group and our use of the Rapid Response System. We also convened more than six roundtables, webinars, and conferences with researchers, regulators, civil society groups, and other stakeholders on election integrity, platform accountability, and data access. 


Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1, Measure 14.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
Murky accounts as a systemic threat to elections in the EU  
 
In 2025, DRI continued its systematic reporting on murky accounts as part of efforts to address impermissible manipulative behaviours and practices across online services.  
DRI monitored three elections in 2025, in Romania, Poland, and Germany. In these elections, we identified networks of impersonation accounts, fake engagement ecosystems, and inauthentic amplification on TikTok.  
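Account-level screening of this kind can be sketched as a simple rule-based scorer. The field names, indicators, and thresholds below are illustrative assumptions for exposition only and do not reflect DRI's actual detection criteria.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Illustrative account-level metrics; field names are assumptions
    handle: str
    created_days_ago: int
    followers: int
    following: int
    posts_per_day: float
    mimics_official_name: bool  # e.g. near-duplicate of a party's handle

def authenticity_flags(acc: Account) -> list[str]:
    """Rule-based authenticity indicators (thresholds are illustrative)."""
    flags = []
    if acc.created_days_ago < 30 and acc.posts_per_day > 50:
        flags.append("new-account-high-volume")
    if acc.following > 0 and acc.followers / acc.following < 0.01:
        flags.append("follow-spam-ratio")
    if acc.mimics_official_name:
        flags.append("impersonation-pattern")
    return flags

def is_murky(acc: Account, min_flags: int = 2) -> bool:
    """Refer an account for manual review when several indicators co-occur."""
    return len(authenticity_flags(acc)) >= min_flags
```

Requiring several indicators to co-occur before flagging reduces false positives; any such automated screen would still feed into human review before an account is reported.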
 
A list of all Murky Accounts reports can be found in QRE 14.2.1.  
 
Research on Election Information Integrity and Policy Recommendations 
 
Through DRI’s activities across Europe and beyond in 2025, we continued to identify trends in online discourse and detect instances of threats to information integrity, including disinformation, hate speech, and toxic content. The following is a list of DRI’s 2025 efforts to identify impermissible online content, behaviours, and practices relevant to Commitment 14, alongside the policy measures recommended to mitigate their spread. In addition to European electoral contexts, DRI also conducted social media monitoring in South Asia and Africa: 
 

Data Access  
 
To support effective implementation of Article 40 of the Digital Services Act, DRI produced a series of policy analyses examining regulatory gaps, researcher access barriers, and platform transparency obligations.  
This work was also informed by our first case against X concerning access to German election data, which highlighted the practical obstacles researchers continue to face when seeking access to publicly available platform data. Together, these publications provide legal and operational recommendations to strengthen access to platform data and enable independent scrutiny of systemic online risks: 
 
 
Interactive tools hosted on the Digital Democracy Monitor Knowledge Hub presented key findings on platform obligations, enforcement pathways, and implementation gaps in a more accessible format, supporting better understanding of online election risks and possible responses: