Democracy Reporting International

Report March 2026

Executive summary 

Democracy Reporting International's (DRI) Digital Democracy Programme Unit monitors threats to information integrity during political events and electoral periods across Europe and beyond. Our Digital Democracy team conducts social media monitoring, audits AI-powered chatbots for their impact on political online discourse, and formulates policy recommendations for various stakeholders in the technology and society ecosystem, including lawmakers, tech platforms, and civil society organisations. During 2025 we continued our active work as signatories of the Code, among other things co-chairing the Taskforce on Elections. 

Exposed widespread inauthentic behaviour on TikTok: DRI continued its research into murky accounts during the reporting period, with a focus on elections in Germany, Poland, and Romania. In 2025, using authenticity indicators and account-level metrics, we identified and reported 482 inauthentic accounts through the Rapid Response System, documenting how these accounts impersonated political actors, amplified partisan messaging, and distorted perceptions of political support. Our findings show that murky accounts remain a persistent problem on platforms, particularly during elections, and reinforce the need for stronger detection and enforcement against inauthentic behaviour. 
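Account-level screening of this kind can be illustrated with a simple heuristic risk score. The sketch below is a minimal assumption-laden illustration, not DRI's actual methodology: all field names, weights, thresholds, and the watchlist are hypothetical.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

# Hypothetical watchlist of political figures; a real one would be curated per election.
POLITICIAN_NAMES = ["alice weidel", "bjoern hoecke"]

@dataclass
class Account:
    handle: str
    days_since_creation: int
    followers: int
    following: int
    posts_per_day: float

def name_similarity(handle: str) -> float:
    """Highest fuzzy-match ratio between the handle and any watched name."""
    return max(SequenceMatcher(None, handle.lower(), n).ratio()
               for n in POLITICIAN_NAMES)

def authenticity_risk(acct: Account) -> float:
    """Combine simple account-level indicators into a 0-1 risk score.
    Weights and thresholds are illustrative assumptions only."""
    score = 0.0
    if acct.days_since_creation < 90:                  # newly created account
        score += 0.25
    if acct.followers > 10 * max(acct.following, 1):   # lopsided follower graph
        score += 0.2
    if acct.posts_per_day > 20:                        # unusually high posting cadence
        score += 0.25
    score += 0.3 * name_similarity(acct.handle)        # possible impersonation signal
    return min(score, 1.0)
```

In practice, accounts scoring above a review threshold would be queued for manual verification before any report is filed, since individual indicators (e.g. a new account or fast growth) are not proof of inauthenticity on their own.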

Advocated for data access for civil society researchers: DRI continued its work on platform transparency and accountability by examining the nature and limits of researcher access to platform data under Article 40 of the DSA. During the reporting period, we undertook litigation against X, published analyses and opinion pieces on barriers to meaningful access, engaged with more than 88 stakeholders through meetings and webinars to raise awareness, and shared practical lessons from using platform tools and pursuing legal remedies. This work aimed to clarify how existing access mechanisms function in practice and to support stronger, more consistent implementation of researcher data access. 

Generated evidence about social media’s impact on elections and political discourse:  DRI carried out social media monitoring across the Austrian, German, Polish, and Sri Lankan elections, identifying toxic narratives, disinformation risks, and harmful online speech affecting both historically marginalised groups and electoral integrity. Using methods such as keyword-based monitoring, sentiment analysis, and computational analysis of online content, we tracked how divisive narratives, discriminatory rhetoric, and polarising campaign strategies spread across platforms during these electoral periods. 
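The monitoring methods named above can be sketched in miniature. The following is a hedged toy example of keyword-based flagging combined with a lexicon sentiment score; the term lists are invented placeholders, not DRI's actual lexicons, which would be curated per election and language.

```python
import re
from collections import Counter

# Placeholder lexicons; real monitoring uses curated, election-specific term lists.
TOXIC_TERMS = {"traitor", "invasion", "enemy"}
NEGATIVE_TERMS = {"corrupt", "failed", "disaster", "lie"}
POSITIVE_TERMS = {"hope", "fair", "strong", "trust"}

def tokenize(text: str) -> list[str]:
    """Lowercase and split into word tokens (includes German umlauts)."""
    return re.findall(r"[a-zäöüß]+", text.lower())

def classify_post(text: str) -> dict:
    """Flag posts containing toxic keywords and compute a crude
    lexicon-based sentiment score (positive hits minus negative hits)."""
    counts = Counter(tokenize(text))
    toxic_hits = sum(counts[t] for t in TOXIC_TERMS)
    sentiment = (sum(counts[t] for t in POSITIVE_TERMS)
                 - sum(counts[t] for t in NEGATIVE_TERMS))
    return {"toxic": toxic_hits > 0, "sentiment": sentiment}
```

Applied at scale over a corpus of posts, even a simple classifier like this surfaces candidate content for human review; production pipelines would typically add stemming, multilingual lexicons, and model-based sentiment analysis on top.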

Raised awareness about AI-related election risks and advocated for the transparency of AI-generated content: DRI audited the most popular chatbots during the German federal elections and identified both inaccurate electoral information and unlabelled generative AI content in political communication. DRI also engaged in policy discussions under the EU Code of Practice on Transparency of AI-Generated Content and provided input to the European Commission on transparency, labelling, and accountability standards for generative AI. 

Increased engagement and knowledge exchange on platform transparency and accountability:  Our work during the reporting period remained closely tied to the Code of Conduct framework through DRI’s co-chairing role in the Elections Working Group and our use of the Rapid Response System. We also convened more than six roundtables, webinars, and conferences with researchers, regulators, civil society groups, and other stakeholders on election integrity, platform accountability, and data access. 

Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated

  1. Impersonation, inauthentic accounts, and political ads violating platforms' policies 

DRI used the Rapid Response System to flag coordinated inauthentic behaviour and murky political accounts on TikTok that impersonated candidates and amplified partisan content. We alerted platforms and authorities to networks distorting electoral discourse and violating platform integrity policies. 

German elections: DRI's research identified 138 inauthentic TikTok accounts operating ahead of Germany's 2025 elections, most of which promoted or impersonated actors linked to the Alternative für Deutschland and generated disproportionately high engagement compared to accounts tied to other parties. Through tactics such as impersonating figures like Alice Weidel and Björn Höcke and deploying trending hashtags, memes, and AI-generated imagery, these "murky" accounts exposed enforcement gaps under the EU Digital Services Act, even though most were removed after researcher reporting. 

Polish elections: Analysis of Poland's 2025 presidential election found that a small group of candidates produced over 57% of campaign content, while a study of 5,500+ social media posts revealed uneven reach and unusually rapid audience growth linked to certain far-right actors. Monitoring also identified 145 inauthentic TikTok accounts impersonating candidates and parties, with some profiles amassing hundreds of thousands of followers despite partial platform removals. 

Romanian elections: Ahead of Romania’s May 2025 presidential election—following the annulment of the November 2024 vote by the Constitutional Court of Romania—monitoring identified 323 murky TikTok accounts impersonating political actors, with 35.2% supporting Călin Georgescu and others mimicking figures such as Elena Lasconi, George Simion, and Nicușor Dan. While Georgescu-linked accounts were most active, pro-Simion profiles achieved the highest engagement, underscoring persistent inauthentic coordinated behaviour despite substantial post-reporting removals by TikTok.  
  2. Chatbots misinforming about elections, and prevalence of generative AI in campaigns 

Over the past two years, LLM-powered chatbots have grown rapidly and are increasingly integrated into tools like search engines, but DRI studies show they remain unreliable for providing accurate election information. In testing six chatbots for the 2025 German federal elections, only Gemini and Copilot fully refrained from giving electoral answers, while others still produced false or partisan responses, highlighting the need for chatbots to consistently direct users to official sources and avoid generating election-related content. Additionally, our analysis of over 53,000 Facebook posts linked to Alternative für Deutschland ahead of the 2025 election revealed coordinated crisis-focused messaging blaming political rivals, emotionally charged framing of violent incidents, and the use of undisclosed AI-generated imagery to amplify anti-establishment narratives. 
  3. Toxicity in political speech, disinformation narratives, and far-right online campaigning 

Our monitoring of elections in Austria, Germany, and Poland pointed to recurring risks in online political communication, including algorithmic amplification, concentrated campaign activity, and toxic rhetoric. 

Mitigations in place

Raised awareness about threats and built networks with relevant stakeholders through webinars and roundtables 

Throughout our monitoring of electoral and platform risks in 2025, we engaged with policymakers, researchers, and civil society stakeholders to raise awareness of emerging online threats and strengthen coordinated responses through webinars and roundtables. 

  • Elections, Algorithms, and Accountability: Digital Platforms and the 2025 German Federal Elections | 25.02.2025 
  • Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity | 04.09.2025 
  • TED Webinar: Safeguarding Democracy and Elections in the Age of AI | 01.10.2025 
  • DisinfoCon 2025 | 11–12.11.2025