Democracy Reporting International

Report September 2025

Democracy Reporting International's (DRI) Digital Democracy Programme Unit focuses on identifying trends in online discourse and potential threats to the integrity of information during political events and electoral periods across Europe and beyond. Our Digital Democracy team monitors and audits key elements of the online information space (social media platforms, consumer AI products) and formulates policy recommendations for stakeholders across the technology and society ecosystem, including lawmakers, tech platforms, and civil society organisations. During the reporting period, DRI identified multiple threats to electoral integrity in Germany, Romania, and Poland: impersonation via "murky" TikTok accounts, misinformation from chatbots, potential algorithmic bias in recommender systems, and reductions in platform commitments as the EU Code of Practice transitioned into the legally binding Code of Conduct. These threats risked misleading voters, amplifying extremist content, and undermining transparency and accountability. In response, DRI implemented a multi-pronged mitigation strategy, combining research, stakeholder engagement, advocacy, legal action, and public awareness efforts to strengthen compliance with EU digital regulations and safeguard civic discourse.

Key Findings and Actions during the Reporting Period:
  • TikTok Murky Accounts Research: DRI identified 606 inauthentic TikTok accounts across Germany, Romania, and Poland and reported them through the Election Rapid Response System. Following an internal review based on its Terms of Service and Community Guidelines, TikTok removed 414 of them. These accounts impersonated politicians or party pages and disproportionately promoted right-leaning candidates and parties. In Poland, accounts supporting Konfederacja generated 12 times more engagement than those of the second most-engaged party, highlighting the potential impact on voter perception and discourse, as well as gaps in the understanding and regulation of political propaganda and advertising.
  • Chatbots and Generative AI: Monitoring of six major chatbots revealed persistent misinformation on electoral topics, despite some improvements aligned with DRI and EU Commission guidelines. In addition, far-right parties in Germany and Poland used AI-generated images and videos, often without disclosure, to misinform, reinforce narratives, and increase engagement, demonstrating the growing role of generative AI in campaigns.
  • Recommender Systems and Algorithmic Bias: In the context of the 2025 German federal election, we explored the recommender algorithms on TikTok and Instagram to assess how users' political interest and preferences shaped the amount of political content they encountered, and how these dynamics varied across the two platforms. We also analysed whether recommended content aligned with users' pre-defined political positions. To ground this analysis, we conducted a literature review on political exposure bias during the 2024 U.S. and 2025 German elections. Results show that platforms amplified right-leaning or far-right content even for neutral users, most strongly on X and TikTok, and weakly on Instagram. Users aligned with far-right parties received more targeted recommendations and fewer cross-party recommendations. This persistent bias potentially poses systemic risks, including polarisation and radicalisation of online political discourse.
  • Platform Rollbacks under the CoC: As the EU Code of Practice transitioned to the Code of Conduct, major platforms scaled back their commitments, reducing measures on fact-checking, political advertising transparency, and researcher support by 31%. Microsoft, Google, and TikTok withdrew key measures, and all platforms abandoned Commitment 27 on researcher data access, raising concerns about the CoC's effectiveness in ensuring accountability and combating disinformation.
  • Legal Action and Advocacy: DRI sued X under DSA Article 40(12), testing the platform's obligation to provide timely access to publicly available data ahead of the German federal elections in February 2025. In its May ruling, the Court interpreted the data access provision broadly and declared itself competent to review the case, an important precedent establishing that CSOs and researchers can sue in any EU Member State (not only in Ireland, where X has its EU headquarters). A public webinar unpacked the Berlin court ruling and its implications, providing guidance for other CSOs seeking to pursue similar advocacy.


Crisis 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
  1. Impersonation and inauthentic TikTok political accounts violating TikTok’s policies 
During the reporting period, DRI advanced its monitoring of TikTok "murky accounts", a term we use for accounts with unclear affiliations and questionable authenticity that actively promote political parties and candidates. We flagged 606 such accounts across the German (138), Romanian (323), and Polish (145) elections, of which 414 were removed by TikTok following an internal review based on its Terms of Service and Community Guidelines. Our monitoring covered accounts supporting candidates and parties across the political spectrum.

In Germany, 69% of flagged accounts promoted AfD politicians and party content and falsely presented themselves as official party pages. In Romania, murky accounts most frequently supported Călin Georgescu (35%), liberal party candidate Elena Lasconi (14.2%), or George Simion of the right-wing Alliance for the Union of Romanians (11%). Twenty-one murky accounts (8.3%) supported the independent candidate Nicușor Dan, who ultimately won the 2025 election in the second round.

In Poland, far-right candidate Sławomir Mentzen had the highest murky account support (21 accounts), followed by extreme-right candidate Grzegorz Braun (15 accounts). The conservative Law and Justice (PiS) and far-right Konfederacja parties also received significant murky account backing. Notably, accounts supporting Konfederacja generated over 12 times more engagement than those linked to the second most-engaged politician or party. 
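The engagement comparison behind the "12 times more" figure can be sketched as a small aggregation. The code below is purely illustrative: the account data, party labels, and helper names are invented for the example and are not DRI's actual dataset or tooling.

```python
from collections import defaultdict

# Hypothetical sample: (supported_party, total_engagement) per flagged account.
flagged_accounts = [
    ("Konfederacja", 120_000), ("Konfederacja", 95_000), ("Konfederacja", 88_000),
    ("PiS", 9_000), ("PiS", 7_500),
    ("PL2050", 4_200),
]

def engagement_by_party(accounts):
    """Sum engagement (e.g. likes + comments + shares) per supported party."""
    totals = defaultdict(int)
    for party, engagement in accounts:
        totals[party] += engagement
    return dict(totals)

def top_party_ratio(totals):
    """Ratio of the most-engaged party's total to the runner-up's."""
    ranked = sorted(totals.values(), reverse=True)
    return ranked[0] / ranked[1]

totals = engagement_by_party(flagged_accounts)
print(round(top_party_ratio(totals), 1))
```

With real monitoring data, the same ratio computed per election would surface cases like the Konfederacja cluster, where one party's murky-account network dwarfs all others.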

 

  2. Chatbots misinforming about the German election, and prevalence of generative AI in campaigns 
Since 2024, DRI has evaluated and monitored the ability of large language model (LLM)-powered chatbots to deliver accurate election-related information in multiple languages and regions. Ahead of the 2025 German elections, we continued our monitoring by assessing six chatbots (ChatGPT 4.0, ChatGPT 4.0 Turbo (available to ChatGPT premium subscribers), Gemini, Copilot, Grok, and Perplexity.AI) in both German and English on a total of 22 questions about the electoral process and key political topics in Germany. Overall, we found that chatbot providers have made progress in line with some DRI recommendations, such as refraining from answering certain questions, and with EU Commission guidelines, such as referring users to voting advice tools and to sources provided by electoral authorities. The problem of misleading or incomplete information persists, however, suggesting that most major AI providers have not put robust risk mitigation systems in place.
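An audit of this kind boils down to a loop over chatbots, languages, and questions, with each answer graded against a rubric. The sketch below is a toy illustration, not DRI's methodology: `ask_chatbot` is a placeholder for a real API call, the question list is abbreviated, and the "safe if it defers to the electoral authority" rubric is a deliberately simplified assumption.

```python
# All names and the grading rubric here are assumptions for illustration.
CHATBOTS = ["ChatGPT 4.0", "Gemini", "Copilot", "Grok", "Perplexity.AI"]
LANGUAGES = ["de", "en"]
QUESTIONS = [
    "Wann findet die Bundestagswahl 2025 statt?",
    "How do I register to vote by mail?",
]

def ask_chatbot(bot, question, lang):
    """Placeholder for a real API call; returns a canned answer here."""
    return "Please consult the federal returning officer (Bundeswahlleiterin)."

def grade(answer):
    """Toy rubric: answers that defer to electoral authorities count as safe."""
    refers_to_authority = "bundeswahlleiterin" in answer.lower()
    return "safe" if refers_to_authority else "needs review"

results = [
    (bot, lang, question, grade(ask_chatbot(bot, question, lang)))
    for bot in CHATBOTS for lang in LANGUAGES for question in QUESTIONS
]
print(sum(1 for *_, verdict in results if verdict == "safe"), "of", len(results))
```

In practice the grading step is the hard part: DRI's assessment relied on human review of answers for accuracy and completeness, which no keyword rubric can replace.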

Chatbots were not the only way in which AI directly affected electoral processes in the EU. In Germany, the far-right party AfD frequently used AI-generated images and videos, often without disclosure. These visuals served multiple purposes, from attacking political opponents to reinforcing the narrative of a Germany in decline. Our research highlights how these tactics blend AI-generated misinformation, emotional priming, and aggressive political attacks to drive engagement. In Poland, candidates also leveraged generative AI in their campaigns.

 
  3. Recommender Systems and Electoral Integrity 
As TikTok and Instagram increasingly serve as prominent sources of political information, understanding their recommender algorithms is essential for ensuring users can maintain control over their feeds, encounter diverse perspectives, and engage meaningfully in democratic processes, particularly during elections. Throughout the 2025 German federal election, we manually collected videos from both platforms using five user profiles, each representing a plausible individual from across the German political spectrum with a distinct political leaning and level of political interest, to assess how these variables shape the amount of political content users encounter. We also considered the variance across the two platforms. Our findings show that TikTok pushes political content more aggressively than Instagram, but unevenly: users with strong leanings toward AfD or BSW received far more targeted recommendations, while centrist and left-leaning profiles saw fewer, less relevant videos. The least political users saw almost none.
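The core measurement per profile is simply the share of collected videos labelled as political. The sketch below uses invented feed data and profile names to show the shape of that comparison; it is not DRI's code or data.

```python
# Hypothetical feeds: for each profile, 1 marks a video labelled political,
# 0 marks a non-political one. The labels and counts are invented.
feeds = {
    "far_right_leaning": [1, 1, 1, 0, 1, 1, 0, 1],
    "centrist":          [0, 1, 0, 0, 0, 1, 0, 0],
    "low_interest":      [0, 0, 0, 0, 0, 0, 0, 1],
}

def political_share(feed):
    """Fraction of collected videos that were labelled political."""
    return sum(feed) / len(feed)

shares = {profile: political_share(feed) for profile, feed in feeds.items()}
for profile, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{profile}: {share:.1%}")
```

Repeating this per platform and per profile yields the kind of contrast reported above, with politically engaged far-right profiles seeing a much higher political share than centrist or low-interest ones.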

Our analysis was further underpinned by a literature review on political exposure bias during the 2024 U.S. and 2025 German elections on TikTok, Instagram, and X. Results show that platforms amplified right-leaning or far-right content even for neutral users, most strongly on X and TikTok, and weakly on Instagram. Users aligned with far-right parties received more targeted recommendations and fewer cross-party recommendations. Persistent political exposure bias may constitute a systemic risk to elections when it fuels polarisation, radicalisation, and fragmentation, especially if driven by manipulation, inauthentic activity, or opaque platform practices. Unnotified downranking of political actors may also breach Article 17 of the DSA, which requires transparency in moderation decisions.

 
  4. Flagging Reduced Platform Commitments Under the CoC 
As the EU Code of Practice transitioned into a legally binding Code of Conduct under the DSA, major platforms scaled back commitments, particularly in fact-checking, political advertising transparency, and support for independent research. Microsoft, Google, and TikTok reduced or withdrew key measures, while all platforms abandoned Commitment 27 on facilitating researcher data access. These reductions, often vaguely justified, risk undermining the CoC’s effectiveness in combating disinformation and ensuring accountability. 


Mitigations in place
  1. Code of Conduct on Disinformation 
DRI continued its reporting efforts under the Rapid Response System of the Code of Conduct on Disinformation, actively collaborating with signatories. We regularly attended coordination meetings and contributed by flagging and discussing content that potentially violated platforms’ terms of use, including the risks identified and mentioned above. We continued to directly share findings with platforms to push for platform improvement and accountability.  

  2. Raised awareness about threats and built networks with relevant stakeholders through roundtables 
During our monitoring of the German, Romanian, and Polish elections, we collaborated with key stakeholders, including the Polish think-tank Institute of Public Affairs (IPA), on monitoring social media platforms around the 2025 Polish presidential elections. Through an engagement analysis of those elections, IPA found that Mentzen's Facebook account experienced several unusual spikes in activity, with multiple posts receiving exceptionally high and evenly distributed engagement within minutes, far above platform norms and at a greater rate than other candidates. This trend, combined with abrupt changes in follower growth and interaction rates that ended mid-April, suggests Mentzen's online presence may have been partially boosted by inauthentic or artificially generated traffic.
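Spike detection of the kind IPA describes can be approximated with a simple leave-one-out outlier test on per-post engagement. The sketch below is an assumption about one plausible approach, not IPA's actual method, and the engagement figures are invented.

```python
import statistics

# Hypothetical per-post engagement counts for one account; the two large
# values are invented to illustrate the kind of anomaly described above.
engagement = [1_800, 2_100, 1_950, 2_300, 41_000, 2_050, 39_500, 2_200]

def spike_indices(series, threshold=2.0):
    """Flag posts whose engagement exceeds the leave-one-out mean of the
    series by more than `threshold` sample standard deviations."""
    flagged = []
    for i, value in enumerate(series):
        rest = series[:i] + series[i + 1:]
        mean = statistics.mean(rest)
        stdev = statistics.stdev(rest)
        if stdev and (value - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

print(spike_indices(engagement))  # indices of the two anomalous posts
```

A real analysis would also look at the timing of engagement within each spike (the "evenly distributed within minutes" pattern) rather than totals alone.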

We also raised public awareness through three roundtables and network collaborations. During the German elections, we worked within the Counter Disinformation Network (CDN), a pan-European collaborative initiative designed to preempt, detect, and combat information manipulation, sharing our research findings and advocacy work in this forum.

The first DRI roundtable was a two-hour, in-person event with around 20 EU NGOs that examined major online electoral risks during the German elections, focusing on sharing and gathering insights from participants' chatbot auditing and social media monitoring efforts.

After the platform X failed to provide timely data access ahead of the German elections, DRI brought litigation against the VLOP, launching the first court case to test the DSA’s provisions. To examine the Berlin Regional Court’s ruling, DRI hosted a public webinar with legal and policy experts, highlighting the case’s implications for future platform accountability and data access. Key takeaways were published to support other CSOs considering similar advocacy. 

In September, we hosted Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity, presenting findings from our two-year access://democracy project monitoring online discourse during key EU elections. Nearly 30 representatives of civil society organisations and monitoring agencies joined to share the results of their social media monitoring efforts and assess whether platforms are meeting new DSA obligations. The discussion strengthened cross-organisation collaboration, with shared best practices informing a forthcoming meta-analysis that will be made publicly available to support peer capacity building. 

 
  3. Communicating continued data access challenges 
DRI leveraged op-eds to raise public and policymaker awareness about ongoing barriers to platform data access under the DSA. In Unpacking TikTok’s Data Access Illusion, we exposed the shortcomings of TikTok’s Virtual Compute Environment, showing how its restrictive design renders it functionally unusable for meaningful research. In Why We’re Suing Elon Musk’s X for German Election Data, we explained our landmark legal case against X for failing to provide timely access to publicly available data, highlighting its implications for DSA enforcement and European research capacity. Together, these op-eds amplified the importance of effective data access.