Democracy Reporting International

Report March 2026

Submitted

Your organisation description

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:

  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Murky accounts as a systemic threat to elections in the EU  
 
In 2025, DRI continued its systematic reporting on murky accounts as part of its efforts to address impermissible manipulative behaviours and practices across online services.
DRI monitored three elections in 2025, in Romania, Poland, and Germany. In these elections, we identified networks of impersonation accounts, fake engagement ecosystems, and inauthentic amplification on TikTok.
 
A list of all Murky Accounts reports can be found in QRE 14.2.1.  
 
Research on Election Information Integrity and Policy Recommendations 
 
Through DRI’s activities across Europe and beyond in 2025, we continued to identify trends in online discourse and to detect threats to information integrity, including disinformation, hate speech, and toxic content. In addition to European electoral contexts, DRI also conducted social media monitoring in South Asia and Africa. The following is a list of DRI’s 2025 efforts to identify impermissible online content, behaviours, and practices relevant to Commitment 14, alongside the policy measures recommended to mitigate their spread:
 

Data Access  
 
To support effective implementation of Article 40 of the Digital Services Act, DRI produced a series of policy analyses examining regulatory gaps, researcher access barriers, and platform transparency obligations.  
This work was also informed by our first case against X concerning access to German election data, which highlighted the practical obstacles researchers continue to face when seeking access to publicly available platform data. Together, these publications provide legal and operational recommendations to strengthen access to platform data and enable independent scrutiny of systemic online risks: 
 
 
Interactive tools hosted on the Digital Democracy Monitor Knowledge Hub presented key findings on platform obligations, enforcement pathways, and implementation gaps in a more accessible format, supporting better understanding of online election risks and possible responses: 
 

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

Across the German, Romanian, and Polish elections, DRI flagged 482 murky TikTok accounts in 2025, of which 394 were removed by TikTok following internal review under its Terms of Service and Community Guidelines.
 

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

Identified instances of TTPs:
  • Reported 482 murky accounts on TikTok
  • Reported 7 cases of unlabelled generative AI content containing harmful stereotypes on Meta (Facebook)
  • Reported 6 cases of unlabelled political advertising in the Meta Content Library

Number of actions taken, by type:
  • TikTok acted on 394 of these reports.

Country | for each of TTP OR ACTION 1–12: Nr of instances, Nr of actions
Austria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Belgium 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Bulgaria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Croatia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cyprus 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Czech Republic 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Denmark 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Estonia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Finland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
France 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Germany 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Greece 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Hungary 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Iceland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Ireland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Italy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Latvia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Lithuania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Luxembourg 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Malta 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Netherlands 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Poland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Portugal 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Romania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovakia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovenia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Spain 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Sweden 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

During the reporting period, DRI monitored and reported risks from AI systems that generate or manipulate content in electoral contexts. We audited major LLM chatbots during the German federal elections and identified inaccurate and fabricated election information, calling for consistent safeguards and for redirection of users to authoritative sources. We also identified and reported political uses of unlabelled generative AI content, including seven ads and posts from official party accounts that breached platform policies on hateful conduct.
Below is a list of DRI reports published during the reporting period related to these efforts:  
 
 
At the policy level, DRI joined the EU Code of Practice on Transparency of AI-Generated Content and submitted joint civil society recommendations to strengthen transparency, labelling, and accountability standards for generative AI. We proposed clearer deepfake definitions, stronger explainability requirements, lifecycle transparency obligations, and multi-layered watermarking standards.  
DRI also provided input to the European Commission warning that simplifying the AI Act risks weakening fundamental rights protections. 
 
  • Joint feedback with the European Partnership for Democracy (EPD), CEE Digital Democracy Watch, and GLOBSEC on transparency requirements for generative AI systems under Article 50 of the AI Act | 09.10.2025 
 
DRI also engaged with the European Commission’s Digital Omnibus process, submitting feedback to ensure that efforts to streamline EU AI Act guidelines do not weaken key accountability safeguards.
 
 
As part of its work on AI governance and platform accountability, DRI participated in expert fora to strengthen coordination: 
 
  • Connected Learnings – Transparency and Accountability in AI Systems and Social Media | 12.03.2025. 
    Online workshop with researchers from GPAI and social media fields on joint data access advocacy and moving from transparency demands toward stronger scrutiny frameworks. DRI presented key research and advocacy findings. 

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Participating in and establishing fora for sharing information on the tools, tactics, and narratives deployed by disinformation actors is a key facet of DRI’s Digital Democracy work. The following is a list of working groups, webinars, conferences, and roundtables attended during the reporting period, with DRI in the role of either organiser or presenter:  
 
  • DRI co-chairs the Elections Working Group under the EU Code of Practice on Disinformation (transitioned to Code of Conduct in February 2025). 
    Since June 2023, DRI has served as co-chair, alongside GLOBSEC and TikTok, of a multi-stakeholder forum of over 100 civil society and platform representatives.
  • Elections, Algorithms, and Accountability: Digital Platforms and the 2025 German Federal Elections | 25.02.2025. 
    DRI convened a high-level roundtable in Berlin ahead of the German federal elections to examine how digital platforms shape electoral discourse under the Digital Services Act and AI Act. Sixteen policymakers, regulators, researchers, and civil society representatives discussed research findings and advocacy pathways to strengthen DSA enforcement. 
  • Modelling Researcher Access to Data Legislation Workshop | 13.03.2025. 
    Expert workshop hosted by the Ada Lovelace Institute on legal frameworks for researcher data access across the UK, US, and EU. DRI presented research findings and contributed comparative policy perspectives. 
  • 2025 Milton Wolf Seminar on Media and Diplomacy | 09.04.2025. 
    Vienna-based seminar convening academic and policy experts for in-depth discussions on technology, media, and politics. DRI presented research findings on digital democracy and platform governance. 
  • DSA Circle of Friends | 14.04.2025. 
    Network meeting of the DSA Research Network addressing freedom of expression, supervision independence, and enforcement of the risk-based approach. Discussions informed stakeholder coordination on DSA implementation. 
  • Berlin Independent Tech Researchers' Meetup | 13.05.2025. 
    Research professionals’ meetup on the evolving digital democracy research landscape, with a focus on assessing the effectiveness of platform mitigation measures. Insights informed future research planning. 
  • The DSA in Court: What We Learned from Suing X | 10.07.2025. 
    Following the Berlin Regional Court ruling in DRI’s case against X, DRI and Gesellschaft für Freiheitsrechte co-hosted a public webinar on implications for researcher data access rights under Article 40(12) DSA. The discussion addressed litigation strategies, enforcement pathways, and civil society use of legal data access mechanisms. 
  • Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity | 04.09.2025. 
    DRI convened a roundtable with 28 civil society and academic participants to assess digital democracy developments since 2023 and review findings from six national and European elections. Insights informed a meta-analysis outlining future research and advocacy priorities. 
  • The Independent Tech Researchers' Summit | 16–17.09.2025. 
    Berlin summit of independent researchers addressing collaboration with platforms, safeguards against researcher retaliation, and strategies for securing data access. DRI shared election monitoring findings and data access challenges. 
  • #InfluencersAgainstDisinfo: Empowering Online Opinion Leaders to Enhance Democratic Resilience | 17–19.09.2025. 
    Berlin event hosted by the Aspen Institute bringing together experts and content creators to address digital communication and disinformation resilience. DRI shared social media monitoring insights and data access concerns. 
  • Data Access Days | 25.09.2025. 
    Convening under the DSA40 Collaboratory focused on implementation of the Delegated Act on Data Access. DRI shared operational experiences with platform data access tools and litigation efforts. 
  • TED Webinar: Safeguarding Democracy and Elections in the Age of AI | 01.10.2025. 
    Online webinar examining AI’s dual impact on democratic processes, electoral integrity, and governance risks. DRI contributed examples of platform accountability work and multi-stakeholder collaboration. 
  • DisinfoCon 2025 | 11–12.11.2025. 
    Organised with the Embassy of Canada to Germany and Alliance4Europe, DisinfoCon brought together researchers, journalists, policymakers, and civil society actors to discuss decentralised social media, AI accountability, and disinformation resilience. The event hosted 65 in-person participants in Berlin and 48 online. 
  • DRI Media Coverage | 2025.
    Our research and advocacy garnered significant attention, with our reports and analysis referenced by leading media outlets such as Politico, Euronews, Reuters, CNN, and many more. This coverage extends the impact of our work, shaping public discourse and informing key stakeholders, including policymakers, civil society, and the broader public, and helps us drive meaningful conversations on critical issues.

Crisis and Elections Response

Elections 2025

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated


  1. Impersonation, inauthentic accounts, and political ads violating platform policies

DRI used the Rapid Response System to flag coordinated inauthentic behaviour and murky political accounts on TikTok that impersonated candidates and amplified partisan content. We alerted platforms and authorities to networks distorting electoral discourse and violating platform integrity policies. 

German elections: DRI’s monitoring identified 138 inauthentic TikTok accounts operating ahead of Germany’s 2025 federal elections, most of which promoted or impersonated actors linked to Alternative für Deutschland and generated disproportionately high engagement compared to accounts tied to other parties. These “murky” accounts used tactics such as impersonating figures like Alice Weidel and Björn Höcke and deploying trending hashtags, memes, and AI-generated imagery; although most were removed after researcher reporting, they exposed enforcement gaps under the EU Digital Services Act.

Polish elections: Analysis of Poland’s 2025 presidential election found that a small group of candidates produced over 57% of campaign content, while a study of 5,500+ social media posts revealed uneven reach and unusually rapid audience growth linked to certain far-right actors. Monitoring also identified 145 inauthentic TikTok accounts impersonating candidates and parties, with some profiles amassing hundreds of thousands of followers despite partial platform removals.

Romanian elections: Ahead of Romania’s May 2025 presidential election, held after the Constitutional Court of Romania annulled the November 2024 vote, monitoring identified 323 murky TikTok accounts impersonating political actors, with 35.2% supporting Călin Georgescu and others mimicking figures such as Elena Lasconi, George Simion, and Nicușor Dan. While Georgescu-linked accounts were most active, pro-Simion profiles achieved the highest engagement, underscoring persistent coordinated inauthentic behaviour despite substantial post-reporting removals by TikTok.
  2. Chatbots misinforming about elections, and the prevalence of generative AI in campaigns

Over the past two years, LLM-powered chatbots have grown rapidly and are increasingly integrated into tools like search engines, but DRI studies show they remain unreliable sources of accurate election information. In testing six chatbots ahead of the 2025 German federal elections, only Gemini and Copilot fully refrained from giving electoral answers, while the others still produced false or partisan responses, highlighting the need for chatbots to consistently direct users to official sources and avoid generating election-related content.

Additionally, our analysis of over 53,000 Facebook posts linked to Alternative für Deutschland ahead of the 2025 election revealed coordinated crisis-focused messaging blaming political rivals, emotionally charged framing of violent incidents, and the use of undisclosed AI-generated imagery to amplify anti-establishment narratives.
  3. Toxicity in political speech, disinformation narratives, and far-right online campaigning

Our monitoring of elections in Austria, Germany, and Poland pointed to recurring risks in online political communication, including algorithmic amplification, concentrated campaign activity, and toxic rhetoric. 

Mitigations in place

Raised awareness about threats and built networks with relevant stakeholders through webinars and roundtables 

Throughout our monitoring of electoral and platform risks in 2025, we engaged with policymakers, researchers, and civil society stakeholders to raise awareness of emerging online threats and strengthen coordinated responses through webinars and roundtables. 

  • Elections, Algorithms, and Accountability: Digital Platforms and the 2025 German Federal Elections | 25.02.2025 
  • Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity | 04.09.2025 
  • TED Webinar: Safeguarding Democracy and Elections in the Age of AI | 01.10.2025 
  • DisinfoCon 2025 | 11–12.11.2025