Democracy Reporting International

Report March 2025

Submitted

Your organisation description

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

Murky accounts as a systemic threat to elections in the EU 
 
In 2024, DRI’s Digital Democracy Programme Unit conducted research on how VLOPs and VLOSEs address TTPs, including the creation of non-automated inauthentic accounts, the use of fake followers or subscribers, and the impersonation of political candidates and parties. 
 
As part of this effort, we published a total of eight reports analysing the phenomenon of Murky Accounts on TikTok in the context of the 2024 European Parliament Elections, the French snap elections, the Saxony and Thuringia regional elections in Germany, and the Romanian elections. We argued that these accounts pose a serious risk to civic discourse and EU elections by misleading voters, distorting perceptions of political support, and bypassing TikTok’s stricter policies on political accounts. A list of all Murky Accounts reports can be found in QRE 14.2.1. 
 
Across all reports, we recommended that TikTok strengthen its policies to prevent fan account abuse, introduce features to stop impersonation, require verified badges for political accounts in the EU, conduct pre-election reviews, and ensure consistent enforcement of its Community Guidelines. We met with TikTok representatives in Berlin on 12 August 2024 to discuss our findings and recommendations. 
 
Social media monitoring (SMM) of elections  
 
Through DRI’s SMM efforts across Europe and beyond, we also identified trends in online discourse and detected instances of online harms, including disinformation, hate speech and toxic content. The following is a list of DRI’s efforts in 2024 to detect impermissible online content, behaviours and practices relevant to Commitment 14, as well as the policies recommended to mitigate the spread of such content. Outside the European context, DRI also conducts social media monitoring in South America, the Middle East and North Africa: 
 

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

As part of the Rapid Response System, DRI identified 231 Murky Accounts, leading TikTok to take action on 159 of them for impersonation or inauthentic behaviour. You can find all eight reports here: 
 

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

Nr of instances of identified TTP: We reported 231 Murky Accounts to TikTok 

Nr of actions taken by type: TikTok acted on 159 accounts 

Country | For each of TTP OR ACTION 1–12: Nr of instances, Nr of actions
Austria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Belgium 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Bulgaria 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Croatia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Cyprus 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Czech Republic 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Denmark 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Estonia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Finland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
France 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Germany 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Greece 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Hungary 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Iceland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Ireland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Italy 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Latvia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Lithuania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Luxembourg 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Malta 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Netherlands 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Poland 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Portugal 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Romania 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovakia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Slovenia 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Spain 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Sweden 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Norway 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Throughout 2024, DRI’s Digital Democracy Programme Unit continued its monitoring of generative AI, conducting regular audits to assess the risks LLM-powered chatbots pose to elections and voters. We also provided guidance on the AI Act, which recently entered into force, analysing its implications for elections and democracy. 
 
We tracked the use of AI-generated content by political parties in the EU during the EP Elections, compiling our findings into a report and dashboard. These insights were later presented at the European Parliament to EU stakeholders and Microsoft, where we advocated for stronger policies to prevent potential misuse. 
 
During the reporting period, we also published a guide evaluating different auditing approaches for identifying and mitigating risks associated with large language models (LLMs), supporting researchers in conducting similar analyses. Additionally, we released a report assessing how well LLM responses align with our human rights-based definition of pluralism and their representation of different political perspectives. 
 
Building on our findings on chatbots’ ability to deliver accurate election-related information across multiple languages and regions, we developed policy recommendations examining the role of AI-generated content in the 2024 European elections. 
 
Below is a list of DRI reports published during the reporting period related to these efforts: 
 
 
Our research over the past year indicates that, following our risk notifications, some of the most widely used publicly available generative AI systems (e.g., Gemini) implemented safeguards to mitigate misinformation, such as refusing to answer election-related questions, while others, including Copilot and ChatGPT-4/4o, have yet to adopt a consistent approach. As a result, widely used language models continue to generate false or partially inaccurate electoral information. 

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

  • All identified TTPs, including murky accounts and ads violating TikTok's community guidelines on political advertising, were flagged under the Rapid Response System of the Code of Conduct on Disinformation. Additionally, in July 2024, we engaged in discussions with TikTok about their policies on impersonation and verified badges for political accounts, fostering collaboration and informing future enforcement measures. 
  • We directly shared our findings with relevant signatories to push for platform improvements. For example, on 30 September 2024 we shared with Google our YouTube report on disinformation during the EP Elections, highlighting the platform’s failure to use basic fact-checking tools such as information panels and source indicators, despite its commitment to Measure 22.7 of the Code of Practice. 
 
Participating in and establishing fora for sharing information on the tools, tactics, and narratives deployed by disinformation actors is a key facet of DRI’s Digital Democracy work. The following is a list of working groups, webinars, conferences, and roundtables attended during the reporting period, with DRI in the role of either organiser or presenter: 
 
 
  • EP Elections Social Media Monitoring Hub | March – June 2024. In the lead-up to the EP Elections, DRI brought together a team of eight researchers from across the European Union to collaborate on social media monitoring. This group met regularly to discuss major risks and key narratives at the member state level. Each researcher contributed an in-depth case study analysis. 
 
  • Artificial Intelligence, Democracy and Elections | 21.05.2024. DRI presented at the International Seminar on Artificial Intelligence, Democracy and Elections alongside experts, academics, professionals and leaders, discussing the challenges and opportunities that the intersection of artificial intelligence, democracy and elections presents for the future of global democratic society. 
  • Separating Voice from Noise: Insights from the 2024 EP Elections | 24.06.2024. The 2024 European Parliament elections took place against the backdrop of an evolving EU legal framework designed to address digital threats, though its mechanisms and impacts were still unfolding throughout the campaign period. In the aftermath of the elections, understanding the complexities of these digital battlegrounds became even more critical. Key questions emerged: How did political campaigns evolve online? Which political actors and media outlets shaped public discourse? What role did generative AI play in the electoral process? To explore these pressing issues, we provided comprehensive insights and analysis, examining the influence of digital platforms on election narratives, the spread of disinformation, and the challenges of mitigating hate speech. These findings were further discussed in our post-election webinar, where we unpacked the latest trends and their implications for policymakers, civil society, and digital platforms. 
 
  • Webinar on Innovative Uses of AI by Civil Society in Europe | 26.06.2024. On 26 June, GLOBSEC hosted an online discussion highlighting innovative uses of AI by civil society organisations in Europe, exploring tools and technologies from leading tech companies designed to support these initiatives, and addressing the ethical challenges and concerns associated with AI in civil society. DRI attended to share its research findings. 
 
  • SEEDS Webinar on Joint Lessons from the 2024 EP Elections | 24.09.2024. In this webinar, the SEEDS partners provided insights into the 2024 European Parliament elections based on the findings of civil society organisations and initiated a discussion on the way forward regarding future European electoral reforms and strengthening democratic processes at the EU level. 
 
  • Focus groups with Digital Services Coordinators | 27 September – 02 October 2024. DRI held three focus groups between 27 September and 2 October 2024 with key DSA implementation stakeholders, including three CSO representatives, one academic, and eight DSC representatives from six small-to-medium member states. We focused on the status of DSA implementation, the challenges DSCs face, their plans for collaboration with external stakeholders, particularly CSOs, and citizens’ awareness of DSCs and digital rights. 
 
 
  • Expert roundtable: Kick-Off for the Circle of Friends | 07.11.2024. After nine months of DSA enforcement, the DSA Research Network’s Circle of Friends held its inaugural meeting, taking stock of the DSA-related areas in need of further academic research. DRI attended to share its position on emerging topics around the DSA, identify needs for scientific insight and explore different methods to fill those gaps. 
  • Delegated Act Roundtable | 25.11.2024. Following the European Commission’s release of its draft Delegated Act on Data Access, DRI hosted a roundtable for DSA stakeholders. Joined by 23 participants, including European Commission representatives, we presented DRI’s position on the draft and gathered feedback and insights from other CSOs to build a shared understanding of the Delegated Act’s implications for civil society research. This resulted in a joint feedback submission to the European Commission, with DRI contributing to policy formulation as the submission’s lead organisation. 
 
  • Distinguindo Vozes de Ruídos: Reflexões sobre as Eleições Municipais de 2024 (Separating Voices from Noise: Reflections on the 2024 Municipal Elections) | 03.12.2024. The 2024 Brazilian municipal elections marked a new phase in online political communication, with AI risks overshadowed by the ongoing spread of disinformation, hate speech, and hostility toward traditional institutions. This webinar, organised by DRI in partnership with FGV Comunicação Rio, FGV Direito Rio, and Agência Lupa, and supported by the EU, gathered experts to discuss disinformation, hate speech, online gender-based violence, and the impact of digital platforms on political campaigns and democracy. 
 
  • The GenAI Factor in the 2024 Elections Report Event | 11.12.2024. DRI attended the Kofi Annan Launch event at the European Parliament, sharing key insights from the report with relevant EU stakeholders. 
 
 
  • Are AI Chatbots Reliable? Insights from Tunisia’s 2024 Presidential Race | 12.2024. In December, DRI’s Tunisia office presented findings from its report on how chatbots answer electoral questions in the country. The Digital Democracy team attended and presented findings from our earlier audits of the European Parliament elections, highlighting the importance of testing LLM responses. 
 
  • DRI Media Coverage | 2024. Our research and advocacy efforts garnered significant attention, with our reports and analysis referenced by leading media outlets such as Politico, Euronews, Forbes, EUobserver, Euractiv, and many more. This media coverage extends the impact of our work, shaping public discourse and informing key stakeholders, including policymakers, civil society, and the broader public, as we continue to drive meaningful conversations on critical issues.