
Report March 2025
Your organisation description
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
- New Report: Anti-immigrant hate speech, AI solutions, detecting online violence against women and regional strategies | 23.01.2024
- From Engagement to Enmity: Toxicity and Key Narratives in EP Elections 2024 | 24.06.2024
- AfD v. RN: A Comparative Analysis of Far-Right Political Campaigning on X | 12.07.2024
- European Parliament Dashboard | 14.07.2024
- Local Insights, European Trends: Case Studies on Digital Discourse in the 2024 EP Elections | 13.08.2024
- Election Integrity in the Digital Age: Online Risks and Recommendations for the Brazilian Municipal Elections | 05.2024
- Climate Crisis at the Polls: How Porto Alegre’s Mayoral Candidates Address Environmental Challenges | 20.09.2024
- Click Here for Controversy: Disinformation Narratives on YouTube During the 2024 EP Campaign | 25.09.2024
- Gender-Based Violence on X and YouTube in the São Paulo Mayoral Election | 16.10.2024
- Decoding politicians' social media campaigns in Rio de Janeiro and Recife | 11.11.2024
- Brazilian Municipal Elections 2024 Dashboard | 14.11.2024
- Understanding Digital Threats in Brazil: Media and Democracy Meta-Analysis | 12.2024
- Social Media Monitoring and Election Integrity in Brazil | 19.12.2024
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
- Inauthentic Behaviour on TikTok - Concerning Accounts Supporting the AfD and Rassemblement National | 07.05.2024
- TikTok accounts with unclear affiliation supporting political parties and political candidates in the EU | 11.06.2024
- The big loophole (and how to close it): How TikTok's policy and practice invites murky political accounts | 22.07.2024
- Fourth report: Impersonation and inauthentic TikTok Accounts (French Elections) | 04.07.2024
- Disinformation Concern: Inauthentic TikTok Accounts that Support Political Parties | 24.05.2024
- Germany: TikTok only acts on Pro-AfD accounts when pushed to do so | 25.10.2024
- Manufactured Support: How Inauthentic Activity on TikTok Bolstered the Far-Right in Romania | 28.11.2024
- Skirting the Rules: The Romanian far-right continues to enjoy inauthentic and prohibited support on TikTok | 04.12.2024
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
Nr of actions taken by type: TikTok acted on 159 accounts
Country | TTP OR ACTION1 - Nr of instances | TTP OR ACTION1 - Nr of actions | TTP OR ACTION2 - Nr of instances | TTP OR ACTION2 - Nr of actions | TTP OR ACTION3 - Nr of instances | TTP OR ACTION3 - Nr of actions | TTP OR ACTION4 - Nr of instances | TTP OR ACTION4 - Nr of actions | TTP OR ACTION5 - Nr of instances | TTP OR ACTION5 - Nr of actions | TTP OR ACTION6 - Nr of instances | TTP OR ACTION6 - Nr of actions | TTP OR ACTION7 - Nr of instances | TTP OR ACTION7 - Nr of actions | TTP OR ACTION8 - Nr of instances | TTP OR ACTION8 - Nr of actions | TTP OR ACTION9 - Nr of instances | TTP OR ACTION9 - Nr of actions | TTP OR ACTION10 - Nr of instances | TTP OR ACTION10 - Nr of actions | TTP OR ACTION11 - Nr of instances | TTP OR ACTION11 - Nr of actions | TTP OR ACTION12 - Nr of instances | TTP OR ACTION12 - Nr of actions |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Austria | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Belgium | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Bulgaria | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Croatia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cyprus | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Czech Republic | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Denmark | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Estonia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Finland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
France | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Germany | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Greece | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Hungary | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Iceland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Ireland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Italy | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Latvia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Lithuania | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Luxembourg | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Malta | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Netherlands | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Poland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Portugal | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Romania | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Slovakia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Slovenia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Spain | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Sweden | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Norway | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
- Are Chatbots Misinforming Us About the European Elections? Yes. | 11.04.2024
- When Misinformation Becomes Disinformation: Chatbot Companies and EU Elections | 07.06.2024
- AI Act Comes into Force: What It Means for Elections and DRI’s Next Steps | 01.08.2024
- Ensuring AI Accountability: Auditing Methods to Mitigate the Risks of Large Language Models | 14.10.2024
- Are AI Chatbots Reliable? Insights from Tunisia’s 2024 Presidential Race | 02.12.2024
- The GenAI Factor at the Ballot Box | 12.12.2024
- An AI-Powered Audit: Do Chatbots Reproduce Political Pluralism? | 27.12.2024
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
- All identified TTPs, including murky accounts and ads violating TikTok's community guidelines on political advertising, were flagged under the Rapid Response System of the Code of Practice on Disinformation. Additionally, in July 2024, we engaged in discussions with TikTok about their policies on impersonation and verified badges for political accounts, fostering collaboration and informing future enforcement measures.
- We directly shared our findings with relevant signatories to push for platform improvements. For example, on 30 September we shared with Google our YouTube report on disinformation during the EP Elections, highlighting the platform’s failure to use basic fact-checking tools like information panels and source indicators, despite its commitment to Measure 22.7 of the Code of Practice.
- Risk Assessment Roundtable. Anticipating the Storm: Mapping Digital Threats to the EP Elections | 04.04.2024. DRI held an online exchange with participants from across EU institutions, platforms, international organisations and NGOs to assess risks to the information space before the EP elections. Participants were encouraged to share their thoughts, observations, and predictions about each topic in the context of the EP elections.
- EP Elections Social Media Monitoring Hub | March – June 2024. In the lead-up to the EP Elections, DRI brought together a team of eight researchers from across the European Union to collaborate on social media monitoring. This group met regularly to discuss major risks and key narratives at the member state level. Each researcher contributed an in-depth case study analysis.
- Artificial Intelligence, Democracy and Elections | 21.05.2024. DRI presented at the International Seminar on Artificial Intelligence, Democracy and Elections alongside experts, academics, professionals and leaders to discuss the challenges and opportunities that the intersection between artificial intelligence, democracy and elections represents for the future of global democratic society.
- Separating Voice from Noise: Insights from the 2024 EP Elections | 24.06.2024. The 2024 European Parliament elections took place against the backdrop of an evolving EU legal framework designed to address digital threats, though its mechanisms and impacts were still unfolding throughout the campaign period. In the aftermath of the elections, understanding the complexities of these digital battlegrounds became even more critical. Key questions emerged: How did political campaigns evolve online? Which political actors and media outlets shaped public discourse? What role did generative AI play in the electoral process? To explore these pressing issues, we provided comprehensive insights and analysis, examining the influence of digital platforms on election narratives, the spread of disinformation, and the challenges of mitigating hate speech. These findings were further discussed in our post-election webinar, where we unpacked the latest trends and their implications for policymakers, civil society, and digital platforms.
- Webinar on Innovative Uses of AI by Civil Society in Europe | 26.06.2024. On 26 June, GLOBSEC hosted an online discussion highlighting the innovative uses of AI by civil society organisations in Europe, exploring tools and technologies from leading tech companies designed to support these initiatives, and addressing the ethical challenges and concerns associated with AI in civil society. DRI attended to share its research findings.
- SEEDS Webinar on Joint Lessons from the 2024 EP Elections | 24.09.2024. In this webinar, the SEEDS partners provided insights into the 2024 European Parliament elections based on the findings of civil society organisations and initiated the discussion on the way forward regarding future European electoral reforms and strengthening democratic processes at the EU level.
- Focus groups with Digital Services Coordinators | 27 September – 02 October 2024. DRI held three focus groups between 27 September and 2 October 2024 with key DSA implementation stakeholders, including 3 CSO representatives, 1 academic, and 8 DSC representatives from 6 small-to-medium member states. We focused on DSA implementation status, challenges DSCs face, their collaboration plans with external stakeholders, particularly CSOs, and citizens' awareness of DSCs and digital rights.
- Denver Democracy Series and Summit: 21st Century Elections: Technology, Disinformation/Misinformation & AI | 11.10.2024. DRI joined this event, organised by the Josef Korbel School of International Studies, where it shared its research findings from the European Parliament elections.
- Expert roundtable: Kick-Off for the Circle of Friends | 07.11.2024. After nine months of DSA enforcement, the DSA Research Network’s Circle of Friends held its inaugural meeting, taking stock of the DSA-related areas in need of further academic research. DRI attended to share its position on emerging topics around the DSA, identify needs for scientific insight and explore different methods to fill those gaps.
- Delegated Act Roundtable | 25.11.2024. Following the European Commission's publication of the draft Delegated Act on Data Access, DRI hosted a roundtable for DSA stakeholders. At this roundtable, joined by 23 participants, including European Commission representatives, we presented DRI’s position on the draft and gathered feedback and insights from other CSOs to build a shared understanding of the Delegated Act’s implications for civil society research. This resulted in a joint submission of feedback to the EC, which DRI led, thereby also contributing to policy formulation.
- Distinguindo Vozes de Ruídos: Reflexões sobre as Eleições Municipais de 2024 (Distinguishing Voices from Noise: Reflections on the 2024 Municipal Elections) | 03.12.2024. The 2024 Brazilian municipal elections marked a new phase in online political communication, with AI risks overshadowed by the ongoing spread of disinformation, hate speech, and hostility toward traditional institutions. This webinar, organised by DRI in partnership with FGV Comunicação Rio, FGV Direito Rio, and Agência Lupa, and supported by the EU, gathered experts to discuss disinformation, hate speech, online gender-based violence, and the impact of digital platforms on political campaigns and democracy.
- The GenAI Factor in the 2024 Elections Report Event | 11.12.2024. DRI attended the Kofi Annan Launch event at the European Parliament, sharing key insights from the report with relevant EU stakeholders.
- From Posts to Polls - Lessons from the 2024 European Elections on Strengthening Young People’s Engagement Through Effective Social Media Strategies | 12.12.2024. This two-hour lunch event presented key findings from the policy study From Posts to Polls: Lessons from the 2024 European Elections on strengthening youth engagement through social media. The study examines young voters’ preferences, behaviours, and motivations, alongside political parties’ campaign strategies. The event featured panel discussions with academics, policymakers, and youth organisations, exploring the study’s implications for the next mandate of the European political institutions, with a focus on the EU’s upcoming Youth Agenda.
- Are AI Chatbots Reliable? Insights from Tunisia’s 2024 Presidential Race | 12.2024. In December, DRI's Tunisia office presented the findings of its report on how chatbots answer electoral questions in the country. The Digital Democracy team attended and presented our findings from our earlier audits concerning the European Parliament elections, highlighting the importance of testing LLM responses.
- DRI Media Coverage | 2024. Our research and advocacy efforts garnered significant attention, with our reports and analysis referenced by leading media outlets such as Politico, Euronews, Forbes, EUobserver, Euractiv, and many more. This media coverage extends the impact of our work, shaping public discourse and informing key stakeholders, including policymakers, civil society, and the broader public, as we continue to drive meaningful conversations on critical issues.