Report March 2026
Democracy Reporting International (DRI)
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
- From Hashtags to Votes: Social Media Patterns in Austria’s 2024 National Elections | 29.01.2025
- Big tech is backing out of commitments countering disinformation—What’s Next for the EU’s Code of Practice? | 07.02.2025
- The DSA Alone Won’t Save Democracy – but Its Interplay with the Rule of Law Might | 07.02.2025
- Filtered for You: Algorithmic Bias on TikTok and Instagram in Germany | 10.04.2025
- Ensuring Electoral Integrity: Election Observers in Southern Africa | 23.04.2025
- Political Exposure Bias in Recommender Systems: A Review of Evidence from the U.S. and German Elections | 30.04.2025
- Biased by Design? Chatbots and Misinformation in Sri Lanka’s 2025 Local Elections | 14.05.2025
- Engagement Wars: Inside the Polish Presidential Campaigns on Social Media | 30.05.2025
- Digital Duel: Online Campaign Strategies in Poland’s Presidential Runoff | 17.06.2025
- Democracy in Disguise: Inauthentic Online Influence on the 2025 Sri Lanka’s Local Government Elections | 17.07.2025
- Algorithms and Agendas: The Digital Fight for Poland’s Presidency 2025 | 31.07.2025
- Access or Exit Democracy? Elections and Digital Trends in the EU, 2023-2025 | 04.11.2025
- When Politics Turns Personal: Hate Speech and Online Gender-Based Violence in Sri Lanka’s 2025 Local Elections | 23.11.2025
- Why We're Suing Elon Musk's X for German Election Data | 27.02.2025
- Case Against X: Berlin Court Confirms Researchers Can Enforce Their Right to Data Access in National Courts | 13.05.2025
- Unpacking TikTok’s Data Access Illusion | 12.06.2025
- Overview of Platform Data Access Mechanisms | 17.12.2025
- Monitoring the EU Code of Practice on Disinformation | 01.2025
- DSA Framework for Online Election Integrity | 04.2025
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
- Scroll for a Fake: TikTok Murky Accounts Impersonate German Parties and Politicians Ahead of Elections | 18.02.2025
- Scroll, Like, Deceive: Murky Political Accounts on TikTok before the German 2025 Elections | 21.03.2025
- 323 murky accounts and one denied candidacy: TikTok's role in Romania’s 2025 election | 11.06.2025
- Unverified and Unchecked: Murky TikTok Accounts in Poland’s 2025 Elections | 18.06.2025
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
- Reported 482 murky accounts on TikTok
- Reported 7 cases of unlabelled generative AI content with harmful stereotypes on Meta (Facebook)
- Reported 6 cases of unlabelled political advertising in the Meta content library
Number of actions taken by type:
- TikTok acted on 394 of these reports.
| Country | TTP OR ACTION1 - Nr of instances | TTP OR ACTION1 - Nr of actions | TTP OR ACTION2 - Nr of instances | TTP OR ACTION2 - Nr of actions | TTP OR ACTION3 - Nr of instances | TTP OR ACTION3 - Nr of actions | TTP OR ACTION4 - Nr of instances | TTP OR ACTION4 - Nr of actions | TTP OR ACTION5 - Nr of instances | TTP OR ACTION5 - Nr of actions | TTP OR ACTION6 - Nr of instances | TTP OR ACTION6 - Nr of actions | TTP OR ACTION7 - Nr of instances | TTP OR ACTION7 - Nr of actions | TTP OR ACTION8 - Nr of instances | TTP OR ACTION8 - Nr of actions | TTP OR ACTION9 - Nr of instances | TTP OR ACTION9 - Nr of actions | TTP OR ACTION10 - Nr of instances | TTP OR ACTION10 - Nr of actions | TTP OR ACTION11 - Nr of instances | TTP OR ACTION11 - Nr of actions | TTP OR ACTION12 - Nr of instances | TTP OR ACTION12 - Nr of actions |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Austria | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Belgium | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Bulgaria | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Croatia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Cyprus | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Czech Republic | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Denmark | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Estonia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Finland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| France | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Germany | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Greece | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Hungary | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Iceland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Ireland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Italy | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Latvia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Lithuania | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Luxembourg | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Malta | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Netherlands | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Poland | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Portugal | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Romania | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Slovakia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Slovenia | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Spain | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Sweden | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Norway | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant Signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
- Inconsistent and Unreliable: Chatbots Provide Inaccurate Information on German Elections | 12.02.2025
- The AfD on Facebook: Fear, Anti-CDU posts and Abuse of AI | 03.03.2025
- Joint feedback with the European Partnership for Democracy (EPD), CEE Digital Democracy Watch, and GLOBSEC on transparency requirements for generative AI systems under Article 50 of the AI Act | 09.10.2025
- Simplification digital package and omnibus — Feedback from: Democracy Reporting International (DRI) | 14.10.2025
- Connected Learnings – Transparency and Accountability in AI Systems and Social Media | 12.03.2025.
Online workshop with researchers from GPAI and social media fields on joint data access advocacy and moving from transparency demands toward stronger scrutiny frameworks. DRI presented key research and advocacy findings.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
- DRI co-chairs the Elections Working Group under the EU Code of Practice on Disinformation (transitioned to Code of Conduct in February 2025).
Since June 2023, DRI has served as co-chair alongside Globsec and TikTok in a multi-stakeholder forum of over 100 civil society and platform representatives.
- Elections, Algorithms, and Accountability: Digital Platforms and the 2025 German Federal Elections | 25.02.2025.
DRI convened a high-level roundtable in Berlin ahead of the German federal elections to examine how digital platforms shape electoral discourse under the Digital Services Act and AI Act. Sixteen policymakers, regulators, researchers, and civil society representatives discussed research findings and advocacy pathways to strengthen DSA enforcement.
- Modelling Researcher Access to Data Legislation Workshop | 13.03.2025.
Expert workshop hosted by the Ada Lovelace Institute on legal frameworks for researcher data access across the UK, US, and EU. DRI presented research findings and contributed comparative policy perspectives.
- 2025 Milton Wolf Seminar on Media and Diplomacy | 09.04.2025.
Vienna-based seminar convening academic and policy experts for in-depth discussions on technology, media, and politics. DRI presented research findings on digital democracy and platform governance.
- DSA Circle of Friends | 14.04.2025.
Network meeting of the DSA Research Network addressing freedom of expression, supervision independence, and enforcement of the risk-based approach. Discussions informed stakeholder coordination on DSA implementation.
- Berlin Independent Tech Researchers' Meetup | 13.05.2025.
Research professionals’ meetup on the evolving digital democracy research landscape, with a focus on assessing the effectiveness of platform mitigation measures. Insights informed future research planning.
- The DSA in Court: What We Learned from Suing X | 10.07.2025.
Following the Berlin Regional Court ruling in DRI’s case against X, DRI and Gesellschaft für Freiheitsrechte co-hosted a public webinar on the implications for researcher data access rights under Article 40(12) DSA. The discussion addressed litigation strategies, enforcement pathways, and civil society use of legal data access mechanisms.
- Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity | 04.09.2025.
DRI convened a roundtable with 28 civil society and academic participants to assess digital democracy developments since 2023 and review findings from six national and European elections. Insights informed a meta-analysis outlining future research and advocacy priorities.
- The Independent Tech Researchers' Summit | 16–17.09.2025.
Berlin summit of independent researchers addressing collaboration with platforms, safeguards against researcher retaliation, and strategies for securing data access. DRI shared election monitoring findings and data access challenges.
- #InfluencersAgainstDisinfo: Empowering Online Opinion Leaders to Enhance Democratic Resilience | 17–19.09.2025.
Berlin event hosted by the Aspen Institute bringing together experts and content creators to address digital communication and disinformation resilience. DRI shared social media monitoring insights and data access concerns.
- Data Access Days | 25.09.2025.
Convening under the DSA40 Collaboratory focused on implementation of the Delegated Act on Data Access. DRI shared operational experiences with platform data access tools and litigation efforts.
- TED Webinar: Safeguarding Democracy and Elections in the Age of AI | 01.10.2025.
Online webinar examining AI’s dual impact on democratic processes, electoral integrity, and governance risks. DRI contributed examples of platform accountability work and multi-stakeholder collaboration.
- DisinfoCon 2025 | 11–12.11.2025.
Organised with the Embassy of Canada to Germany and Alliance4Europe, DisinfoCon brought together researchers, journalists, policymakers, and civil society actors to discuss decentralised social media, AI accountability, and disinformation resilience. The event hosted 65 in-person participants in Berlin and 48 online.
- DRI Media Coverage | 2025.
Our research and advocacy efforts garnered significant attention, with our reports and analyses referenced by leading media outlets such as Politico, Euronews, Reuters, and CNN. This coverage extends the impact of our work: by shaping public discourse and informing key stakeholders, including policymakers, civil society, and the broader public, we continue to drive meaningful conversations on critical issues.
Crisis and Elections Response
Elections 2025
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
- Impersonation, inauthentic accounts, and political ads violating platform policies
DRI used the Rapid Response System to flag coordinated inauthentic behaviour and murky political accounts on TikTok that impersonated candidates and amplified partisan content. We alerted platforms and authorities to networks distorting electoral discourse and violating platform integrity policies.
- Scroll for a Fake: TikTok Murky Accounts Impersonate German Parties and Politicians Ahead of Elections | 18.02.2025
- Scroll, Like, Deceive: Murky Political Accounts on TikTok before the German 2025 Elections | 21.03.2025
Polish elections: Analysis of Poland’s 2025 presidential election found that a small group of candidates produced over 57% of campaign content, while a study of 5,500+ social media posts revealed uneven reach and unusually rapid audience growth linked to certain far-right actors. Monitoring also identified 145 inauthentic TikTok accounts impersonating candidates and parties, with some profiles amassing hundreds of thousands of followers despite partial platform removals.
- Chatbots misinforming about elections, and prevalence of generative AI in campaigns
- Inconsistent and Unreliable: Chatbots Provide Inaccurate Information on German Elections | 12.02.2025
- The AfD on Facebook: Fear, Anti-CDU posts and Abuse of AI | 03.03.2025
- Toxicity in political speech, disinformation narratives, and far-right online campaigning
Our monitoring of elections in Austria, Germany, and Poland pointed to recurring risks in online political communication, including algorithmic amplification, concentrated campaign activity, and toxic rhetoric.
- From Hashtags to Votes: Social Media Patterns in Austria’s 2024 National Elections | 29.01.2025
- Filtered for You: Algorithmic Bias on TikTok and Instagram in Germany | 10.04.2025
- Engagement Wars: Inside the Polish Presidential Campaigns on Social Media | 30.05.2025
- Digital Duel: Online Campaign Strategies in Poland’s Presidential Runoff | 17.06.2025
- Algorithms and Agendas: The Digital Fight for Poland’s Presidency 2025 | 31.07.2025
Mitigations in place
Throughout our monitoring of electoral and platform risks in 2025, we engaged with policymakers, researchers, and civil society stakeholders to raise awareness of emerging online threats and strengthen coordinated responses through webinars and roundtables.
- Elections, Algorithms, and Accountability: Digital Platforms and the 2025 German Federal Elections | 25.02.2025
- Retrospective Insights: Election Monitoring Efforts to Preserve Information Integrity | 04.09.2025
- TED Webinar: Safeguarding Democracy and Elections in the Age of AI | 01.10.2025
- DisinfoCon 2025 | 11–12.11.2025