
Report March 2025
Your organisation description
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
- first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
- second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
- third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
- Do not share false or misleading content: Do not share content that is false, misleading, or intended to deceive. Do not share content to interfere with or improperly influence an election or civic process. Do not share content that directly contradicts guidance from leading global health organisations and public health authorities, including false information about the safety or efficacy of vaccines or medical treatments. Do not share content or endorse someone or something in exchange for personal benefit (including personal or family relationships, monetary payment, free products or services, or other value), unless you have included a clear and conspicuous notice of the personal benefit you receive and have otherwise complied with our Advertising Policies.
- Fraud and Deception: Ads must not be fraudulent or deceptive. Your product or service must accurately match the content of your ad. Any claims in your ad must have factual support. Do not make deceptive or inaccurate claims about competitive products or services. Do not imply you or your product are affiliated with or endorsed by others without their permission. Additionally, make sure to disclose any pertinent partnerships when sharing advertising content on LinkedIn. Do not advertise prices or offers that are inaccurate - any advertised discount, offer or price must be easily discoverable from the link in your ad.
SLI 1.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.
SLI 1.1.2
Please insert the relevant data
(Impressions / 1,000) x Blended CPM, where CPM means "Cost Per Mille" (the cost of one thousand impressions).
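The formula above estimates the ad spend (or revenue) associated with a volume of impressions at a blended CPM rate. A minimal sketch; the function name and figures are illustrative assumptions, not values from this report:

```python
def estimated_spend(impressions: int, blended_cpm: float) -> float:
    """Estimate ad spend as (impressions / 1,000) * blended CPM.

    CPM ("Cost Per Mille") is the price of one thousand impressions;
    a "blended" CPM averages rates across ad formats or campaigns.
    """
    return (impressions / 1000) * blended_cpm

# Hypothetical figures, for illustration only:
# 250,000 impressions at a 6.50 blended CPM.
print(estimated_spend(250_000, 6.50))  # → 1625.0
```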
Measure 1.2
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.
QRE 1.2.1
Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.
- First, the LinkedIn Audience Network is a curated network of third-party sites and apps selected by LinkedIn. LinkedIn does not allow any blog, application, or website to join the LinkedIn Audience Network and display ads; rather, LinkedIn selects the publishers that are included in the network.
- Second, LinkedIn has integrated with partners, such as Integral Ad Science and DoubleVerify, to help monitor the quality and brand safety of the publishers in the LinkedIn Audience Network and filter out publisher inventory that falls short of standards, such as brand safety floors.
- Third, LinkedIn regularly reviews the publishers included in the LinkedIn Audience Network to ensure they meet LinkedIn standards and are serving LinkedIn advertisers.
SLI 1.2.1
Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
Measure 1.4
Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.
QRE 1.4.1
Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA.
- Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
SLI 1.6.1
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.1
Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.
QRE 2.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.
SLI 2.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1 Measure 3.2 Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Political Advertising
Commitment 4
Relevant Signatories commit to adopt a common definition of "political and issue advertising".
We signed up to the following measures of this commitment
Measure 4.1 Measure 4.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 4.1
Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.
QRE 4.1.1
Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.
QRE 4.1.2
After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.
Commitment 5
Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.
We signed up to the following measures of this commitment
Measure 5.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 5.1
Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.
QRE 5.1.1
Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.
Commitment 7
Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.
We signed up to the following measures of this commitment
Measure 7.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 7.3
Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.
QRE 7.3.1
Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.
QRE 7.3.2
Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.
Integrity of Services
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
- The creation and use of fake accounts, account takeovers and bot-driven amplification
- Hack-and-leak operations
- Impersonation
- Malicious deep fakes
- The purchase of fake engagements
- Non-transparent paid messages or promotion by influencers
- The creation and use of accounts that participate in coordinated inauthentic behaviour
- User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
Measure 14.2
Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.
QRE 14.2.1
Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.
- Establishing metrics for when election-related conversations, violations, or operational capacity breach a threshold and require additional support
With respect to the remaining TTPs, LinkedIn is unable to reasonably ascertain the intent or provenance of such content. As discussed above, disinformation is not prevalent on LinkedIn due to the professional context of the platform. Distribution of such content through fake accounts is further hampered due to the need to create connections between the fake account and the real member. In the rare instances that such misinformation is spread through fake accounts, due to the adversarial nature of this activity, publicly disclosing details regarding the threat actor's TTPs would hurt our ability to fight against this activity. For example, reporting that vulnerable recipients were not targeted may incentivize the targeting of such recipients.
SLI 14.2.1
Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.
- TTP 1: “Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts).” SLI 14.2.1 reports the number of fake accounts that LinkedIn prevented from being created or restricted between 1 July - 31 December 2024, broken out by EEA Member State. The fake accounts reported are attributed to EEA Member States based on the IP address used during registration of the account. ‘Number of instances of identified TTPs’ and ‘Number of actions taken by type’ are identical given LinkedIn blocked the registration attempt or restricted the account in all instances.
- TTP 2: “Use of fake / inauthentic reactions (e.g. likes, up votes, comments).” The table reports the number of fake accounts reported in TTP 1 SLI 14.2.1 that reacted to, commented on, or shared (collectively, “engaged with”) a feed post between 1 July – 31 December 2024.
- The numbers of fake accounts reported below are a subset of the fake accounts reported in TTP 1 SLI 14.2.1 that engaged with a feed post between 1 July – 31 December 2024. For example, of the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1), 1,148 of those accounts engaged with a feed post between 1 July – 31 December 2024.
- TTP 3: “Use of fake followers or subscribers.” The table reports the number of fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page between 1 July – 31 December 2024.
- The numbers of fake accounts reported below are a subset of the fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page between 1 July – 31 December 2024. For example, of the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1), 9,198 of those accounts followed a LinkedIn profile or page between 1 July – 31 December 2024 (as reported below).
- TTP 4: “Creation of inauthentic pages, groups, chat groups, fora, or domains.” SLI 14.2.1 reports the number of LinkedIn pages or groups that the fake accounts reported in TTP 1 SLI 14.2.1 created between 1 July – 31 December 2024.
- The numbers of LinkedIn pages or groups created reported below are based on the population of fake accounts reported in TTP 1 SLI 14.2.1. For example, the 194,153 fake accounts that LinkedIn prevented from being created or restricted between 1 July – 31 December 2024 in Austria (as reported in TTP 1 SLI 14.2.1) created 33 LinkedIn pages or groups between 1 July – 31 December 2024 (as reported below).
Country | TTP 1 - Nr of instances of identified TTPs - The number of fake accounts LinkedIn prevented or restricted between 1 July – 31 December 2024 | TTP 1 - Nr of actions taken by type - The number of fake accounts LinkedIn prevented or restricted between 1 July – 31 December 2024 | TTP 2 - Nr of instances of identified TTPs - The number of fake accounts reported in TTP 1 SLI 14.2.1 that engaged with a feed post between 1 July – 31 December 2024 | TTP 3 - Nr of instances of identified TTPs - The number of fake accounts reported in TTP 1 SLI 14.2.1 that followed a LinkedIn profile or page between 1 July – 31 December 2024 | TTP 4 - Nr of instances of identified TTPs - The number of LinkedIn pages or groups created between 1 July – 31 December 2024 by the fake accounts reported in TTP 1 SLI 14.2.1 |
---|---|---|---|---|---|
Austria | 194,153 | 194,153 | 1,148 | 9,198 | 33 |
Belgium | 344,346 | 344,346 | 1,399 | 10,504 | 35 |
Bulgaria | 140,495 | 140,495 | 791 | 5,564 | 18 |
Croatia | 53,395 | 53,395 | 373 | 3,539 | 12 |
Cyprus | 61,105 | 61,105 | 312 | 1,631 | 16 |
Czech Republic | 185,280 | 185,280 | 740 | 11,763 | 50 |
Denmark | 150,598 | 150,598 | 648 | 6,065 | 8 |
Estonia | 452,316 | 452,316 | 166 | 2,523 | 7 |
Finland | 530,752 | 530,752 | 744 | 5,352 | 10 |
France | 2,712,034 | 2,712,034 | 14,984 | 137,403 | 321 |
Germany | 1,923,995 | 1,923,995 | 15,410 | 142,964 | 195 |
Greece | 271,628 | 271,628 | 1,230 | 11,514 | 21 |
Hungary | 102,782 | 102,782 | 475 | 4,856 | 12 |
Iceland | 444,743 | 444,743 | 1,064 | 11,910 | 26 |
Ireland | 8,365,534 | 8,365,534 | 6,200 | 84,608 | 141 |
Italy | 468,370 | 468,370 | 312 | 2,667 | 13 |
Latvia | 155,975 | 155,975 | 678 | 4,464 | 10 |
Lithuania | 40,195 | 40,195 | 261 | 1,567 | 3 |
Luxembourg | 31,539 | 31,539 | 122 | 851 | 6 |
Malta | 1,172,414 | 1,172,414 | 4,451 | 39,036 | 94 |
Netherlands | 597,637 | 597,637 | 4,104 | 55,905 | 104 |
Poland | 180,022 | 180,022 | 1.66 | 12,295 | 79 |
Portugal | 683,248 | 683,248 | 1,328 | 10,745 | 38 |
Romania | 84,337 | 84,337 | 309 | 3,110 | 20 |
Slovakia | 80,358 | 80,358 | 166 | 1,298 | 6 |
Slovenia | 2,046,114 | 2,046,114 | 6,420 | 65,638 | 191 |
Spain | 518,628 | 518,628 | 1,378 | 11,253 | 28 |
Sweden | 16,166 | 16,166 | 49 | 390 | 0 |
Liechtenstein | 1,080 | 1,080 | 4 | 52 | 0 |
Norway | 104,916 | 104,916 | 583 | 3,859 | 9 |
Total EU | 21,991,993 | 21,991,993 | 66,909 | 658,223 | 1,497 |
Total EEA | 22,114,155 | 22,114,155 | 67,545 | 662,524 | 1,506 |
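Because TTPs 2 and 3 count subsets of the TTP 1 population, each country row should satisfy simple consistency constraints. A sketch of such a check using the Austria figures quoted above; the dictionary keys and function name are illustrative, not part of the report:

```python
# Austria row from SLI 14.2.1 (1 July – 31 December 2024).
row = {
    "ttp1_fake_accounts": 194_153,  # prevented from creation or restricted
    "ttp2_engaged": 1_148,          # subset of TTP 1 that engaged with a feed post
    "ttp3_followed": 9_198,         # subset of TTP 1 that followed a profile or page
    "ttp4_pages_created": 33,       # pages/groups created by the TTP 1 accounts
}

def row_is_consistent(row: dict) -> bool:
    """TTP 2 and TTP 3 are subsets of the TTP 1 population,
    so neither count can exceed the TTP 1 figure."""
    return (row["ttp2_engaged"] <= row["ttp1_fake_accounts"]
            and row["ttp3_followed"] <= row["ttp1_fake_accounts"])

print(row_is_consistent(row))  # → True
```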
SLI 14.2.2
Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.
TTP 1: “Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts).”
- SLI 14.2.2 reports two metrics. First, the number of EEA accounts that connected to or followed the fake accounts in SLI 14.2.1 between 1 July – 31 December 2024. For example, the 194,153 fake accounts reported for Austria had a total of 3,299 EEA accounts connect to or follow them between 1 July and 31 December 2024. Whether an account qualifies as an EEA account is based on the IP address used during registration of the account. Second, the number of fake accounts in SLI 14.2.1 that posted a feed post between 1 July – 31 December 2024. For example, of the 8,365,534 fake accounts prevented or restricted for Ireland, 789 posted a feed post between 1 July and 31 December 2024.
- SLI 14.2.2 reports the number of accounts in the EEA that joined or followed the pages or groups reported in TTP 4 SLI 14.2.1 between 1 July and 31 December 2024. For example, the 33 pages and groups reported for Austria in TTP 4 SLI 14.2.1 had a total of 107 EEA accounts join or follow them between 1 July and 31 December 2024. Whether an account qualifies as an EEA account is based on the IP address used during registration of the account.
Please note that the metrics provided below are total numbers and do not imply that these fake accounts were engaging in posting misinformation or disinformation.
Country | TTP 1 - Views/impressions before action - The number of EEA accounts that connected to or followed the fake accounts between 1 July and 31 December 2024 | TTP 1 - Views/impressions before action - The number of fake accounts that posted a feed post between 1 July and 31 December 2024 | TTP 4 - Views/impressions before action - The number of accounts in the EEA that joined or followed the pages and groups reported in TTP 4 SLI 14.2.1 between 1 July and 31 December 2024 |
---|---|---|---|
Austria | 3,299 | 710 | 107 |
Belgium | 5,081 | 1,147 | 177 |
Bulgaria | 2,033 | 549 | 31 |
Croatia | 1,465 | 236 | 78 |
Cyprus | 977 | 218 | 23 |
Czech Republic | 3,811 | 629 | 50 |
Denmark | 2,732 | 558 | 42 |
Estonia | 525 | 129 | 14 |
Finland | 1,701 | 506 | 35 |
France | 58,419 | 11,784 | 2,484 |
Germany | 35,323 | 9,523 | 792 |
Greece | 5,422 | 1,009 | 211 |
Hungary | 1,606 | 401 | 114 |
Ireland | 3,951 | 789 | 47 |
Italy | 21,848 | 4,535 | 2,225 |
Latvia | 1,087 | 230 | 35 |
Lithuania | 1,927 | 304 | 50 |
Luxembourg | 655 | 134 | 21 |
Malta | 587 | 101 | 35 |
Netherlands | 18,922 | 3,448 | 420 |
Poland | 13,476 | 2,715 | 349 |
Portugal | 6,676 | 1,448 | 884 |
Romania | 5,942 | 920 | 161 |
Slovakia | 1,207 | 266 | 33 |
Slovenia | 654 | 142 | 18 |
Spain | 68,945 | 6,128 | 1,499 |
Sweden | 6,511 | 1,020 | 147 |
Iceland | 191 | 36 | 5 |
Liechtenstein | 24 | 6 | 2 |
Norway | 1,797 | 413 | 24 |
Total EU | 274,782 | 49,579 | 10,082 |
Total EEA | 276,794 | 50,034 | 10,113 |
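The EU and EEA totals in this table are internally consistent: the EEA total equals the EU total plus the Iceland, Liechtenstein, and Norway rows. A minimal sketch of that cross-check, using the first metric column's figures from the table above:

```python
# Cross-check of SLI 14.2.2 totals: Total EEA = Total EU + Iceland + Liechtenstein + Norway.
# Figures are copied from the first metric column of the table above (EEA accounts
# that connected to or followed the fake accounts, 1 July - 31 December 2024).
eu_total = 274_782
eea_only_states = {"Iceland": 191, "Liechtenstein": 24, "Norway": 1_797}

eea_total = eu_total + sum(eea_only_states.values())
assert eea_total == 276_794  # matches the reported Total EEA row
```

The same relationship holds for the other two metric columns (49,579 + 455 = 50,034 and 10,082 + 31 = 10,113).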
Measure 14.3
Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.
QRE 14.3.1
Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.
Commitment 15
Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.
We signed up to the following measures of this commitment
Measure 15.1 Measure 15.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 15.1
Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.
QRE 15.1.1
In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.
Measure 15.2
Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.
QRE 15.2.1
Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.
Commitment 16
Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.
We signed up to the following measures of this commitment
Measure 16.1 Measure 16.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 16.1
Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.
QRE 16.1.1
Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.
SLI 16.1.1
Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).
Measure 16.2
Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.
QRE 16.2.1
As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1 Measure 17.2 Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
LinkedIn’s Professional Community Policies clearly detail the objectionable and harmful content that is not allowed on LinkedIn. Misinformation and inauthentic content are not allowed, and our automated defences proactively remove them. LinkedIn’s blog provides information regarding our efforts, including How We’re Protecting Members From Fake Profiles, Automated Fake Account Detection, and An Update on How We Keep Members Safe.
SLI 17.1.1
Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.
Country | Total count of the tool's impressions - Number of visits during the period 1 July - 31 December 2024 |
---|---|
Austria | 35 |
Belgium | 15 |
Bulgaria | 39 |
Croatia | 19 |
Cyprus | 4 |
Czech Republic | 28 |
Denmark | 80 |
Estonia | 18 |
Finland | 81 |
France | 290 |
Germany | 794 |
Greece | 24 |
Hungary | 19 |
Ireland | 63 |
Italy | 3,988 |
Latvia | 354 |
Lithuania | 185 |
Luxembourg | 21 |
Malta | 10 |
Netherlands | 632 |
Poland | 166 |
Portugal | 32 |
Romania | 56 |
Slovakia | 15 |
Slovenia | 7 |
Spain | 109 |
Sweden | 99 |
Iceland | 14 |
Liechtenstein | 1 |
Norway | 45 |
Total EU | 7,184 |
Total EEA | 7,244 |
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
With respect to safety, we seek to keep content that violates our Professional Community Policies off LinkedIn. This is done through a combination of automated and manual activity. Our first layer of protection uses AI to proactively filter out bad content and deliver relevant experiences for our members. We use content (like certain key words or images) that has previously been identified as violating our content policies to help inform our AI models, so that we can better identify and restrict similar content from being posted in the future. The second layer of protection uses AI to flag content that is likely to be violative for human review; this occurs when the algorithm is not confident enough to warrant automatic removal. The third layer is member-led: members report content, our team of reviewers evaluates it, and we remove it if it is found to violate our policies.
Our Data Science team also prioritises quantifying this process to monitor how many content violations are successfully prevented, so that we can continuously refine our detection and prevention of violative content.
- QRE 22.1.1 (features and systems related to fake and inauthentic profiles);
- QRE 23.2.1 (actions taken to ensure integrity of reporting and appeals process).
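The three layers of protection described above can be sketched as a simple routing function. The thresholds, names, and structure below are illustrative assumptions for exposition only, not LinkedIn's actual implementation:

```python
# Illustrative sketch of a three-layer review flow like the one described above.
# The thresholds and names are hypothetical, not LinkedIn's real system.
REMOVE_THRESHOLD = 0.95   # assumed: high-confidence violations are removed automatically
REVIEW_THRESHOLD = 0.60   # assumed: mid-confidence content is routed to human review

def route_content(violation_score: float, member_reported: bool = False) -> str:
    """Route a piece of content based on a model's violation score.

    Layer 1: confident model predictions are filtered proactively.
    Layer 2: uncertain predictions are flagged for human review.
    Layer 3: member reports always reach a human reviewer.
    """
    if violation_score >= REMOVE_THRESHOLD:
        return "auto_remove"       # layer 1: proactive AI filtering
    if violation_score >= REVIEW_THRESHOLD or member_reported:
        return "human_review"      # layers 2 and 3: a reviewer decides
    return "publish"               # no action taken

# Example: a borderline post is queued for a reviewer rather than removed outright.
assert route_content(0.72) == "human_review"
assert route_content(0.99) == "auto_remove"
assert route_content(0.10, member_reported=True) == "human_review"
```

The key design point this sketch captures is that automatic removal is reserved for high-confidence predictions, while uncertain cases and member reports fall back to human judgment.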
Measure 18.2
Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.
QRE 18.2.1
Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.
Furthermore, LinkedIn has automated defences to identify and prevent abuse, including inauthentic behaviour, such as spam, phishing and scams, duplicate accounts, fake accounts, and misinformation. Our Trust and Safety teams work every day to identify and restrict inauthentic activity. We’re regularly rolling out scalable technologies like machine learning models to keep our platform safe.
SLI 18.2.1
Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).
Country | The number of pieces of content removed as Misinformation between 1 July and 31 December 2024 | The number of removals that were appealed by the content author | The number of appeals that were granted | The median time from appeal to appeal decision (hours) |
---|---|---|---|---|
Austria | 177 | 2 | 0 | 1.5 |
Belgium | 445 | 3 | 1 | |
Bulgaria | 36 | 0 | 0 | |
Croatia | 54 | 3 | 0 | |
Cyprus | 13 | 1 | 1 | |
Czech Republic | 88 | 1 | 0 | |
Denmark | 291 | 2 | 0 | |
Estonia | 9 | 0 | 0 | |
Finland | 52 | 1 | 0 | |
France | 3,452 | 14 | 1 | |
Germany | 1,639 | 40 | 2 | |
Greece | 164 | 2 | 0 | |
Hungary | 40 | 1 | 0 | |
Ireland | 136 | 0 | 0 | |
Italy | 1,264 | 15 | 2 | |
Latvia | 7 | 0 | 0 | |
Lithuania | 24 | 2 | 0 | |
Luxembourg | 62 | 0 | 0 | |
Malta | 11 | 1 | 0 | |
Netherlands | 3,308 | 38 | 5 | |
Poland | 128 | 2 | 0 | |
Portugal | 189 | 5 | 1 | |
Romania | 151 | 3 | 0 | |
Slovakia | 8 | 0 | 0 | |
Slovenia | 8 | 0 | 0 | |
Spain | 640 | 6 | 1 | |
Sweden | 209 | 1 | 0 | |
Iceland | 6 | 0 | 0 | |
Liechtenstein | 0 | 0 | 0 | |
Norway | 99 | 2 | 0 | |
Total EU | 12,605 | 142 | 14 | |
Total EEA | 12,710 | 144 | 14 | |
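Two derived metrics help put the table above in context: the share of removals that were appealed, and the share of appeals that were granted. A minimal sketch using the reported EU totals:

```python
# Derived rates from the SLI 18.2.1 EU totals reported in the table above.
removed = 12_605   # pieces of content removed as misinformation (Total EU)
appealed = 142     # removals appealed by the content author (Total EU)
granted = 14       # appeals granted (Total EU)

appeal_rate = appealed / removed   # share of removals that were appealed
grant_rate = granted / appealed    # share of appeals that were granted

assert round(appeal_rate, 3) == 0.011   # roughly 1.1% of removals were appealed
assert round(grant_rate, 3) == 0.099    # roughly 9.9% of appeals were granted
```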
Measure 18.3
Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.
QRE 18.3.1
Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.
Commitment 19
Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.
We signed up to the following measures of this commitment
Measure 19.1 Measure 19.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 19.1
Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.
QRE 19.1.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
As noted in a previous report, LinkedIn launched two new experiences in the EU in August 2023. Additional detail is included below:
Measure 19.2
Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.
SLI 19.2.1
Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.
Country | The number of EEA members who used the preferred feed view setting between 1 July and 31 December 2024 | The number of times the members used the preferred feed view setting between 1 July and 31 December 2024 |
---|---|---|
Austria | 2,024 | 3,021 |
Belgium | 3,108 | 4,719 |
Bulgaria | 492 | 793 |
Croatia | 502 | 903 |
Cyprus | 256 | 391 |
Czech Republic | 1,179 | 1,742 |
Denmark | 2,495 | 3,760 |
Estonia | 294 | 444 |
Finland | 2,749 | 4,137 |
France | 21,112 | 33,303 |
Germany | 21,565 | 32,823 |
Greece | 1,305 | 2,031 |
Hungary | 777 | 1,148 |
Ireland | 2,757 | 4,212 |
Italy | 7,160 | 10,794 |
Latvia | 256 | 423 |
Lithuania | 387 | 606 |
Luxembourg | 460 | 686 |
Malta | 195 | 298 |
Netherlands | 13,698 | 21,132 |
Poland | 3,817 | 5,657 |
Portugal | 2,781 | 4,241 |
Romania | 1,357 | 2,269 |
Slovakia | 368 | 575 |
Slovenia | 268 | 389 |
Spain | 10,006 | 14,665 |
Sweden | 4,628 | 7,014 |
Iceland | 41 | 57 |
Liechtenstein | 24 | 44 |
Norway | 1,141 | 1,878 |
Total EU | 105,996 | 162,176 |
Total EEA | 107,202 | 164,155 |
Commitment 20
Relevant Signatories commit to empower users with tools to assess the provenance and edit history or authenticity or accuracy of digital content.
We signed up to the following measures of this commitment
Measure 20.1 Measure 20.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 20.1
Relevant Signatories will develop technology solutions to help users check authenticity or identify the provenance or source of digital content, such as new tools or protocols or new open technical standards for content provenance (for instance, C2PA).
QRE 20.1.1
Relevant Signatories will provide details of the progress made developing provenance tools or standards, milestones reached in the implementation and any barriers to progress.
Measure 20.2
Relevant Signatories will take steps to join/support global initiatives and standards bodies (for instance, C2PA) focused on the development of provenance tools.
QRE 20.2.1
Relevant Signatories will provide details of global initiatives and standards bodies focused on the development of provenance tools (for instance, C2PA) that signatories have joined, or the support given to relevant organisations, providing links to organisation websites where possible.
Commitment 21
Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.
We signed up to the following measures of this commitment
Measure 21.1 Measure 21.2 Measure 21.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 21.1
Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.
QRE 21.1.1
Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.
SLI 21.1.1
Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.
SLI 21.1.2
When cooperating with independent fact-checkers to label content on their services, Relevant Signatories will report on actions taken at the Member State level and their impact, via metrics, of: number of articles published by independent fact-checkers; number of labels applied to content, such as on the basis of such articles; meaningful metrics on the impact of actions taken under Measure 21.1.1 such as the impact of said measures on user interactions with, or user re-shares of, content fact-checked as false or misleading.
Measure 21.2
Relevant Signatories will, in light of scientific evidence and the specificities of their services, and of user privacy preferences, undertake and/or support research and testing on warnings or updates targeted to users that have interacted with content that was later actioned upon for violation of policies mentioned in this section. They will disclose and discuss findings within the permanent Task-force in view of identifying relevant follow up actions.
QRE 21.2.1
Relevant Signatories will report on the research or testing efforts that they supported and undertook as part of this commitment and on the findings of research or testing undertaken as part of this commitment. Wherever possible, they will make their findings available to the general public.
Measure 21.3
Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.
QRE 21.3.1
Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.
Commitment 22
Relevant Signatories commit to provide users with tools to help them make more informed decisions when they encounter online information that may be false or misleading, and to facilitate user access to tools and information to assess the trustworthiness of information sources, such as indicators of trustworthiness for informed online navigation, particularly relating to societal issues or debates of general interest.
We signed up to the following measures of this commitment
Measure 22.1 Measure 22.2 Measure 22.3 Measure 22.7
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 22.1
Relevant Signatories will make it possible for users of their services to access indicators of trustworthiness (such as trust marks focused on the integrity of the source and the methodology behind such indicators) developed by independent third-parties, in collaboration with the news media, including associations of journalists and media freedom organisations, as well as fact-checkers and other relevant entities, that can support users in making informed choices.
QRE 22.1.1
Relevant Signatories will report on how they enable users of their services to benefit from such indicators or trust marks.
SLI 22.1.1
Relevant Signatories will report on Member State level percentage of users that have enabled the trustworthiness indicator.
Country | Percentage of users that have enabled the trustworthiness indicator - The number of members who used the “About this profile” feature between 1 July and 31 December 2024 | The aggregate number of times those members used the feature between 1 July and 31 December 2024 |
---|---|---|
Austria | 201,047 | 497,213 |
Belgium | 416,378 | 1,008,087 |
Bulgaria | 67,997 | 173,071 |
Croatia | 49,369 | 107,802 |
Cyprus | 34,730 | 102,045 |
Czech Republic | 153,427 | 381,771 |
Denmark | 331,225 | 805,215 |
Estonia | 29,112 | 79,910 |
Finland | 146,719 | 330,688 |
France | 2,773,641 | 7,030,529 |
Germany | 1,754,780 | 4,557,455 |
Greece | 162,018 | 420,517 |
Hungary | 98,215 | 228,240 |
Ireland | 242,133 | 620,914 |
Italy | 1,186,530 | 2,655,054 |
Latvia | 31,247 | 74,767 |
Lithuania | 57,795 | 162,218 |
Luxembourg | 47,455 | 131,560 |
Malta | 24,417 | 62,099 |
Netherlands | 1,236,806 | 3,065,899 |
Poland | 508,088 | 1,319,937 |
Portugal | 326,189 | 770,593 |
Romania | 188,243 | 461,845 |
Slovakia | 46,635 | 114,738 |
Slovenia | 28,765 | 63,266 |
Spain | 1,253,059 | 3,172,615 |
Sweden | 437,507 | 1,037,882 |
Iceland | 7,395 | 15,305 |
Liechtenstein | 2,915 | 7,234 |
Norway | 168,496 | 369,054 |
Total EU | 11,833,527 | 29,435,930 |
Total EEA | 12,012,333 | 29,827,523 |
Measure 22.2
Relevant Signatories will give users the option of having signals relating to the trustworthiness of media sources into the recommender systems or feed such signals into their recommender systems.
QRE 22.2.1
Relevant Signatories will report on whether and, if relevant, how they feed signals related to the trustworthiness of media sources into their recommender systems, and outline the rationale for their approach.
Measure 22.3
Relevant Signatories will make details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
QRE 22.3.1
Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.
(1) Be Safe: do not post harassing content; do not threaten, incite, or promote violence; do not share material depicting the exploitation of children; do not promote, sell or attempt to purchase illegal or dangerous goods or services; do not share content promoting dangerous organisations or individuals.
Measure 22.7
Relevant Signatories will design and apply products and features (e.g. information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.
QRE 22.7.1
Relevant Signatories will outline the products and features they deploy across their services and will specify whether those are available across Member States.
SLI 22.7.1
Relevant Signatories will report on the reach and/or user interactions with the products or features, at the Member State level, via the metrics of impressions and interactions (clicks, click-through rates (as relevant to the tools and services in question) and shares (as relevant to the tools and services in question).
Commitment 23
Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.
We signed up to the following measures of this commitment
Measure 23.1 Measure 23.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 23.1
Relevant Signatories will develop or continue to make available, on all their services and in all Member States' languages in which their services are provided, a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.
QRE 23.1.1
Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.
Members also receive an email notifying them in the event their content is actioned in accordance with our policies. The email includes a link to a notice page with additional details and resources. If the member believes that their content complies with our Professional Community Policies, they can ask us to revisit our decision by submitting an appeal via the link on the notice page.
Measure 23.2
Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).
QRE 23.2.1
Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.
Commitment 24
Relevant Signatories commit to inform users whose content or accounts have been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.
We signed up to the following measures of this commitment
Measure 24.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 24.1
Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.
QRE 24.1.1
Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.
SLI 24.1.1
Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the number of such actions that were subsequently appealed, the results of these appeals, and, to the extent possible, metrics providing insight into the duration and effectiveness of the appeals process, and publish this information on the Transparency Centre.
Country | Nr of enforcement actions | Nr of actions appealed | Metrics on results of appeals | Metrics on the duration and effectiveness of the appeal process |
---|---|---|---|---|
Austria | 177 | 0 | 0 | |
Belgium | 445 | 3 | 1 | |
Bulgaria | 36 | 0 | 0 | |
Croatia | 54 | 3 | 0 | |
Cyprus | 13 | 1 | 1 | |
Czech Republic | 88 | 1 | 0 | |
Denmark | 291 | 2 | 0 | |
Estonia | 9 | 0 | 0 | |
Finland | 52 | 1 | 0 | |
France | 3,452 | 14 | 1 | |
Germany | 1,639 | 40 | 2 | |
Greece | 164 | 2 | 0 | |
Hungary | 40 | 1 | 0 | |
Ireland | 136 | 0 | 0 | |
Italy | 1,264 | 15 | 2 | |
Latvia | 7 | 0 | 0 | |
Lithuania | 24 | 2 | 0 | |
Luxembourg | 62 | 0 | 0 | |
Malta | 11 | 1 | 0 | |
Netherlands | 3,308 | 38 | 5 | |
Poland | 128 | 2 | 0 | |
Portugal | 189 | 5 | 1 | |
Romania | 151 | 3 | 0 | |
Slovakia | 8 | 0 | 0 | |
Slovenia | 8 | 0 | 0 | |
Spain | 640 | 6 | 1 | |
Sweden | 209 | 1 | 0 | |
Iceland | 6 | 0 | 0 | |
Liechtenstein | 0 | 0 | 0 | |
Norway | 99 | 2 | 0 | |
Total EU | 12,605 | 142 | 14 | 1.5 hours |
Total EEA | 12,710 | 144 | 14 | |
Empowering Researchers
Commitment 26
Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.
We signed up to the following measures of this commitment
Measure 26.1 Measure 26.2 Measure 26.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 26.1
Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).
QRE 26.1.1
Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.
QRE 26.1.2
Relevant Signatories will publish information related to data points available via Measure 26.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.
SLI 26.1.1
Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.
Nr of users of public access |
---|
0 applications were approved under our Beta Art. 40 process in the period covered by this report. Note: an unknown number of researchers use our broadly available service to conduct research. |
Measure 26.2
Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.
QRE 26.2.1
Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.
QRE 26.2.2
Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.
QRE 26.2.3
Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.
SLI 26.2.1
Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).
 | No of applications received | No of applications rejected | No of applications accepted |
---|---|---|---|
Data | 52 | 47 | 0 |
Measure 26.3
Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.
QRE 26.3.1
Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.
Commitment 27
Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.
We signed up to the following measures of this commitment
Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 27.1
Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.
QRE 27.1.1
Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.
Measure 27.2
Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.
QRE 27.2.1
Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.
Measure 27.3
Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.
QRE 27.3.1
Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.
SLI 27.3.1
Relevant Signatories will disclose how many of the research projects vetted by the independent third-party body they have initiated cooperation with or have otherwise provided access to the data they requested.
Measure 27.4
Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.
QRE 27.4.1
Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.
Commitment 28
COOPERATION WITH RESEARCHERS
Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstrably public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged from running such research.
Measure 28.4
As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.
QRE 28.4.1
Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.
Empowering fact-checkers
Commitment 30
Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.
We signed up to the following measures of this commitment
Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 30.1
Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.
QRE 30.1.1
Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking, as depends on the agreements and to be further discussed within the Task-force); and relevant common standards and conditions for these agreements.
QRE 30.1.2
Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).
QRE 30.1.3
Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.
SLI 30.1.1
Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.
 | Nr of agreements with fact-checking organisations |
---|---|
EU | 1 |
Measure 30.2
Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.
QRE 30.2.1
Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.
QRE 30.2.2
Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.
QRE 30.2.3
European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.
Measure 30.3
Relevant Signatories will contribute to cross-border cooperation between fact-checkers.
QRE 30.3.1
Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.
Measure 30.4
To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.
QRE 30.4.1
Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.
Commitment 31
Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.
We signed up to the following measures of this commitment
Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 31.2
Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.
QRE 31.2.1
Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.
SLI 31.1.1 (for Measures 31.1 and 31.2)
Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.
 | Nr of fact-checked articles published | Reach of fact-checked articles | Nr of content pieces reviewed by fact-checkers |
---|---|---|---|
Total Global | 0 | N/A | 106 |
Measure 31.3
Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.
QRE 31.3.1
Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.
Measure 31.4
Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.
QRE 31.4.1
Relevant Signatories will report on the technical solutions they explore and, insofar as possible and in light of discussions with the Task-force, on solutions they implemented to facilitate the efficient use of a common repository across platforms.
Commitment 32
Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.
We signed up to the following measures of this commitment
Measure 32.1 Measure 32.2 Measure 32.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 32.3
Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.
QRE 32.3.1
Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.