Instagram

Report March 2025

Submitted

Your organisation description

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Improvements to Inventory Filter for Instagram Feed and Reels: 
Inventory Filter for Instagram Feed and Reels gives advertisers the ability to adjust their preferences for adjacency to different content types. Within this control, advertisers can choose between expanded, moderate, and limited inventory settings based on the suitability level that’s right for their brand. We’ve rolled out the following improvements to this control: 

  • Language Expansion:
    Inventory Filter now supports a total of 34 languages on Instagram Feed and Reels. We are working to expand the number of languages supported by Inventory Filter further this year.
    • Please note that this language expansion refers to Inventory Filter for Feed and Reels. The Inventory Filter for in-content ads currently supports 37 languages, and this work will bring the language support closer to parity between the two controls.

Additional Brand Safety & Suitability: Meta Business Partners, such as Adloox, have onboarded to the third-party brand suitability verification solution for Instagram Feed and Reels.

(For advertising policies, see Commitment 2)

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

  • Meta will continue to invest resources in the ongoing development and enhancement of the inventory filter.
  • We plan to expand integrations with our third-party partners to introduce additional functionality.

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

Instagram

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

We continue to require that our users comply with the policies on monetisation of their content defined in our baseline report. There are no additional new policies to report in this instance.

SLI 1.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

We were not able to deliver this SLI for this report.

Country Type of Action 1 Type of Action 2 Type of Action 3 Type of Action 4
Austria 0 0 0 0
Belgium 0 0 0 0
Bulgaria 0 0 0 0
Croatia 0 0 0 0
Cyprus 0 0 0 0
Czech Republic 0 0 0 0
Denmark 0 0 0 0
Estonia 0 0 0 0
Finland 0 0 0 0
France 0 0 0 0
Germany 0 0 0 0
Greece 0 0 0 0
Hungary 0 0 0 0
Ireland 0 0 0 0
Italy 0 0 0 0
Latvia 0 0 0 0
Lithuania 0 0 0 0
Luxembourg 0 0 0 0
Malta 0 0 0 0
Netherlands 0 0 0 0
Poland 0 0 0 0
Portugal 0 0 0 0
Romania 0 0 0 0
Slovakia 0 0 0 0
Slovenia 0 0 0 0
Spain 0 0 0 0
Sweden 0 0 0 0
Iceland 0 0 0 0
Liechtenstein 0 0 0 0
Norway 0 0 0 0

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

Instagram

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

We continue to discuss potential changes to our Community Standards, Advertising Standards or Product Policies in our Policy Forum meetings.

In the second half of 2024, we held three Policy Forum meetings that included discussions of our Community Standards, including elements relevant to advertising. The topics covered were: Removing Self-Reported Imagery, Disordered Eating, and DOI Condolence Content.

SLI 1.2.1

Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.

We were not able to deliver this SLI for this report.

Country Nr of policy reviews Nr of updates to policies Nr of accounts barred Nr of domains barred
Austria 0 0 0 0
Belgium 0 0 0 0
Bulgaria 0 0 0 0
Croatia 0 0 0 0
Cyprus 0 0 0 0
Czech Republic 0 0 0 0
Denmark 0 0 0 0
Estonia 0 0 0 0
Finland 0 0 0 0
France 0 0 0 0
Germany 0 0 0 0
Greece 0 0 0 0
Hungary 0 0 0 0
Ireland 0 0 0 0
Italy 0 0 0 0
Latvia 0 0 0 0
Lithuania 0 0 0 0
Luxembourg 0 0 0 0
Malta 0 0 0 0
Netherlands 0 0 0 0
Poland 0 0 0 0
Portugal 0 0 0 0
Romania 0 0 0 0
Slovakia 0 0 0 0
Slovenia 0 0 0 0
Spain 0 0 0 0
Sweden 0 0 0 0
Iceland 0 0 0 0
Liechtenstein 0 0 0 0
Norway 0 0 0 0

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

Instagram

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

We continue to offer several brand safety controls that allow advertisers to control the placement of their advertising, including preventing ads from running alongside certain types of content on Instagram. Advertisers can see and update brand safety settings directly, and these controls can be used in combination or on their own [see here for details].

These controls are transparent, and advertisers can access details in Meta's brand safety description of methodology.

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

N/A

QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

Measure 1.4 applies to signatories responsible for the buying of advertising.

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

Instagram

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

As mentioned in our baseline report, Instagram is in scope for accreditation from the Media Rating Council (MRC) in the next audit period. 

We are continuing to work on this audit and will issue updates as new information becomes available.

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

Meta will expand the scope of the recurring MRC audit to Instagram in the future. At present Meta is still determining the scope of this audit. We do not have any updates on this process at this time. 

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

Instagram

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

As mentioned in the baseline report, we continue to offer several brand safety controls for preventing ads from running alongside certain types of content on Instagram. Advertisers can see and update brand safety settings directly, and these controls can be used in combination or on their own [see here for details].

Users can find details in Meta's brand safety description of methodology.

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

As mentioned in our baseline report, when advertising on our platforms, we respect our policies and principles and are able to use the brand safety tools outlined above.

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

As mentioned in our baseline report, we provide brand safety tools across Audience Network and Instagram, and provide resources on how to use them appropriately.

The brand suitability inventory filter control for Instagram Feed and Reels has been expanded to support additional languages including Dutch, Hebrew, Indonesian, Korean, Romanian and Ukrainian.

In addition, third-party brand safety and suitability verification for Instagram Feed and Reels is now available through our Meta Business Partner, Adloox, in addition to our other partners previously announced.

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

N/A

SLI 1.6.1

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

N/A

Country In view of steps taken to integrate brand safety tools: % of advertising/media investment protected by such tools
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we enforce Advertising Standards on what is allowed across Meta technologies, and our advertisers must also follow our Terms of service and our Community Standards.

(For monetisation policies, see Commitment 1)

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As noted in our baseline report, our policies are based on years of experience and expertise in safety, combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our Advertising Standards, policies, tools, and processes.

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

Instagram

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

As noted in our baseline report, advertisers running ads across Meta technologies must follow our Terms of Use, our Community Standards and our Advertising Standards. Misinformation is considered unacceptable content under our Advertising Standards. See more here.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

  1. Number of Ads removed on Facebook and Instagram combined for violating our Misinformation policy in the EU from 01/07/2024 to 31/12/2024.*
  2. Overall number of Ads removed on Facebook and Instagram combined (in the EU) from 01/07/2024 to 31/12/2024.

*Meta's policies to tackle false claims about COVID-19 which could directly contribute to the risk of imminent physical harm changed in June 2023, following the advice of Meta's independent Oversight Board. We now only remove this content in countries with an active COVID-19 public health emergency declaration (during the reporting period, no countries had an active health emergency declaration). This change has impacted our enforcement metrics on removals for this reporting period but does not change our overall approach to fact-checking. These changes are an expected part of fluctuating content trends online.

Country Number of Ads removed on Facebook and Instagram combined for violating our Misinformation policy in the EU from 01/07/2024 to 31/12/2024 Overall number of Ads removed on Facebook and Instagram combined (in the EU) from 01/07/2024 to 31/12/2024
Austria Over 660 Over 56,000
Belgium Over 1,200 Over 89,000
Bulgaria Over 1,100 Over 73,000
Croatia Less than 500 Over 23,000
Cyprus Less than 500 Over 37,000
Czech Republic Over 1,300 Over 92,000
Denmark Over 1,000 Over 58,000
Estonia Over 2,400 Over 240,000
Finland Over 730 Over 24,000
France Over 5,600 Over 400,000
Germany Over 9,100 Over 780,000
Greece Over 700 Over 44,000
Hungary Over 810 Over 63,000
Ireland Over 800 Over 41,000
Italy Over 14,000 Over 660,000
Latvia Over 2,200 Over 73,000
Lithuania Over 3,400 Over 97,000
Luxembourg Less than 500 Over 5,100
Malta Less than 500 Over 16,000
Netherlands Over 3,700 Over 270,000
Poland Over 12,000 Over 790,000
Portugal Over 6,100 Over 270,000
Romania Over 6,300 Over 210,000
Slovakia Over 680 Over 38,000
Slovenia Less than 500 Over 35,000
Spain Over 8,500 Over 530,000
Sweden Over 1,900 Over 76,000
Iceland N/A N/A
Liechtenstein N/A N/A
Norway N/A N/A

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

Instagram

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

As noted in our baseline report, misinformation is considered unacceptable content under our Advertising Standards, and as such this type of content is ineligible for monetisation. See our Advertising Standards for more information.

In the EU, Meta’s third-party fact-checkers may review ads posted on Instagram and label those they assess to be false.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

Instagram

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

As mentioned in our baseline report, the ad review system checks ads for violations of our policies. This review process may include the specific components of an ad, such as images, video, text and targeting information, as well as an ad's associated landing page or other destinations, among other information.

More specifically, once fact-checking partners have determined that a piece of content contains misinformation, we can use technology to identify near-identical versions across Instagram. If we find ads that are near identical to content fact-checkers have rated, we reject them.
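The near-identical matching described above can be illustrated with a toy sketch. This is purely illustrative: Meta's actual matching technology is not public, and the shingling/Jaccard approach, threshold, and example texts below are assumptions chosen only to show the general idea of flagging ad copy that closely overlaps fact-checked content.

```python
# Illustrative sketch only: not Meta's actual system. Flags ad text as
# near-identical to fact-checker-rated content using Jaccard similarity
# over word trigram "shingles".

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_near_identical(ad_text: str, rated_text: str,
                      threshold: float = 0.8) -> bool:
    """True if the ad copy closely overlaps the rated content."""
    return jaccard(shingles(ad_text), shingles(rated_text)) >= threshold

# Hypothetical example texts.
rated = "miracle cure eliminates the virus in two days doctors hate it"
ad_copy = "miracle cure eliminates the virus in two days doctors hate this"
unrelated = "buy one get one free on all shoes this weekend only"

print(is_near_identical(ad_copy, rated))    # high overlap -> flagged
print(is_near_identical(unrelated, rated))  # no overlap -> not flagged
```

A production system would use far more robust signals (e.g. learned embeddings and image/video matching), but the core decision, comparing new ads against a corpus of rated content and rejecting close matches, has this shape.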

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

  1. Number of Ads removed on Facebook and Instagram combined for violating our Misinformation policy in the EU from 01/07/2024 to 31/12/2024.*
  2. Overall number of Ads removed on Facebook and Instagram combined (in the EU) from 01/07/2024 to 31/12/2024.

*Meta's policies to tackle false claims about COVID-19 which could directly contribute to the risk of imminent physical harm changed in June 2023, following the advice of Meta's independent Oversight Board. We now only remove this content in countries with an active COVID-19 public health emergency declaration (during the reporting period, no countries had an active health emergency declaration). This change has impacted our enforcement metrics on removals for this reporting period but does not change our overall approach to fact-checking. These changes are an expected part of fluctuating content trends online.

Country Number of Ads removed on Facebook and Instagram combined for violating our Misinformation policy in the EU from 01/07/2024 to 31/12/2024. Overall number of Ads removed on Facebook and Instagram combined (in the EU) from 01/07/2024 to 31/12/2024.
Austria Over 660 Over 56,000
Belgium Over 1,200 Over 89,000
Bulgaria Over 1,100 Over 73,000
Croatia Less than 500 Over 23,000
Cyprus Less than 500 Over 37,000
Czech Republic Over 1,300 Over 92,000
Denmark Over 1,000 Over 58,000
Estonia Over 2,400 Over 240,000
Finland Over 730 Over 24,000
France Over 5,600 Over 400,000
Germany Over 9,100 Over 780,000
Greece Over 700 Over 44,000
Hungary Over 810 Over 63,000
Ireland Over 800 Over 41,000
Italy Over 14,000 Over 660,000
Latvia Over 2,200 Over 73,000
Lithuania Over 3,400 Over 97,000
Luxembourg Less than 500 Over 5,100
Malta Less than 500 Over 16,000
Netherlands Over 3,700 Over 270,000
Poland Over 12,000 Over 790,000
Portugal Over 6,100 Over 270,000
Romania Over 6,300 Over 210,000
Slovakia Over 680 Over 38,000
Slovenia Less than 500 Over 35,000
Spain Over 8,500 Over 530,000
Sweden Over 1,900 Over 76,000
Iceland N/A N/A
Liechtenstein N/A N/A
Norway N/A N/A

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

Instagram

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

As mentioned in our baseline report, our ad review system relies primarily on automated tools to check ads and business assets against our policies. Our ad review process starts automatically before ads begin running. More information can be found in our Business Help Centre.

Ads remain subject to review and re-review, and may be rejected or restricted for violating our policies at any time.

In case of violations, advertisers will be notified directly if their account is restricted or its access to monetisation tools is disabled. Advertisers always have the option to appeal this decision.

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

We were not able to deliver this SLI for this report.

Country Nr of appeals Proportion of appeals that led to a change of the initial decision
Austria 0 0
Belgium 0 0
Bulgaria 0 0
Croatia 0 0
Cyprus 0 0
Czech Republic 0 0
Denmark 0 0
Estonia 0 0
Finland 0 0
France 0 0
Germany 0 0
Greece 0 0
Hungary 0 0
Ireland 0 0
Italy 0 0
Latvia 0 0
Lithuania 0 0
Luxembourg 0 0
Malta 0 0
Netherlands 0 0
Poland 0 0
Portugal 0 0
Romania 0 0
Slovakia 0 0
Slovenia 0 0
Spain 0 0
Sweden 0 0
Iceland 0 0
Liechtenstein 0 0
Norway 0 0

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here


Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

Instagram

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

There have been no significant updates since the last submitted report.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

Instagram

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

There have been no significant updates since the last submitted report.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

Instagram

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

There have been no significant updates since the last submitted report.

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1 Measure 4.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As noted in our baseline report, we continue to enforce our policy for Ads about social issues, elections or politics (“SIEP ads”).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As the provisions of Regulation (EU) 2024/900 on the transparency and targeting of political advertising become applicable, we will update measures under this Chapter as appropriate and to the extent they are not already addressed by Meta’s products and/or policies.

Measure 4.1

Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

Instagram

QRE 4.1.1

Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

As mentioned in our baseline report, we continue to enforce our policy for Ads about social issues, elections or politics (“SIEP ads”), which covers advertising that:
  • Is made by, on behalf of or about a candidate for public office, a political figure, a political party or a political action committee, or advocates for the outcome of an election to public office.
  • Is about any election, referendum, or ballot initiative, including "get out the vote" or election information campaigns.
  • Is about any social issue in any place where the ad is being run (we define social issues as sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation. In the EU, those social issues include civil and social rights, crime, economy, environmental politics, health, immigration, political values and governance, and security and foreign policy).
  • Is regulated by law as political advertising.

Further details of our policies can be found online.

QRE 4.1.2

After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.

The Task-force working group on the definition of political ads has not yet begun its work.

Measure 4.2

Should there be no political agreement on the definition of "political advertising" in the context of the negotiations on the European Commission's proposal for a Regulation on the transparency and targeting of political advertising within the first year of the Code's operation or should this Regulation not include a definition of "political advertising" which adequately covers "issue advertising", the Signatories will come together with the Task-force to establish working definitions of political advertising and issue advertising that can serve as baseline for this chapter.

Instagram

Commitment 5

Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.

We signed up to the following measures of this commitment

Measure 5.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, Instagram’s policy requires that any advertiser who wants to run ads that discuss, debate, or advocate for or against social issues, elections or politics must go through the authorization process and run a "Paid for by" disclaimer alongside such ads indicating the payor. We intend to detect, and enforce consistently against, political ads run without a disclaimer.

In addition to this, we've established measures where ads related to voting around elections (this includes primary, general, special and run-off elections) are subject to additional prohibitions and could be rejected if in violation of our policies. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As the provisions of Regulation (EU) 2024/900 on the transparency and targeting of political advertising become applicable, we will update measures under this Chapter as appropriate and to the extent they are not already addressed by Meta’s products and/or policies.

Measure 5.1

Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.

Instagram

QRE 5.1.1

Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.

As mentioned and explained in our baseline report, any advertiser running ads about social issues, elections or politics who is located in or targeting people in designated countries must complete the authorization process required by Meta.

This applies to any ad that:
  • Is made by, on behalf of or about a candidate for public office, a political figure, a political party, a political action committee or advocates for the outcome of an election to public office
  • Is about any election, referendum or ballot initiative, including "get out the vote" or election information campaigns
  • Is about any social issue in any place where the ad is being run
  • Is regulated as political advertising

Advertisers must include a verified "Paid for by" disclaimer on these ads to show the entity or person responsible for running the ad across Meta technologies. The disclaimer is subject to restrictions. Advertisers must also comply with all applicable laws and regulations, including but not limited to requirements involving disclaimers, disclosure and ad labelling, blackout periods, foreign interference, spending limits and reporting requirements.

If ads do not include a disclaimer and we determine that the ad content includes content about social issues, elections or politics, the ad will be disapproved during ad review. If an ad is already running, it can be flagged by automated systems or reported by our community; if found to be violating our policy by missing a disclaimer, it will be disapproved and added to the Ad Library. Since April 2024, the Ad Library in the EU has contained more information about the Advertising or Community Standards that an ad violated (if applicable). We display this information for disapproved ads for a period of one year after their last impression is delivered, and seven years if the ad is about social issues, elections, or politics.

Advertisers also have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered (more detail about this policy is outlined at the start of this commitment).

We publicly share resources on our advertising standards covering the topics described above, such as ads about social issues, elections or politics in our Transparency Centre.

Commitment 6

Relevant Signatories commit to make political or issue ads clearly labelled and distinguishable as paid-for content in a way that allows users to understand that the content displayed contains political or issue advertising.

We signed up to the following measures of this commitment

Measure 6.1 Measure 6.2 Measure 6.3 Measure 6.4 Measure 6.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As noted in our previous report, Meta launched an AI disclosure policy in 2024 to help people understand when a social issue, election, or political advertisement on Instagram has been digitally created or altered, including through the use of AI. 

Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:
  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our Political Advertising policies, tools, and processes.

Measure 6.1

Relevant Signatories will develop a set of common best practices and examples for marks and labels on political or issue ads and integrate those learnings as relevant to their services.

Instagram

QRE 6.1.1

Relevant Signatories will publicise the best practices and examples developed as part of Measure 2.2.1 and describe how they relate to their relevant services.

N/A

Measure 6.2

Relevant Signatories will ensure that relevant information, such as the identity of the sponsor, is included in the label attached to the ad or is otherwise easily accessible to the user from the label.

Instagram

QRE 6.2.1

Relevant Signatories will publish examples of how sponsor identities and other relevant information are attached to ads or otherwise made easily accessible to users from the label.

As noted in our baseline report, Ads about social issues, elections or politics require authorizations and a “Paid for by” disclaimer.

QRE 6.2.2

Relevant Signatories will publish their labelling designs.

As noted in our baseline report, examples of political ad labelling may be found in the Ad Library. 

SLI 6.2.1

Relevant Signatories will publish meaningful metrics, at Member State level, on the volume of ads labelled according to Measure 6.2, such as the number of ads accepted and labelled, amounts spent by labelled advertisers, or other metrics to be determined in discussion within the Task-force with the aim to assess the efficiency of this labelling.

Number of unique SIEP ads on Facebook and Instagram combined displaying “Paid for by” disclaimers from 01/07/2024 to 31/12/2024 in EU member states.

Country determined by inferred advertiser location at time of enforcement.

Country: Number of ads accepted & labelled on Facebook and Instagram combined
Austria: Over 40,000
Belgium: Over 72,000
Bulgaria: Over 7,600
Croatia: Over 13,000
Cyprus: Over 3,700
Czech Republic: Over 35,000
Denmark: Over 24,000
Estonia: Over 2,700
Finland: Over 13,000
France: Over 47,000
Germany: Over 71,000
Greece: Over 19,000
Hungary: Over 37,000
Ireland: Over 22,000
Italy: Over 76,000
Latvia: Over 18,000
Lithuania: Over 11,000
Luxembourg: Over 760
Malta: Over 1,900
Netherlands: Over 61,000
Poland: Over 33,000
Portugal: Over 31,000
Romania: Over 80,000
Slovakia: Over 21,000
Slovenia: Over 2,000
Spain: Over 28,000
Sweden: Over 34,000
Iceland: N/A
Liechtenstein: N/A
Norway: N/A

Measure 6.3

Relevant Signatories will invest and participate in research to improve users' identification and comprehension of labels, discuss the findings of said research with the Task-force, and will endeavour to integrate the results of such research into their services where relevant.

Instagram

QRE 6.3.1

Relevant Signatories will publish relevant research into understanding how users identify and comprehend labels on political or issue ads and report on the steps they have taken to ensure that users are consistently able to do so and to improve the labels' potential to attract users' awareness.

As mentioned in our baseline report, we have developed labels for SIEP ads as part of our broader efforts to protect elections and increase transparency on Instagram so people can make more informed decisions about the posts they read, trust and share. For this, we worked with third-parties to develop a list of key issues, which we continue to refine over time. 

Measure 6.4

Relevant Signatories will ensure that once a political or issue ad is labelled as such on their platform, the label remains in place when users share that same ad on the same platform, so that they continue to be clearly identified as paid-for political or issue content.

Instagram

QRE 6.4.1

Relevant Signatories will describe the steps they put in place to ensure that labels remain in place when users share ads.

As mentioned in our baseline report, we are committed to making ads about social issues, elections or politics more transparent. If someone sees and shares an ad about social issues, elections or politics, the shared version will still contain the disclaimer and available information about the ad.

Measure 6.5

Relevant Signatories that provide messaging services will, where possible and when in compliance with local law, use reasonable efforts to work towards improving the visibility of labels applied to political advertising shared over messaging services. To this end they will use reasonable efforts to develop solutions that facilitate users recognising, to the extent possible, paid-for content labelled as such on their online platform when shared over their messaging services, without any weakening of encryption and with due regard to the protection of privacy.

N/A

QRE 6.5.1

Relevant Signatories will report on any solutions in place to empower users to recognise paid-for content as outlined in Measure 6.5.

N/A

Commitment 7

Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.

We signed up to the following measures of this commitment

Measure 7.1 Measure 7.2 Measure 7.3 Measure 7.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we have taken a broad definition for political advertising and adopted a policy that applies to all “ads about social issues, elections or politics”. Any advertiser—both political and non-political—who wants to run ads targeting countries in the EU that are about a candidate for public office, a political figure, political parties, elections or social issues will be required to confirm their identity.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our Political Advertising policies, tools, and processes. 

Measure 7.1

Relevant Signatories will make sure the sponsors and providers of advertising services acting on behalf of sponsors purchasing political or issue ads have provided the relevant information regarding their identity to verify (and re-verify where appropriate) said identity or the sponsors they are acting on behalf of before allowing placement of such ads.

Instagram

QRE 7.1.1

Relevant Signatories will report on the tools and processes in place to collect and verify the information outlined in Measure 7.1.1, including information on the timeliness and proportionality of said tools and processes.

As mentioned in our baseline report: Any advertiser who wants to create or edit ads in the European Union that reference political figures, political parties, elections in the EU or social issues within the EU will be required to go through the authorisation process and have a "Paid for by" label. This requirement includes anyone who performs actions on ads about social issues, elections or politics, such as starting or pausing ads, adjusting targeting, creating or editing disclaimers, or any other function related to ad management.

Identity confirmation is at the individual level, only needs to be done once and consists of:
  • Turning on two-factor authentication
  • Choosing one of the following options to confirm your identity:
    • Valid photo ID
    • Two official documents
    • A notarized form that you download from facebook.com/id

To help guard against foreign interference, advertisers (including political organisations and agencies) who want to run ads about social issues, elections or politics must have their ad run by a person who is authorised in the EU country that they're targeting. 

European Union institutions, registered European political parties and official political groups qualify to run ads about social issues, elections, and politics in Member States unless otherwise prohibited.

Advertisers are required to follow all other stated terms and conditions.

To help maintain the integrity of our authorization requirements, we'll periodically require that some advertisers reconfirm their identity and location. Identity reconfirmation must be done within 60 days of initial notice.

SLI 7.1.1

Relevant Signatories will publish meaningful metrics on the volume of ads rejected for failure to fulfil the relevant verification processes, comparable to metrics for SLI 6.2.1, where relevant per service and at Member State level.

Number of unique ads removed for not complying with our policy on SIEP ads on both Facebook and Instagram from 01/07/2024 to 31/12/2024 in EU member states.

Country: Number of unique ads removed
Austria: Over 7,300
Belgium: Over 19,000
Bulgaria: Over 3,700
Croatia: Over 1,900
Cyprus: Over 2,800
Czech Republic: Over 9,700
Denmark: Over 6,800
Estonia: Over 3,200
Finland: Over 8,000
France: Over 36,000
Germany: Over 40,000
Greece: Over 7,000
Hungary: Over 6,900
Ireland: Over 5,200
Italy: Over 45,000
Latvia: Over 5,800
Lithuania: Over 5,600
Luxembourg: Over 810
Malta: Over 1,100
Netherlands: Over 12,000
Poland: Over 32,000
Portugal: Over 15,000
Romania: Over 17,000
Slovakia: Over 5,700
Slovenia: Over 2,400
Spain: Over 25,000
Sweden: Over 7,300
Iceland: N/A
Liechtenstein: N/A
Norway: N/A

Measure 7.2

Relevant Signatories will complete verifications processes described in Commitment 7 in a timely and proportionate manner.

Instagram

QRE 7.2.1

Relevant Signatories will report on the actions taken against actors demonstrably evading the said tools and processes, including any relevant policy updates.

As mentioned in our baseline report: 
  • Political ads must have a disclaimer with the name of the entity that paid for the ads. If we detect an ad running without a disclaimer, it'll be paused, disapproved and added to the Ad Library until the advertiser completes the authorization process. Requirements vary by country.
  • As mentioned in our Advertising standards, we enforce our policies against all advertisers, and as a general rule, advertisers must not evade or attempt to evade our review process and enforcement actions. 
Regarding specifically social issues, electoral, or political ads, advertisers who repeatedly run such ads without being authorised will face restrictions, which could result in a permanent restriction of their ability to advertise. 

QRE 7.2.2

Relevant Signatories will provide information on the timeliness and proportionality of the verification process.

As mentioned in our baseline report, details for country-specific ID verification processes may be found online on our Business Help Centre.

An advertiser must confirm their identity and link an ad account using a valid disclaimer to complete authorization. Ad review is usually completed within 48 hours, and disclaimer reviews are typically completed within 24 hours. However, in some cases the review of ads about elections, social issues or politics can take up to 72 hours.

Measure 7.3

Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.

Instagram

QRE 7.3.1

Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.

As mentioned in our baseline report:
  • We require advertisers to acknowledge how we define social issues and review text examples before they can post SIEP ads. Ads where the primary purpose of the ad is the sale of a product or promotion of a service may not be considered social issue ads, which wouldn't require authorizations and a disclaimer. This doesn't apply to products or services about politicians, political parties or legislation, which continue to require transparency.
  • All ads are reviewed against our Advertising Standards by our ad review system before they're shown on Instagram.
  • In certain cases, a post or ad that's already running can be flagged by AI or reported by our community. If this happens, the content may be reviewed again, and if found to be in violation of our policies and/or the ad is missing a “Paid for by” disclaimer, we disapprove it. 

The Community Standards prohibit ads that promote voter interference.

QRE 7.3.2

Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.

As mentioned in our baseline report, our Advertising Standards make clear that we enforce our policies against all advertisers, and as a general rule, advertisers must not evade or attempt to evade our review process and enforcement actions. If we find that an ad account, user account or business account is evading our review process and enforcement actions, an advertiser may face advertising restrictions. 

Regarding specifically social issues, electoral, or political ads, advertisers who repeatedly run such ads without being authorised will face restrictions, which could result in a permanent restriction of their ability to advertise. 

In 2024, Meta launched a new AI disclosure policy that helps people understand when a social issue, election, or political advertisement on Instagram has been digitally created or altered (including through the use of AI). As a result, advertisers may also incur penalties for advertisements that demonstrably evade verification and transparency requirements.

Measure 7.4

Relevant Signatories commit to request that sponsors, and providers of advertising services acting on behalf of sponsors, declare whether the advertising service they request constitutes political or issue advertising.

Instagram

QRE 7.4.1

Relevant Signatories will report on research and publish data on the effectiveness of measures they take to verify the identity of political or issue ad sponsors.

Please refer to QRE 7.1.1 and SLI 7.1.1.

Commitment 8

Relevant Signatories commit to provide transparency information to users about the political or issue ads they see on their service.

We signed up to the following measures of this commitment

Measure 8.1 Measure 8.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous report, we continue to provide transparency on Instagram with tools such as the ‘Why am I seeing this Ad’ tool. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 8.1

Relevant Signatories will agree on the common minimum transparency obligations, seeking alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising, such as identification of the sponsor, display period, ad spend, and aggregate information on recipients of the ad.

Instagram

Measure 8.2

Relevant Signatories will provide a direct link from the ad to the ad repository.

Instagram

Commitment 9

Relevant Signatories commit to provide users with clear, comprehensible, comprehensive information about why they are seeing a political or issue ad.

We signed up to the following measures of this commitment

Measure 9.1 Measure 9.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous report, we continue to provide transparency on Instagram with tools such as the ‘Why am I seeing this Ad’ tool. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our Political Advertising policies, tools, and processes.

Measure 9.1

Relevant Signatories will, seeking alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising, provide a simple means for users to access information about why they are seeing a particular political or issue ad.

Instagram

Measure 9.2

Relevant Signatories will explain in simple, plain language, the rationale and the tools used by the sponsors and providers of advertising services acting on behalf of sponsors (for instance: demographic, geographic, contextual, interest or behaviourally-based) to determine that a political or issue ad is displayed specifically to the user.

Instagram

Commitment 10

Relevant Signatories commit to maintain repositories of political or issue advertising and ensure their currentness, completeness, usability and quality, such that they contain all political and issue advertising served, along with the necessary information to comply with their legal obligations and with transparency commitments under this Code.

We signed up to the following measures of this commitment

Measure 10.1 Measure 10.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous report, starting in April 2024, the Ad Library in the EU contains more information about the Advertising Standards or Community Standards that an ad violated (if applicable). We display this information for disapproved ads for a period of one year after their last impression is delivered and seven years if the ad is about social issues, elections, or politics.

For disapproved ads that received delivery in the EU, images will be blurred and there will be messaging saying that the ad was removed. The user can click to see more ad details and see more detailed reasoning on why the ad was disapproved, including the specific Advertising Standard or Community Standard it violated.

This change is applicable only to ads that were added to the Ad Library on or after 17 August 2023.  

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and  adjusting our Political Advertising policies, tools, and processes.

Measure 10.1

Relevant Signatories will set up and maintain dedicated searchable ad repositories containing accurate records (in as close to real time as possible, in particular during election periods) of all political and issue ads served, including the ads themselves. This should be accompanied by relevant information for each ad such as the identification of the sponsor; the dates the ad ran for; the total amount spent on the ad; the number of impressions delivered; the audience criteria used to determine recipients; the demographics and number of recipients who saw the ad; and the geographical areas the ad was seen in.

Instagram

Measure 10.2

The information in such ad repositories will be publicly available for at least 5 years.

Instagram

QRE 10.2.1

Relevant Signatories will detail the availability, features, and updating cadence of their repositories to comply with Measures 10.1 and 10.2. Relevant Signatories will also provide quantitative information on the usage of the repositories, such as monthly usage.

As mentioned in our baseline report, the Ad Library provides advertising transparency by offering a comprehensive, searchable collection of all ads currently running from across Meta technologies. We store these ads in the library for 7 years. 

Commitment 11

Relevant Signatories commit to provide application programming interfaces (APIs) or other interfaces enabling users and researchers to perform customised searches within their ad repositories of political or issue advertising and to include a set of minimum functionalities as well as a set of minimum search criteria for the application of APIs or other interfaces.

We signed up to the following measures of this commitment

Measure 11.1 Measure 11.2 Measure 11.3 Measure 11.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, our Ad Library application programming interface (“API”) allows users to perform custom keyword searches of ads stored in the Ad Library. Users can search data for all active and inactive ads about social issues, elections or politics. For people less familiar with the API solution, we provide a simpler research solution with our Ad Library report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our Political Advertising repositories. 

Measure 11.1

Relevant Signatories' APIs or other interfaces will provide a set of minimum functionalities and search criteria that enable users and researchers to perform customised searches for data in as close to real time as possible (in particular during elections) in standard formats, including for instance searches per advertiser or candidate, per geographic area or country, per language, per keyword, per election, or per other targeting criteria, to allow for research and monitoring.

Instagram

QRE 11.1.1

Please insert the relevant data

As mentioned in our baseline report, the Ad Library API provides access to data about ads about social issues, elections or politics from countries where the Ad Library is live, including European Union countries. 

The Ad Library API provides programmatic access to information about ads about politics or issues in the Library. Users can search data for all active and inactive ads about social issues, elections or politics. People are able to search for any term or name in the Ad Library. For Instagram accounts that don't have a linked Facebook Page, people will be able to search for an advertiser's ad using their Instagram handle name.
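To illustrate how such a programmatic search can be assembled, the sketch below builds a query URL against the public Ad Library API's `ads_archive` endpoint. This is a minimal, hedged example, not Meta's reference implementation: the Graph API version, the chosen `fields`, the country-list encoding, and the `ACCESS_TOKEN` placeholder are assumptions; consult Meta's Ad Library API documentation for the authoritative parameter formats.

```python
# Hedged sketch: constructing an Ad Library API search for social issue,
# electoral and political (SIEP) ads. No network call is made here; a real
# access token (obtained via Meta's identity-verification flow) is required
# to actually execute the request.
from urllib.parse import urlencode

# Graph API version below is illustrative and may differ from the current one.
GRAPH_URL = "https://graph.facebook.com/v19.0/ads_archive"


def build_siep_query(search_terms: str, countries: list[str], access_token: str) -> str:
    """Return a URL searching active and inactive political/issue ads."""
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",   # restrict to SIEP ads
        "ad_active_status": "ALL",              # both active and inactive ads
        # Country-list encoding is an assumption; check the API docs.
        "ad_reached_countries": ",".join(countries),
        # 'bylines' is where the "Paid for by" disclaimer text surfaces.
        "fields": "page_name,ad_delivery_start_time,bylines",
        "access_token": access_token,
    }
    return f"{GRAPH_URL}?{urlencode(params)}"


# Example: search for ads mentioning "election" shown in Germany and France.
url = build_siep_query("election", ["DE", "FR"], "ACCESS_TOKEN")
```

The resulting URL could then be fetched with any HTTP client; paginated responses would be walked via the cursor links the Graph API returns.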

Measure 11.2

The data Relevant Signatories make available via such APIs and other interfaces will be equivalent to or more detailed than that data made available through their ad repositories.

Instagram

Measure 11.3

Relevant Signatories will ensure wide access to and availability of APIs and other interfaces.

Instagram

Measure 11.4

Relevant Signatories will engage with researchers and update the functionalities of the APIs and other interfaces to meet researchers' reasonable needs where applicable.

Instagram

QRE 11.4.1

Relevant Signatories will report about their engagement with researchers, including to understand their experience with the functionalities of APIs, and the resulting improvements of the functionalities as the result of this engagement and of a discussion within the Task-force.

As of December 2024, we’ve made targeting information for 35.31 million social issue, electoral, and political Facebook and Instagram ads globally available to academic researchers. More details on the original launch of this initiative are available in the baseline report. 

Commitment 13

Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.

We signed up to the following measures of this commitment

Measure 13.1 Measure 13.2 Measure 13.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 13.1

Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.

Instagram

QRE 13.1.1

Through the Task-force, the Relevant Signatories will convene, at least annually, an appropriately resourced discussion around novel risks in political advertising to develop coordinated policy.

There have been no significant updates since the last submitted report.

Measure 13.2

Instagram

Measure 13.3

Instagram

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include: The creation and use of fake accounts, account takeovers and bot-driven amplification, Hack-and-leak operations, Impersonation, Malicious deep fakes, The purchase of fake engagements, Non-transparent paid messages or promotion by influencers, The creation and use of accounts that participate in coordinated inauthentic behaviour, User conduct aimed at artificially amplifying the reach or perceived public support for disinformation.

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, we continue to enforce and report publicly on our policies to tackle inauthentic behaviour. 
  • Inauthentic behaviour: We continue to investigate and take down coordinated adversarial networks of accounts on Instagram that seek to mislead people about who is behind them and what they are doing. We also work to scale our enforcement by feeding insights from these investigations into systems that help us automatically detect bad actors engaged in these and similar violating behaviours globally, including networks that attempt to come back after we have taken them down.

We also continue to improve our detection of inauthentic behaviour policy violations to counter new tactics and to act more quickly against the spectrum of deceptive practices we see on our platforms, both Coordinated Inauthentic Behaviour and other inauthentic tactics (often used by financially motivated actors), whether foreign or domestic, state or non-state.

In July 2024, we stopped removing content solely on the basis of our manipulated video policy. We will continue to remove content if it violates our Community Standards, regardless of whether it is created by AI or not.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

Instagram

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

To clarify what we’ve included in our baseline report, depending on the context, the actor, and the activity, several TTPs can be combined and are covered by several of our policies. We have highlighted some examples below:

Inauthentic Behaviour - Our Inauthentic Behaviour policy is targeted at addressing deceptive behaviours. In line with our commitment to authentic interactions, we do not allow people to misrepresent themselves on Instagram.  

CIB Policies - Our policy on Coordinated Inauthentic Behaviour (CIB) addresses covert influence operations (IO). Defined as “the use of multiple Facebook or Instagram assets, working in concert to engage in Inauthentic Behaviour (as defined by our policy), where the use of fake accounts is central to the operation”, the policy informs how we find, identify and remove IO networks on our platforms.

CIB can include a variety of different TTPs depending on the actors, context, and operation. Having said that, we often see: (1) the creation of inauthentic accounts; (2) the use of fake/inauthentic reactions (e.g. likes, upvotes, comments); (3) the use of fake followers or subscribers; (4) the creation of inauthentic chat groups, fora, or domains; (5) inauthentic coordination of content creation or amplification; (6) account hijacking or impersonation; and (7) inauthentic coordination.

Cybersecurity - Attempts to gather sensitive personal information or engage in unauthorised access by deceptive or invasive methods are harmful to the authentic, open and safe atmosphere that we want to foster. Therefore, we do not allow attempts to gather sensitive user information or engage in unauthorised access through the abuse of our platform, products, or services.

Spam - We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users, to increase viewership. We also aim to prevent people from abusing our platform, products or features to artificially increase viewership or distribute content en masse for commercial gain. This can be pertinent to several TTPs depending on the context, including: (1) the creation of inauthentic accounts; (2) the use of fake/inauthentic reactions (e.g. likes, upvotes, comments); (3) the use of fake followers or subscribers; (4) the creation of inauthentic chat groups, fora, or domains; and (5) the use of deceptive practices.

Branded Content Policies - Branded content may only be posted by Instagram accounts with access to the branded content tool, and creators must use that tool to tag the featured third-party product, brand, or business partner with their prior permission. This is pertinent to non-transparent promotional messages.

Privacy - We remove content that shares, offers or solicits personally identifiable information or other private information that could lead to physical or financial harm, including financial, residential, and medical information, as well as private information obtained from illegal sources.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

As mentioned in our baseline report, our approach to Coordinated Inauthentic Behaviour (CIB) more broadly is grounded in behaviour-based enforcement. This means that we look for specific violating behaviours, rather than violating content (which is predicated on other specific violations of our Community Standards, such as misinformation and hate speech). Therefore, when CIB networks are taken down, it is based on their behaviour, not the content they posted.

In addition to expert investigations against CIB, we also work to tackle inauthentic behaviour by fake accounts at scale. 

In addition, accounts directly involved in CIB activity are removed when they are detected as part of a deceptive adversarial network. When these accounts are taken down, the posts they published are automatically removed as well. This behaviour-based approach essentially allows us to address the problem at the source.
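The cascade just described, where taking down a network's accounts automatically brings down their posts, can be sketched schematically. This is an illustrative model of the stated behaviour, not Meta's actual enforcement code; all names are hypothetical.

```python
# Schematic model of behaviour-based network takedown (illustrative only):
# a CIB network's accounts are removed together, and the posts they
# published come down automatically with them.
network = {
    "accounts": ["acct_1", "acct_2"],
    "posts_by_account": {"acct_1": ["post_1", "post_2"], "acct_2": ["post_3"]},
}

def take_down_network(net: dict) -> tuple:
    """Remove every account in the network and, in cascade, its posts."""
    removed_accounts = list(net["accounts"])
    removed_posts = [
        post
        for account in removed_accounts
        for post in net["posts_by_account"].get(account, [])
    ]
    return removed_accounts, removed_posts

removed_accounts, removed_posts = take_down_network(network)
```

The point of the sketch is that enforcement keys on the network (the behaviour), and content removal follows from account removal rather than from any assessment of the posts themselves.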

We monitor for efforts to re-establish a presence on Instagram by networks we previously removed. 

For a comprehensive overview of our approach, see here.

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

Instagram

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

As mentioned in our baseline report, we report quarterly on enforcement actions taken under the policy most relevant to this Commitment:

Our coordinated inauthentic behaviour policies:

  • In Q3 2024, we took down 20 Instagram accounts while removing a network which originated in Moldova. We also removed 2 Instagram accounts while removing a network which originated in Iran (more detail is provided in Commitment 16).

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

Instagram

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

We continue to engage with this working group now that the list of TTPs has been agreed (as reported in our baseline report), notably to discuss how we report on those TTPs under SLIs 14.2.1-14.2.4 above.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposal for Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We recognize that the widespread availability and adoption of generative AI tools may have implications for how we identify and address disinformation on our platforms. We also acknowledge that, under the AIA, certain AI techniques are considered purposefully deceptive or manipulative if they impact people's behavior and decision-making abilities and are reasonably likely to cause significant harm.

We want people to know when they see posts that have been made with AI. In early 2024, we announced a new approach for labeling organic AI-generated content. An important part of this approach relies on industry standard indicators that other companies include in content created using their tools, which help us assess whether something is created using AI.

In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent has always been to help people know when they see content that was made with AI, and we’ve continued to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with peoples’ expectations.

For organic content that we detect was only modified or edited by AI tools, we moved the “AI info” label to the post’s menu. We still display the “AI info” label for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.  
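The placement rule described in the two paragraphs above can be summarised as a small decision function. This is an assumed simplification for illustration only, not Meta's implementation; the return values are hypothetical labels.

```python
def organic_ai_label_placement(detected_generated: bool, detected_edited_only: bool):
    """Schematic placement of the 'AI info' label on organic content
    (illustrative simplification, not Meta's actual logic)."""
    if detected_generated:
        return "label_displayed_on_post"  # content generated by an AI tool
    if detected_edited_only:
        return "label_in_post_menu"       # content only modified/edited by AI tools
    return None                           # no AI signal detected or disclosed
```

Under this sketch, fully AI-generated content keeps a visible label, while content that was merely edited with AI tools carries the label in the post's menu.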

In September 2024, we also began rolling out “AI Info” labels on ad creative images using a risk-based framework. When an image is created or significantly edited with our generative AI creative features in our advertiser marketing tools, a label will appear in the three-dot menu or next to the “Sponsored” label. When these tools result in the inclusion of an AI-generated photorealistic human, the label will appear next to the Sponsored label (not behind the three-dot menu).
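The risk-based placement for ad creative described above can likewise be sketched as a decision function. This is an illustrative simplification, not Meta's implementation; the parameter and return names are hypothetical.

```python
def ad_ai_label_placement(used_gen_ai_features: bool, photorealistic_human: bool):
    """Schematic risk-based placement of the 'AI Info' label on ad
    creative (illustrative simplification, not Meta's actual logic)."""
    if not used_gen_ai_features:
        return None                       # no generative AI creative features used
    if photorealistic_human:
        return "next_to_sponsored_label"  # higher risk: prominent placement
    return "three_dot_menu"               # otherwise behind the three-dot menu
```

The design choice captured here is that the label's prominence scales with risk: an AI-generated photorealistic human triggers the most visible placement.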

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

In January 2025, we began gradually rolling out “AI Info” labels on ad creative videos using a risk-based framework. When a video is created or significantly edited with our generative AI creative features in our advertiser marketing tools, a label will appear in the three-dot menu or next to the “Sponsored” label. When these tools result in the inclusion of an AI-generated photorealistic human, the label will appear next to the Sponsored label (not behind the three-dot menu).

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detect such content.

Instagram

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

We address potential abuses from AI-generated content in two primary ways: (1) we remove content that violates our Community Standards regardless of how it was generated; and (2) our third-party fact-checkers can rate content that is false and misleading regardless of how it was generated. 

In February 2024 Meta’s Oversight Board provided feedback regarding our approach to manipulated media, arguing that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a “less restrictive” approach to manipulated media, such as labels with context. 

We agree that providing transparency and additional context is now the better way to address this content. In May 2024 we began labelling AI-generated or AI-edited content (based on industry-aligned standards for identifying AI, as well as users self-declaring AI-influenced content) with the label ‘Made with AI’. While we work with companies across the industry to improve the process so our labelling approach better matches our intent, we’ve updated the “Made with AI” label to “AI info” across our apps, which people can click for more information. These labels cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labelling in its feedback.

If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context.

In H2 2024, we rolled out a change to the “AI info” labels on our platforms so they better reflect the extent of AI used in content. Our intent has always been to help people know when they see content that was made with AI, and we’ve continued to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with peoples’ expectations.

For content that we detect was only modified or edited by AI tools, we are moving the “AI info” label to the post’s menu. We will still display the “AI info” label for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

Instagram

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

Meta commits to continue investing in Responsible AI to address the hard questions around issues such as privacy, fairness, accountability, and transparency.
  • We display the “AI info” label for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, a key part of our strategy to prevent interference is working with government authorities, law enforcement, security experts, civil society and other tech companies through direct communication, sharing knowledge and collaboration.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes to combat disinformation. 

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

Instagram

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

As mentioned in our baseline report, a key part of our strategy to prevent interference is working with government authorities, law enforcement, security experts, civil society and other tech companies to stop emerging threats by establishing a direct line of communication, sharing knowledge and identifying opportunities for collaboration. 

In December 2024 and February 2025, we shared our Quarterly Adversarial Threat reports (Q3 2024 and Q4 2024) with information on threat research into new covert influence operations that we took down. We detected and removed these campaigns before they were able to build authentic audiences on our apps. 


Global enforcements: Russia remains the number one source of global CIB networks we’ve disrupted to date since 2017, with 39 covert influence operations. The next most frequent sources of foreign interference are Iran, with 31 CIB networks, and China, with 12. This year, our teams have taken down around 20 new covert influence operations around the world, including in the Middle East, Asia, Europe and the US.

Moldova:
We removed 7 Facebook accounts, 23 Pages, one Group and 20 accounts on Instagram for violating our policy against coordinated inauthentic behavior. This network originated primarily in the Transnistria region of Moldova, and targeted Russian-speaking audiences in Moldova.
They posted original content, including cartoons, about news and geopolitical events concerning Moldova. It included criticism of President Sandu, pro-EU politicians, and close ties between Moldova and Romania.
We removed this campaign before it was able to build an authentic audience on our apps.


Benin:
We removed 16 Facebook accounts and 6 Pages for violating our coordinated inauthentic behavior policy. This network originated in Benin and primarily targeted France.
First, the people behind this operation created Pages which posed as French and posted about politics in France, but were run by authentic users in Benin. We quickly took down this activity on our apps. In response to enforcement, they changed tactics. Instead of using authentic accounts, the operators created a network of fake and compromised accounts, and used TOR and proxy IP infrastructure to conceal their origin and appear to be in France. Our automated systems and expert investigators continued to detect and take them down on a rolling basis.
This effort primarily targeted France with posts in French about news and politics, including criticism of President Macron and NATO; supportive commentary about Marine Le Pen and her party; and calls for reduced support for Ukraine.

Our quarterly reports also included further updates and analysis on Doppelganger.

SLI 16.1.1

Number of actions taken as a result of the collaboration and information sharing between signatories. Where they have such information, they will specify which Member States that were affected (including information about the content being detected and acted upon due to this collaboration).

We found a CIB network as a result of our internal investigation and linked it to an Iranian threat actor, Cotton Sandstorm, which Microsoft previously connected to Iran's Islamic Revolutionary Guard Corps (IRGC).

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

Instagram

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We publish our Adversarial Threat reports quarterly to share notable trends and investigations and to help inform our community’s understanding of the evolving security threats we see.

In our Q3 and Q4 2024 reports, in addition to sharing our threat research, we also included updates on the most persistent Russian covert influence operation known as Doppelganger.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, a key part of our approach to combating misinformation is providing tools and products that contribute to a more resilient digital society, where people are able to critically evaluate information, make informed decisions about the content they see, and self-correct. Below are some examples of that work relevant to the European Union.

During the reporting period, Meta ran media literacy campaigns focusing on a range of areas, including Youth, EU Elections, Gen AI, and EU national elections. These campaigns are outlined in more detail in QRE 17.2.1, with reach metrics in SLI 17.2.1.

In the second half of 2024, Meta undertook several initiatives aimed at promoting digital literacy and combating misinformation in the EU. 

As part of these efforts, in November 2024, Meta launched a global Fraud and Scams campaign covering several EU markets, such as France, Germany, Poland, Romania, Belgium, and Spain. The campaign featured ads on Facebook, Instagram, and WhatsApp, emphasizing our commitment to user safety. It educated users on how to identify, avoid, and report scams while highlighting our ongoing efforts to protect them on our platforms.


In addition to these campaigns, we continued our collaboration with the European Disability Forum (EDF) by launching a media literacy initiative focused on accessible elections. This program aimed to promote inclusive and accessible electoral processes for all citizens, including those with disabilities.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

In January 2025, Meta launched a Youth campaign running in France, Ireland, Spain, Italy and the Netherlands. 

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

Instagram

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

As mentioned in our baseline report, we have developed over the years a series of tools and resources - such as online tutorials, lesson plans for educators, tips for spotting false news, and awareness-raising ad campaigns - to educate and equip people with the necessary skills for navigating the digital world. 

A key pillar of our strategy is to inform our users: by providing people with specific and relevant context when they come across a flagged post, we can help them be more informed about what they see and read. Here are some ways we provide context on relevant pieces of content that may be sensitive or misleading:
  • Warning screens on sensitive content on Instagram: 
    • To help people avoid coming across content that they'd rather not see, we limit the visibility of certain posts that are flagged by people on Instagram for containing sensitive or graphic material. Photos and videos containing such content will appear with a warning screen to inform people about the content before they view it. This warning screen appears when viewing a post in feed or on someone's profile.
  • Verified badges on Instagram: 
    • Our goal is to help people feel confident about the content and accounts that they interact with.
    • To combat impersonations and help people avoid scammers that pretend to be high-profile people, Meta provides verified badges on Pages and profiles that indicate a verified account. This means that we've confirmed the authentic presence of the public figure, celebrity or global brand that the account represents.

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

We were not able to deliver this SLI for this report.

Country Total count of the tool’s impressions Interactions/ engagement with the tool Other relevant metrics
Austria 0 0 0
Belgium 0 0 0
Bulgaria 0 0 0
Croatia 0 0 0
Cyprus 0 0 0
Czech Republic 0 0 0
Denmark 0 0 0
Estonia 0 0 0
Finland 0 0 0
France 0 0 0
Germany 0 0 0
Greece 0 0 0
Hungary 0 0 0
Ireland 0 0 0
Italy 0 0 0
Latvia 0 0 0
Lithuania 0 0 0
Luxembourg 0 0 0
Malta 0 0 0
Netherlands 0 0 0
Poland 0 0 0
Portugal 0 0 0
Romania 0 0 0
Slovakia 0 0 0
Slovenia 0 0 0
Spain 0 0 0
Sweden 0 0 0
Iceland 0 0 0
Liechtenstein 0 0 0
Norway 0 0 0

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.

Instagram

QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

National Elections:
We proactively point users to reliable information on the electoral process through in-app ‘Election Day Information’. These are notices at the top of feed on Instagram, reminding people of the day they can vote and re-directing them to national authoritative sources on how and where to vote.

For instance, for the legislative elections in France, the ‘Election Day Information’ feature ran between June 29 - 30 and July 6 - 7, 2024, and directed users to a voting information page on the Ministry of the Interior's website.

Also, Meta launched a campaign aimed at increasing awareness of the tools and processes Meta deploys on its own platforms (Facebook and Instagram) in advance of an election, to help inform French users how Meta works to combat misinformation, prevent electoral interference and protect electoral candidates. The campaign ran from 20 June until the second-round election on 4 July 2024.

For the Irish General Election on 29 November 2024, Meta launched its Voter Information Unit (VIU) and Election Day Reminder (EDR) campaigns on Instagram, reaching a significant number of users in support of the election.

Fraud and Scams: Meta launched a campaign to raise awareness of fraud and scams. The campaign ran in several EU markets, including France, Germany, Poland, Romania, Belgium, and Spain, and used a range of relevant media, including Meta’s platforms (Facebook and Instagram) and other external platforms. The campaign featured ads from Facebook, Instagram, and WhatsApp, emphasising our commitment to user safety. It educated users on how to identify, avoid, and report scams while highlighting our ongoing efforts to protect them on our platforms. The campaign ran from 11 November to 31 December 2024.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

Below we have provided some engagement statistics for the media literacy campaigns described above:
  • French Elections ‘Election Day Information’ feature: Users in metropolitan France and overseas territories clicked on these in-app notifications more than 496K times on Instagram.
  • French Elections campaign: Reached 2.1 million users in France, generating 10.6 million impressions. This ran on Meta owned platforms only. 

Country Nr of media literacy/ awareness raising activities organised/ participated in Reach of campaigns Nr of participants Nr of interactions with online assets Nr of participants (etc)
Austria 0 0 0 0 0
Belgium 0 0 0 0 0
Bulgaria 0 0 0 0 0
Croatia 0 0 0 0 0
Cyprus 0 0 0 0 0
Czech Republic 0 0 0 0 0
Denmark 0 0 0 0 0
Estonia 0 0 0 0 0
Finland 0 0 0 0 0
France 0 0 0 0 0
Germany 0 0 0 0 0
Greece 0 0 0 0 0
Hungary 0 0 0 0 0
Ireland 0 0 0 0 0
Italy 0 0 0 0 0
Latvia 0 0 0 0 0
Lithuania 0 0 0 0 0
Luxembourg 0 0 0 0 0
Malta 0 0 0 0 0
Netherlands 0 0 0 0 0
Poland 0 0 0 0 0
Portugal 0 0 0 0 0
Romania 0 0 0 0 0
Slovakia 0 0 0 0 0
Slovenia 0 0 0 0 0
Spain 0 0 0 0 0
Sweden 0 0 0 0 0
Iceland 0 0 0 0 0
Liechtenstein 0 0 0 0 0
Norway 0 0 0 0 0

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

Instagram

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

As mentioned in our baseline report, working in partnership with experts, educators, civil society and governments around the world is central to Meta’s digital citizenship efforts. Our partners bring valuable subject matter expertise and are also important channels for distributing these tools and resources to a broader audience. Partners we work with include various government bodies (such as ministries of education and media regulators), our global network of third-party fact-checkers, parent-teacher associations, the European Association for Viewers Interests (EAVI), the UNESCO Institute for Information Technologies in Education (UNESCO IITE), Yale University, Harvard University, the Micro:bit Educational Foundation, and many more.

Meta also belongs to the Steering Committee of the EU Digital Citizenship working group, launched in December 2020 to contribute multidisciplinary expertise from civil society and industry to the current EU debate on digital citizenship. 

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, we continue to enforce our policies to combat the spread of misinformation.

In December 2024, we globally deprecated the feature on Instagram that displayed a pop-up when an account attempted to tag or mention another account that had been repeatedly fact-checked.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 18 covers the current practices for Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

Instagram

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

As mentioned in our baseline report, we work to prevent the spread of harmful content, including misinformation, through a combination of Meta’s technologies and human review teams.

In our January to June 2023 report, we mentioned the publication of our Content Distribution Guidelines for Instagram. These guidelines outline the types of content that may be shown lower in Feed and Stories.

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

As mentioned in previous reports, Instagram System Cards help people understand how AI shapes their product experiences and provide insights into how the Feed ranking system dynamically works to deliver a personalised experience on Instagram. 

These cards explain how our systems work in a way that is accessible to those without deep technical knowledge. In June 2023, we released 8 system cards for Instagram; there are now 10, which are periodically updated. They give information about how our AI systems rank content, some of the predictions each system makes to determine what content might be most relevant, and the controls users can use to customise their experience. They cover Feed, Stories, Reels and other surfaces where people go to find content from the accounts or people they follow. The system cards also cover AI systems that recommend “unconnected” content from people, groups, or accounts they don’t follow. A more detailed explanation of the AI behind content recommendations is available here.

To give a further level of detail beyond what’s published in the system cards, we have shared the types of inputs – known as signals – as well as the predictive models these signals inform that help determine what content users may find most relevant from their network on Instagram. Users can find these signals and predictions in the Transparency Centre, along with how frequently they tend to be used in the overall ranking process. 

We also use signals to help identify harmful content, which we remove as we become aware of it, as well as to help reduce the distribution of other types of problematic or low-quality content in line with our Content Distribution Guidelines.


QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

As mentioned in our baseline report, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it: 
  • We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes.
  • For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. As part of that effort, we partner with third-party fact-checking organisations to review and rate the accuracy of the most viral content on our platforms. We also provide resources to increase media and digital literacy so people can decide what to read, trust and share themselves.

Regarding the impact of our fact-checking labels on people who have already demonstrated an intent to share the fact-checked content: on average, 46% of people on Instagram in the EU who start to share fact-checked content do not complete this action after receiving a warning from Meta that the content has been fact-checked.

SLI 18.1.1

Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.

Rate of reshare non-completion among unique attempts by users to reshare content on Instagram that was treated with a fact-checking label, in EU Member States, from 01/07/2024 to 31/12/2024.

Country % of reshares attempted that were not completed on treated content on Instagram between 01/07/2024 and 31/12/2024
Austria 45%
Belgium 44%
Bulgaria 46%
Croatia 41%
Cyprus 50%
Czech Republic 44%
Denmark 49%
Estonia 44%
Finland 41%
France 48%
Germany 45%
Greece 48%
Hungary 46%
Ireland 43%
Italy 48%
Latvia 43%
Lithuania 47%
Luxembourg 48%
Malta 48%
Netherlands 42%
Poland 45%
Portugal 45%
Romania 44%
Slovakia 45%
Slovenia 46%
Spain 48%
Sweden 46%
Iceland N/A
Liechtenstein N/A
Norway N/A

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

Instagram

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

As mentioned in our baseline report, our policies and approach to tackle misinformation - which are summarised in QRE 18.1.3 - are published in our Transparency Centre.

These include specific actions taken against actors that repeatedly violate our policies. We take action against accounts that repeatedly share or publish content that is rated False or Altered, near-identical to what fact-checkers have debunked as False or Altered, and content we enforce against under our policy on vaccine misinformation. If accounts repeatedly share such content they will see their distribution reduced. 

For most violations, the user’s first strike will result in a warning with no further restrictions. If Meta removes additional posts that go against the Community Standards in the future, we'll apply additional strikes to the account, and the user may lose access to some features for longer periods of time.

If content that users have posted goes against our more severe policies, such as our policy on dangerous individuals and organisations or adult sexual exploitation, the user may receive additional, longer restrictions from certain features.

For most violations, if the user continues to post content that goes against the Community Standards after repeated warnings and restrictions, we will disable the account.

These policies apply across all EU Member States.

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

Number of unique contents that were removed from Instagram for violating our harmful health misinformation or voter or census interference policies in EU member state countries from 01/07/2024 to 31/12/2024.

Country is determined by the inferred location of the user responsible for the content.

*Meta's policies to tackle false claims about COVID-19 which could directly contribute to the risk of imminent physical harm changed in June 2023 following Meta's independent Oversight Board’s advice. We now only remove this content in countries with an active COVID-19 public health emergency declaration (during the reporting period no countries had an active health emergency declaration). This change has impacted our enforcement metrics on removals for this reporting period but does not change our overall approach to fact-checking. These changes are an expected part of fluctuating content trends online*

Country Total no of violations
Austria 3
Belgium 1
Bulgaria 1
Croatia 1
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 1
Finland 1
France 13
Germany 5
Greece 1
Hungary 1
Ireland 3
Italy 11
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 5
Poland 3
Portugal 12
Romania 7
Slovakia 1
Slovenia 0
Spain 7
Sweden 0
Iceland N/A
Liechtenstein N/A
Norway N/A

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

Instagram

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

As noted in our baseline report, the following are some key initiatives we have supported to empower the independent research community and to help us gain a better understanding of what our users want, need and expect: Social Science Research, Data for Good, and the Research Platform for coordinated inauthentic behaviour (CIB) Network Disruptions.

Research Grants & Awards: as mentioned in our baseline report, every year we invest in numerous research projects as part of our overall efforts to make the internet and people on our platforms safer and more secure. Details of our most recent awards can be found here.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Instagram

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our transparency and recommender tools.

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

Instagram

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

The range of measures and policies put in place in relation to this measure has been described in previous reports and is explained in greater detail on Meta’s Transparency Centre. For example, it includes detailed explanations of the Instagram System Cards that help people understand how AI shapes their product experiences.

The policies outlined apply across all EU Member States.

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

Instagram

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous report, we updated our fact-checking program guidelines to clarify that our existing policies allow fact-checkers to rate digitally created or edited content - including through the use of artificial intelligence (AI) - when content risks misleading people about something consequential that has no basis in fact. We also employed measures to improve fact-checkers’ ability to apply their ratings to fake or manipulated audio content.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 21 covers the current practices for Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 21.1

Relevant Signatories will further develop and apply policies, features, or programs across Member States and EU languages to help users benefit from the context and insights provided by independent fact-checkers or authoritative sources, for instance by means of labels, such as labels indicating fact-checker ratings, notices to users who try to share or previously shared the rated content, information panels, or by acting upon content notified by fact-checkers that violate their policies.

Instagram

QRE 21.1.1

Relevant Signatories will report on the policies, features, or programs they deploy to meet this Measure and on their availability across Member States.

As mentioned in our baseline report, Meta partners with over 45 independent third-party fact-checkers in Europe, certified through the non-partisan International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN). In the EU specifically, we work with over 29 partners, covering 23 languages and 26 countries.

The list of fact-checkers with whom we partner across the EU is in QRE 30.1.2. 

Fact-checkers review a piece of content and rate its accuracy. This process occurs independently from Meta. The ratings fact-checkers can use are False, Altered, Partly false, Missing context, Satire and True. Further details on these ratings are shared in our Transparency Centre. While we are responsible for setting these guidelines, fact-checkers review and rate content independently – we do not make changes to ratings.

When content has been rated by fact-checkers, we take action to (1) label it, (2) ensure fewer people see it, and (3) sanction repeat offenders.

There is more detail on all the actions taken under QRE 31.1.1 as well as in our baseline report.

SLI 21.1.1

Relevant Signatories will report through meaningful metrics on actions taken under Measure 21.1, at the Member State level. At the minimum, the metrics will include: total impressions of fact-checks; ratio of impressions of fact-checks to original impressions of the fact-checked content–or if these are not pertinent to the implementation of fact-checking on their services, other equally pertinent metrics and an explanation of why those are more adequate.

1. Number of distinct articles written by 3PFCs that were used to apply a fact-checking label to content on Instagram from 01/07/2024 to 31/12/2024.*

2. Number of distinct pieces of content viewed on Instagram that were treated with a fact-checking label due to a falsity assessment by third-party fact-checkers between 01/07/2024 and 31/12/2024.

3. Rate of reshare non-completion among unique attempts by users to reshare content on Instagram that was treated with a fact-checking label, in EU Member States, from 01/07/2024 to 31/12/2024.

*This metric shows the number of distinct fact-checking articles written by Meta’s 3PFC partners and utilised to label content in each EU member state. As articles may be used in multiple countries, and several articles may be used to label a piece of content, the total sum of articles utilised for all member states exceeds the number of distinct articles created in the EU (43,000). This is expected. 

Country Number of articles written by third-party fact-checkers to justify ratings on Instagram between 01/07/2024 and 31/12/2024. Content viewed on Instagram and treated with fact-checks, due to a falsity assessment by third-party fact-checkers, between 01/07/2024 and 31/12/2024. % of reshares attempted that were not completed on treated content on Instagram between 01/07/2024 and 31/12/2024.
Austria Over 13,000 Over 72,000 45%
Belgium Over 14,000 Over 83,000 44%
Bulgaria Over 8,300 Over 32,000 46%
Croatia Over 8,800 Over 35,000 41%
Cyprus Over 8,200 Over 32,000 50%
Czech Republic Over 10,000 Over 46,000 44%
Denmark Over 11,000 Over 53,000 49%
Estonia Over 5,000 Over 14,000 44%
Finland Over 10,000 Over 47,000 41%
France Over 21,000 Over 200,000 48%
Germany Over 26,000 Over 310,000 45%
Greece Over 12,000 Over 69,000 48%
Hungary Over 8,500 Over 33,000 46%
Ireland Over 14,000 Over 89,000 43%
Italy Over 23,000 Over 220,000 48%
Latvia Over 5,400 Over 15,000 43%
Lithuania Over 5,900 Over 18,000 47%
Luxembourg Over 5,400 Over 15,000 48%
Malta Over 4,900 Over 14,000 48%
Netherlands Over 18,000 Over 130,000 42%
Poland Over 14,000 Over 84,000 45%
Portugal Over 17,000 Over 120,000 45%
Romania Over 11,000 Over 57,000 44%
Slovakia Over 7,900 Over 29,000 45%
Slovenia Over 6,200 Over 21,000 46%
Spain Over 23,000 Over 260,000 48%
Sweden Over 15,000 Over 100,000 46%
Iceland N/A N/A N/A
Liechtenstein N/A N/A N/A
Norway N/A N/A N/A

Measure 21.2

Relevant Signatories will, in light of scientific evidence and the specificities of their services, and of user privacy preferences, undertake and/or support research and testing on warnings or updates targeted to users that have interacted with content that was later actioned upon for violation of policies mentioned in this section. They will disclose and discuss findings within the permanent Task-force in view of identifying relevant follow up actions.

Instagram

QRE 21.2.1

Relevant Signatories will report on the research or testing efforts that they supported and undertook as part of this commitment and on the findings of research or testing undertaken as part of this commitment. Wherever possible, they will make their findings available to the general public.

Between July and December 2024, we displayed warnings on over 1 million distinct pieces of content on Instagram (including re-shares) in the EU, based on over 43,000 debunking articles written by our fact-checking partners in the EU.

As a result of the actions taken under Measure 21.1 between 01/07/2024 and 31/12/2024, 46% of attempted reshares of fact-checked content on Instagram in EU Member States were not completed.

Measure 21.3

Where Relevant Signatories employ labelling and warning systems, they will design these in accordance with up-to-date scientific evidence and with analysis of their users' needs on how to maximise the impact and usefulness of such interventions, for instance such that they are likely to be viewed and positively received.

Instagram

QRE 21.3.1

Relevant Signatories will report on their procedures for developing and deploying labelling or warning systems and how they take scientific evidence and their users' needs into account to maximise usefulness.

As mentioned in our baseline report, the fact-checking programme’s ratings as well as its labels were developed in close consultation with fact-checkers and misinformation experts. 

Meta also works closely with independent experts who possess knowledge and expertise to determine what constitutes misinformation that is likely to directly contribute to imminent harm. 

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we maintain a specific report category for users to flag to us what they believe is false information (in addition to content that they believe violates any of our other Community Standards).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our user reporting tools or processes. 

Measure 23.1

Relevant Signatories will develop or continue to make available on all their services and in all Member States languages in which their services are provided a user-friendly functionality for users to flag harmful false and/or misleading information that violates Signatories' policies or terms of service. The functionality should lead to appropriate, proportionate and consistent follow-up actions, in full respect of the freedom of expression.

Instagram

QRE 23.1.1

Relevant Signatories will report on the availability of flagging systems for their policies related to harmful false and/or misleading information across EU Member States and specify the different steps that are required to trigger the systems.

As mentioned in our baseline report, users can report content that they have specifically identified as false information through the process outlined on our website.

We also provide an appeal system. More details about these systems can be found in our baseline and January to June 2023 report. 

Measure 23.2

Relevant Signatories will take the necessary measures to ensure that this functionality is duly protected from human or machine-based abuse (e.g., the tactic of 'mass-flagging' to silence other voices).

Instagram

QRE 23.2.1

Relevant Signatories will report on the general measures they take to ensure the integrity of their reporting and appeals systems, while steering clear of disclosing information that would help would-be abusers find and exploit vulnerabilities in their defences.

Meta’s processes include measures to uphold the integrity of our reporting and appeals systems. 

Mass reporting: We do not remove pieces of content based on the number of reports we receive. If a piece of content violates our Community Standards, one report is enough for us to remove it. If it does not violate our Community Standards, the number of reports will not lead to the content being removed, no matter how high.

Because of the volume of content we review across our platforms, we always need to prioritise cases for our content moderators, and we do so based on severity and virality. The number of reports does not impact response times or enforcement decisions.

Protection against misuse: We may suspend the processing of notices and complaints submitted through our notice and complaints mechanisms, for a limited period of time, where individuals and entities have, after being warned, frequently submitted notices and complaints that are manifestly unfounded.

Anonymous reporting: When something gets reported to Instagram, we'll review it and take action on anything we determine doesn't follow our Community Standards. Unless a user is reporting an incident of intellectual property infringement, their report will be kept confidential and the account that was reported won’t see who reported them.

Commitment 24

Relevant Signatories commit to inform users whose content or accounts has been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we’re committed to fighting the spread of misinformation on our platforms, but we also believe it’s critical to enable expression, debate and voice. We let users know when we remove a piece of content for breaching our Community Standards or when a fact-checker has rated their content. In June 2023, we also took steps to improve our penalty system to make it fairer and more effective.

Relevant updates to user notice and appeal processes were also made in 2023,  in line with DSA requirements.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and to adjust our processes.

Measure 24.1

Relevant Signatories commit to provide users with information on why particular content or accounts have been labelled, demoted, or otherwise enforced on, on the basis of violation of policies relevant to this section, as well as the basis for such enforcement action, and the possibility for them to appeal through a transparent mechanism.

Instagram

QRE 24.1.1

Relevant Signatories will report on the availability of their notification and appeals systems across Member States and languages and provide details on the steps of the appeals procedure.

As mentioned in our baseline report, when we remove a piece of content, we let the user know that  something they posted goes against our Community Standards. Moreover, we are transparent with users when their content is fact-checked, and have an appeals process in place for users who wish to issue a correction or dispute a rating with a fact-checker.

Appeal procedures are outlined under QRE 23.1.1.

SLI 24.1.1

Relevant Signatories provide information on the number and nature of enforcement actions for policies described in response to Measure 18.2, the numbers of such actions that were subsequently appealed, the results of these appeals, information, and to the extent possible metrics, providing insight into the duration or effectiveness of processing of appeals process, and publish this information on the Transparency Centre.

Number of unique contents that were removed from Instagram for violating our harmful health misinformation or voter or census interference policies in EU Member States from 01/07/2024 to 31/12/2024.

*Meta's policies to tackle false claims about COVID-19 which could directly contribute to the risk of imminent physical harm changed in June 2023 following Meta's independent Oversight Board’s advice. We now only remove this content in countries with an active COVID-19 public health emergency declaration (during the reporting period no countries had an active health emergency declaration). This change has impacted our enforcement metrics on removals for this reporting period but does not change our overall approach to fact-checking. These changes are an expected part of fluctuating content trends online*

Country Nr of enforcement actions Nr of actions appealed Metrics on results of appeals Metrics on the duration and effectiveness of the appeal process
Austria 3 0 0 0
Belgium 1 0 0 0
Bulgaria 1 0 0 0
Croatia 1 0 0 0
Cyprus 0 0 0 0
Czech Republic 0 0 0 0
Denmark 0 0 0 0
Estonia 1 0 0 0
Finland 1 0 0 0
France 13 0 0 0
Germany 5 0 0 0
Greece 1 0 0 0
Hungary 1 0 0 0
Ireland 3 0 0 0
Italy 11 0 0 0
Latvia 0 0 0 0
Lithuania 0 0 0 0
Luxembourg 0 0 0 0
Malta 0 0 0 0
Netherlands 5 0 0 0
Poland 3 0 0 0
Portugal 12 0 0 0
Romania 7 0 0 0
Slovakia 1 0 0 0
Slovenia 0 0 0 0
Spain 7 0 0 0
Sweden 0 0 0 0
Iceland 0 0 0 0
Liechtenstein 0 0 0 0
Norway 0 0 0 0

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous reports, Meta rolled out the Content Library and API tools to provide access to near real-time public content on Instagram. Details about the content, such as the number of reactions, shares and comments and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content on a graphical user interface (UI) or through a programmatic API.

Together, these tools provide comprehensive access to publicly-accessible content across Facebook and Instagram.

Individuals, including journalists, affiliated with qualified institutions and pursuing scientific or public interest research topics can apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR’s Social Media Archives (SOMAR) Virtual Data Enclave.

Meta continues to publish reports with relevant data regarding content on Instagram via its Transparency Centre, where we shared our quarterly reports throughout 2024.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to add new features and functionality to Meta Content Library, including improvements to the application processes for access to the research tools. We also regularly seek feedback from the research community to inform critical updates.

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

Instagram

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

As mentioned in our baseline report, we publish a wide range of regular reports on our Transparency Centre, including reports that give our community visibility into how we enforce our policies and respond to certain requests: https://transparency.fb.com/data/. We also publish extensive reports on our findings about coordinated behaviour in our newsroom, and we have a dedicated public website hosting our Ad Library tools.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Ad Library Tools: The dedicated website for the Ad Library allows users to search all of the ads currently running across Meta technologies. All ads currently running on Meta technologies show the ad content and basic information, such as when the ad started running and which advertiser is running it. Ads that have run anywhere in the European Union in the past year include additional transparency specific to the EU. Ads about social issues, elections or politics that have run in the past seven years show the ad content, the basic information, and additional transparency about spend, reach and funding entities.
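As an illustration of the kind of programmatic access the Ad Library offers alongside the website, the sketch below assembles a query URL for social issue, elections or politics ads. The endpoint and parameter names (ads_archive, search_terms, ad_reached_countries, fields) reflect our understanding of the publicly documented Ad Library API and should be treated as assumptions to verify against that documentation; the API version and token are placeholders.

```python
# Hedged sketch: building an Ad Library API request URL.
# Parameter names are assumptions based on the public documentation;
# "TOKEN" and the version segment are placeholders, not real values.

from urllib.parse import urlencode

BASE = "https://graph.facebook.com/v18.0/ads_archive"

def build_ad_library_query(search_terms, countries, access_token):
    """Assemble a request URL for social issue, electoral and political ads."""
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": ",".join(countries),
        # Basic transparency fields: ad content, start date, advertiser, spend.
        "fields": "ad_creative_bodies,ad_delivery_start_time,page_name,spend",
        "access_token": access_token,
    }
    return f"{BASE}?{urlencode(params)}"

url = build_ad_library_query("energy", ["DE", "FR"], "TOKEN")
print("ad_reached_countries=DE%2CFR" in url)  # → True
```

The URL is only constructed, not fetched, so the sketch stays self-contained; a real query would require an approved access token.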

As mentioned in our baseline report, we publish numerous reports on our Transparency Centre: 
  • Community Standards Enforcement Report: We publish this report publicly in our Transparency Centre on a quarterly basis to more effectively track our progress and demonstrate our continued commitment to making our services safe and inclusive. The report shares metrics on how we are doing at preventing and taking action on content that goes against our Community Standards (against 12 policies on Instagram). 
  • Quarterly Adversarial Threat Report: We publicly share our findings about coordinated inauthentic behaviour (CIB) that we detect and remove from our platforms. As part of our quarterly adversarial threat reports, we publish information about the networks we take down to make it easier for people to see the progress we’re making in one place.

SLI 26.1.1

Relevant Signatories will provide quantitative information on the uptake of the tools and processes described in Measure 26.1, such as number of users.

Over 600 researchers globally had access to the Meta Content Library User Interface, and more than 160 had access to the Meta Content Library API. 

Country Nr of users of public access Other quantitative information on public access
Austria 0 0
Belgium 0 0
Bulgaria 0 0
Croatia 0 0
Cyprus 0 0
Czech Republic 0 0
Denmark 0 0
Estonia 0 0
Finland 0 0
France 0 0
Germany 0 0
Greece 0 0
Hungary 0 0
Ireland 0 0
Italy 0 0
Latvia 0 0
Lithuania 0 0
Luxembourg 0 0
Malta 0 0
Netherlands 0 0
Poland 0 0
Portugal 0 0
Romania 0 0
Slovakia 0 0
Slovenia 0 0
Spain 0 0
Sweden 0 0
Iceland 0 0
Liechtenstein 0 0
Norway 0 0

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

Instagram

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

Meta Content Library includes public posts and data on Instagram. Data from the Library can be searched, explored, and filtered on a graphical UI or through a programmatic API. 

Meta Content Library is a web-based, controlled-access environment where researchers can perform deeper analysis of public content using the Content Library API in a secure clean room: 
  • Searching and filtering: Searching public posts across Facebook and Instagram is easy with comprehensive sorting and filtering options. Post results can be filtered by language, view count, media type, content producer and more.
  • Multimedia: Photos, videos and reels are available for dynamic search, exploration and analysis.
  • Producer lists: Customizable collections of content producers can be used to refine search results. Researchers can apply custom producer lists to a search query to surface public content from specific content owners on Facebook or Instagram.


Content Library API allows programmatic queries of the data and is designed for computational researchers. Data pulled from the API can be analysed in a secure platform: 
  • Endpoints and data fields: With 8 dedicated endpoints, the Content Library API can search across over 100 data fields from Instagram posts, including a subset of personal Instagram accounts.
  • Search indexing and results: Powerful search capabilities can return up to 100,000 results per query.
  • Asynchronous search: Queries can run in the background while a researcher works on other tasks. Query progress is monitored and tracked by the API.

For more details - see here
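The asynchronous-search capability described above follows a submit, poll, fetch pattern. Because the real Content Library API is only reachable inside the secure research environment, the sketch below runs against a minimal mock client; every class, method and field name here is a hypothetical illustration of the pattern, not the actual API surface.

```python
# Hedged sketch of an asynchronous-search loop, run against a mock client.
# MockContentLibraryClient and all its methods are invented for illustration.

import time

class MockContentLibraryClient:
    """Stands in for the real API so the polling pattern can run locally."""
    def __init__(self):
        self._queries = {}
        self._next_id = 0

    def submit_search(self, query, media_type=None):
        # The real API would accept richer filters (language, view count, ...).
        self._next_id += 1
        qid = f"query-{self._next_id}"
        self._queries[qid] = {"status": "running", "polls": 0,
                              "results": [{"text": query, "media_type": media_type}]}
        return qid

    def get_status(self, qid):
        q = self._queries[qid]
        q["polls"] += 1
        if q["polls"] >= 2:  # pretend the query finishes after two polls
            q["status"] = "complete"
        return q["status"]

    def fetch_results(self, qid):
        return self._queries[qid]["results"]

def run_async_search(client, query, poll_interval=0.01, timeout=5.0):
    """Submit a query, poll until it completes, then fetch the results."""
    qid = client.submit_search(query, media_type="reel")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if client.get_status(qid) == "complete":
            return client.fetch_results(qid)
        time.sleep(poll_interval)  # the researcher could do other work here
    raise TimeoutError(f"search {qid} did not finish within {timeout}s")

results = run_async_search(MockContentLibraryClient(), "climate")
print(results[0]["text"])  # → climate
```

The timeout guard matters in practice: a background query that never completes should surface as an error rather than block the researcher indefinitely.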

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

Meta Content Library and API provide near real-time public content from Facebook and Instagram. Details about the content, such as the post owner and the number of reactions and shares, are also available: 
  • Posts shared by, and information about, Instagram business and creator accounts, including from a subset of personal accounts.
  • Available for most countries and territories, but excluded from countries where Meta is still evaluating legal and compliance requirements.
  • The number of times a post or reel was displayed on screen.

For more details - see here

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Individuals, including journalists, affiliated with qualified institutions and pursuing scientific or public interest research topics are able to apply for access to these tools through a partner with deep expertise in secure data sharing for research, the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). 

For more details on the application process - see here

SLI 26.2.1

Relevant Signatories will provide meaningful metrics on the uptake, swiftness, and acceptance level of the tools and processes in Measure 26.2, such as: Number of monthly users (or users over a sample representative timeframe), Number of applications received, rejected, and accepted (over a reporting period or a sample representative timeframe), Average response time (over a reporting period or a sample representative timeframe).

On 31 December 2024, there were over 600 users globally with access to the Meta Content Library User Interface, and more than 160 with access to the Meta Content Library API. 

Country No of monthly users No of applications received No of applications rejected No of applications accepted Average response time Other metrics
Austria 0 0 0 0 0 0
Belgium 0 0 0 0 0 0
Bulgaria 0 0 0 0 0 0
Croatia 0 0 0 0 0 0
Cyprus 0 0 0 0 0 0
Czech Republic 0 0 0 0 0 0
Denmark 0 0 0 0 0 0
Estonia 0 0 0 0 0 0
Finland 0 0 0 0 0 0
France 0 0 0 0 0 0
Germany 0 0 0 0 0 0
Greece 0 0 0 0 0 0
Hungary 0 0 0 0 0 0
Ireland 0 0 0 0 0 0
Italy 0 0 0 0 0 0
Latvia 0 0 0 0 0 0
Lithuania 0 0 0 0 0 0
Luxembourg 0 0 0 0 0 0
Malta 0 0 0 0 0 0
Netherlands 0 0 0 0 0 0
Poland 0 0 0 0 0 0
Portugal 0 0 0 0 0 0
Romania 0 0 0 0 0 0
Slovakia 0 0 0 0 0 0
Slovenia 0 0 0 0 0 0
Spain 0 0 0 0 0 0
Sweden 0 0 0 0 0 0
Iceland 0 0 0 0 0 0
Liechtenstein 0 0 0 0 0 0
Norway 0 0 0 0 0 0

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

Instagram

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

We provide comprehensive developer documentation and in-depth technical guides that walk through how to use the different tools directly on our website, which also includes a dedicated help centre.

Commitment 27

Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.

We signed up to the following measures of this commitment

Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we are actively engaged in the EDMO working group on Platform to Researcher data sharing to develop standardised processes for sharing data with researchers. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We will continue to participate in the EDMO working group to further support the development of  an independent intermediary body to enable GDPR-compliant data sharing. This will include feeding learnings from the EDMO pilot described above into the EDMO working group.

We continue to provide access to new and existing researchers on Meta Content Library, while also evaluating and working towards any improvements to access methods and application processes.

Measure 27.1

Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.

Instagram

QRE 27.1.1

Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.

As mentioned in our baseline report, we’ve been actively engaged in the EDMO working group on Platform to Researcher data sharing to develop standardised processes for sharing data with researchers since 2019; in 2020, we shared extensive comments in response to EDMO’s call for comment on the GDPR and sharing data for independent social scientific research.

We are participating in the EDMO working group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms, and we are continuing our involvement in 2025.

Measure 27.2

Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.

Instagram

QRE 27.2.1

Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.

As mentioned in our baseline report, while the EDMO process was initially funded by the European Commission, we have actively supported it through skills-based sponsorship and participation in the EDMO pilot. Separately, we have funded a third party (CASD) to act as a third-party data sharing intermediary as part of the pilot. 

Measure 27.3

Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.

Instagram

QRE 27.3.1

Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.

N/A at this stage 

SLI 27.3.1

Relevant Signatories will disclose how many of the research projects vetted by the independent third-party body they have initiated cooperation with or have otherwise provided access to the data they requested.

At this time, the EDMO process has not yet vetted research proposals. We are engaging with another highly experienced third party, ICPSR, which is vetting researchers and hosting access to datasets about the US 2020 election, as well as the Meta Content Library and API.

Country Nr of research projects for which they provided access to data
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Measure 27.4

Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.

Instagram

QRE 27.4.1

Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.

As mentioned in our baseline report, since 2018 we have been sharing information with independent researchers about our network disruptions relating to coordinated inauthentic behaviour (CIB). Since 2021, we have been expanding access to our Influence Operations (IO) Archive dataset, which provides information on CIB and contains more than 100 removed networks, to more researchers studying influence operations worldwide. This dataset provides access to raw data so that researchers can visualise and assess these network operations both quantitatively and qualitatively. In addition, we share our own internal research and analysis. 

Commitment 28

COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Meta continues to explore options for sharing insights with research groups on these issues, in addition to what we share through the IO Research Archive and in our public quarterly Adversarial Threat Reports. 

As part of our ongoing efforts to enhance the Meta Content Library tool and incorporate feedback from researchers, we've introduced several improvements. We've made searching more efficient by adding exact-phrase matching and text-in-image search, and researchers can now share content producer lists with their peers, enabling quick filtering of public data from specific content producers on Instagram. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to add new features and functionality to Meta Content Library, including enhancements to the application processes for access to the research tools. We also regularly seek feedback from the research community to inform critical updates. By developing these tools and supporting the research community, we continue to support good faith research. 

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

Instagram

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

As mentioned in our baseline report, Meta has a team dedicated to providing academics and independent researchers with the tools and data they need to study Meta’s impact on the world.

Relevant details about research tools are available on our Transparency Centre.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

Instagram

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

As mentioned in our baseline report, Meta provides a variety of data sets and tools for researchers, who can consult a chart to verify whether the data is available for request. All data access opportunities for independent researchers are logged in one place.

The main data available only to researchers are: 
  • Meta Content Library and API: For Instagram, this includes public posts and data. Data from the Library can be searched, explored, and filtered on a graphical user interface or through a programmatic API. 700+ researchers globally now have access to Meta Content Library. 
  • Ad Targeting Data Set, which includes detailed targeting information for social issue, electoral, and political ads that have run globally since August 2020. 150+ researchers globally have accessed the Ads Targeting API since it launched publicly in September 2022.
  • Influence Operations Research Archive for coordinated inauthentic behaviour (CIB) network disruptions, as outlined in QRE 27.4.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

Instagram

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

No reporting possible at this stage 

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

Instagram

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

No reporting possible at this stage 

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

In the first half of 2024, Meta provided all third-party fact-checkers (3PFCs) participating in our fact-checking programs with access to the Meta Content Library (MCL). This initiative aimed to enhance the fact-checking workflow and provide users with a more comprehensive toolset.

Throughout the second half of 2024, Meta has continued to release new features and improvements to the MCL, including collaborative dashboards, text-in-image search, and expanded data scope. These enhancements have been designed to support our users and promote best practices in fact-checking.

To facilitate a seamless transition of our 3PFCs to the MCL, we initiated a proactive outreach and education program. This comprehensive program included a targeted e-Newsletter series, training calls, and live tutorials. 

The education program has yielded encouraging results, with notable increases in usage by 3PFCs. We will continue to monitor the impact of our initiatives and make adjustments as needed to ensure that our users have the support and resources they need to effectively utilize our tools and contribute to a safer and more informed online community. 

As a part of stakeholder engagement initiatives, Meta participated in the EFCSN Conference in Brussels, where we were joined by over 40 of our third-party fact-checking (3PFC) partners from the European Fact-Checking Program. During the conference, we also conducted 20 strategic partner meetings to further strengthen our collaborations and advance our shared goals.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); as well as relevant common standards and conditions for these agreements.

As mentioned in our baseline report, Meta’s fact-checking partners all go through a rigorous certification process with the IFCN. As a subsidiary of the journalism research organisation Poynter Institute, the IFCN is dedicated to bringing fact-checkers together worldwide.

All fact-checking partners follow IFCN’s Code of Principles, a series of commitments they must adhere to in order to promote excellence in fact-checking. 

The detail of our partnership with fact-checkers (i.e., how they rate content and what actions we take as a result) is outlined in QRE 21.1.1 and here.

QRE 30.1.2

Relevant Signatories will list the fact-checking organisations they have agreements with (unless a fact-checking organisation opposes such disclosure on the basis of a reasonable fear of retribution or violence).

Austria (German, Dutch, French) | AFP, dpa-Faktencheck

Belgium (Dutch, French, German) | AFP, dpa-Faktencheck, Knack

Bulgaria (Bulgarian) | AFP, FactCheck.bg

Croatia (Croatian) | Faktograf.hr, AFP

Cyprus (Greek) | AFP

Czech Republic (Czech) | AFP, Demagog.cz

Denmark (Danish) | TjekDet

Estonia (Estonian, Lithuanian, Russian, English) | Delfi Estonia/Ekspress M

Finland (Finnish) | AFP

France (French, English) | 20 Minutes, AFP, Les Observateurs de France 24, Les Surligneurs

Germany (German, Dutch, French) | AFP, Correctiv, dpa-Faktencheck

Greece (Greek) | AFP, Ellinika Hoaxes

Hungary (Hungarian) | AFP

Ireland (English) | TheJournal.ie

Italy (Italian) | Open, Pagella Politica

Latvia (Latvian, Lithuanian, Russian, English) | Delfi, Re:Baltica

Lithuania (Lithuanian, Russian, English) | Delfi, Patikrinta 15min

Luxembourg (German, Dutch, French) | dpa-Faktencheck

Netherlands (Dutch, German, French) | AFP, dpa-Faktencheck

Poland (Polish) | AFP, Demagog

Portugal (Portuguese) | Poligrafo, Observador

Romania (Romanian) | AFP, Funky Citizens/Factual.ro

Slovakia (Slovak) | AFP, Demagog.cz, Demagog.sk

Slovenia (Slovene) | Oštro

Spain (Spanish, Catalan) | AFP, EFE Verifica, Maldito Bulo, Newtral

Sweden (Swedish, English) | Kallkritikbyran, AFP

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

As mentioned in our baseline report, the list of fact-checkers with whom we partner across the EU is in QRE 30.1.2. 

SLI 30.1.1

Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.

Number of individual agreements we have with fact-checking organisations. Each agreement covers both Facebook and Instagram. 

See list of countries and languages covered in QRE 30.1.2

Country | Number of agreements with fact-checking organisations
Austria | 0
Belgium | 0
Bulgaria | 0
Croatia | 0
Cyprus | 0
Czech Republic | 0
Denmark | 0
Estonia | 0
Finland | 0
France | 0
Germany | 0
Greece | 0
Hungary | 0
Ireland | 0
Italy | 0
Latvia | 0
Lithuania | 0
Luxembourg | 0
Malta | 0
Netherlands | 0
Poland | 0
Portugal | 0
Romania | 0
Slovakia | 0
Slovenia | 0
Spain | 0
Sweden | 0
Iceland | 0
Liechtenstein | 0
Norway | 0

Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

Instagram

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

As mentioned in our baseline report, Meta’s fact-checking partners all go through a rigorous certification process with the IFCN. All our fact-checking partners follow IFCN’s Code of Principles, a series of commitments they must adhere to in order to promote excellence in fact-checking.

Since 2024, third-party fact-checkers may also be onboarded to Meta's fact-checking program if they are certified by the European Fact-Checking Standards Network (EFCSN).

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

As mentioned in our baseline report, Meta has a team in charge of maintaining our relationships with our fact-checking partners, understanding their feedback and improving our fact-checking program together. 

Meta has also dedicated the necessary resources to engage with the Taskforce including on work-streams related to fact-checking. 

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

QRE 30.2.3 applies to fact-checking organisations.

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

Instagram

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

As outlined in QRE 30.2.2, Meta has a team in charge of our relationships with fact-checking partners, through which we take on feedback, including on ways to support their cooperation.

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

Instagram

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

As mentioned in our baseline report, Instagram is in touch with several EDMO regional hubs and looks forward to engaging with EDMO on our fact-checking efforts.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 31.1

Relevant Signatories that showcase User Generated Content (UGC) will integrate, showcase, or otherwise consistently use independent fact-checkers' work in their platforms' services, processes, and contents across all Member States and across formats relevant to the service. Relevant Signatories will collaborate with fact-checkers to that end, starting by conducting and documenting research and testing.

Instagram

Measure 31.2

Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.

Instagram

QRE 31.2.1

Relevant Signatories will report on their specific activities and initiatives related to Measures 31.1 and 31.2, including the full results and methodology applied in testing solutions to that end.

As mentioned in our baseline report, when content has been rated by fact-checkers (as outlined in detail under QRE 21.1.1), we take action to (1) label it and (2) ensure fewer people see it. We also take action against accounts that repeatedly share misinformation. The current warning in place states that accounts that repeatedly share false information may experience temporary restrictions, including having the distribution of their posts reduced.

Regarding the rating of AI-generated content: fact-checkers may rate AI-generated media under our fact-checking program policies. They often rely on AI experts, visual techniques, and metadata analysis to aid in the detection of this content.

SLI 31.1.1 (for Measures 31.1 and 31.2)

Member State level reporting on use of fact-checks by service and the swift and efficient mechanisms in place to increase their impact, which may include (as depends on the service): number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers.

Filtered to content created on Instagram in EU Member States from 01/07/2024 to 31/12/2024:

1. Number of distinct pieces of content viewed on Instagram that were treated with a fact-checking label due to a falsity assessment by third party fact-checkers between 01/07/2024 to 31/12/2024.
2. Number of distinct articles written by 3PFCs that were used on Instagram to apply an inform treatment to content from 01/07/2024 to 31/12/2024.*

*This metric shows the number of distinct fact-checking articles written by Meta’s 3PFC partners and utilised to label content in each EU Member State. As articles may be used in multiple countries, and several articles may be used to label a piece of content, the total sum of articles utilised for all Member States exceeds the number of distinct articles created in the EU (43,000). This is expected. 

Country | Content viewed on Instagram and treated with fact-checks due to a falsity assessment by third-party fact-checkers between 01/07/2024 and 31/12/2024 | Number of articles written by third-party fact-checkers to justify a rating on Instagram between 01/07/2024 and 31/12/2024
Austria | Over 72,000 | Over 13,000
Belgium | Over 83,000 | Over 14,000
Bulgaria | Over 32,000 | Over 8,300
Croatia | Over 35,000 | Over 8,800
Cyprus | Over 32,000 | Over 8,200
Czech Republic | Over 46,000 | Over 10,000
Denmark | Over 53,000 | Over 11,000
Estonia | Over 14,000 | Over 5,000
Finland | Over 47,000 | Over 10,000
France | Over 200,000 | Over 21,000
Germany | Over 310,000 | Over 26,000
Greece | Over 69,000 | Over 12,000
Hungary | Over 33,000 | Over 8,500
Ireland | Over 89,000 | Over 14,000
Italy | Over 220,000 | Over 23,000
Latvia | Over 15,000 | Over 5,400
Lithuania | Over 18,000 | Over 5,900
Luxembourg | Over 15,000 | Over 5,400
Malta | Over 14,000 | Over 4,900
Netherlands | Over 130,000 | Over 18,000
Poland | Over 84,000 | Over 14,000
Portugal | Over 120,000 | Over 17,000
Romania | Over 57,000 | Over 11,000
Slovakia | Over 29,000 | Over 7,900
Slovenia | Over 21,000 | Over 6,200
Spain | Over 260,000 | Over 23,000
Sweden | Over 100,000 | Over 15,000
Iceland | 0 | 0
Liechtenstein | 0 | 0
Norway | 0 | 0
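The footnote's counting logic can be sketched with a toy example (all country codes and article IDs below are invented, not report data): a single fact-checking article may be used to label content in several Member States, so summing per-country usage counts the same article more than once, while the distinct count deduplicates it.

```python
# Toy illustration of the counting logic described in the footnote above;
# the data is made up for illustration only.
article_usage = {
    "AT": {"article_1", "article_2"},   # articles used to label content in Austria
    "DE": {"article_1", "article_3"},   # "article_1" was also used in Germany
}

# Summing per-country usage counts article_1 twice...
per_country_total = sum(len(articles) for articles in article_usage.values())

# ...while a union across countries counts each article once.
distinct_articles = len(set().union(*article_usage.values()))

assert per_country_total == 4 and distinct_articles == 3
```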

Measure 31.3

Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.

Instagram

QRE 31.3.1

Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.

There have been no significant updates since the last submitted report.

Measure 31.4

Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.

Instagram

QRE 31.4.1

Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.

There have been no significant updates since the last submitted report.

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.1 Measure 32.2 Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, fact-checkers can identify hoaxes based on their own reporting, and Meta also surfaces potential misinformation to fact-checkers using signals, such as feedback from our community or similarity detection. Our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading. We may also send content to fact-checkers when we become aware that it may contain misinformation.
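As a rough illustration of how signals like community feedback and spread velocity might be combined, the sketch below scores posts for fact-checker review. It is a hypothetical heuristic with invented names and weights, not a description of Meta's actual systems.

```python
# Illustrative sketch only: a toy prioritisation score for surfacing
# potential misinformation to fact-checkers. All field names, weights,
# and numbers are hypothetical assumptions, not Meta's implementation.
from dataclasses import dataclass

@dataclass
class PostSignals:
    user_reports: int      # "false information" feedback from the community
    views: int             # total views so far
    shares_last_hour: int  # proxy for how fast the content is spreading

def review_priority(s: PostSignals) -> float:
    """Higher score = surfaced to fact-checkers sooner (toy heuristic)."""
    report_rate = s.user_reports / max(s.views, 1)
    return 0.7 * report_rate * 1000 + 0.3 * s.shares_last_hour

# Fast-spreading, heavily reported content ranks ahead of quiet content.
queue = sorted(
    [PostSignals(120, 50_000, 900), PostSignals(2, 40_000, 10)],
    key=review_priority,
    reverse=True,
)
```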

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 32.1

Relevant Signatories will provide fact-checkers with information to help them quantify the impact of fact-checked content over time, such as (depending on the service) actions taken on the basis of that content, impressions, clicks, or interactions.

Instagram

Measure 32.2

Relevant Signatories that showcase User Generated Content (UGC) will provide appropriate interfaces, automated wherever possible, for fact-checking organisations to be able to access information on the impact of contents on their platforms and to ensure consistency in the way said Signatories use, credit and provide feedback on the work of fact-checkers.

Instagram

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

Instagram

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

There have been no significant updates since the last submitted report.

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 37.1

Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.2

Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks: Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises; Cooperate and coordinate their work in special situations like elections or crisis; Agree on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes; Review the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapt them, as needed; Contribute to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation; Refine, test and adjust Structural Indicators and design mechanisms to measure them at Member State level; Agree, publish and update a list of TTPs employed by malicious actors, and set down baseline elements, objectives and benchmarks for Measures to counter them, in line with the Chapter IV of this Code.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.3

The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.4

Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.5

When needed, and in any event at least once per year the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

Facebook, Instagram, WhatsApp, Messenger

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

There have been no significant updates since the last submitted report.

Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

Globally, we have around 40,000 people working on safety and security.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies benefit from our experience and expertise. 

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

Facebook, Instagram, WhatsApp, Messenger

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

Globally, we have around 40,000 people working on safety and security, including around 15,000 content reviewers. All of these investments work to combat the spread of harmful content, including disinformation and misinformation, and thereby contribute to our implementation of the Code.

Teams with expertise in content moderation, operations, policy design, safety, market specialists, data and forensic analysis, stakeholder and partner engagement, threat investigation, cybersecurity and product development all work on these challenges. These teams are distributed globally, and draw from the local expertise of their team members and local partners.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

This report was submitted within the required timeline.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

This report was submitted within the required timeline.

Commitment 40

Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.

We signed up to the following measures of this commitment

Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

For this report, Facebook, Instagram, WhatsApp and Messenger provided QREs and SLIs across the different chapters.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Facebook, Instagram, WhatsApp and Messenger will continue to provide relevant QREs and SLIs across the chapters of this Code.

Commitment 41

Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.

We signed up to the following measures of this commitment

Measure 41.1 Measure 41.2 Measure 41.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage with the Taskforce Monitoring Working Group. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to engage with the Taskforce Monitoring Working Group.

Commitment 42

Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage in the Taskforce’s election monitoring and crisis monitoring meetings.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to engage in the Taskforce’s election monitoring and crisis monitoring meetings.

Commitment 43

Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Taskforce.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Facebook, Instagram, WhatsApp and Messenger provided their qualitative and quantitative information in the harmonised template provided.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Facebook, Instagram, WhatsApp and Messenger continue to engage with the Taskforce working group on reporting/monitoring as the template evolves.

Commitment 44

Relevant Signatories that are providers of Very Large Online Platforms commit, seeking alignment with the DSA, to be audited at their own expense, for their compliance with the commitments undertaken pursuant to this Code. Audits should be performed by organisations, independent from, and without conflict of interest with, the provider of the Very Large Online Platform concerned. Such organisations shall have proven expertise in the area of disinformation, appropriate technical competence and capabilities and have proven objectivity and professional ethics, based in particular on adherence to auditing standards and guidelines.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, we are taking steps to ensure that, following conversion of the Code into a Code of Conduct under the DSA, relevant Meta services will be undergoing appropriate independent audits under the DSA.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, we are taking steps to ensure that, following conversion of the Code into a Code of Conduct under the DSA, relevant Meta services will be undergoing appropriate independent audits.