Instagram

Report March 2025

Submitted
Commitment 18
Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.
We signed up to the following measures of this commitment
Measure 18.1 Measure 18.2 Measure 18.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
As mentioned in our baseline report, we continue to enforce our policies to combat the spread of misinformation.

In December 2024, we globally deprecated the feature on Instagram that displayed a pop-up when an account attempted to tag or mention another account that had been repeatedly fact-checked.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 18 covers the current practices for Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.
Measure 18.1
Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.
Instagram
QRE 18.1.1
Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.
As mentioned in our baseline report, we work to prevent the spread of harmful content, including misinformation, through a combination of Meta’s technologies and human review teams.

In our January to June 2023 report, we mentioned the publication of our Content Distribution Guidelines for Instagram, which outline the types of content that may be shown lower in Feed and Stories.

QRE 18.1.2
Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.
As mentioned in previous reports, Instagram System Cards help people understand how AI shapes their product experiences and provide insights into how the Feed ranking system dynamically works to deliver a personalised experience on Instagram. 

These cards provide detail on how our systems work in a way that is accessible for those who don’t have deep technical knowledge. In June 2023, we released 8 system cards for Instagram; there are now 10, which are periodically updated. They give information about how our AI systems rank content, some of the predictions each system makes to determine what content might be most relevant, as well as the controls users can use to help customise their experience. They cover Feed, Stories, Reels and other surfaces where people go to find content from the accounts or people they follow. The system cards also cover AI systems that recommend “unconnected” content from people, groups, or accounts they don’t follow. A more detailed explanation of the AI behind content recommendations is available here.

To give a further level of detail beyond what’s published in the system cards, we have shared the types of inputs – known as signals – as well as the predictive models these signals inform that help determine what content users may find most relevant from their network on Instagram. Users can find these signals and predictions in the Transparency Centre, along with how frequently they tend to be used in the overall ranking process. 

We also use signals to help identify harmful content, which we remove as we become aware of it, as well as to help reduce the distribution of other types of problematic or low-quality content in line with our Content Distribution Guidelines.


QRE 18.1.3
Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.
As mentioned in our baseline report, our policies articulate different categories of misinformation and try to provide clear guidance about how we treat that speech when we see it: 
  • We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes.
  • For all other misinformation, we focus on reducing its prevalence or creating an environment that fosters a productive dialogue. As part of that effort, we partner with third-party fact-checking organisations to review and rate the accuracy of the most viral content on our platforms. We also provide resources to increase media and digital literacy so people can decide what to read, trust and share themselves.

Regarding the impact of our fact-checking labels on people who have already demonstrated an intent to share fact-checked content: on average, 46% of people on Instagram in the EU who start to share fact-checked content do not complete this action after receiving a warning from Meta that the content has been fact-checked.
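The metric described above is a simple non-completion ratio. The sketch below illustrates the arithmetic only; the function name and the figures are hypothetical and do not reflect Meta's actual measurement pipeline.

```python
# Hypothetical sketch: share of attempted reshares of labelled content
# that were abandoned after the fact-checking warning was shown.
# Counts below are illustrative, not real measurements.

def non_completion_rate(attempted: int, completed: int) -> float:
    """Fraction of attempted reshares that were not completed."""
    if attempted <= 0:
        raise ValueError("no reshare attempts recorded")
    return (attempted - completed) / attempted

# Example: 1,000 users saw the warning mid-share; 540 completed anyway.
rate = non_completion_rate(attempted=1_000, completed=540)
print(f"{rate:.0%}")  # 46%
```

The per-country SLI figures in the table that follows are percentages of this form, computed over unique reshare attempts on treated content.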

SLI 18.1.1
Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.
Rate of reshare non-completion among unique attempts by users to reshare a piece of content on Instagram that was treated with a fact-checking label, in EU member states, from 01/07/2024 to 31/12/2024.

Country    % of attempted reshares that were not completed on treated content on Instagram, 01/07/2024 to 31/12/2024
Austria    45%
Belgium    44%
Bulgaria    46%
Croatia    41%
Cyprus    50%
Czech Republic    44%
Denmark    49%
Estonia    44%
Finland    41%
France    48%
Germany    45%
Greece    48%
Hungary    46%
Ireland    43%
Italy    48%
Latvia    43%
Lithuania    47%
Luxembourg    48%
Malta    48%
Netherlands    42%
Poland    45%
Portugal    45%
Romania    44%
Slovakia    45%
Slovenia    46%
Spain    48%
Sweden    46%
Iceland    N/A
Liechtenstein    N/A
Norway    N/A