Facebook

Report March 2025

Submitted
Commitment 14
In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation
We signed up to the following measures of this commitment
Measure 14.1 Measure 14.2 Measure 14.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
As mentioned in our baseline report, we continue to enforce and report publicly on our policies to tackle inauthentic behaviour. 
  • Fake accounts: Our goal is to remove as many fake accounts on Facebook as we can. We expect the number of accounts we action to vary over time due to the unpredictable nature of adversarial account creation. Under our fake accounts policy, we actioned 1.1 billion accounts in Q3 2024 and 1.4 billion in Q4 2024 on Facebook globally.
  • Inauthentic behaviour: We continue to investigate and take down coordinated adversarial networks of accounts, Pages and Groups on Facebook that seek to mislead people about who is behind them and what they are doing. We also work to scale our enforcement by feeding the insights we learn from investigating these networks globally into automated detection systems, helping us find bad actors engaged in these and similar violating behaviours, including networks that attempt to come back after we have taken them down.

We also continue to improve our detection of inauthentic behaviour policy violations to counter new tactics and to act more quickly against the full spectrum of deceptive practices we see on our platforms, both Coordinated Inauthentic Behaviour and other inauthentic tactics (often used by financially motivated actors), whether foreign or domestic, state or non-state.

In July 2024, we stopped removing content solely on the basis of our manipulated video policy. We will continue to remove content if it violates our Community Standards, regardless of whether it was created by AI.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.
Measure 14.1
Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.
Facebook
QRE 14.1.1
Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.
To clarify what we included in our baseline report: depending on the context, the actor, and the activity, several TTPs can be combined, and they are covered by several of our policies. We have highlighted some examples below:

Inauthentic Behaviour - Our Inauthentic Behaviour policy addresses deceptive behaviours. In line with our commitment to authenticity, we do not allow people to misrepresent themselves on Facebook or use fake accounts.

CIB Policies - Our policy on Coordinated Inauthentic Behaviour (CIB) addresses covert influence operations (IO). Defined as “the use of multiple Facebook or Instagram assets, working in concert to engage in Inauthentic Behaviour (as defined by our policy), where the use of fake accounts is central to the operation”, the policy informs how we find, identify and remove IO networks on our platforms.

CIB can include a variety of different TTPs depending on the actors, context, and operation. Having said that, we often see: (1) the creation of inauthentic accounts; (2) the use of fake or inauthentic reactions (e.g. likes, upvotes, comments); (3) the use of fake followers or subscribers; (4) the creation of inauthentic Pages, Groups, chat groups, fora, or domains; (5) inauthentic coordination of content creation or amplification; (6) account hijacking or impersonation; and (7) inauthentic coordination.

We also remove millions of fake accounts every day under our policy on Account Integrity and Authentic Identity. Our goal is to remove as many fake accounts on Facebook as we can to minimise opportunities for IO threat actors to operate on our platforms. 

Cybersecurity - Attempts to gather sensitive personal information or engage in unauthorised access by deceptive or invasive methods are harmful to the authentic, open and safe atmosphere that we want to foster. Therefore, we do not allow attempts to gather sensitive user information or engage in unauthorised access through the abuse of our platform, products, or services.

Spam - We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users, in order to increase viewership. We also aim to prevent people from abusing our platform, products or features to artificially increase viewership or distribute content en masse for commercial gain. This can be pertinent to several TTPs depending on the context, including: (1) the creation of inauthentic accounts; (2) the use of fake or inauthentic reactions (e.g. likes, upvotes, comments); (3) the use of fake followers or subscribers; (4) the creation of inauthentic Pages, Groups, chat groups, fora, or domains; and (5) the use of deceptive practices.

Branded Content Policies - Branded content may only be posted using the branded content tool, and creators must use that tool to tag the featured third-party product, brand, or business partner with its prior permission. Only Facebook Pages, Groups, and profiles with access to the branded content tool may post branded content. This is pertinent to non-transparent promotional messages.

Privacy - We remove content that shares, offers or solicits personally identifiable information or other private information that could lead to physical or financial harm, including financial, residential, and medical information, as well as private information obtained from illegal sources.
QRE 14.1.2
Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.
As mentioned in our baseline report, our approach to Coordinated Inauthentic Behaviour (CIB) more broadly is grounded in behaviour-based enforcement. This means that we look for specific violating behaviours rather than violating content (content enforcement is predicated on other specific violations of our Community Standards, such as misinformation and hate speech). Therefore, when CIB networks are taken down, it is based on their behaviour, not the content they posted.

In addition to expert investigations into CIB, we also work to tackle inauthentic behaviour by fake accounts at scale.

Furthermore, Pages and Groups directly involved in CIB activity are removed when detected as part of the deceptive adversarial network. As these accounts are taken down, the posts they published are automatically removed as well. This behaviour-based approach allows us to address the problem at the source.

We monitor for efforts to re-establish a presence on Facebook by networks we previously removed.