Adobe

Report March 2025

Since June 2022, Adobe has been a Signatory of the EU Code of Practice on Disinformation, supporting the intention and ambition of this Code.  In September 2024, Adobe also announced our commitment to uphold the pledges of the EU AI Pact, reiterating our support for the transparency requirements under the EU AI Act.

Adobe is a global leader in digital marketing and digital media solutions. Since the company’s founding in December 1982, we have pushed the boundaries of creativity with products and services that allow our customers to create, deploy, and enhance digital experiences. Our purpose is to serve the creator and respect the consumer, and our heritage is built on providing trustworthy and innovative solutions to our customers and communities. Adobe has a long history of pioneering innovation. As we continue harnessing the power of AI, it is critical to pair innovation with responsibility to ensure this technology is developed and deployed in a way that benefits everyone.

We stand at a pivotal moment as AI has begun to transform the way we live, work and play. As AI becomes more prevalent, however, we are also witnessing extraordinary challenges to trust in digital content. In today’s digital world, misattributed and miscontextualised content spreads quickly. Whether it is inadvertent misinformation or deliberate deception, inauthentic content is on the rise, and once we have been fooled, we begin to doubt everything we see or hear online – even if it is true.

With the increasing volume and velocity of digital content creation, including synthetic media, it is critical to ensure transparency in what we are consuming online. Adobe is committed to leading in this space and finding technical solutions that address the issues of manipulated media and deceptive digital content. 

Content provenance and media literacy are major areas of focus for Adobe and for the work of the Content Authenticity Initiative (CAI), which Adobe co-founded in 2019 and leads today. We are focused on cross-industry participation, with an open, extensible approach to providing transparency for digital content (e.g. images, audio, video, documents, and AI-generated content) to allow for better evaluation of that content.

The Content Authenticity Initiative (CAI) now has more than 4,000 members globally working to increase trust in digital content through provenance tools. Provenance is the set of facts about the origins of a piece of digital content. The CAI works in tandem with the Coalition for Content Provenance and Authenticity (C2PA), an open technical standards organisation also co-founded by Adobe in 2021, to implement the C2PA’s solution for digital content provenance, called Content Credentials.

Content Credentials are essentially a “nutrition label” for digital content that anyone can implement to show how a piece of content is created and modified. Content Credentials combine cryptographic metadata, fingerprinting and watermarking, designed to remain securely attached and travel with the digital content wherever it goes (for more information, please see “Durable Content Credentials”). They can include important information such as the creator’s name, the date an image was created, what tools were used to create it and any edits that were made along the way, including whether AI was used. This empowers users to create a digital chain of trust and authenticity. The CAI has developed free, open-source tools based on the C2PA standard for anyone to implement Content Credentials in their own products, services, or platforms.
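
For illustration, below is a minimal sketch of how a developer might read the Content Credential attached to a file using the CAI’s open-source c2patool command-line utility, called here from Python. The file name is hypothetical, and the JSON field names used (active_manifest, manifests, claim_generator, assertions) reflect one version of the tool’s output, so this should be read as a sketch rather than a reference implementation.

```python
# Sketch: inspect a Content Credential (C2PA manifest store) with the CAI's
# open-source c2patool CLI. Field names reflect one version of the tool's JSON
# output and may differ between releases; the file name is hypothetical.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the manifest store attached to `path`, or None if none is found."""
    result = subprocess.run(
        ["c2patool", path],           # prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None                   # no Content Credential, or tool error
    return json.loads(result.stdout)

store = read_content_credentials("photo.jpg")  # hypothetical file
if store:
    active = store["manifests"][store["active_manifest"]]
    print("Produced with:", active.get("claim_generator"))
    # Assertions describe how the content was created or edited,
    # e.g. a "c2pa.actions" assertion recording generative AI use.
    for assertion in active.get("assertions", []):
        print("Assertion:", assertion.get("label"))
```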

In 2024, some major developments concerning the C2PA took place, with several companies joining the C2PA steering committee and demonstrating support for Content Credentials. TikTok also joined the C2PA as a member and began labelling AI-generated content uploaded to its platform with Content Credentials. 

The Adobe-led CAI has also invested in creating and promoting media literacy curricula to educate the public about the dangers of deepfakes, the need for scepticism, and tools available today to help them understand what is true. In partnership with the Adobe Education team, the CAI updated its media literacy curriculum in February 2024 to include Generative AI curricular materials.

Provenance solutions such as Content Credentials are more important than ever as generative AI makes it easier to create, scale, and alter digital content. As AI rapidly evolves, our work will continue to adapt to emerging trends and evolving industry needs. We see Adobe’s focus on supporting and promoting wide adoption of Content Credentials as particularly relevant to the EU Code of Practice on Disinformation, and we are encouraged that commitments relating to provenance and the C2PA open standard have been adopted in the Code’s Empowering Users chapter. We encourage all relevant Signatories to sign up to these commitments and join this cross-industry effort to tackle disinformation through technology.

Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1, Measure 1.2, Measure 1.3, Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.1
Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of:
- first, avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses;
- second, taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and
- third, adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.
QRE 1.1.1
Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.
Adobe Advertising’s Ad Requirements Policy prohibits “false or misleading ads”, a requirement that covers both misinformation and disinformation. In summer 2020, Adobe stopped permitting political content to be distributed via Adobe’s Services. The actions taken are the following:

(1) Research was conducted to locate sites that spread misinformation and disinformation by referencing third-party reports from the Global Disinformation Index, CheckMyAds and MediaBiasFactCheck.

(2) Flagged sites were reviewed and verified through manual checks against third-party verification services such as the Global Disinformation Index, PolitiFact, and MediaBiasFactCheck.

(3) Domains where misinformation or disinformation was confirmed were added to the Service’s Global Blocklist (an illustrative sketch of this workflow follows the policy link below).

(4) Historical impression reports were pulled to assess impression delivery on these domains.

(5) Incidents found during the third reporting period have been added to the tracker.

(6) Adobe Advertising Cloud has reached out to existing partners to consult on available services offering relevant solutions for combating mis- and disinformation. No new services have been onboarded.

Link to the Ad Requirements Policy: https://experienceleague.adobe.com/en/docs/advertising/policies/ad-requirements-policy
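
Purely for illustration, and not a representation of Adobe’s internal tooling, the cross-referencing and blocklisting described in steps (1) to (3) could be sketched as follows, assuming hypothetical CSV exports from the third-party services named above and a hypothetical two-source confirmation threshold:

```python
# Illustrative sketch of steps (1)-(3): merge domains flagged by third-party
# disinformation trackers and stage confirmed ones for a global blocklist.
# File names, CSV layout and the two-source threshold are hypothetical.
import csv
from collections import Counter

# Hypothetical exports from the third-party services referenced above.
SOURCES = ["gdi_flagged.csv", "checkmyads_flagged.csv", "mbfc_flagged.csv"]

def load_flagged_domains(path: str) -> set[str]:
    """Read a CSV with a 'domain' column and return the flagged domains."""
    with open(path, newline="") as f:
        return {row["domain"].strip().lower() for row in csv.DictReader(f)}

# Count how many independent sources flag each domain.
flag_counts = Counter()
for source in SOURCES:
    flag_counts.update(load_flagged_domains(source))

# Domains flagged by two or more sources are candidates for the blocklist;
# in practice each candidate would still go through manual review (step 2).
confirmed = sorted(domain for domain, count in flag_counts.items() if count >= 2)

with open("global_blocklist.txt", "a") as blocklist:
    blocklist.writelines(domain + "\n" for domain in confirmed)
```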

SLI 1.1.1
Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.
15 domains have been added to Adobe's platform blocklist. This blocklist affects all transparent open market advertising.