Adobe users are outraged over vague new policy’s AI implications

Changes to Adobe’s Terms of Service have users confused and outraged that their work — even unpublished and in-progress projects — may be used to train AI models.

Users of various Adobe apps including Photoshop and Substance Painter received a pop-up notice on Wednesday saying “we may access your content through both manual and automated methods, such as for content review.”

The updated section of Adobe’s Terms of Service, which went into effect back on February 17, 2024, says:

“Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience.”

The language is vague. But the specific mentions of “automated systems” and of using “machine learning in order to improve our Services and Software” immediately drew concerns that users’ creative work would be used as training data for Adobe’s AI tools.

Aside from the implication that any and all user content would be fodder for training data without credit or compensation, there’s the specific privacy concern for users working with confidential information. “I can’t use Photoshop unless I’m okay with you having full access to anything I create with it, INCLUDING NDA work?” posted artist @SamSantala on X.


On a separate page that breaks down how Adobe uses machine learning, Adobe says it doesn’t analyze content stored locally on your device, only content stored in the Creative Cloud. Beyond that, content users make public, such as contributions to Adobe Stock, submissions to be featured on Adobe Express, and tutorials shared in Lightroom, is used to “train [Adobe’s] algorithms and thus improve [its] products and services.”

Such uses of public content have already been in place since Adobe launched its AI model Firefly, which generates images and powers other AI features like Generative Fill. Adobe touts Firefly as commercially safe, but its training data reportedly included AI-generated images from its competitor Midjourney, a product that artists allege was the result of copyright infringement.

All that’s to say, gathering training data for AI models is a murky issue that has made it difficult for creatives and companies alike to trace copyrighted content and prevent unauthorized works from seeping into model training. And that has undermined Adobe’s deployment of purportedly ethical AI features and put customers’ trust in jeopardy.

To be clear, Adobe’s latest policy change has not been conclusively shown to expose users to privacy invasions, but users are understandably concerned at even a hint that their private work may be accessible to Adobe’s AI models. The new Terms of Service make no explicit mention of Firefly or AI training data, but the update says Adobe may need to access user content to “detect, prevent, or otherwise address fraud, security, legal, or technical issues,” and to enforce its Terms, which ban illegal or abusive content like child sexual abuse material. This may mean that Adobe seeks to access user content in order to monitor for specific violations.

But the language used, including broad allusions to machine learning for “improving” Adobe’s tools, taps into practices the privacy-minded have justifiably become wary of at a very sensitive moment.

Mashable has reached out to Adobe for clarification and will update this story if we hear back.
