“This is about empowering creators and giving them the tools to protect their work and maintain attribution as their content moves across the internet,” Ely Greenfield, the CTO of Adobe’s digital media business, said in an interview.
“With the rise of AI, it’s become crucial for creators to have a way to assert their rights and preferences over how their content is used.”
The tools will take the form of a free web app, launching to the public in early 2025, that allows creators to establish ownership over digital content even if it wasn’t made with Adobe’s products. An extension for Google Chrome will also let users inspect the content credentials attached to a piece of content, including its edit history.
The metadata on a piece of content might show, for example, that “this content was edited using an AI tool”, along with which AI tool was used.
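Content credentials are built on the open C2PA standard, which embeds a signed “manifest” of provenance data in the file itself. As a rough sketch only, the Python below shows the general shape such a manifest might take; the field names, the example tools and the summarise_edits helper are hypothetical simplifications, not the actual C2PA schema or Adobe’s implementation.

```python
# Hypothetical, simplified sketch of the provenance data a content
# credentials manifest might carry. Field names are illustrative only,
# not the real C2PA schema or Adobe's implementation.
manifest = {
    "claim_generator": "Adobe Photoshop",   # tool that produced the file
    "author": "Jane Example",               # asserted creator
    "edit_history": [                       # ordered record of actions
        {"action": "created", "tool": "Adobe Photoshop"},
        {"action": "ai_edited", "tool": "Adobe Firefly"},  # flags AI use
    ],
    "signature": "<signature bytes elided>",  # cryptographic seal over the above
}

def summarise_edits(manifest: dict) -> str:
    """Return a one-line, human-readable summary of the edit history,
    similar in spirit to what an inspection tool might display."""
    steps = [f'{entry["action"]} with {entry["tool"]}'
             for entry in manifest["edit_history"]]
    return "; ".join(steps)

print(summarise_edits(manifest))
# -> created with Adobe Photoshop; ai_edited with Adobe Firefly
```

In the real standard the manifest is cryptographically signed and bound to the file, so tampering with it invalidates the credentials; that is what lets a browser extension verify the edit history rather than merely display it.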
Crucially, Adobe has won buy-in for its ‘Content Authenticity Initiative’ from more than 3700 member organisations, including ChatGPT-maker OpenAI, Meta, Google, Microsoft, TikTok and Nvidia.
Greenfield said Adobe’s goal is for content credentials to become as ubiquitous as the lock icon in web browsers, signalling transparency and trustworthiness.
“Just as HTTPS has become the standard for secure web connections, we want content credentials to be the standard for digital content authenticity,” he said. “Our hope and expectation is that it gets attached to every piece of content you see.”
Amid concerns around deepfakes and election integrity, the art world is also paying close attention. George Hartley is the co-founder of Australia’s largest online art gallery, Bluethumb.
Hartley said Australians prefer original, handmade art over AI-generated pieces, and overwhelmingly demand transparency about the use of AI in art-making. Nearly 90 per cent of Bluethumb’s art collectors said they are not inclined to purchase AI-generated art – even if it comes at a lower price – and believe handmade art holds intrinsic value that AI cannot replicate.
A vast majority (91 per cent) of artists surveyed by Bluethumb believe it is crucial to disclose when AI tools are used in creating art, and 70 per cent say they never use AI, though many acknowledge it might become unavoidable in the future.
“Australians are aware of the potential for AI to reshape the art world in a way that adversely affects artists who are making art by hand with traditional mediums,” he said. “In the future, the key will be striking a balance between innovation and authenticity to preserve the integrity of human artistic expression and protect our visual arts community.”
Melbourne-based art collector Freddy Grant said there’s something soulless about AI art.
“What’s important to me is connecting with artists and supporting them. I’d prefer to know that I’ve bought from an artist who has created their work themselves rather than using AI.”
The Australian government has proposed introducing EU-style mandatory guardrails for AI in high-risk settings. The government last month issued a proposals paper and is seeking further feedback.
“This is probably one of the most complex policy challenges facing governments the world over,” Industry and Science Minister Ed Husic told a press conference in September.
“The Australian government is determined that we put in place the measures that provide for the safe and responsible use of AI in this country.”
Husic said Australia’s current regulatory system is not fit for purpose to respond to the risks posed by AI. Generative AI, which can automatically generate images, video, audio and computer code, is more likely to be ‘high-risk’, he said.
An Adobe spokeswoman said the company supported the government’s proposed approach to focus on high-risk AI use cases.
“The government’s proposals paper rightly considers the impact of AI on democracy and elections, and we agree with their finding that transparency is a key pillar of democracy,” the spokeswoman said.
Adobe faced a wave of user complaints earlier this year when it changed its terms of service in a way that suggested it was giving itself access to users’ work – even work protected by non-disclosure or confidentiality agreements – to train its generative AI models.
According to Greenfield, that was a misunderstanding. He said Adobe does not train its Firefly models on customer content and never has.
“That was really unfortunate,” he said.
“What happened was we made what was a very minor and innocuous change to our terms of use. It was a two-word change that triggered an automatic requirement that our users accept them. And because of the heightened state of concern around how companies use content, a bunch of people then said, ‘Oh, Adobe’s changing the terms of use. Let me go read them much more closely than I have ever read them before’.”