Posted: 2024-10-01 02:30:00

By equipping individuals with the skills to critically assess the content they encounter online, society can better navigate the complex digital landscape.

Erosion of trust in digital content

The study also found that 77 per cent of respondents struggle to trust what they see online.

This erosion of trust is not just a theoretical concern—it is leading to real-world consequences.

“Over 30 per cent of people in Australia and New Zealand have stopped or slowed their use of certain social media platforms because they’re not sure what they can trust,” Mulveny says.

This has far-reaching implications for business. In an environment where misinformation can spread rapidly, maintaining trust is crucial for sustaining consumer confidence and loyalty, she says.

The integrity of online information is essential not only to uphold democratic processes but also to ensure informed decision-making among voters.

“About 66 per cent believe [AI is] going to make it easier to find information. Fifty-five per cent say it’s going to make them more productive,” says Mulveny.

However, this optimism does not come without risk, as AI technologies can also be harnessed to create more sophisticated and convincing deepfakes.

Addressing the challenge: Content Credentials

In response to these challenges, Adobe has been at the forefront of developing tools to verify the authenticity of digital content.

Content Credentials, which act like a “nutrition label” for digital content, are designed to provide transparency around the origins and integrity of digital media. Content Credentials tell you what has gone into creating or editing a piece of digital content, so you know what you’re consuming. Through this, Adobe aims to restore trust in the digital landscape by allowing users to verify the metadata behind images, videos, and other content.
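The underlying idea can be illustrated in miniature: a manifest records the content’s hash and edit history, and a cryptographic signature binds the manifest to the content so tampering is detectable. The sketch below is purely conceptual and is not the real Content Credentials format — the actual C2PA standard embeds certificate-signed manifests in the file itself, whereas this toy version uses an HMAC with a stand-in key just to keep the example self-contained.

```python
import hashlib
import hmac
import json

# Stand-in signing key for illustration only. Real Content Credentials
# use public-key certificates issued to devices and tools, not a shared
# secret like this.
SIGNING_KEY = b"demo-key"

def attach_credentials(content: bytes, actions: list[str]) -> dict:
    """Bundle content with a signed provenance manifest."""
    manifest = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "actions": actions,  # e.g. ["captured", "cropped", "ai_generated"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credentials(content: bytes, credentials: dict) -> bool:
    """Check the manifest matches the content and the signature is intact."""
    manifest = credentials["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_hash"]:
        return False  # content was altered after the manifest was attached
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credentials["signature"])

image = b"...image bytes..."
creds = attach_credentials(image, ["captured", "colour-corrected"])
print(verify_credentials(image, creds))          # True: content unmodified
print(verify_credentials(image + b"x", creds))   # False: content tampered
```

Anyone can still strip the credentials off entirely, which is why, as the article notes, the technology only works alongside labelling requirements and media literacy.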

“At a bare minimum, there should be a requirement that election campaign material must be labelled if it has been made using generative AI,” says Mulveny.

This would help voters distinguish between authentic and wholly AI-generated content, reducing the likelihood of misinformation swaying public opinion or further eroding people’s trust in what they are viewing.

“We think that is the first step that should be taken so people can see the metadata or the truth of what they’re looking at online,” she says.

However, while Content Credentials offer a valuable tool in the fight against misinformation, Mulveny cautions that they are not a panacea.

“Content Credentials are not a silver bullet. We think it’s foundational to implement this technology, but we need to work together to see this come to fruition,” she says, stressing the importance of media literacy in complementing technological solutions.

Bridging the scepticism gap

Dr T J Thomson, a senior lecturer in visual communication at RMIT University, underscores the power and vulnerability of visual content in the digital era.

“Misleading and deceptive information has existed since the advent of communication, yet humans have traditionally trusted more what they could see compared to what they could hear or could read about,” he says.

“Images are powerful forms of evidence that are used to inform, educate, empower, and persuade. They’re used for everything from documenting deliveries have arrived safely to deciding court cases and helping people remember and make sense of their day-to-day experiences.

“These same images can sometimes mislead or deceive by the way they’re framed, by the way they’re shared, or by manipulating their content directly, however.”

Visual content, with its ability to quickly convey messages, is particularly influential in the online ecosystem.

“The information we consume in visual form can be particularly influential and persuasive,” Thomson says, noting that this makes the digital space a battleground where truth and deception vie for dominance.

The challenges in combating visual misinformation are formidable.

“Significant challenges in combatting mis- and disinformation stem from platform algorithms that privilege misleading, sensational, or untrue content over accurate content,” Thomson says. Additionally, the ease of creating, editing, and sharing digital content further complicates efforts to maintain the integrity of online information.

Education and regulation are key to addressing these challenges.

“Platforms arguably need to be better regulated to ensure they privilege high-quality information over the commercial benefits afforded by alluring but low-quality headlines and claims.

“Education and personal responsibility need to be invested in so that we can reap the benefits of a more visually and media-literate society.

“And people likewise need to carefully consider how and where they spend their time online so they don’t fall victim to the myriad false and misleading claims that poison the online ecosystem.”

Adobe’s Content Credentials are one of a number of tools that thoughtful people can use when trying to determine what is real online and whether they should trust something, he says.

“Akin to a digital certificate of authenticity, seeing how something was made or edited can help inform how you respond to it.

“However, any tool has its limitations, and we shouldn’t outsource our critical thinking responsibilities to purely technological solutions. As such, any tool should be used in combination with critical thinking and a logical evaluation strategy to arrive at a considered decision when faced with potentially dodgy claims online.”

Learn more about Adobe’s approach to responsible innovation, and download the Future of Trust Study.
