Posted: 2024-04-16 10:00:00

X was contacted for comment and replied with an automated message saying “busy now, please check back later”.

Misinformation and the Bondi attack

Soon after the Bondi attack, in which six people were killed and several others injured, prominent accounts began sharing misinformation originally posted by Simeon Boikov – known on X as “Aussie Cossack” – that wrongly claimed the perpetrator was a person named Ben Cohen. Some accounts shared photos of a 20-year-old student by that name, pulled from social media, while others spread wild antisemitic conspiracy theories. Cohen’s name was trending on X for at least 12 hours and the student’s family was inundated with messages. The misinformation was even briefly picked up and reported as fact by some mainstream media.

Dr Belinda Barnet, senior lecturer at Swinburne University, said Twitter was never completely immune from inaccurate reports, but that its ability to amplify verified information declined drastically after Musk acquired it and intentionally tore down the processes that made it useful for disseminating news.

Elon Musk’s acquisition of Twitter is seen as a turning point in its attitude towards moderation. Credit: Bloomberg

“Almost every decision he’s made has made it more difficult to distinguish fact from fiction,” she said.

“He immediately dismantled the verification system. That was important because a lot of the media and journalists and people who fact-check information had blue ticks. It definitely wasn’t a perfect system but it was a useful identifier of trustworthiness.”

Under the current system, blue ticks are given to paying X subscribers or gifted by Musk to accounts of his choosing, with the platform’s underlying algorithm pushing posts from those users above all others.

Barnet said that, combined with Musk’s moves to reduce moderation, reinstate accounts with histories of bad behaviour, and suppress posts with links to news organisations, the changes to verification had a profound effect on Twitter’s usefulness during a crisis as it was much harder to quickly assess the credibility of reports.

“It’s made it significantly less reliable as a source of information and, in fact, more dangerous in terms of its potential to spread misinformation,” she said, pointing to the fact Cohen’s name was trending all night on Saturday and into Sunday.


“[Before Musk] Twitter would have stepped in to kill the trend, or moderate, or take the original tweet or piece of misinformation out. They had some stopgaps in place that are not in place now.”

Australia has a voluntary code on misinformation, designed to prevent the spread of harmful posts, to which Meta, TikTok, Apple, Google and Microsoft are signatories. Twitter was previously a signatory but this, too, fell apart following Musk’s acquisition.

As part of the dismantling of safety features on the platform, X removed the ability for users to report misinformation, breaching the code. An independent committee investigating the breach reported that X promised to provide documents in its defence and to have an executive join a video call, but neither eventuated.

The Australian Communications and Media Authority (ACMA), which administers the voluntary code, is set to get expanded powers this year that would allow it to enforce a mandatory code or standard.

Live-streamed video and the church attack

When an armed assailant in Christchurch live-streamed himself opening fire in a mosque five years ago, it had an immediate impact on social media networks, which were just beginning to reckon with the moderation requirements of real-time video.

Facebook, Twitter and others founded a non-government organisation called the Global Internet Forum to Counter Terrorism (GIFCT), which monitors for violent extremist content and shares information to help platforms take action according to their own policies. This includes a hash-sharing database that allows all versions of a particular image or video to be blocked. Yet in some cases harmful images still proliferate.
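The hash-sharing approach can be pictured with a short sketch. The Python below is a minimal illustration of the general technique, not GIFCT’s actual code: the hash values, threshold and function names are invented for the example, though the underlying idea, comparing perceptual hashes (such as PDQ) by how many bits differ, reflects how such shared databases are generally used.

```python
# Minimal illustrative sketch of hash-based content matching, in the style
# of shared industry databases such as GIFCT's. All values here are made up;
# real systems use perceptual hashes such as PDQ (images) or TMK+PDQF (video).

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-length binary hashes."""
    return bin(a ^ b).count("1")

# Hypothetical 64-bit perceptual hashes contributed by member platforms.
SHARED_HASH_DB = {
    0xA5F3C2E19B7D4086,  # hash of a known violent video frame (invented)
    0x1D44F09A23C8B7E5,
}

MATCH_THRESHOLD = 10  # max bit difference still treated as the same content

def is_blocked(upload_hash: int) -> bool:
    """True if an uploaded item's hash is close enough to a shared hash to
    count as a re-encoded or lightly edited copy of known harmful content."""
    return any(
        hamming_distance(upload_hash, known) <= MATCH_THRESHOLD
        for known in SHARED_HASH_DB
    )

# A re-encoded copy usually flips only a few bits of a perceptual hash,
# so it still matches; a heavily cropped or remixed copy may not.
print(is_blocked(0xA5F3C2E19B7D4087))  # True: one bit away from a known hash
print(is_blocked(0x0000000000000000))  # False: no close match
```

The sketch also hints at why harmful copies still slip through: edit, crop or remix a video heavily enough and its hash drifts past the matching threshold, so each altered version must be detected and added to the database anew.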

Bishop Mar Mari Emmanuel was live-streaming a sermon when he was approached and stabbed several times, meaning thousands of viewers potentially saw the event in real time. But within hours X was filled with many versions of the violent and confronting video, which could be viewed on demand. Some users discussed the actions of the attacker and parishioners in detail, some used the footage as evidence for various speculative theories, and others edited the video or remixed it with other footage and clips of Bishop Emmanuel.

At the time of writing, not only is it still easy to find graphic video of the stabbing on X but there are many terms in the “trending” sidebar (which does differ from user to user) that lead directly to the videos, to hate speech or to speculation about the attacker’s name, ethnicity, religion and motivation. Inman Grant has given X 24 hours to remove the footage.

On Facebook and Instagram, copies of the video are also floating around, though most are covered by a graphic content warning. A key difference between these and X is that searching on Meta’s platforms primarily brings up links to news sources, whereas X primarily brings up comments and videos posted directly to the platform. At the time of writing, video of the incident was difficult to find on TikTok.


Unlike the Christchurch live-stream, the stabbing video might not be considered “abhorrent violent material” under Australian law, since the footage was not taken by the perpetrator as part of their act of terror. That means there might not have been a legal obligation to remove it before Inman Grant’s order. But Zoe Hawkins, head of policy design at the Australian National University’s Tech Policy Design Centre, said X’s lack of action constituted a failure to protect its users.

“X should absolutely be doing more to moderate the broadcasting of graphic violence on their platform in order to comply with their obligations under Australia’s online safety codes, but also, importantly, to meet the community expectations of its Australian users,” she said.

“Unfortunately, X has already shown its willingness to disregard Australia’s existing online safety regulation. Australia needs to think creatively about how we recast the business incentives for social media companies, for example, by encouraging advertisers to vote with their feet or by building coalitions with other countries’ digital regulation agendas to create increasingly large market pressure for change.”
