Posted: 2022-02-05 00:55:00

It means it can differentiate itself from rival services like Apple Music, which offers basically the same array of big label songs as Spotify. Perhaps it will even be able to use its loyal podcast listeners as a way of negotiating better deals with record labels.

Those ties, and Spotify’s promotion of Rogan’s show, mean that even though it does not control the comedian and mixed martial arts commentator’s interviews, it is closer to having editorial oversight than Facebook does for posts on its platform.

Those ties distinguish the Spotify-Rogan case from most other decisions that platforms make about what content to remove, which happen at vast scale and often through AI-driven systems. But while Spotify’s closeness to Rogan has likely made it think more carefully about whether episodes of his show that spread doubts about vaccines should stay online, the balancing act it is playing is fundamentally the same one other platforms face.

“We know we have a critical role to play in supporting creator expression while balancing it with the safety of our users,” Spotify chief executive Daniel Ek said in a statement last week. “In that role, it is important to me that we don’t take on the position of being content censor while also making sure that there are rules in place and consequences for those who violate them.”

But some level of censorship is the flipside of enforcing rules about acceptable speech. Spotify’s recently unveiled content rules, for example, prohibit content that incites violence on the basis of characteristics such as sex or ethnicity, or that promotes the sale of illegal goods such as drugs.

Such policies are table stakes, mirroring much of what society at large prohibits, though enforcement is another matter.

Queensland University of Technology Professor Nicolas Suzor, who researches internet regulation, says governments are not capable of directly regulating all internet content.

“The scale that we’re talking about here, hundreds of millions of posts every day, means that the government machinery is just too slow to handle that,” says Suzor, who sits on Facebook’s oversight board, a kind of Supreme Court for the platform that handles appeals on content decisions, but was not speaking in that capacity.

“You can’t push this for tribunals and courts and those sorts of mechanisms. So it has to be, to some extent, devolved out to private regulation. That’s the game the platforms are in.”

Like Haugen, who advocates forcing Facebook to make its algorithms more transparent, Suzor also sees a role for more state intervention in the sector, provided it is done appropriately.

In Australia, the eSafety Commissioner, a federal agency, has been given powers to order the takedown of some categories of material, such as image-based abuse and violent terrorist imagery. But the treatment of the thornier category of misinformation has been largely left up to the industry.

Digital Industry Group Incorporated, a lobby group for big technology firms including Meta, Google (which owns YouTube), Twitter and Snap (which owns Snapchat), released its Australian Code of Practice on Disinformation and Misinformation in February last year. While its development was overseen by the Australian Communications and Media Authority, the government agency has no role in enforcing the code.

Spotify is not a signatory to the voluntary code, but in the wake of the Rogan saga it has unveiled its own policies globally. They include a ban on "content that promotes dangerous false or dangerous deceptive medical information that may cause offline harm or poses a direct threat to public health".

Rogan’s podcasts, about which he issued a statement that was at times contrite and at times defiant, remain online, indicating Spotify does not see them as rising to that level. It will, however, add a disclaimer to podcast episodes that deal with COVID-19, directing users to trusted sources of information on the pandemic and vaccines. Rogan, too, has said he will have more mainstream medical guests on to counter some of the fringe voices he interviews.

Other platforms are taking a similar approach, nudging users in the right direction or allowing them more choice. Speaking to the same parliamentary Online Safety Committee that Haugen addressed, Meta’s Australian head of public policy, Josh Machin, said the company wanted to give its users more control over how their news feeds appear.

“It shouldn’t be a decision that we are making; we want to be transparent about steps we are taking that feed into how our ranking algorithms work, but we want to give individual users greater tools and controls so they can customise their news feed to look more like what they would like to see,” Machin said. “If they want a news feed that doesn’t include any politics, for example, because they only want to engage with family and friends, they should have those tools available.”

For Facebook, the appeal of a system that lets users pick their content is obvious: it keeps the content people want to engage with, holding users’ attention and the advertising dollars that go with it.

There are other nudge methods too, which platforms are deploying alongside their existing tools for countering outright hate speech, misinformation and illegal content. The major platforms have repeatedly emphasised how they are improving the calibre of their systems and the volume of malicious posts they take down. Facebook has published lists of the types of content it shows to fewer people, such as fact-checked misinformation, sensationalist health content, and posts from users with multiple accounts.

For example, in response to Holocaust denialism on its platform, TikTok announced last week it would include a banner on related search terms pointing viewers to reliable information. Figures from the United Nations showed that before the banners were introduced, 17 per cent of content about the Holocaust on TikTok either denied or distorted the truth of the cataclysm.

Pinterest, the online image board service; Instagram, the photo and video sharing app; YouTube, the video and streaming service; and Twitter, the short form public commentary platform, all have features that encourage users to reflect before posting potentially offensive material.

There is some evidence that such nudge approaches can be effective. A study from Twitter and Yale University’s law school published this week found that 31 per cent of people shown pop-up messages asking them to reconsider a potentially offensive tweet either revised it or chose not to post it at all.

But the overall impact was modest: these users posted 6 per cent fewer offensive tweets compared to a control group.
