There has been ferocious debate in Australia over the past few years about the responsibility big tech companies should bear for content on their platforms.
In the wake of incidents such as the livestreaming of the Christchurch mosque massacre in 2019, companies have become much more proactive in monitoring content that could encourage violence or other crimes.
Since then, Twitter has banned Donald Trump for spreading lies about the presidential election that encouraged the rioting at the Capitol on January 6, 2021.
During this year's election campaign, then-prime minister Scott Morrison promised new defamation laws to combat trolling and cyberbullying.
Yet Australia's eSafety commissioner, Julie Inman Grant, is now putting pressure on tech companies over a problem that is even more urgent: the dissemination of child sexual abuse material.
Inman Grant has asked Apple, Microsoft and Meta, the company that owns Instagram and Facebook, to tell her what, if anything, they are doing to identify, block and report child exploitation images and videos.
The eSafety commissioner has also asked for the same information from Omegle, a service that connects users with random strangers, which security experts say is widely used by child predators.
It is the first use of powers granted under the Online Safety Act, passed last year, to collect such information.
The practice of spreading child abuse material online is rife: the Australian Centre to Counter Child Exploitation received more than 33,000 reports of child exploitation in 2021, most relating to online images.
Overseas, firms make voluntary public reports of material they detect on their platforms to groups such as the US-based National Center for Missing & Exploited Children.