According to the latest available data, ChatGPT has about 180 million users globally. Surveys have shown that only one-third of Australians agree Australia has adequate guardrails to make the design, development and deployment of AI safe.
“These big players are trying to work co-operatively [with governments] because they don’t want to be necessarily seen as antagonistic, it’s not in their business interest to be seen working against the interests of consumers long term,” Husic said.
“I think the world has crossed the threshold when it comes to technology regulation, where working together can make a difference. And Australia can play a part… We’ve got a lot of good people here, either through the National AI Centre or through the Responsible AI Network or through industry more broadly, to get the challenges and make sure we’re responding properly.”
The inquiry published a discussion paper last June and received 510 submissions including from tech giants Google and Meta, major banks, supermarkets, legal bodies and universities. It said almost all the submissions called for action on “preventing, mitigating and responding” to the harms of AI.
Husic said that submissions also stressed that a large proportion of AI uses would be considered low risk and should be enabled to flourish unimpeded.
Members of Australia’s technology sector say the federal government has made the right move in taking a different path to the EU, which has developed a stand-alone AI act.
“A risk-based and proportionate approach to AI regulation is the right path for Australia to take and will help to nurture our emerging AI ecosystem,” said Kate Pounder, CEO of the Technology Council of Australia, which represents the likes of Telstra, Microsoft and Canva.
“AI isn’t one technology, and a one-size-fits-all approach would never have put Australia at the forefront of AI development and adoption.
“We welcome the establishment of an advisory body to provide advice to government. This was one of [Technology Council of Australia’s] key recommendations to help drive a consistent, practical and well-coordinated approach to AI regulation across government.”
Chief executive of the Australian Information Industry Association, Simon Bush, agreed and said the benefits of AI adoption across the economy would unlock billions for the nation’s coffers.
He said the association had been advocating for strong government-led AI principles, a national AI strategy and industry funding since 2017.
“The regulation of AI will be seen as a success by industry if it builds not only societal trust in the adoption and use of AI by citizens and businesses, but also that it fosters investment and growth in the Australian AI sector,” Bush said.
“We are currently living through one of the most transformational step-changes in productivity and technology investment centred around AI that any of us have experienced.
“We need to both understand and ensure Australia takes advantage of this transformation and a well-considered regulatory framework and significant AI industry and skills funding is required.”
Catriona Wallace, the chair of AI-focused venture capital firm Boab and star of Channel 10’s Shark Tank, said that the government’s definition of “high-risk” didn’t go far enough.
“Interpol and the Australian Federal Police are already monitoring AI used to exploit and extort money from children. The Australian government must address unacceptable risks such as these and not just focus on risks to industry and jobs,” she said.
“The risks that AI poses to Australians are very real and dangerous and more needs to be done. Both high risks and unacceptable risks should be addressed in this government’s response strategy.”