
Posted: 4 September 2024

A third of businesses using artificial intelligence are not informing their employees or customers that they are doing so, a survey of business leaders using AI has revealed.

It comes as the federal government proposes "mandatory" rules for high-risk AI, including a possible AI Act and prohibitions on technologies deemed too risky.

A comprehensive survey of Australian chief executives, chief technology officers and other leaders of companies using AI found most had immature or "developing" approaches to using AI responsibly.

Those business leaders said their companies were using AI in a wide variety of ways, including providing 24/7 customer service, recommending additional products and altering prices for customers based on previous interactions.

But half had not carried out human rights or risk assessments on their uses of AI, and said they had no plans to do so.

'Guardrails' would require businesses to disclose AI use

As the use of AI in business and everyday life rapidly increases, the federal government has announced its intention to create "mandatory guardrails" for high-risk AI.

Industry Minister Ed Husic said AI offered great opportunity but the current standards were insufficient.

"Australians know AI can do great things but people want to know there are protections in place if things go off the rails," he said.

"From today, we're starting to put those protections in place."

The government has proposed three options for setting AI rules, ranging from a light-touch approach that adjusts existing regulations to a standalone, economy-wide act that could include the prohibition of AI technologies deemed too risky.

One of those rules would require "developers and deployers" of AI to inform individuals when AI is used to make decisions about them, or when they are directly interacting with AI or AI-generated content.

The federal government has also asked whether tools that profile metrics such as a person's work performance, preferences and behaviour or movements and location for "social scoring" should be banned.

AI opportunity, but harm already caused

Australia's Tech Council says AI has the potential to add billions to the economy, with generative AI like ChatGPT alone estimated to add up to $115 billion over the next five years.

But AI's potential to harm has become clearer, and the government in its consultations found multiple instances where AI had already caused damage.

For example, it found some AI resume-screening applications had unfairly discriminated against job candidates on the basis of ethnicity and gender.

There had been cases of First Nations material being used without consent to train AI, or being misappropriated by those tools.

Software used to detect AI cheating in school and university tests could discriminate against non-native English speakers and lead to false accusations of cheating.

And in the first three quarters of last year, almost 150,000 new deepfake pornographic videos were uploaded to the 40 most-used deepfake pornography sites, with high-profile incidents including Taylor Swift and Italy's prime minister being targeted.


AI-generated sexual abuse material became a prominent issue earlier this year, after about 50 female students at Bacchus Marsh Grammar were targeted. (ABC News)

Overseas, Australia's allies have moved to develop laws to control AI as its potential harms become clearer: the European Union has adopted a world-first AI Act, Canada and the United Kingdom have proposed legislation, and several states in the United States have adopted measures to limit AI harms.

The Australian government is seeking to introduce precautionary measures based on AI's risk to individuals and society, similar to ones adopted by the European Union and Canada.

Among those measures would be requirements for developers and deployers of AI to establish accountability and risk management processes, enable human intervention in AI systems, and inform individuals when AI is used to make decisions about them.

Retail and hospitality worst performers on responsible AI use

The government-sponsored Responsible AI Index questioned leaders of 413 businesses using AI in chatbots, speech recognition, robotics and a variety of other technologies.

A handful had carefully considered how to use AI responsibly, but half had an immature or maturing approach.

Among those, hospitality and retail services performed worse than other sectors.

Retail and hospitality scored lowest of any sector on maturity of AI use, with all sectors on average rated as "developing", according to surveys completed for the Responsible AI Index. (Fifth Quadrant)

Asked whether their organisation had "informed relevant stakeholders, including employees and customers, about the use of AI and AI-generated content in products and/or services", 34 per cent, or about 140 business leaders, said they had not done so and had no plans to.

Alongside its mandatory guardrail proposals, the government has also published an AI Safety Standard for businesses, which offers guidance on guardrails businesses could implement to protect themselves and consumers.
