Posted: 2023-07-24 01:59:22

Draft rules published earlier this year state that AI content should embody socialist values and not contain anything that subverts the state’s authority, advocates overthrow of the political system or undermines national unity.

In the UK, there are proposals for “light touch” regulation, using existing regulators and regulation to manage the use of AI in their sectors, supported by a new central body.

US President Joe Biden hosted a meeting of major AI companies at the White House.

In Australia, Industry and Science Minister Ed Husic released a discussion paper last month that is the starting point to develop measures to ensure AI is used responsibly and safely.

As in the UK, there is already a myriad of existing legislation and regulation, from privacy laws, health and medical device laws, cyber and data protection laws and anti-discrimination laws to even corporations law, that could apply to particular AI applications.

The central theme of the conversations about regulating AI, whether it’s in Europe or Australia, is that the approach should be risk-based. Some AI uses, like facial recognition or the ability to use the technology to mislead or manipulate individuals or populations, pose greater risks to society than others.


In most of the jurisdictions there is a push for greater transparency. Generative AI hoovers up data from the internet and there are concerns that models might be built on data that is erroneous or misleading, or that has been designed to mislead and manipulate.

Regulators want to know what data has been used to train these machine-learning technologies, and they want AI-generated content to be watermarked so users know the material has been produced by an AI system.

There’s also a push for human oversight of the models and external audits because of the concern that, left unchecked, generative AI could, as some of its leading proponents have warned, create serious risks not just to individuals but to humanity.

Regulating AI presents a serious challenge. The widespread adoption of AI offers the promise of a transformative leap forward in productivity. It also, however, has massive implications for existing social structures and workplaces and poses a range of critical ethical questions and risks.

The scale of the opportunities means lawmakers will be loath to restrict AI’s development, while the range of risks means some regulation is inevitable, as most of the leading companies involved in generative AI concede. (The cynics would say those companies see regulation as a moat to protect their positions in AI against smaller, less-resourced industry contenders.)

To be effective, regulation of generative AI needs to be international and broadly harmonised.

Regulators are acutely aware that social media were allowed to develop within a regulatory void for the best part of two decades before there was any serious attempt to respond to digital privacy and competition concerns. They want to jump in far more quickly at the onset of generative AI.

As with regulation of social media, the European Union has moved first and hardest and its approach is likely to form the core of the international approach (or at least the approach of the G-7 and like-minded economies) to responding to the emerging AI industry even though its draft legislation has produced howls of protest from the big technology companies and complaints from others that the regulations aren’t tougher and more intrusive.

Understanding where the training data the models use is sourced, how the models make decisions and how the systems are used, and making it clear when AI-generated content is being deployed, are obvious imperatives, even if they may be a burden on some smaller companies building their own models.

The Europeans also want some responsibility and accountability imposed on AI companies – they are proposing fines of up to 6 per cent of global sales for breaches of their regulations – and reserve the right to simply outlaw some applications of a technology that outsources many highly sensitive decisions to machines.

Some jurisdictions might adopt a lighter touch or more decentralised approach, but the central themes of the responses to the development of AI will be similar, because the recognised risks to society and individuals from the continuing development of AI-enabled technologies are broadly the same in Brussels as they are in Washington or Canberra.


Developing the frameworks won’t be straightforward, given the mind-boggling and far-reaching complexities the technology generates even at this formative phase of its development, but its potential has such widely divergent implications for economies and societies that the effort will have to be made.