AI overdue for regulation in Australia


The federal government has released a discussion paper on the regulation of AI technology in Australia.


Experts have welcomed the federal government’s move to regulate the use of artificial intelligence in Australia, which could include a ban on some types of the technology.

The government released a discussion paper titled Safe and responsible AI in Australia this month that seeks public comment on a range of AI issues, including views on whether any high-risk AI applications or technologies should be banned completely.

“This consultation will help ensure Australia continues to support responsible AI practices to increase community trust and confidence,” the discussion paper states.

“This paper builds on the recent rapid research report on generative AI delivered by the government’s National Science and Technology Council (NSTC).”

In addition to the discussion paper, the government has also released a National Science and Technology Council paper titled Rapid Response Report: Generative AI.

This report looks at the potential risks and opportunities in relation to AI in Australia.

“With the rapid acceleration of the development of AI applications, such as ChatGPT, and indications of increased capability, it is time for Australia to consider whether further action is required to manage potential risks while continuing to foster uptake,” the report states.

While Australia already has some safeguards in place in relation to AI, the report notes it is appropriate to consider whether these regulatory and governance mechanisms are fit for purpose.

Federal Industry and Science Minister Ed Husic said Australia was not alone in wanting to understand how best to manage AI.

“Using AI safely and responsibly is a balancing act the whole world is grappling with at the moment,” he said.

“The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud. But as I have been saying for many years, there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”

Professor Mary-Anne Williams, Michael J Crouch Chair in Innovation and director of the Business AI Lab at The University of NSW, said Australia’s current investment in AI research, education, societal adaptation, innovation, employment opportunities and jobs creation seemed “insufficient as we face the magnitude of the impending transformations”.

She said prioritising safeguards and facilitating smooth transitions were vital and required an integrated effort across all fronts to advance our understanding and preparedness to lead in a post-AI world.

“It stands to reason that any form of artificial intelligence deemed unsafe should not be used or deployed,” Professor Williams said.

“The challenge, however, lies in defining and deciding what constitutes ‘unsafe’ AI. The question of responsibility and regulatory oversight remains largely unanswered, with ambiguities persisting within scientific, engineering, educational, and legal spheres.

“AI, much like electricity in its nascent stages over a century ago, is a revolutionary, general-purpose technology with the potential to overhaul every industry. Just as we didn’t outlaw electricity due to its inherent risks, banning ‘unsafe’ AI is problematic. Instead, we implemented rigorous safety measures such as insulation and circuit breakers, coupled with robust regulations and standards.”

Dr Vinh Bui is an IT and cybersecurity expert at Southern Cross University. He said generative AI was a powerful technology with vast potential that had sparked plenty of debate around the need for regulation.

“So in place of an outright ban, it is important to have a concise assessment of the benefits, risks, and mitigation strategies associated with generative AI,” he said.
 
“Generative AI offers valuable benefits across domains such as healthcare and creativity. For example, Stanford University’s synthetic MRI images aid in medical diagnostics, while OpenAI’s MuseNet enhances musical creativity. However, concerns arise regarding the spread of misinformation and algorithmic bias.

“Rather than a ban, proponents suggest regulating generative AI through transparency, accountability, and mitigation strategies.”

Dr Bui said public engagement was crucial for informed decision-making.

“Regulating generative AI calls for a comprehensive approach. While benefiting various fields, addressing risks and ethical concerns is essential,” he said.

“Through responsible regulation, transparency, and technical advancements, we can harness the potential of generative AI while safeguarding against potential harms. Public engagement remains central to this process, ensuring a collective and responsible future.”

Professor Paul Salmon, co-director of the Centre for Human Factors and Sociotechnical Systems at The University of the Sunshine Coast, warned an absence of controls could be “catastrophic”.

“Though advanced AI could bring significant and widespread benefits, there are also many risks for which we currently do not have adequate controls,” he said.

“It is important to note that these risks do not relate only to the malicious use of AI; rather, there are also many risks associated with the creation and use of well-intentioned advanced AI. Some of these risks are even existential in nature.”

He called for a halt to the development and use of advanced AI so that adequate controls could be developed.

“These include appropriate governance structures, such as an AI regulator, laws around the use of AI in different sectors, an agreed-upon ethical framework, and design standards to name only a few,” Professor Salmon said.

“If we continue on the current trajectory without the necessary controls to ensure safe, ethical, and usable AI, we will likely see catastrophic outcomes across society.”

The discussion paper will be open for consultation until the end of July.