Feds poised to build guardrails for AI

The definition of ‘immediate’ remains fuzzy, but the response for high-risk sectors such as healthcare has been described as ‘proportional’.

The Australian government has released its interim response to the Safe and Responsible AI in Australia consultation, promising three “immediate actions”. 

While saying there would be further consultation on the possibility of introducing mandatory guardrails, Minister for Industry and Science Ed Husic said three moves were the top priorities:

  • working with industry to develop a voluntary AI Safety Standard; 
  • working with industry to develop options for voluntary labelling and watermarking of AI-generated materials; and
  • establishing an expert advisory group to support the development of options for mandatory guardrails. 

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” said Mr Husic. 

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI. 

“The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.  

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed.” 

Mandatory guardrails to promote the safe design, development and deployment of AI systems will be considered, he said, including possible requirements relating to: 

  • testing of products to ensure safety before and after release; 
  • transparency regarding model design and data underpinning AI applications, including labelling of AI systems in use and/or watermarking of AI-generated content; and
  • accountability – training for developers and deployers of AI systems, possible forms of certification, and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.   

The government received more than 500 responses to its discussion paper, including from tech giants Google and Meta, major banks, supermarkets, legal bodies and universities. 

The new AI advisory body would consider options for the types of legislative safeguards that should be created and whether these should be achieved through a single AI Act or by amending existing laws, according to Nine newspapers. 

One issue on the agenda is likely to be how to safeguard against algorithmic bias arising from incomplete datasets that can discriminate against people based on characteristics such as race or sex.  
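To make that concern concrete, here is a minimal, hypothetical Python sketch of one common bias check: comparing a hiring model's selection rates across demographic groups against the "four-fifths" disparity heuristic. The candidate data, group labels and threshold are illustrative assumptions, not anything specified in the government's response.

    from collections import defaultdict

    def selection_rates(predictions, groups):
        """Return the fraction of positive (shortlisted) predictions per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    # Illustrative model output: 1 = shortlisted, 0 = rejected (assumed data).
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    rates = selection_rates(preds, groups)
    print(rates)  # {'A': 0.75, 'B': 0.25}

    # The "four-fifths rule" heuristic flags disparity when one group's
    # selection rate falls below 80% of the highest group's rate.
    if min(rates.values()) < 0.8 * max(rates.values()):
        print("Potential disparate impact: review training data and features.")

A check like this only flags unequal outcomes; it says nothing about why the model produced them, which is where questions about incomplete training data would begin.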

Professor Lisa Given, director of the Social Change Enabling Impact Platform and professor of Information Sciences at RMIT University, said the government’s interim response was “proportional” in its focus on high-risk settings such as healthcare. 

“This approach may be quite different to what other countries are considering; for example, the European Union is planning to ban AI tools that pose ‘unacceptable risk’, while the US has issued an executive order to introduce wide-ranging controls, such as requirements for transparency in the use of AI generally,” said Professor Given.  

“However, the Australian government will also aim to align its regulatory decisions with those of other countries, given the global reach and application of AI technologies that could affect Australians directly.  

“Taking a proportional approach enables the government to address areas where the potential harms of AI technologies are already known (e.g. potential gender discrimination when used in hiring practices to assess candidates’ resumes), as well as those that may pose significant risks to people’s lives (e.g. when used to inform medical diagnoses and treatments).  

“Focusing on workplaces and contexts where AI tools pose the greatest risk is an important place to start.  

“The creation of an advisory body to define the concept of ‘high-risk technologies’ and to advise government on where (and what kinds of) regulations may be needed is very welcome. It will complement other initiatives that the Australian government has taken recently to manage the risks of AI.” 

There was criticism from another RMIT University expert, however. Dr Nicole Shackleton, a law lecturer, told the media that there was a “lack of consideration of AI use in sex and intimate technologies”, a booming market in Australia as well as internationally. 

“Other than the Government’s focus on AI-generated pornography or intimate images, often referred to as deepfake pornography, which is increasingly being developed and used without consent to bully and harass, the interim report shows little interest in issues of sexual privacy, the safe use of AI in technologies in sexual health education, or the use of AI in sex technologies such as personal and intimate robots,” said Dr Shackleton.   

“It is vital that any future AI advisory body be capable of tackling such issues, and that the risk-based framework employed by the Government does not result in unintended consequences which hinder potential benefits of the use of AI in sex and intimate technologies.” 

The full interim response is available here.
