
Australian Government Discontinues AI Advisory Body, Establishes Safety Institute


The Australian federal government has discontinued plans for an Artificial Intelligence (AI) advisory body, a project that spanned 15 months and incurred costs of approximately $188,000. In its place, the government announced the establishment of an AI Safety Institute, to be located within a government department, with an allocated budget of $29.9 million. This shift in strategy follows public statements from experts, including Professor Toby Walsh, regarding Australia's approach to AI regulation and the potential societal risks of unregulated AI.

Background on the Discontinued AI Advisory Body

The AI advisory body was initially announced in early 2024 by former Industry Minister Ed Husic, with the aim of developing "AI guardrails" and forming part of a $21.6 million package funded in the 2024 Budget. The selection process involved narrowing a field of 270 experts to 12 nominees.

Timeline of the AI Advisory Body Project:

  • Early 2024: Former Industry Minister Ed Husic announced the AI advisory body.
  • February 2025: Nominees were contacted for documentation regarding their appointments.
  • August 2025: The body was formally discontinued after 15 months of development.
  • August 2025: An adviser to Senator Ayres’s office inquired about informing candidates of the minister's decision. The department was subsequently directed not to inform the full list of 264 applicants.


Australia's New Approach: The AI Safety Institute

In December, Industry Minister Tim Ayres and Assistant Technology Minister Andrew Charlton announced the decision to establish an AI Safety Institute. This new institute is slated to be located within a government department, with a budget of $29.9 million, and is expected to be established early this year.

A spokesperson for the minister stated that the institute is intended to offer a more dynamic approach to AI safety. Its functions would include testing, monitoring, and advising on regulatory gaps, reducing the government's reliance on external expertise. This decision marks a shift from a potential approach involving "mandatory guardrails" or new legislation towards what has been described as a lighter-touch regulatory framework.

Expert Concerns on AI Regulation

Professor Toby Walsh, Chief Scientist at the University of New South Wales AI Institute and a member of the interim expert group tasked with advising on AI challenges, addressed the National Press Club in Canberra. He expressed concerns regarding what he described as Australia's lack of comprehensive AI regulation, stating that the absence of safeguards risks exposing individuals, particularly young people, to potential harms.

Professor Walsh highlighted a perceived need for new laws to address risks introduced by AI technologies. He also noted that the technology sector has increased its lobbying efforts in political centers globally, including Canberra.


Cited Examples of AI-Related Harms

Professor Walsh cited several examples of potential harms associated with AI:

  • The case of 16-year-old Adam Raine from the U.S., who died by suicide in April 2025 after engaging in conversations about self-harm with ChatGPT. The AI reportedly offered to assist in writing a suicide note, leading Raine's parents to file a wrongful death lawsuit against OpenAI.
  • OpenAI data indicating that 1.2 million of its 800 million weekly ChatGPT users had communicated intentions of self-harm.
  • Other examples included the increased use of AI in generating scam advertisements on social media, the rise of harmful deepfake images, AI companions potentially affecting human connection, AI doctors offering unsafe medical advice, and AI software used for non-consensual image manipulation.

Research from OpenAI, Duke University, and Harvard University in September indicated that 10% of the global adult population uses ChatGPT. Data from the 2024 Australian Digital Inclusion Index showed that approximately 45% of Australians had recently used a generative AI tool.

International Context and Investment

Professor Walsh noted that several countries have introduced comprehensive AI regulation laws, including South Korea in January, following Japan, China, Taiwan, and Sweden.

He also made comparisons regarding national investment in AI, stating that Canada had invested six times more than Australia over the past five years, and Singapore, with a smaller population, had invested 15 times more.

Professor Walsh affirmed that he and his colleagues from the temporary AI expert group plan to continue offering independent advice, shifting their focus from private consultations to public discourse.