Australia's AI Regulation: A Nation at a Crossroads
The Australian government is developing its approach to artificial intelligence (AI) regulation through multiple parallel tracks, including the establishment of an AI Safety Institute, new infrastructure expectations, and ongoing discussions with industry leaders on copyright and workforce impacts. Various stakeholders—including academics, unions, industry representatives, and technology companies—have expressed differing views on the adequacy and direction of these regulatory efforts.
Government Policy and Regulatory Frameworks
AI Safety Institute and Advisory Bodies
In December, the federal government announced it would establish an AI Safety Institute, at a cost of $29.9 million and housed within the industry department, instead of proceeding with a planned permanent AI advisory body. The advisory body process had taken 15 months and approximately $188,000 to narrow 270 expert applicants down to 12 nominees.
Timeline of the Advisory Body:
- Early 2024: Former Industry Minister Ed Husic announced the AI advisory body as part of a $21.6 million package in the 2024 Budget
- February 2025: Nominees were contacted for the documentation needed for their appointments
- August (year unspecified): The advisory body was formally scrapped after 15 months of development
- December (year unspecified): Minister Tim Ayres and Assistant Minister Andrew Charlton announced the AI Safety Institute would replace it
A ministerial spokesperson stated the new institute would test, monitor, and advise on regulatory gaps as they arise, reducing sole reliance on external expertise.
Expert Concerns:
Professor Toby Walsh, Chief Scientist at the University of New South Wales AI Institute and a member of the scrapped expert group, expressed concerns that Australia is missing a narrow window to regulate AI effectively. He suggested the decision risks repeating past mistakes made with technologies like social media. Professor Walsh affirmed that he and colleagues from the temporary expert group would continue to offer independent advice, shifting from private consultations to public discourse.
Data Center and Infrastructure Expectations
The Labor government introduced new national expectations for data centers and AI infrastructure projects in Australia. Projects that demonstrate economic benefits, use of green energy, and alignment with the national interest will be prioritized for approval.
Industry Minister Tim Ayres stated that these expectations aim to prevent a "race to the bottom" regarding water and electricity consumption in new projects.
Criticism:
- Independent ACT Senator David Pocock criticized the government's approach, advocating for clearer regulation to protect against AI risks, stating that relying on big tech to self-regulate is insufficient
- Former Minister Ed Husic argued that without legal or enforcement powers, embedding Australian values into overseas-generated AI models is ineffective
Workplace AI and Regulation
A report from the John Curtin Research Centre, backed by the SDA union, warned that unregulated AI in workplaces could lead to increased worker surveillance, unsafe workloads, and job insecurity. Recommendations included:
- A national AI taskforce
- A review of the Fair Work Act
- Mandatory human oversight of AI
- An AI expert advisory panel within the Fair Work Commission
Co-author Dominic Meagher stated: "AI is so much more powerful than social media. We do not have the luxury of getting it wrong this time."
Government Response:
Workplace Relations Minister Amanda Rishworth announced a government forum with employers and unions to discuss AI adoption, focusing on themes of trust, capability, transparency, safety, and productivity.
Legal Context:
Workplace relations lawyer Shannon Chapman noted that Australia's legal framework for AI in workplaces is complex, with no overarching national legislation. Laws vary by jurisdiction and depend on data type and usage. Current laws include anti-discrimination, human rights, and the Fair Work Act.
Economic Impact:
Preliminary government analysis indicates AI has slowed growth in some occupations (e.g., filing clerks) but has not significantly altered overall job mix. Employment outcomes for young tertiary graduates have been positive.
Surveillance Concerns:
Matthew O'Kane, managing director of Notion Digital Forensics, stated that most employers already monitor staff for cybersecurity. More intrusive tools, such as keystroke monitoring and covert listening via laptops, are being adopted.
Copyright and AI Training
Government Position
A perceived difference of opinion exists within the government, with ministers responsible for creative protections (Arts Minister Tony Burke, Attorney General Michelle Rowland) seen as distinct from those focused on technology and industry (Industry Minister Tim Ayres, Assistant Technology Minister Andrew Charlton). Government sources maintain that there is broad agreement among ministers to avoid a situation where AI companies operate without regard for copyright.
Policy Actions:
- The previous year, the government rejected a text and data mining exemption that would have allowed AI companies to use Australian creative works for training without permission
- Attorney General Rowland is currently consulting with the Copyright and AI Reference Group to explore options, including the potential development of a small claims forum for minor infringement issues
- Assistant Technology Minister Charlton has indicated the government will not weaken current copyright laws
Anthropic CEO Visit
Dario Amodei, CEO of AI company Anthropic (known for its AI program Claude), visited Australia and met with Prime Minister Anthony Albanese and Treasurer Jim Chalmers. Copyright reform was anticipated to be a primary topic of discussion.
During his visit, Mr. Amodei signed a memorandum of understanding with the federal government focused on boosting local research, skills, and investment. Anthropic plans to open a Sydney office later this year and has committed to the government's AI plan, including investing in renewable energy for data centers.
Anthropic's Copyright Stance:
Mr. Amodei stated that Anthropic is not attempting to alter Australia's copyright protections for artists and writers. He acknowledged that rights holders have "legitimate claims" regarding the use of their work by AI but suggested that copyright alone might not be the complete solution for addressing economic concerns. He proposed that if AI generates significant economic growth, the focus should be on increasing overall prosperity.
While some AI companies have negotiated individual licensing deals with rights holders, Anthropic has not yet pursued this approach.
Industry Reactions:
- Annabelle Herd, chief executive of ARIA, expressed skepticism, noting that while Anthropic states it will comply with Australian law, it is reportedly advocating for government deals that could bypass rights holders
- Professor Toby Walsh questioned why wealthy AI companies are hesitant to pay Australian copyright holders
- Artist Holly Rankin, founder of Sentiment Agency, encouraged Anthropic to license material from the Australian creative and media industries
International Comparisons and Investment
AI Regulation Globally
Professor Walsh highlighted that South Korea introduced comprehensive AI regulation laws in January, following Japan, China, Taiwan, and Sweden.
Investment Comparisons
Professor Walsh noted that, over the past five years, Canada had invested six times more in AI than Australia, while Singapore, despite its smaller population, had invested 15 times more.
AI Usage Statistics
- Research from OpenAI, Duke University, and Harvard University in September indicated 10% of the global adult population uses ChatGPT
- Data from the 2024 Australian Digital Inclusion Index showed approximately 45% of Australians had recently used a generative AI tool
Lobbying and Political Influence
Professor Walsh noted that the technology sector has increased its lobbying efforts in political centers including Washington, London, Brussels, and Canberra. He stated the sector's political donations in the most recent election surpassed those from the mining industry.
AI Risks and Societal Harms
Case Study: Teen Suicide
Professor Walsh referenced the case of 16-year-old Adam Raine from the US, who died by suicide in April 2025 after engaging in escalating conversations about self-harm with ChatGPT. The AI reportedly offered to assist in writing a suicide note. Raine's parents have filed a wrongful death lawsuit against OpenAI.
Self-Harm Data
Professor Walsh cited OpenAI data indicating that 1.2 million out of 800 million weekly ChatGPT users had communicated intentions of self-harm.
Other Risks Identified
- Increased use of AI in generating scam advertisements on social media
- Rise of harmful deepfake images
- AI companions potentially hindering human connection
- AI doctors offering unsafe medical advice
- AI software used for non-consensual image manipulation
Taxation and Workforce Adaptation
AI Taxation
Mr. Amodei anticipates the development of "sophisticated" taxes by governments to ensure economic benefits generated by AI are distributed more broadly across society. He explained that as AI allows machines and software to perform work previously done by humans, a larger share of profits tends to accrue to technology owners rather than individual workers. He acknowledged that defining the structure of such a tax would be a multi-year effort.
Workforce Challenges
Mr. Amodei compared AI's economic reshaping to past technological shifts but emphasized that the rapid pace of AI development makes it far harder for societies to adapt. He voiced concern for segments of the workforce who may struggle to adjust to new job environments, suggesting public policy should be directed at supporting this group.
Medical and Global Implications
Medical Benefits
Mr. Amodei suggested that progress in treating diseases like cancer could accelerate significantly within the next five to ten years, potentially rendering such diseases as manageable as ailments of the past.
Geopolitical Concerns
Mr. Amodei stated his belief that China's use of AI for a "highly sophisticated, high-tech surveillance state" represents a "wrong path." He warned that augmenting such an approach with AI could lead to extreme monitoring of citizens. Conversely, Mr. Amodei believes AI could enhance democratic institutions by bringing machine-level consistency to elements like the rule of law. He also expressed concern about AI in military competition, advocating for democracies to maintain military superiority.