PauseAI Australia (our campaigns)
What risks is Australia facing?
Artificial Intelligence is advancing at an astonishing rate. AI CEOs such as Sam Altman and Dario Amodei, and scientists such as Geoffrey Hinton, warn that AI could surpass human intelligence within the next five years. Without international cooperation, this could result in economic chaos, war, and even human extinction.
“As general-purpose AI becomes more capable, evidence of additional risks is gradually emerging. These include risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI.”
– International AI Safety Report (2025), co-authored by 96 experts from 30 countries, including Australia.
Don’t we want AI’s benefits?
Sure. Artificial Intelligence already has the potential to be a powerful tool. If AI remains under control, it could be used to cure diseases, drive scientific breakthroughs, and spread opportunity and wellbeing. But it would be tragic to achieve these advances only to then lose control and suffer catastrophic losses.
“We seem to be assuming AI will neatly fit into a benign pattern. That assumption only holds to the extent AI is analogous with most of what has come before. And in the circumstances, we’d be wise to examine it far more rigorously before settling on it because there are good reasons to suppose it is a different species altogether, for which history is a poor guide.”
– Waleed Aly
New technologies have always brought change, but humans need time to adjust, safeguard, and plan for the future. For any other technology—whether aeroplanes, skyscrapers, or new medications—we insist on expertly designed safety measures before exposing the public to risks. This is not happening with AI.
AI companies are in a race, fuelled by billions of dollars of investment, to build superhuman AI first. When one company succeeds, your life and the lives of your loved ones will change radically, and you won’t have any say in what that future holds. This isn’t just a tech issue—it will affect everyone.
What can be done?
PauseAI proposes an international treaty to pause the development of smarter-than-human general AI until there is a credible plan to ensure it is safe. It is in Australia’s interest to advocate for this.
“Who will show leadership on negotiating an AI non-proliferation treaty? It is a collective responsibility and certainly one to which Australia could contribute.”
– Alan Finkel, Australia’s Chief Scientist (2016–2020)
History shows that smaller countries can make a big difference in solving global problems. Take the 1982 international moratorium on commercial whaling and the 1987 Montreal Protocol to protect the ozone layer. Australia, once a whaling nation itself, became a leader in protecting ocean life by supporting the moratorium and even taking Japan to the International Court of Justice over its whaling program. Australia also moved quickly to join the agreement phasing out the chemicals that were damaging the ozone layer. By acting early and working with other nations, countries like Australia can drive real change worldwide.
Aren’t there more important issues?
We agree that there are many important issues facing Australia, but we won’t be able to solve them in a world with uncontrolled AI. Australia should be advocating for an international treaty at the same time as it works on other issues.
Why isn’t anything being done already?
Australian politicians have looked at some of the smaller risks of AI, but rarely acknowledge the big ones.
We acknowledge that not everyone agrees about the risk of an AI catastrophe. We address some of the common objections here. We don’t claim to be 100% certain, but we think the probability of very bad outcomes is more than high enough to justify a pause.
It is psychologically difficult to think about potential catastrophes. Many people assume that the risks are out of their control and therefore not worth worrying about. Yet, anyone can take action right now by speaking up. We think it’s better to act than to simply worry.
PauseAI Australia campaigns
AI Summits Need to Take Safety Seriously Again
In February 2026, PauseAI Australia joined campaigners in 14 other countries in urging delegates to the International AI Summit in India to prioritise AI safety. The campaign gathered 2,000 petition signatures, sent 2,000 emails to policymakers, and led to an op-ed in France’s top weekly, signed by five Australian AI experts.
December: Politicians sign Superintelligence Statement
Emails from PauseAI Australia volunteers and a question at a live town hall event prompted five politicians to sign the Future of Life Institute Superintelligence Statement. Watch the moment at the town hall on YouTube.
IABIED Canberra book launch
On 7 October 2025, PauseAI Australia held a book launch and discussion event at Smith’s Alternative bookshop in Canberra to mark the release of If Anyone Builds It, Everyone Dies. Laura Nuttall, MLA, and Peter Cain, MLA, joined the discussion and read excerpts from the book.
Petition to the House of Representatives
In September 2025, e-petition EN7777 to the Australian House of Representatives was open for 30 days and collected 168 signatures. The petition asked the House to legislate that all future frontier artificial intelligence systems must pass rigorous independent safety evaluations, and further asked the House to advocate proactively for an international treaty to pause frontier AI development until global safety mechanisms are in place. The Minister gave an official response.
Productivity Commission submission
In September 2025, PauseAI Australia responded to the interim report on Harnessing Data and Digital Technology with this submission. Volunteers also made individual submissions (David, Peter, Michael).
Investigate OpenAI
In July 2025, volunteer Mark Brown reported OpenAI to the Australian Federal Police and the Attorney-General of Australia, alleging potential breaches of the Crimes (Biological Weapons) Act 1976. The complaint was discussed in a news story and on a video podcast. We are still waiting for a response from the AFP and the Attorney-General.
Melbourne protest
In February 2025, volunteers in Melbourne protested the missed opportunity of the Paris AI Action Summit. The protest received coverage in the Nine newspapers.