PauseAI protest @ Melbourne - June 16th
Today we protested in Melbourne, where OpenAI's Sam Altman was speaking. OpenAI aims to build a superintelligence, which has a serious chance to kill everyone on earth. We're demanding our governments to step in and #PauseAI.
— PauseAI (@pause_ai_info) June 16, 2023
Press release: https://t.co/xu7XXTUUyT https://t.co/HtYymXpqjf
Join #PauseAI for an upcoming peaceful protest at the Melbourne Convention and Exhibition Centre (MCEC), where Sam Altman will be giving a talk.
- Date & Time: Friday, June 16, 2 pm AEST
- Venue: Main entrance of MCEC, 1 Convention Centre Place, South Wharf, VIC 3006, Australia
- Protest Times: 1:30 pm to 3 pm (arrival time) & 4:30 pm onwards (departure time)
- Logistics: Bring signs and flyers; no fee is required to participate, and a Startup Victoria membership ticket is currently free
On Friday, June 16th, volunteers from the new PauseAI movement will gather at the Melbourne Convention and Exhibition Centre to urge the Australian government to take the lead on pausing the development of more powerful and dangerous AI systems.
A rapidly increasing number of AI experts signed a statement last week that reads:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This has been signed by virtually all of the leading AI labs (OpenAI, Google DeepMind, Anthropic) and hundreds of AI scientists, including Geoffrey Hinton, the “Godfather of AI”.
AI safety researchers have not reached a consensus on how large the risk of human extinction is. Results from the “Existential risk from AI survey” show that estimates range from 2% to 98%, with an average of 30%.
The protesters are urging the Australian government to take the lead on global AI safety and pause the development of more dangerous AI systems. They are also asking the government to prioritize a pause at the AI Safety Summit, which is being organized by the UK and will be held later in 2023.
Pausing AI development is a radically different approach to safety from what the AI lab CEOs like Sam Altman are proposing. OpenAI believes that “it would be unintuitively risky and difficult to stop the creation of superintelligence”, so they are pursuing further development toward superintelligence.
“We have a choice: do we risk everything to build a superintelligence that the public was never consulted on, or do we stop while we still can?” - PauseAI protesters
“AI companies are putting everything at risk; we’re already seeing the damage, and it will get far worse. Technology development is not inevitable, and pausing should be considered a feasible option. We can’t cede the future to a few CEOs who acknowledge they are willing to risk humanity for their dreams. We all deserve a say on our future, and a global pause gives us that chance.”
“Despite acknowledging the dangers of continued AI development, these companies are merely using it as an excuse to carry on, and seem to refuse to voluntarily give up this dangerous power. In such situations, global collaboration in reining in this dangerous development is key so that we make sure technology development works for all.”
“We may not have the luxury of time. AI developments are happening at a frantic pace, and we need to act now to prevent the worst-case scenarios. The summit in autumn could even be too late to prevent the worst. We need governments to pause AI development right now.”
For more information, please visit PauseAI.info.
- Michael Huang (Twitter)