Take action
The group of people who are aware of AI risks is still small.
You are now one of them.
Your actions matter more than you think.
You can do this
- Join PauseAI and help us grow.
- Join our Discord, where our community is most active. We have a #projects channel where people are working on campaigns, videos, images, apps and more.
- Learn more about AI alignment and the risks we are facing.
- Protest: join one of the protests or organize one yourself.
- Lobby: convince your government to work towards a pause and prepare for the summit.
- Talk to people in your life about this. Answer their questions, and get them to act.
- Share content about AI risk on social media. This website might be a good starting point.
- Create articles, videos or memes. Collaborate with others in the Discord server (in the #projects channel).
- Sign petitions:
  - Pause giant AI experiments
  - Demand responsible AI
  - Statement on AI risk
  - or one of the national petitions: UK, AUS, NL.
- Follow our social media channels to stay updated.
If you are convincing
- Convince one person in your government to prepare for the summit and work towards a global pause. This is the most important thing you can do. See lobby tips.
- Convince journalists to write about AI safety. Point them to educational materials.
- Ask the management at your current organization to take an institutional position on this.
- Write to your political representatives.
If you are a politician / work in government
- Prepare for the AI safety summit. Form coalitions with other countries. Get informed about the problem and solutions.
- Invite (or subpoena) AI lab leaders to parliamentary/congressional hearings to share their predictions and timelines for AI disasters.
- Establish a committee to investigate the risks of AI.
If you know (international) law
- Help draft policy. See the draft examples and frameworks for starting points.
- Make submissions to government requests for comment on AI policy (example).
If you have money to spare
- Donate to the Campaign for AI Safety, the Future of Life Institute, the Machine Intelligence Research Institute, or NonLinearNetwork.
If you can write web content
If you work in AI
- Don’t work towards superintelligence. If you have a cool idea for making AI systems 10x faster, please don’t build it, spread it, or talk about it. We need to slow down AI development, not speed it up.
- Talk to your management and colleagues about the risks. Get them to take an institutional position on this.
- Hold a seminar on AI safety at your workplace. Check out the videos for inspiration.
- Sign the open letter.
If you work on AI safety
If you are just starting out in AI Alignment, unless you are extremely skilled and/or have had significant new flashes of insight on the problem, consider switching to advocacy for the Pause. Without the Pause in place first, there just isn’t time to spin up a career in Alignment to the point of making useful contributions.
If you are already established in Alignment, consider more public communication, and adding your name to calls for the Pause and regulation of the AI industry.
Tips for being effective
- Be bold in your public communication of the danger. Don’t use hedging language or caveats by default; mention them when questioned, or in footnotes, but don’t make it sound like you aren’t that concerned if you are.
- Be less exacting in your work. Apply the 80/20 rule more. Don’t do the classic geek thing and spend months agonizing and iterating on your Google doc over endless rounds of feedback. Get your project out into the world and iterate as you go. Time is of the essence.
Consider this: all our other work may just be the equivalent of rearranging deckchairs on the Titanic. We need to be running to the bridge, grabbing the wheel, and steering away from the iceberg. We may not have much time, but we can try. We can do this!