Learn why AI safety matters
One of the most important things you can do to help with AI alignment and the existential risk (x-risk) that superintelligence poses is to learn about it. Here are some resources to get you started.
Videos
- Don’t look up - The Documentary: The Case For AI As An Existential Threat (17 mins). Powerful and nicely edited documentary about the dangers of AI, with many expert quotes from interviews.
- How to get Empowered, not overpowered by AI (15 mins). A brief introduction to the importance of getting AI alignment right.
- Robert Miles’ YouTube videos are a great place to start understanding the fundamentals of AI alignment.
- Max Tegmark interview with Lex Fridman (2 hrs). Interview that dives into the details of our current dangerous situation. “It’s like ‘Don’t Look Up’, but we are building the asteroid ourselves.”
- Max Tegmark TED Talk (2023) (15 mins). AI capabilities are improving faster than expected.
- The AI Dilemma (1 hr). Presentation about the dangers of AI and the race in which AI companies are stuck.
- How not to destroy the world with AI (1 hr). Presentation by Stuart Russell.
- Exploring the dangers from Artificial Intelligence (25 mins). Summary of cybersecurity, biohazard, and power-seeking AI risks.
Websites
- AISafety.info. Comprehensive database of questions and answers about AI safety.
- NavigatingAIRisks.ai. A blog with various interesting articles.
- IncidentDatabase.ai. Database of incidents where AI systems caused harm.
Podcasts
- AI X-Risk Research podcast. In-depth interviews with experts in the field of AI alignment.
- Future of Life Institute podcast
Articles
- The ‘Don’t Look Up’ Thinking That Could Doom Us With AI (by Max Tegmark)
- Pausing AI Developments Isn’t Enough. We Need to Shut it All Down (by Eliezer Yudkowsky)
- Preventing an AI-related catastrophe (by 80,000 Hours)
- The AI Revolution: The Road to Superintelligence (by WaitButWhy)
- AI Alignment, Explained in 5 Points
- How rogue AIs may arise (by Yoshua Bengio)
- A simple explanation of why advanced AI could be incredibly dangerous
Courses
- AGI Safety Fundamentals (30 hrs)
- CHAI Bibliography of Recommended Materials (50+ hrs)
- AIsafety.training: Overview of training programs, conferences, and other events
Organisations
- Future of Life Institute, led by Max Tegmark, started the open letter calling for a pause on giant AI experiments.
- FutureSociety
- Conjecture. Start-up working on AI alignment and AI policy, led by Connor Leahy.
- Existential Risk Observatory. Dutch organization that informs the public about x-risks and studies communication strategies.
- Center for AI Safety (CAIS). Research organization focused on reducing societal-scale risks from AI, led by Dan Hendrycks.
- Center for Human-Compatible Artificial Intelligence (CHAI), led by Stuart Russell.
- Machine Intelligence Research Institute (MIRI), doing mathematical research on AI safety, founded by Eliezer Yudkowsky.
- Centre for the Governance of AI
- Institute for AI Policy and Strategy (IAPS)
- The AI Policy Institute
- AI Safety Communications Centre
- TheMidasProject. Runs corporate pressure campaigns to push for AI safety.
Books
- Superintelligence: Paths, Dangers, Strategies (Nick Bostrom)
- The Alignment Problem (Brian Christian)
- Human Compatible: Artificial Intelligence and the Problem of Control (Stuart Russell)
- Our Final Invention: Artificial Intelligence and the End of the Human Era (James Barrat)
- The Precipice: Existential Risk and the Future of Humanity (Toby Ord)