Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Statement on AI Risk
Signed by hundreds of experts, including leaders of the top AI labs and leading AI scientists
If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further.
Geoffrey Hinton
Nobel Prize winner & "Godfather of AI"
The development of full artificial intelligence could spell the end of the human race.
Stephen Hawking
Theoretical physicist and cosmologist
... we should have to expect the machines to take control.
Alan Turing
Inventor of the modern computer
If we pursue [our current approach], then we will eventually lose control over the machines.
Stuart Russell
Co-author of the standard AI textbook
Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.
Yoshua Bengio
Deep learning pioneer and Turing Award winner