  • We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in
    Statement on Superintelligence

    110,000+ signatories including AI researchers, political, faith and industry leaders, artists and media celebrities

  • Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
    Statement on AI Risk

    Signed by hundreds of experts, including leaders of the top AI labs and prominent AI scientists

  • If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further.
    Geoffrey Hinton

    Nobel Prize winner & "Godfather of AI"

  • The development of full artificial intelligence could spell the end of the human race.
    Stephen Hawking

    Theoretical physicist and cosmologist

  • It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage therefore, we should have to expect the machines to take control.
    Alan Turing

    Inventor of the modern computer

  • If we pursue [our current approach], then we will eventually lose control over the machines.
    Stuart Russell

    Co-author of the standard AI textbook

  • Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.
    Yoshua Bengio

    AI Turing Award winner

of AI scientists believe that the technical challenge of aligning AIs with human values is real & important

of citizens want AI to be slowed down by our governments