
Polls & surveys

Catastrophic risks from AI

  • AI researchers, AIImpacts 2022: give “really bad outcomes (such as human extinction)” a mean probability of 14% (median 5%). 82% believe the control problem is important.
  • AI researchers, AIImpacts 2023: average p(doom) between 14% and 19.4%, depending on how the question is phrased. 86% believe the control problem is important.
  • AI engineers / startup founders, State of AI Engineering: over 60% have a p(doom) above 25%. Only 12% have a p(doom) of 0.
  • AI safety researchers, AlignmentForum: respondents assigned a median probability of 20% to existential risk from insufficient technical safety research, and 30% to existential risk from AI systems failing to do what the people deploying them intended, with huge variation (for example, there are data points at both ~1% and ~99%).
  • UK citizens, PublicFirst: think there is a 9% probability that humans will go extinct because of AI. About 50% say they are very or somewhat worried about this.
  • German citizens, Kira: only 14% believe AI will have a positive influence on the world, 40% expect a mixed influence, and 40% a negative one.
  • US citizens, RethinkPriorities: a majority agree with (59%) and support (58%) the x-risk statement. Disagreement (26%) and opposition (22%) were relatively low, and sizable proportions remained neutral (12% and 18% for the agreement and support formats, respectively).
  • Australian citizens, Ready Research: 80% think AI risk is a global priority; 64% want the government to focus on catastrophic outcomes (compared to only 25% on job loss and 5% on bias).

Regulations & governance

  • US citizens, RethinkPriorities: 50% support a pause on AI development, while 25% oppose one.
  • US citizens, YouGov: 72% want AI development to slow down, 8% want it to speed up. 83% of voters believe AI could accidentally cause a catastrophic event.
  • US citizens, YouGov: 73% believe AI companies should be held liable for harms from technology they create, 67% think the power of AI models should be restricted, and 65% believe keeping AI out of the hands of bad actors is more important than providing AI’s benefits to everyone.
  • US citizens, AIPI: support “an international treaty to ban any ‘smarter-than-human’ artificial intelligence (AI)” by 49% to 20%, and support “preventing AI from quickly reaching superhuman capabilities” by 70% to 14%.
  • US CS professors, Axios Generation Lab: about 1 in 5 predicted AI will “definitely” stay in human control. The rest were split between those saying AI will “probably” or “definitely” get out of human control and those saying “probably not”. Just 1 in 6 said AI shouldn’t or can’t be regulated. Only a handful trust the private sector to self-regulate.
  • US citizens, Sentience Institute: there was broad support for steps that could slow down AI development. People supported public campaigns to slow down AI development (71.3%), government regulation that slows down development (71.0%), and a six-month pause on some kinds of AI development (69.1%). 62.9% supported a ban on artificial general intelligence (AGI) that is smarter than humans.
  • UK citizens, YouGov: 74% believe the government should prevent superhuman AI from quickly being created. Over 60% support a treaty with a global ban on superintelligence.
  • UK citizens, AISCC: 83% said that governments should require AI companies to prove their AI models are safe before releasing them.
  • NL, US, UK citizens, Existential Risk Observatory: public awareness of AI existential risk grew in the US from 7% to 15%, and reached 19% in the Netherlands and the UK. Support for a government-mandated AI pause rose in the US from 56% to 66%.