Polls & surveys on AI governance, safety and risks
How much do regular people and experts worry about AI risks?
Catastrophic risks from AI
- AI researchers, AIImpacts: assign a mean probability of 14% (median: 5%) to “really bad outcomes (such as human extinction)”. Only 18% believe the control problem is not important.
- AI engineers / startup founders, State of AI Engineering: over 60% have a p(doom) above 25%. Only 12% have a p(doom) of 0.
- AI safety researchers, AlignmentForum: respondents assigned a median probability of 20% to x-risk from insufficient technical safety research, and 30% to x-risk from AI systems failing to do what the people deploying them intended, with huge variation (for example, there are data points at both ~1% and ~99%).
- UK citizens, PublicFirst: think there’s a 9% probability that humans will go extinct because of AI. About 50% say they’re very or somewhat worried about this.
- German citizens, Kira: Only 14% believe AI will have a positive influence on the world; 40% expect a mixed influence and 40% a negative one.
- US citizens, RethinkPriorities: 59% agree with and 58% support the x-risk statement. Disagreement (26%) and opposition (22%) were relatively low, and sizable proportions remained neutral (12% in the agreement format, 18% in the support format).
Regulations & governance
- US citizens, RethinkPriorities: 50% support a pause on AI development, 25% oppose one.
- US citizens, YouGov: 72% want AI development to slow down, 8% want it to speed up.
- US citizens, YouGov: 73% believe AI companies should be held liable for harms caused by the technology they create, 67% think the power of AI models should be restricted, and 65% believe that keeping AI out of the hands of bad actors is more important than providing AI’s benefits to everyone.
- US citizens, TheAIPI: support “an international treaty to ban any ‘smarter-than-human’ artificial intelligence (AI)” by 49% to 20%, and support “preventing AI from quickly reaching superhuman capabilities” by 70% to 14%.
- US CS professors, Axios Generation Lab: About 1 in 5 predicted AI will “definitely” remain in human control; the rest were split between those saying AI will “probably” or “definitely” get out of human control and those saying it “probably” won’t. Just 1 in 6 said AI shouldn’t or can’t be regulated. Only a handful trust the private sector to self-regulate.
- US citizens, Sentience Institute: There was broad support for steps to slow down AI development: public campaigns to slow development (71.3%), government regulation that slows development (71.0%), and a six-month pause on some kinds of AI development (69.1%). Support for a ban on artificial general intelligence (AGI) smarter than humans was 62.9%.
- UK citizens, YouGov: 74% believe the government should prevent superhuman AI from being created quickly. Over 60% support a global treaty banning superintelligence.
- UK citizens, AISCC: 83% said that governments should require AI companies to prove their AI models are safe before releasing them.