
List of p(doom) values

p(doom) is the probability of very bad outcomes (e.g. human extinction) as a result of AI. It most often refers to the likelihood of AI taking over from humanity, but other scenarios can also constitute "doom": for example, a large portion of the population dying from a novel biological weapon created by AI, social collapse caused by a large-scale cyber attack, or AI triggering a nuclear war. Note that not everyone uses the same definition when stating their p(doom). Most notably, the time horizon is often left unspecified, which makes comparing values difficult.
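To see why an unspecified time horizon matters, here is a minimal sketch. It assumes, purely for illustration, a constant and independent per-year risk; the 1% annual figure is hypothetical and not taken from anyone on this list. A fixed annual risk r compounds to a cumulative probability of 1 - (1 - r)^T over T years, so the same underlying risk reads as a very different p(doom) depending on the horizon.

```python
# Illustrative sketch only: how a hypothetical constant annual risk
# compounds into a cumulative p(doom) over different time horizons.

def cumulative_pdoom(annual_risk: float, years: int) -> float:
    """Probability of at least one catastrophe within `years`,
    assuming an independent, constant per-year risk."""
    return 1 - (1 - annual_risk) ** years

for years in (10, 50, 100):
    print(f"1% annual risk over {years:>3} years -> "
          f"p(doom) = {cumulative_pdoom(0.01, years):.0%}")
# 1% annual risk over  10 years -> p(doom) = 10%
# 1% annual risk over  50 years -> p(doom) = 39%
# 1% annual risk over 100 years -> p(doom) = 63%
```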

Each p(doom) percentage links to its source.
  • Roman Yampolskiy
    AI safety scientist

  • Eliezer Yudkowsky
    Founder of MIRI

  • Dan Hendrycks
    Head of Center for AI Safety

  • Daniel Kokotajlo
    Forecaster & former OpenAI researcher

  • Zvi Mowshowitz
    Independent AI safety journalist

  • Holden Karnofsky
    Co-founder of Open Philanthropy

  • Emad Mostaque
    Founder of Stability AI

  • Jan Leike
    Former alignment lead at OpenAI

  • Paul Christiano
    Head of AI safety at the US AI Safety Institute; formerly at OpenAI; founded ARC

  • AI engineers

    (Estimated mean value; the survey methodology may be flawed)
  • Joep Meindertsma
    Founder of PauseAI

    (The remaining 60% consists largely of "we can pause".)
  • Eli Lifland
    Top competitive forecaster

  • Scott Alexander
    Popular Internet blogger at Astral Codex Ten

  • AI safety researchers

    (Mean from a 2021 survey of 44 AI safety researchers)
  • Geoffrey Hinton
    One of the three godfathers of AI

    (Recently said "Kinda 50-50" on good outcomes for humanity. Earlier he mentioned 10%.)
  • Emmett Shear
    Co-founder of Twitch, former interim CEO of OpenAI

  • Reid Hoffman
    Co-founder of LinkedIn

  • Yoshua Bengio
    One of the three godfathers of AI

  • Dario Amodei
    CEO of Anthropic

  • Lina Khan
    Chair of the FTC

  • Elon Musk
    CEO of Tesla, SpaceX, X

  • Machine learning researchers

    (Mean from a 2023 survey; median values were 5-10% depending on question design)
  • Vitalik Buterin
    Ethereum founder

  • Forecasting Research Institute Superforecasters

    (In the same study, domain experts estimated AI x-risk at 3% and AI catastrophe at 12%)
  • Yann LeCun
    One of the three godfathers of AI; works at Meta

    (Less likely than an asteroid impact)

Do something about it

However high your p(doom) is, you probably agree that we should not allow AI companies to gamble with our future. Join PauseAI to prevent them from doing so.