
Rebutting skeptical arguments about AI existential risks

This page is a summary of the AI Risk Skepticism article by Ambartsoumean & Yampolskiy.

We’ll have a long time to prepare

  • Skeptics claim AI progress is not as fast as some predict, and AGI is still far away. They point to past failed predictions and limitations of current AI systems.
  • However, the pace of AI progress has in fact been rapid, with capabilities growing exponentially in many subfields. While exact predictions are hard, continued progress makes the eventual arrival of powerful AI systems inevitable. And even if AGI is distant, AI safety research itself needs ample lead time.

AI cannot have human-like capabilities

  • Skeptics argue AI lacks qualities associated with human intelligence, such as creativity, general reasoning, emotions, and consciousness. They claim computers can only optimize narrow tasks.
  • But AI systems are already displaying some human-like capabilities like creativity and general game playing. There is no fundamental reason AI could not continue advancing across all dimensions of intelligence. AI does not need consciousness or emotions to pose risks.

AI cannot have goals or autonomy

  • Skeptics say AI systems merely optimize the goals we give them and cannot act independently or form goals of their own. Autonomous, unpredictable, self-directed behavior is, on this view, a myth.
  • However, complex AI systems can develop emergent autonomy and goals, especially drives toward self-preservation, as predicted by the theory of AI drives. And even an AI with no autonomy of its own is not safe if humans misuse it.

AI will not have uncontrolled power

  • Skeptics argue AI systems will remain limited tools under human control. They see no path by which AI could gain unbounded intelligence and the power to take over.
  • However, it takes only one uncontrolled AI system to cause serious harm, and AI capabilities are likely to eventually outstrip our ability to control them. Underestimating the power of exponential technological progress is shortsighted.

AI will be aligned with human values

  • Skeptics expect beneficial values to emerge naturally as AI gets smarter, drawing analogies to friendly domesticated animals and to humanity's moral progress.
  • There is no guarantee of such value alignment absent concerted efforts. Creating AI aligned with complex, nuanced human values faces steep technical challenges requiring extensive research.

Regulation will prevent AI risks

  • Skeptics say regulatory oversight and ethical guidelines will restrain harmful AI applications, so we need not worry.
  • But regulatory policy often lags behind technological development, especially when progress is exponential, and self-regulation in a competitive environment is insufficient. Technical AI safety research therefore remains crucial.

Conclusion

The skeptical arguments generally rest on flawed reasoning: they underestimate the exponential pace and unpredictability of AI progress and underappreciate the difficulty of aligning AI with human values. Given the stakes involved, a cautious, proactive approach to AI safety makes sense. Though future prospects remain unclear, dismissing AI existential risks outright seems unwise; more nuanced, technical analysis and debate are needed.