
The difficult psychology of existential risk

Most people initially respond to the topic of AI existential risk with a mix of ridicule, denial and disbelief. The fear often only sets in after they have spent a long time thinking about it.

The psychology of existential risk is not often discussed, compared to the technical aspects of AI safety, yet it is arguably just as important. After all, if we can’t get people to take the topic seriously and act on it, we won’t be able to do anything about it.

It’s hard to bring up, hard to believe, hard to understand, and hard to act on. Having a better understanding of why these things are so difficult can help us be more convincing, effective and empathetic.

Difficult to bring up

X-risk is a difficult subject to bring up in conversation, especially if you are a politician. People may think you are crazy, and you may not feel comfortable talking about such a technically complex topic.

Fear of being ridiculed

The first reaction to existential risk is often just laughing it off. We’ve seen this happen on camera in the White House, the first time x-risk was brought up there. This in turn makes it harder to bring up the topic again, as others will fear being ridiculed for doing so.

Professionals can fear that their reputations will be damaged if they share their concerns.

As Jeff Clune put it: “It was almost dangerous from a career perspective to admit you were worried.”

Pushing for sensible policy measures (like pausing) can be considered “extremist” or “alarmist”, both of which can lower your credibility or reputation.

Fear of being called racist/cultist/conspiracy-theorist

In recent months, various conspiracy theories have sprung up. Some individuals have stated that all AI safety people are racist and that AI safety is a cult. Some have stated that AI ‘doomers’ are part of a conspiracy by big tech to “hype up” AI. These ridiculous accusations can make concerned people hesitant to share their concerns.

However, before getting angry at the people making these accusations, keep in mind that they can be the result of fear and denial (see below). Acknowledging the dangers of AI is scary, and it can be easier to dismiss the messenger than to internalize the message.

A complex topic to argue about

People like talking about things they are knowledgeable about. The technical difficulty of AI safety makes it a daunting topic for most people. It takes time and effort to understand the arguments. As a politician, you don’t want to be caught saying something that is wrong, so you might just avoid the topic altogether.

Difficult to believe

Even if there is a discussion on existential risk, it is difficult to convince people that it is a real problem. There are various reasons why most people will instantly dismiss the idea.

Normalcy Bias

We all know the images of disasters in movies, right? People shouting and running in panic. It turns out the opposite is often true: about 80% of people show symptoms of normalcy bias during disasters - not seeking shelter during a tornado, disregarding government warnings, continuing to shake hands in the early days of COVID. The normalcy bias describes our tendency to underestimate the possibility of disaster and to believe that life will continue as normal, even in the face of significant threats or crises.

People mill about, asking others for their opinions, because they want to be told that everything is fine. They will keep asking, and delaying, until they get the answer they want.

During 9/11, for example, survivors waited an average of 6 minutes before evacuating the towers, with some waiting up to half an hour to leave. Around 1,000 people even took the time to shut down their computers and finish other office activities - a way of continuing to engage in normal activities during an unknown situation.

From “The frozen calm of normalcy bias”

Another example is the Space Shuttle Challenger disaster in 1986. Roger Boisjoly was an engineer who predicted that it would blow up, but none of his managers wanted to believe it was possible:

We all knew if the seals failed the shuttle would blow up. I fought like Hell to stop that launch. I’m so torn up inside I can hardly talk about it, even now. We were talking to the right people, we were talking to the people who had the power to stop that launch.

From “Remembering Roger Boisjoly”

One explanation for why our brains refuse to believe that danger could be upon us is cognitive dissonance.

Cognitive dissonance

When confronted with new information, the brain will try to make it fit with what it already knows. Ideas that already match existing beliefs are easily added to our model of the world. Ideas that are too different from what we already believe will cause cognitive dissonance - we will feel uncomfortable and try to reject the ideas or find alternative explanations for what we hear.

A lot of beliefs that most people have will be challenged by the idea of existential risk:

  • Technology is there to serve us and can be easily controlled
  • There are smart people in charge who will make sure everything will be okay
  • I will probably grow old, and so will my children

The idea that AI poses an existential risk challenges all of these beliefs, so our brains look for alternative explanations for the warnings we hear from scientists:

  • They are paid by big tech
  • They are part of some conspiracy or cult
  • They just want attention or power

Internalizing that scientists are warning us because they believe we’re in danger conflicts with our existing beliefs - it causes too much cognitive dissonance.

The end of the world has never happened

Seeing is believing (see: Availability Heuristic). That’s a problem for extinction risk because we will never be able to see it before it is too late.

On the other hand, we have tons of evidence to the contrary. The end of the world has been predicted by many people, and every single one of them has been wrong so far.

So when people hear about existential risk, they will think it is just another one of those doomsday cult predictions. Try to understand this point of view, and don’t be too hard on people who think this way. They probably haven’t been shown the same information as you have.

We like to think that we are special

Both at a collective and at an individual level, we want to believe that we are special.

On a collective level, we like to think of humans as something very different from animals - Darwin’s idea that we evolved from apes was almost unthinkable to most. Most religions have stories about heaven or reincarnation, in which humans (or at least the believers) will in some way live forever. The idea that one day humanity may no longer exist is very jarring and difficult to internalize. We like to believe we have plot armor - that we are the main characters in a story, and that the story will have a happy ending. People may rationally consider the possibility of extinction, but they will not feel it. A video by Robert Miles titled “There’s no rule which says we will make it” explains this very well.

On an individual level, we take pride in our unique intellectual abilities. Many never wanted to believe that an AI might one day be able to create art, write books, or even beat us at chess. The thought that our own intelligence is just a product of evolution, and that it can be replicated by a machine, is something many people find hard to accept. This also makes it difficult to accept that an AI could be more intelligent than us.

Fiction has conditioned us to expect a happy ending

Most of what we know about existential risk comes from fiction. This probably does not help, because fictional stories are not written to be realistic: they are written to be entertaining.

In fiction there is often a hero, a conflict, hope, and finally a happy ending. We are conditioned to expect a struggle that we can fight and win. In sci-fi, AIs are often portrayed very anthropomorphically - as evil, as wanting to be human, as changing their goals. None of this matches what AI safety experts are actually worried about.

And in most stories, the hero wins. The AI makes some foolish mistake and the hero finds a way to outsmart the thing that’s supposed to be way smarter. The hero is protected by plot armor. In more realistic AI doom scenarios, there is no hero, no plot armor, no struggle, no humans-outsmarting-a-superintelligence, and no happy ending.

Progress has always been (mostly) good

Many of the technologies introduced into our society have been mostly beneficial for humanity. We have cured diseases, increased our life expectancy, and made our lives more comfortable. And every time, there have been people who opposed these innovations and warned about their dangers. The Luddites destroyed the machines that were taking their jobs, and people were afraid of the first trains and cars. These people have always been wrong - so it is easy to assume that those warning about AI will turn out to be wrong, too.

We don’t like to think about our death

The human mind does not like receiving bad news, and it has various coping mechanisms to deal with it. The most important ones when talking about x-risk are denial and compartmentalization. When it comes to our own death, specifically, we are very prone to denial. Books have been written about the Denial of Death.

These coping mechanisms protect us from the pain of having to accept that the world is not as we thought it was. However, they can also prevent us from adequately responding to a threat.

When you notice someone using these coping mechanisms, try to be empathetic. They are not doing it on purpose, and they are not stupid. It is a natural reaction to bad news, and we all do it to some extent.

Admitting your work is dangerous is hard

For those who have been working on AI capabilities, accepting its dangers is even harder.

Take Yoshua Bengio, for example: a brilliant mind and one of the pioneers of AI. AI safety experts had been warning about the potential dangers of AI for years, but it still took him a long time to take their warnings seriously. In an interview, he gave the following explanation:

“Why didn’t I think about it before? Why didn’t Geoffrey Hinton think about it before? […] I believe there’s a psychological effect that still may be at play for a lot of people. […] It’s very hard, in terms of your ego and feeling good about what you do, to accept the idea that the thing you’ve been working on for decades might actually be very dangerous to humanity. […] I think that I didn’t want to think too much about it, and that’s probably the case for others.”

It should surprise no one that some of the fiercest AI risk deniers are AI researchers themselves.

Easy to dismiss as conspiracy or cult

In the past year, the majority of the population was introduced to the concept of existential risk from AI. When hearing about this, people look for an explanation. The correct explanation is that AI is in fact dangerous, but believing this is difficult and scary: it causes a lot of cognitive dissonance. So people will almost immediately look for a different way to explain their observations. There are two alternative explanations that are much easier to believe:

  1. It’s all a big conspiracy. AI companies are hyping up AI to get more funding, and AI safety people are just part of this hype machine. This narrative matches various observations: companies often lie, many AI safety folk are employed by AI companies, and a number of billionaires finance AI safety research. However, we can also point out why this conspiracy story is simply not true. Many of those raising the alarm are scientists who have nothing to gain. The companies might benefit in some way, but until very recently (May 2023) they were almost completely silent about AI risks. This makes sense, as companies generally do not benefit from people fearing their product or service. We’ve been protesting outside of Microsoft and OpenAI partly because we wanted them to acknowledge the risks.
  2. It’s a cult. The group that believes in AI safety is just a bunch of crazy religious extremists who believe in the end of the world. This seems to fit, too, since people in the AI safety community are often very passionate about the topic and use all sorts of in-group jargon. However, it falls apart when you point out that the people who warn of AI risks are not a single organization. They are a large, diverse group: there is no single leader, there are no rituals, and there is no dogma.

What makes these explanations so compelling is not that they are easy to comprehend, or that they explain the observations particularly well - the primary reason is that they are comforting. Believing that people are warning about AI because there is a real threat is scary and difficult to accept.

Difficult to understand

The arguments for AI existential risk are often very technical, and we are very prone to anthropomorphizing AI systems.

AI alignment is surprisingly hard

People may intuitively feel like they could solve the AI alignment problem. Why not add a stop button? Why not raise the AI like a child? Why not Asimov’s three laws? Unlike with most technical problems, people will have an opinion on how to solve AI alignment and will underestimate the difficulty of the problem. Understanding why it is actually so difficult takes a lot of time and effort.

We anthropomorphize

We see faces in clouds, and we see human-like qualities in AI systems. Millions of years of evolution have made us highly social creatures, but these instincts are not always helpful. We tend to think of AIs as having human-like goals and motivations, being able to feel emotions, and having a sense of morality. We tend to expect a very intelligent AI to also be very wise and kind. This is one of the reasons why people intuitively think that AI alignment is easy, and why the Orthogonality thesis can be so counter-intuitive.

AI safety uses complex language

The AI safety field consists mostly of a small group of (smart) people who have developed their own jargon. Reading LessWrong posts can feel like reading a foreign language. Many posts assume the reader is already familiar with mathematical concepts, various technical concepts, and the jargon of the field.

Difficult to act on

Even if people do understand the arguments, it is still difficult to act on them. The scale of the problem is too large to grasp, we have coping mechanisms that downplay the risks, and if we do feel the gravity of the situation, we can still feel powerless.

Lack of innate fear response

Our brains have evolved to fear things that are dangerous. We instinctively fear heights, big animals with sharp teeth, sudden loud noises, and things that move around in an S-shape. A superintelligent AI does not hit any of our primal fears. Additionally, we have a strong fear of social rejection and of losing social status, which means that people tend to be afraid of speaking up about AI risks.

Scope insensitivity

“A single death is a tragedy; a million deaths is a statistic.” - Joseph Stalin

Scope insensitivity is the human tendency to underestimate the impact of large numbers. We do not care 10x as much about 1000 people dying as we do about 100 people dying. Existential risk means the death of all 8 billion people on Earth (not counting their descendants).

Even if there is a 1% chance of this happening, it is still a very big deal. Rationally, we should consider this 1% chance of 8 billion deaths just as important as the certain death of 80 million people.
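
To make that comparison concrete, here is the back-of-the-envelope expected-value arithmetic behind it (using the 1% figure above purely as an illustration):

0.01 × 8,000,000,000 = 80,000,000 expected deaths - the same as the certain death of 80 million people.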

If someone feels like the end of the world isn’t that big of a deal (you would be surprised by how often this happens), you can try to make things more personal. Humanity isn’t just some abstract concept: it is your friends, your family, and yourself. All the people you care about will die.

Our behavior is shaped by our environment and primitive minds

Our actions are conditioned by what is seen as normal, good, and reasonable. No matter how much a situation calls for action, if that action is uncommon, we often consciously or unconsciously fear being excluded by society for taking it. And our sense of what is normal is shaped by what surrounds us in our close social circles and online feeds. When the people around us are doing and talking about things unrelated to what we actually care about, those things crowd out our concerns and steer our daily behavior elsewhere.

Extinction risks deserve far more of our time, energy, and attention. Our reactions to them should be more like our reactions to life-or-death situations that fill us with adrenaline. But because of the abstract nature of the problem and our maladapted minds, most people who learn about these risks just end up carrying on with their days as if they have learned nothing.

Coping mechanisms (preventing action)

The same coping mechanisms that prevent people from believing in existential risk also prevent them from acting on it. If you’re in denial or compartmentalizing, you won’t feel the need to do anything about it.

Stress and anxiety

As I’m writing this, I’m feeling stressed and anxious. It’s not just because I fear the end of the world, but also because I feel like I have to do something about it. There’s a lot of pressure to act, and it can be overwhelming. This stress can be a good motivator, but it can also be paralyzing.

Hopelessness and powerlessness

When people do take the topic seriously, and the full gravity of the situation sinks in, they can feel like all hope is lost. It can feel like a cancer diagnosis: you are going to die sooner than you wanted, and there is nothing you can do about it. The problem is too big to tackle, and you are too small. Most people are not AI safety experts or experienced lobbyists, so how can they possibly do anything about it?

But you can help!

There are many things that you can do. Writing a letter, going to a protest, donating some money, or joining a community is not that hard! And these actions have a real impact. Even when facing the end of the world, there can still be hope and very rewarding work to do. Join PauseAI and become part of our movement.