FAQ

Contents

Who are you?

We are a community of volunteers and local communities coordinated by a non-profit that aims to mitigate the risks of AI (including the risk of human extinction). We aim to convince our governments to step in and pause the development of superhuman AI. We do this by informing the public, talking to decision-makers, and organizing protests.

You can find us on Discord, Twitter, Substack, Facebook, TikTok, LinkedIn, YouTube and Instagram. You can contact us at joep@pauseai.info.

Aren’t you just scared of changes and new technology?

You might be surprised that most people in PauseAI consider themselves techno-optimists. Many of them are involved in AI development, are gadget lovers, and have mostly been very excited about the future. In particular, many have been excited about AI's potential to help humanity. That's why, for many of them, the sad realization that AI might pose an existential risk was a very difficult one to internalize.

Do you want to ban all AI?

No, only the development of the largest general-purpose AI systems, often called "frontier models". Almost all currently existing AI would be legal under our proposal, and most future AI models will remain legal, too. We're calling for a ban on AI systems more powerful than GPT-4, until we know how to build provably safe AI and have such systems under democratic control.

Do you believe GPT-4 is going to kill us?

No, we don’t think current AI models are an existential threat. It seems likely that the next generation of AI models won’t be either. But if we keep building more and more powerful AI systems, eventually we will reach a point where one will become an existential threat.

Can a Pause backfire and make things worse?

We’ve addressed these concerns in this article.

Is a Pause even possible?

AGI is not inevitable. It requires hordes of engineers with million-dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.

Read more about the feasibility of a Pause.

Who is paying you?

Virtually all of our actions so far have been carried out by volunteers. However, since February 2024, PauseAI is a registered non-profit foundation, and we have received multiple donations from individuals. We’ve also received 20k in funding from the LightSpeed network.

You can also donate to PauseAI if you support our cause! We use most of the money to enable local communities to organize events.

What are your plans?

Focus on growing the movement, organizing protests, lobbying politicians, and informing the public.

Check out our roadmap for a detailed overview of our plans and what we could do with more funding.

How do you think you can convince governments to pause AI?

Check out our theory of change for a detailed overview of our strategy.

Why do you protest?

  • Protesting shows the world that we care about this issue. By protesting, we show that we are willing to spend our time and energy to get people to listen.
  • Protests can and often will positively influence public opinion, voting behavior, corporate behavior and policy.
  • By far, most people are supportive of peaceful, non-violent protests.
  • There is no “backfire” effect unless the protest is violent. Our protests are peaceful and non-violent.
  • It’s a social bonding experience. You meet other people who share your concerns and willingness to take action.
  • Check out this amazing article for more insights on why protesting works.

If you want to organize a protest, we can help you with advice and resources.

How likely is it that superintelligent AI will cause very bad outcomes, like human extinction?

We have composed a list of ‘p(doom)’ values (probability of bad outcomes) from various notable experts in the field.

AI safety researchers (who are the experts on this topic) are divided on this question, and estimates range from 2% to 97%, with an average of 30%. Note that no (surveyed) AI safety researchers believe that there’s a 0% chance. However, there might be selection bias here: people who work in the AI safety field are likely to do so because they believe preventing bad AI outcomes is important.

If you ask AI researchers in general (not safety specialists), this number drops to a mean value of around 14%, with a median of 5%. A minority, about 20% of them, believe that the alignment problem is not a real or important problem. Note that there might be a selection bias here in the opposite direction: people who work in AI are likely to do so because they believe AI will be beneficial.

Imagine you’re invited to take a test flight on a new airplane, and the plane’s engineers think there’s a 14% chance it will crash. Would you board that plane? Because right now, we’re all boarding the AI plane.

How long do we have until superintelligent AI?

It might take months, it might take decades; nobody knows for sure. However, we do know that the pace of AI progress is often grossly underestimated. Just three years ago, forecasters expected we’d have SAT-passing AI systems in 2055. We got there in April 2023. We should act as if we have very little time left, because we don’t want to be caught off guard.

Read more about urgency.

If we Pause, what about China?

For starters, at this point, China has stricter AI regulations than virtually any other country. Until September 2023, they didn’t even allow chatbots and disallowed training on internet data. China has a more controlling government and thus has even more reason to fear the uncontrollable and unpredictable impacts of AI. During the UNSC meeting on AI safety, China was the only country that mentioned the possibility of implementing a pause.

Also note that we are primarily asking for an international pause, enforced by a treaty. Such a treaty would also need to be signed by China. If the treaty guarantees that other nations will stop as well, and there are sufficient enforcement mechanisms in place, this should be something China wants to see too.

OpenAI and Google are saying they want to be regulated. Why are you protesting them?

We applaud OpenAI and Google for their calls for international regulation of AI. However, we believe that the current proposals are not enough to prevent an AI catastrophe. Google and Microsoft have not yet publicly stated anything about the existential risk of AI. Only OpenAI explicitly mentions the risk of extinction, and again we applaud them for taking this risk seriously. However, their strategy is quite explicit: a Pause is impossible, so we need to get to superintelligence first. The problem is that they do not believe they have solved the alignment problem. The AI companies are locked in a race to the bottom, where AI safety is sacrificed for competitive advantage. This is simply the result of market dynamics. We need governments to step in and implement policies (at an international level) that prevent the worst outcomes.

Are AI companies pushing the existential risk narrative to manipulate us?

We can’t know for certain what motivations these companies have, but we do know that x-risk was not initially pushed by AI companies: it was raised by scientists, activists and NGOs. Let’s look at the timeline.

Many people have warned about x-risk since the early 2000s: Eliezer Yudkowsky, Nick Bostrom, Stuart Russell, Max Tegmark, and many others. They had no AI tech to push; they were simply concerned about the future of humanity.

The AI companies never mentioned x-risk until very recently.

Sam Altman is an interesting exception. He wrote about existential AI risk back in 2015, on his private blog, before founding OpenAI. In the years since, he made virtually no explicit mention of x-risk again. During the Senate hearing on May 16, 2023, when asked about his x-risk blog post, he answered only by talking about jobs and the economy. He was not pushing the x-risk narrative here; he was actively avoiding it.

In May 2023, everything changed: the major AI labs publicly acknowledged the risk.

These companies have been very slow to acknowledge x-risk, considering that many of their employees have been aware of it for years. So in our view, the AI companies are not pushing the x-risk narrative; they have been reacting to others pushing it, and they waited with their response until it was absolutely necessary.

The business incentives point in the other direction: companies would rather not have people worry about the risks of their products. Virtually all companies downplay risks to attract customers and investments, rather than exaggerating them. How much strict regulation and negative attention are these companies inviting by admitting these dangers? And would a company like OpenAI dedicate 20% of its compute resources to AI safety if it didn’t believe in the risks?

Here’s our interpretation: the AI companies signed the statement because they know that x-risk is a problem that needs to be taken very seriously.

A big reason many people still don’t want to believe that x-risk is a real concern: acknowledging that we are in fact in danger is a very, very scary thing.

Read more about the psychology of x-risk.

Ok, I want to help! What can I do?

There are many things that you can do. On your own, you can write a letter, post flyers, inform others, join a protest, or donate some money! But even more important: you can join PauseAI and coordinate with others who are taking action. If you want to contribute more, you can become a volunteer and join one of our teams.

Even when facing the end of the world, there can still be hope and very rewarding work to do. 💪