
Quotes

[About frontier AI:] Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood [...] There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.

The Bletchley Declaration,

signed by 28 countries, including all the leading AI nations, and the EU.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Statement on AI risk,

signed by hundreds of AI experts and other notable figures, including three of the most cited scientists ever and senior figures from all the top AI labs. Endorsed by the UK Prime Minister and the President of the European Commission.

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Pause Giant AI Experiments letter,

signed by 30,000+ people, including academic AI researchers and industry CEOs. Note that this differs from our ask, which is an indefinite pause.

The development of full artificial intelligence could spell the end of the human race [...] It would take off on its own, and re-design itself at an ever increasing rate.

Stephen Hawking

Theoretical Physicist and Cosmologist

[...] the other thing that I think is maybe the most dangerous thing out there of anything, because there’s no real solution — the AI, as they call it.

Donald Trump

President of the United States

Artificial Intelligence is one of the most powerful tools of our time, but to seize its opportunities, we must first mitigate its risks. [...] Social media has shown us the harm that powerful technology can do without the right safeguards in place [...] we must be clear-eyed and vigilant about the threats emerging — of emerging technologies that can pose — don’t have to, but can pose — to our democracy and our values.

Joe Biden

46th President of the United States

Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world [...] If we become leaders in this area, we will share this know-how with [the] entire world, the same way we share our nuclear technologies today.

Vladimir Putin

President of Russia

AI must be guided in a direction that is conducive to the progress of humanity. So there should be a red line in AI development, a red line that must not be crossed [...] It should not just benefit only a small group of people, but benefit the overwhelming majority of mankind [...] It is essential that we work together and coordinate with each other.

Li Qiang

China's Head of Government

[We] should not underestimate the real threats coming from AI [...] It is moving faster than even its developers anticipated [...] We have a narrowing window of opportunity to guide this technology responsibly.

Ursula von der Leyen

Head of the executive branch of the European Union

AI poses a long-term global risk. Even its own designers have no idea where their breakthrough may lead. I urge [the UN Security Council] to approach this technology with a sense of urgency [...] Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead.

António Guterres

Secretary-General of the United Nations

I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms [...] [AI] may pose a risk to our survival and endanger our common home.

Pope Francis

Head of the Catholic Church

Get this wrong, and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale. Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse. And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’.

Rishi Sunak

United Kingdom's former Prime Minister

[...] just as AI has the potential to do profound good, it also has the potential to cause profound harm. From AI-enabled cyberattacks at a scale beyond anything we have seen before to AI-formulated bio-weapons that could endanger the lives of millions, these threats are often referred to as the “existential threats of AI” because, of course, they could endanger the very existence of humanity. These threats, without question, are profound, and they demand global action.

Kamala Harris

Former Vice President of the United States

The potential impact of AI might exceed human cognitive boundaries. To ensure that this technology always benefits humanity, we must regulate the development of AI and prevent this technology from turning into a runaway wild horse [...] We need to strengthen the detection and evaluation of the entire lifecycle of AI, ensuring that mankind has the ability to press the pause button at critical moments.

Zhang Jun

China's Ambassador to the United Nations

It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.

Alan Turing

Father of Computer Science and Artificial Intelligence

AI is a rare case where I think we need to be proactive in regulation instead of reactive [...] I think that [digital super intelligence] is the single biggest existential crisis that we face and the most pressing one. It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely [...] And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.

Elon Musk

Founder/Co-Founder of OpenAI, Neuralink, SpaceX, xAI, and PayPal; CEO of Tesla; CTO of X/Twitter

Superintelligent AIs are in our future. [...] There’s the possibility that AIs will run out of control. [Possibly,] a machine could decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us.

Bill Gates

Co-Founder of Microsoft

[Suggesting about how to ask for a global regulatory body:] “any compute cluster above a certain extremely high-power threshold – and given the cost here, we’re talking maybe five in the world, something like that – any cluster like that has to submit to the equivalent of international weapons inspectors” […] I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it.

Sam Altman

Co-Founder and CEO of OpenAI, Former President of Y Combinator

We must take the risks of AI as seriously as other major global challenges, like climate change. It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI [...] then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things.

Demis Hassabis

Co-Founder and CEO of DeepMind

When I think of why am I scared [...] I think the thing that's really hard to argue with is like, there will be powerful models; they will be agentic; we're getting towards them. If such a model wanted to wreak havoc and destroy humanity or whatever, I think we have basically no ability to stop it.

Dario Amodei

Co-Founder and CEO of Anthropic, Former Head of AI Safety at OpenAI

[About a Pause] I don’t rule it out. And I think that at some point over the next five years or so, we’re going to have to consider that question very seriously.

Mustafa Suleyman

CEO of Microsoft AI, Co-Founder of DeepMind

The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well [...] It’s not that it’s going to actively hate humans and want to harm them, but it’s just going to be too powerful, and I think a good analogy would be the way humans treat animals [...] And I think by default that’s the kind of relationship that’s going to be between us and AGIs which are truly autonomous and operating on their own behalf.

Ilya Sutskever

One of the most cited scientists ever for his contributions to modern AI, Co-Founder and Former Chief Scientist at OpenAI

The exact way the post-AGI world will look is hard to predict — that world will likely be more different from today’s world than today’s is from the 1500s [...] We do not yet know how hard it will be to make sure AGIs act according to the values of their operators. Some people believe it will be easy; some people believe it’ll be unimaginably difficult; but no one knows for sure.

Greg Brockman

Co-Founder and Former CTO of OpenAI

[Talking about times near the creation of the first AGI] you have the race dynamics where everyone's trying to stay ahead, and that might require compromising on safety. So I think you would probably need some coordination among the larger entities that are doing this kind of training [...] Pause either further training, or pause deployment, or avoiding certain types of training that we think might be riskier.

John Schulman

Co-Founder of OpenAI

Do possible risks from AI outweigh other possible existential risks…? It's my number 1 risk for this century [...] A lack of concrete AGI projects is not what worries me, it's the lack of concrete plans on how to keep these safe that worries me.

Shane Legg

Co-Founder and Chief AGI Scientist at DeepMind

[After resigning at OpenAI, talking about sources of risks] These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there [...] OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI.

Jan Leike

Former co-lead of the Superalignment project at OpenAI

The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer [...] The alarm bell I’m ringing has to do with the existential threat of them taking control [...] If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further [...] it’s as if aliens had landed and people haven’t realized because they speak very good English

Geoffrey Hinton

One of the three most cited scientists ever, Godfather of modern AI, Turing Award Recipient; resigned from Google due to ethical concerns

It's very hard, in terms of your ego and feeling good about what you do, to accept the idea that the thing you've been working on for decades might actually be very dangerous to humanity... I think that I didn't want to think too much about it, and that's probably the case for others [...] Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.

Yoshua Bengio

One of the three most cited scientists ever, Godfather of modern AI, Turing Award Recipient

There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future. It’s a question of when and how, not a question of if.

Yann LeCun

Godfather of modern AI, Turing Award Recipient, Chief AI Scientist at Meta

If we pursue [our current approach], then we will eventually lose control over the machines.

Stuart Russell

Co-Author of the most popular AI textbook, Co-Founder of the Center for Human-Compatible AI

An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

I. J. Good

Cryptologist at Bletchley Park with Alan Turing

Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them... That the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

Samuel Butler

Novelist, "Darwin among the Machines", 1863

I’ve not met anyone in AI labs who says the risk [from training a next-gen model] is less than 1% of blowing up the planet. It’s important that people know lives are being risked [...] One thing that a pause achieves is that we will not push the Frontier, in terms of risky pre-training experiments.

Jaan Tallinn

Co-Founder of Skype, Kazaa, Future of Life Institute

I do not expect something actually smart to attack us with marching robot armies with glowing red eyes where there could be a fun movie about us fighting them. I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us.

Eliezer Yudkowsky

Co-Founder of the Machine Intelligence Research Institute

Everyone should be very unhappy if you built a bunch of AIs who are like, 'I really hate these humans but they will murder me if I don't do what they want'. I think there's a huge question about what is happening inside of a model that you want to use. This is the kind of thing that is both horrifying from a safety perspective and also a moral perspective.

Paul Christiano

Head of AI Safety at the US AI Safety Institute, Founder of the Alignment Research Center, and Former Head of the Alignment Team at OpenAI

Could we solve this problem? Create AI that empowers us rather than disempowering? [...] Yes! It is possible! But, it is not what happens by default!

Connor Leahy

Hacker, CEO of ConjectureAI, Ex-Head of EleutherAI

Background credits

Pause Giant AI Experiments letter

“Steve Wozniak speaking at an event in Paradise Valley, Arizona.” by Gage Skidmore. Modified and licensed under CC BY-SA 3.0.

Li Qiang

“李强 Li Qiang 20230313 02” by China News Service. Modified and licensed under CC BY 3.0.

Pope Francis

“pope-francis-1a” by ThiênLong. Modified and licensed under CC BY-SA 2.0.

Zhang Jun

“Zhang Jun (2022-06-08)” by China News Service (中国新闻社). Modified and licensed under CC BY 3.0.

Sam Altman

“Disrupt SF TechCrunch Disrupt San Francisco 2019 - Day 2 (48838377432)” by TechCrunch. Modified and licensed under CC BY 2.0.

Demis Hassabis

“PhotonQ-Demis Hassabis on Artificial Playful Intelligence (15366514658) (2)” by PhOtOnQuAnTiQuE. Modified and licensed under CC BY-SA 2.0.

Dario Amodei

“TechCrunch Disrupt 2023 - Day 2” by TechCrunch. Modified and licensed under CC BY 2.0.

Mustafa Suleyman

“POZ_0446” by collision.conf. Modified and licensed under CC BY 2.0.

Greg Brockman

“TechCrunch Disrupt San Francisco 2019 - Day 2” by TechCrunch. Modified and licensed under CC BY 2.0.

Geoffrey Hinton

“RCZ_1437” by collision.conf. Modified and licensed under CC BY 2.0.

Yoshua Bengio

“Yoshua Bengio (36468299790)” by Jérémy Barande. Modified and licensed under CC BY-SA 2.0.

Yann LeCun

“Yann Lecun during a conference at EPFL, Lausanne, 5 october 2018” by Alain Herzog. Modified and licensed under CC BY 4.0.

Stuart Russell

“CTBTO Science and Technology conference” by The Official CTBTO Photostream, London. Modified and licensed under CC BY 2.0.

Eliezer Yudkowsky

“Eliezer Yudkowsky, Stanford 2006 (square crop)” by Roland Dobbins. Modified and licensed under CC BY-SA 2.0.