Why we need AI Safety Summits
AI presents numerous risks to humanity, including the risk of extinction. Progress in AI capabilities is accelerating at a frantic pace, and we are not prepared for the consequences. AI companies are locked in a race to the bottom, in which safety is not the highest priority. We need governments to step in and prevent AI from reaching superhuman levels before we know how to make it safe. This pause needs to happen at an international level, because countries are locked in a race similar to that of the companies. International agreement means treaties, and that requires countries to meet in person and negotiate. The only way to achieve a true pause is through a summit.
There have been some examples of international summits & resulting treaties that have been successful in reducing risks:
- Montreal Protocol (1987): The Montreal Protocol is an international environmental treaty designed to protect the ozone layer by phasing out the production and consumption of ozone-depleting substances. It has been highly successful in reducing the use of substances like chlorofluorocarbons (CFCs) and has contributed to the gradual recovery of the ozone layer.
- Stockholm Convention on Persistent Organic Pollutants (2001): The Stockholm Convention is an international treaty aimed at protecting human health and the environment from persistent organic pollutants (POPs). These are toxic chemicals that persist in the environment, bioaccumulate in living organisms, and can have serious adverse effects on human health and ecosystems. Scientists raised concerns about the harmful effects of POPs, including their ability to travel long distances through air and water currents. The convention led to the banning or severe restrictions on the production and use of several POPs, including polychlorinated biphenyls (PCBs), dichlorodiphenyltrichloroethane (DDT), and dioxins.
Upcoming Summits
November 2024 San Francisco AI Safety Conference
In September, the UK AISI and the US government surprised us with the announcement of a new summit. Or, more precisely, two new convenings in San Francisco.
On November 20th-21st, the US government is organizing the first international meeting of AI safety institutes, aiming to “begin advancing global collaboration and knowledge sharing on AI safety”. The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. China is notably absent from this list - even though China’s new AI Safety Institute has recently been announced. This very much seems like a missed opportunity, as the China-US relationship is of the utmost importance when it comes to AI safety.
On November 21st-22nd, the British AISI is hosting a conference in San Francisco. The main goal here is to “convene experts from signatory companies and from research organizations to discuss the most pressing challenges in the design and implementation of frontier AI safety frameworks”.
One could argue that some safety-minded higher-ups were disappointed by the choices that France made, and decided that a true Safety Summit was needed soon.
2025 France AI Action Summit
During the 2023 Bletchley summit, France opted to host the next major summit in November 2024, but later postponed it to February 2025. It was also renamed to the “AI Action Summit”, dropping the all-important “Safety” focus. We’ve been told that safety will be just one of five tracks at the summit. It is led by AI sceptic Anne Bouverot, who is dismissive of “alarmist discourse”, comparing AI with calculators and AI safety concerns with Y2K concerns, while being certain that “AI is not going to replace us, but rather help us”. It seems increasingly unlikely that this summit will lead to the types of international regulations that we are calling for.
Past Summits
2023 UK AI Safety Summit
The primary goal of PauseAI was to convince one government to organize such a summit. Just 5 weeks after the first PauseAI protest, the UK government announced that they would host an AI safety summit, which was held on November 1st and 2nd 2023. The summit was relatively small (only 100 people were invited) and was held in Bletchley Park. Although it did not lead to a binding treaty, it did lead to the “Bletchley Declaration”, which was signed by all 28 attending countries. In this declaration, the countries acknowledged AI risks (including ‘issues of control relating to alignment with human intent’). This summit also led to two follow-up summits being announced for 2024, in Seoul and Paris.
2024 South Korea AI Safety Summit (May 21st, 22nd)
For months, it was unclear what the scope of this Seoul summit would be. All we knew was that it was going to be a “virtual mini summit” - a rather unambitious way to deal with the highly alarming calls for regulation. In April 2024, the second AI safety summit was officially announced by the UK government. We organized a protest on May 13th to convince our ministers to attend the summit (some were not even planning on attending) and to initiate treaty negotiations toward a pause.
The Summit led to the following things:
- 16 companies (the most prominent AI companies) signed the “Frontier AI Safety Commitments”, which means these companies will publish Responsible Scaling Policies (RSPs). Previous voluntary commitments were ignored.
- A new statement was signed by 27 countries.