Towards the next AI Safety Summit (Seoul 2024)

AI presents numerous risks to humanity, including the risk of extinction. Progress in AI capabilities is accelerating at a frantic pace, and we are not prepared for the consequences. AI companies are locked in a race to the bottom in which safety is not the highest priority. We need governments to step in and prevent AI from reaching superhuman levels before we know how to build it safely. This pause needs to happen at an international level, because countries are locked in a race similar to the one between companies. The only way to achieve a true pause is through a summit.

What should the next AI safety summit achieve?

The goal of a summit is often a treaty: a formal agreement between two or more states concerning peace, alliance, commerce, or other international relations. In our case, we are hoping for an AI Safety Treaty, an agreement between the participating states to pause AI development until the risks are better understood.

2023 UK AI Safety Summit

The primary goal of PauseAI was to convince one government to organize such a summit. Just 5 weeks after the first PauseAI protest, the UK government announced that it would host an AI safety summit, which was held on November 1st and 2nd 2023 at Bletchley Park. The summit was relatively small (only 100 people were invited) and, unfortunately, it did not lead to a treaty. However, it did lead to the “Bletchley Declaration”, which was signed by all 28 attending countries. In this declaration, the countries acknowledged AI risks (including ‘issues of control relating to alignment with human intent’). In our opinion, this declaration is an important first step, yet it is far from enough. We need an actual binding treaty that pauses frontier AI development.

This summit also led to the announcement of two follow-up summits in 2024, in Seoul and Paris.

2024 South Korea AI Safety Summit (May 21st, 22nd)

For months, it was unclear what the scope of this Seoul summit would be. All we knew was that it was going to be a “virtual mini summit”: a rather unambitious way to respond to the highly alarming calls for regulation.

In April 2024, the second AI safety summit was officially announced by the UK government. The first day will be a virtual event; the second day will be an in-person event in Seoul for digital ministers. Unfortunately, a report from Reuters tells us that several countries are not planning to attend. We are organizing a protest on May 13th to convince our ministers to attend the summit and initiate treaty negotiations.

2025 France AI Action Summit

France opted to host an AI Safety Summit in November 2024, but has postponed it to February 2025. It was also renamed the “AI Action Summit”, dropping the all-important “safety” focus. We’ve been told that safety will be just one of five tracks at the summit. It is led by AI-sceptic Anne Bouverot, who is dismissive of “alarmist discourse”, comparing AI to calculators, likening AI safety concerns to Y2K concerns, and asserting that “AI is not going to replace us, but rather help us”. It seems increasingly unlikely that this summit will lead to the types of international regulations that we are calling for.

Examples of summits that resulted in treaties

  • Montreal Protocol (1987): The Montreal Protocol is an international environmental treaty designed to protect the ozone layer by phasing out the production and consumption of ozone-depleting substances. It has been highly successful in reducing the use of substances like chlorofluorocarbons (CFCs) and has contributed to the gradual recovery of the ozone layer.
  • Stockholm Convention on Persistent Organic Pollutants (2001): The Stockholm Convention is an international treaty aimed at protecting human health and the environment from persistent organic pollutants (POPs). These are toxic chemicals that persist in the environment, bioaccumulate in living organisms, and can have serious adverse effects on human health and ecosystems. Scientists raised concerns about the harmful effects of POPs, including their ability to travel long distances through air and water currents. The convention led to bans or severe restrictions on the production and use of several POPs, including polychlorinated biphenyls (PCBs), dichlorodiphenyltrichloroethane (DDT), and dioxins.

Suggested educational agenda

Many people will be attending the AI safety summit, and not all of them will be deeply familiar with AI safety. They must be able to follow the discussions and make informed decisions. Therefore, we believe it is paramount to make education on x-risk a part of the summit.

One particularly interesting (yet unconventional) approach is to require attendees to learn about AI safety before attending the summit. Additionally, the summit itself should include a few days of education on AI safety and policy.

The following is a suggested agenda for the summit:

  • Introduction to Artificial Intelligence. Without understanding the basics of AI, it is almost impossible to understand the risks.
    • Neural networks
    • Large language models
    • Market dynamics of AI
  • AI safety. The difficulty of the alignment problem is not obvious. Understanding the core challenges of the field is necessary to understand the urgency of the situation.
    • What is a superintelligence?
    • The alignment problem
    • Instrumental convergence
    • The orthogonality thesis
  • AI safety policy. Governing the complex field of AI safety is not easy. We need to understand the challenges and opportunities of AI safety policy.
    • International level
    • Funding AI safety research
    • Risks of AI research publications
    • Governance of open source models
  • Negotiation of the treaty. See our proposal for concrete suggestions on the contents of the treaty.