Towards the next AI Safety Summit

AI presents numerous risks to humanity, including the risk of extinction. Progress in AI capabilities is accelerating at a frantic pace, and we are not prepared for the consequences. AI companies are locked in a race to the bottom, in which safety is not the highest priority. We need governments to step in and prevent AI from reaching superhuman levels before we know how to build it safely. This pause needs to happen at the international level, because countries are locked in a race similar to the one between companies.

The only way to achieve a true pause is through a summit.

What should an AI summit achieve?

The goal of a summit is often a treaty: a formal agreement between two or more states concerning peace, alliance, commerce, or other international relations. In our case, we are hoping for an AI Safety Treaty, an agreement between the participating states to pause AI development until the risks are better understood.

2023 UK AI Safety Summit

The primary goal of PauseAI was to convince one government to organize such a summit. Just five weeks after the first PauseAI protest, the UK government announced that it would host an AI safety summit, which was held on November 1st and 2nd, 2023, at Bletchley Park. The summit was relatively small (only 100 people were invited). It did not lead to a treaty, unfortunately. However, it did produce the “Bletchley Declaration”, which was signed by all 28 attending countries. In this declaration, the countries acknowledged AI risks (including ‘issues of control relating to alignment with human intent’). In our opinion, this declaration is an important first step, yet it is far from enough. We need an actual binding treaty that pauses frontier AI development.

Examples of summits that resulted in treaties

  • Montreal Protocol (1987): The Montreal Protocol is an international environmental treaty designed to protect the ozone layer by phasing out the production and consumption of ozone-depleting substances. It has been highly successful in reducing the use of substances like chlorofluorocarbons (CFCs) and has contributed to the gradual recovery of the ozone layer.
  • Stockholm Convention on Persistent Organic Pollutants (2001): The Stockholm Convention is an international treaty aimed at protecting human health and the environment from persistent organic pollutants (POPs). These are toxic chemicals that persist in the environment, bioaccumulate in living organisms, and can have serious adverse effects on human health and ecosystems. Scientists raised concerns about the harmful effects of POPs, including their ability to travel long distances through air and water currents. The convention led to the banning or severe restrictions on the production and use of several POPs, including polychlorinated biphenyls (PCBs), dichlorodiphenyltrichloroethane (DDT), and dioxins.

Suggested educational agenda

Many people will be attending the AI safety summit, and not all of them will be deeply familiar with AI safety. They must be able to follow the discussions and make informed decisions. Therefore, we believe it is paramount to make education on x-risk a part of the Summit.

One particularly interesting (yet unconventional) approach is to require attendees to learn about AI safety before attending the summit. Additionally, the summit itself should include a few days of education on AI safety and policy.

The following is a suggested agenda for the summit:

  • Introduction to Artificial Intelligence. Without understanding the basics of AI, it is almost impossible to understand the risks.
    • Neural networks
    • Large language models
    • Market dynamics of AI
  • AI safety. The difficulty of the alignment problem is not obvious. Understanding the core challenges of the field is necessary to understand the urgency of the situation.
    • What is a superintelligence?
    • The alignment problem
    • Instrumental convergence
    • Orthogonality thesis
  • AI safety policy. Governing the complex field of AI safety is not easy. We need to understand the challenges and opportunities of AI safety policy.
    • International level
    • Funding AI safety research
    • Risks of AI research publications
    • Governance of open source models
  • Negotiation of the treaty. See our proposal for concrete suggestions on the contents of the treaty.