UK Technology Secretary overlooks risks posed by AI
PauseAI UK says the Secretary of State for Science, Innovation and Technology, Liz Kendall, is overlooking the risks posed by AI systems after she said that a pause would leave “this powerful technology to be exploited by other nations to their advantage and our disadvantage.”
Pause advocates are not calling for Britain to abandon AI, nor for a unilateral halt while other countries race ahead. PauseAI is calling for an international agreement to pause the development of frontier systems until there is clear consensus that they can be built and governed safely. This would apply only to the most powerful AI systems, not to systems designed for specific, narrow applications such as medical diagnosis.
Joseph Miller, Director of PauseAI UK, said:
“No serious advocate is calling for Britain to shut down its AI industry unilaterally. We are calling for an international agreement and we expect the country that hosted the Bletchley Park Summit to lead it, not duck it.”
The scientific problem of making advanced AI reliably do what its operators intend remains unsolved. Hundreds of researchers, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, have publicly warned that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.
AI threatens the UK’s digital sovereignty
Ms Kendall did express concerns over the concentration of AI power, acknowledging that 70 percent of global AI computational resources are controlled by just five companies.
PauseAI echoes this concern: if systems capable of automating large parts of the economy are created, which is the direction in which development is currently headed, unprecedented economic and political power will be concentrated in a handful of tech companies. As Ms Kendall herself said, “We must shape this technology, not just be shaped by it.”
The Technology Secretary’s department houses the AI Security Institute (AISI), whose recent evaluation of Anthropic’s Claude Mythos found its hacking capabilities to be unprecedented after it became the first model to complete the institute’s 32-step simulated corporate network attack. Kanishka Narayan, the UK’s AI minister, has said UK businesses “should be worried” about the model’s ability to spot flaws in IT systems.
AI poses unacceptable risks of catastrophe, according to a growing consensus
Professor Stuart Russell, the author of the standard textbook used in over 1,500 universities worldwide, told MEPs in February that “eight out of ten top AI researchers are convinced that the creation of artificial general intelligence will lead to a loss of control.”
A House of Lords Library briefing notes: “Passive loss of control could occur if humans stopped exercising appropriate oversight over AI systems. This could be because the AI decisions were too opaque, complex or fast to allow for meaningful oversight.”
The next meeting of the international network of AI Security Institutes, chaired by the UK, will be held in July. “We hope to see AI safety prominent on the agenda. We speak for a growing number of concerned citizens who wish to see the UK government take a leading role in bringing us closer to an AI treaty that protects us all,” Miller said.
