
Open Letter

Letter to Sir Demis Hassabis

Parliamentarians from across the UK call on Google DeepMind to honour their AI safety commitments

4 Civil Society Organisations · 10+ Political Parties · 60 Parliamentarians

29 August 2025

Voices of Concern

What parliamentarians are saying

If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards.
Lord Browne of Ladyton
Voluntary safety promises only work if they're transparent. It is important to understand the timeline, know the identity of those who have tested it, and have faith in the process.
Baroness Kidron
Ethics cannot be an afterthought in AI development. Google's failure to honour their safety commitments betrays the trust necessary for responsible innovation.
The Lord Bishop of Oxford
AI safety commitments without transparency are meaningless. The public has a right to know how these powerful systems are tested.
Baroness Chakrabarti
AI safety isn't just for Silicon Valley to decide. Google must be transparent about who tests their systems.
Ben Lake MP
Big Tech shouldn't be above the commitments they make. Google must come clean about their AI safety testing.
Sir Desmond Swayne TD MP
From: PauseAI UK
5 Brayford Square
London E1 0SG
To: Sir Demis Hassabis
Chief Executive Officer, Google DeepMind
London, United Kingdom

Dear Sir Demis,

We write to express profound concern about Google DeepMind's failure to honour the Frontier AI Safety Commitments signed at the AI Seoul Summit in 2024. The release of Gemini 2.5 Pro without the transparency required by paragraph VIII of the commitments represents a troubling breach of trust with governments and the public.

At the AI Seoul Summit, Google explicitly committed to:

  • Conducting safety tests "before deploying" AI models with input from "independent third-party evaluators" as appropriate.
  • Providing "public transparency" into testing processes.
  • Disclosing "how, if at all, external actors, such as governments... are involved in the process".

Yet when you released Gemini 2.5 Pro on 25 March, no safety evaluation report accompanied it. A month later, only a minimal "model card" appeared, lacking any substantive detail about external evaluations. Even when directly questioned by journalists, Google refused to confirm whether government agencies like the UK AI Security Institute participated in testing.

This is not a matter of semantics or technicalities. Labelling a publicly accessible model as "experimental" does not absolve Google of its safety obligations. When anyone on the internet can use a frontier AI system, it has been deployed in every meaningful sense.

You yourself have stated that AGI may arrive within five years. Leading AI researchers, such as Geoffrey Hinton and Yoshua Bengio, estimate a 10% or greater chance that advanced AI could cause human extinction. These are not distant hypotheticals but near-term possibilities requiring immediate, serious action.

We are particularly troubled that Google, having helped establish these safety standards, would be among the first to abandon them. This sets a dangerous precedent that undermines global efforts to develop AI safely. If industry leaders treat safety commitments as optional when convenient, how can we expect others to take them seriously?

We therefore call on Google DeepMind to:

  1. Establish clear definitions of "deployment" that align with common understanding: when a model is publicly accessible, it is deployed.
  2. Publish a specific timeline for when safety evaluation reports will be released for all future models.
  3. Clarify unambiguously, for each model release, which government agencies and independent third parties are involved in testing, and the exact timelines of their testing procedures.

The development of artificial general intelligence may be humanity's most consequential undertaking. It demands the highest standards of responsibility, transparency, and caution. Google's technical capabilities come with commensurate obligations to society.

We await your response and concrete actions to address these critical concerns.

Yours sincerely,

60 Signatories

United in calling for AI safety transparency

Clare Adamson MSP
Baroness Foster of Aghadrumsee, Peer
Dr Rosena Allin-Khan MP
Doug Beattie MC MLA
Siân Berry MP
Miles Briggs MSP
Keith Buchanan MLA
Ariane Burgess MSP
Lord Campbell-Savours, Peer
Gerry Carroll MLA
Finlay Carson MSP
Lord Cashman, Peer
Baroness Chakrabarti, Peer
Ellie Chowns MP
Alex Cole-Hamilton MSP
Viscount Colville of Culross, Peer
Baroness D'Souza, Peer
Ann Davies MP
Carla Denyer MP
Stewart Dickson MLA
Baroness Miller of Chilthorne Domer, Peer
Baroness Ritchie of Downpatrick, Peer
Connie Egan MLA
Deborah Erskine MLA
Baroness Featherstone, Peer
Luke Fletcher MS
Heledd Fychan MS
Harry Harvey MLA
Mike Hedges MS
Bill Kidd MSP
Baroness Kidron, Peer
Lord Browne of Ladyton, Peer
Ben Lake MP
The Lord Bishop of Leeds
Naomi Long MLA
Peter Martin MLA
Sinéad McLaughlin MLA
Stuart McMillan MSP
Lord McNally, Peer
Llinos Medi MP
Iqbal Mohamed MP
The Lord Bishop of Oxford
Baroness Prashar, Peer
Yasmin Qureshi MP
Adrian Ramsay MP
Jenny Rathbone MS
Willie Rennie MSP
Baroness Harris of Richmond, Peer
Liz Saville Roberts MP
Baroness Kennedy of The Shaws, Peer
Lord Strasburger, Peer
Sir Desmond Swayne MP
Carolyn Thomas MS
Michelle Thomson MSP
Baroness Uddin, Peer
Lee Waters MS
Lord Knight of Weymouth, Peer
Elena Whitham MSP
Lord Singh of Wimbledon, Peer
Baroness Morris of Yardley, Peer

Learn More

More detail about Google DeepMind's violation can be found in our background information document.

How You Can Help

PauseAI volunteers emailed their MPs asking them to sign this letter to Sir Demis Hassabis, calling for transparency in Google DeepMind's AI safety commitments.