Letter to Sir Demis Hassabis
Parliamentarians from across the UK call on Google DeepMind to honour their AI safety commitments
Voices of Concern
What parliamentarians are saying

"If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards."
Lord Browne of Ladyton

"Voluntary safety promises only work if they're transparent. It is important to understand the timeline, know the identity of those who have tested it, and have faith in the process."
Baroness Kidron

"Ethics cannot be an afterthought in AI development. Google's failure to honour their safety commitments betrays the trust necessary for responsible innovation."
The Lord Bishop of Oxford

"AI safety commitments without transparency are meaningless. The public has a right to know how these powerful systems are tested."
Baroness Chakrabarti

"AI safety isn't just for Silicon Valley to decide. Google must be transparent about who tests their systems."
Ben Lake MP

"Big Tech shouldn't be above the commitments they make. Google must come clean about their AI safety testing."
Sir Desmond Swayne TD MP
5 Brayford Square
London E1 0SG
Sir Demis Hassabis
Chief Executive Officer, Google DeepMind
London, United Kingdom
Dear Sir Demis,
We write to express profound concern about Google DeepMind's failure to honour the Frontier AI Safety Commitments signed at the AI Seoul Summit in 2024. The release of Gemini 2.5 Pro without the transparency required by paragraph VIII of the commitments represents a troubling breach of trust with governments and the public.
At the AI Seoul Summit, Google explicitly committed to:
- Conducting safety tests "before deploying" AI models with input from "independent third-party evaluators" as appropriate.
- Providing "public transparency" into testing processes.
- Disclosing "how, if at all, external actors, such as governments... are involved in the process".
Yet when you released Gemini 2.5 Pro on 25 March, no safety evaluation report accompanied it. A month later, only a minimal "model card" appeared, lacking substantive detail about external evaluations. Even when directly questioned by journalists, Google declined to confirm whether government agencies such as the UK AI Security Institute participated in testing.
This is not a matter of semantics or technicalities. Labelling a publicly accessible model as "experimental" does not absolve Google of its safety obligations. When anyone on the internet can use a frontier AI system, it has been deployed in every meaningful sense.
You yourself have stated that AGI may arrive within five years. Leading AI researchers, such as Geoffrey Hinton and Yoshua Bengio, estimate a 10% or greater chance that advanced AI could cause human extinction. These are not distant hypotheticals but near-term possibilities requiring immediate, serious action.
We are particularly troubled that Google, having helped establish these safety standards, would be among the first to abandon them. This sets a dangerous precedent that undermines global efforts to develop AI safely. If industry leaders treat safety commitments as optional when convenient, how can we expect others to take them seriously?
We therefore call on Google DeepMind to:
- Establish clear definitions of "deployment" that align with common understanding: when a model is publicly accessible, it is deployed.
- Publish a specific timeline for when safety evaluation reports will be released for all future models.
- Clarify unambiguously, for each model release, which government agencies and independent third parties are involved in testing, and the exact timelines of their testing procedures.
The development of artificial general intelligence may be humanity's most consequential undertaking. It demands the highest standards of responsibility, transparency, and caution. Google's technical capabilities come with commensurate obligations to society.
We await your response and concrete actions to address these critical concerns.
Yours sincerely,
60 Signatories
United in calling for AI safety transparency

Clare Adamson
MSP
Baroness Foster of Aghadrumsee
Peer
Dr Rosena Allin-Khan
MP
Doug Beattie MC
MLA
Siân Berry
MP
Miles Briggs
MSP
Keith Buchanan
MLA
Ariane Burgess
MSP
Lord Campbell-Savours
Peer
Gerry Carroll
MLA
Finlay Carson
MSP
Lord Cashman
Peer
Baroness Chakrabarti
Peer
Ellie Chowns
MP
Alex Cole-Hamilton
MSP
Viscount Colville of Culross
Peer
Baroness D'Souza
Peer
Ann Davies
MP
Carla Denyer
MP
Stewart Dickson
MLA
Baroness Miller of Chilthorne Domer
Peer
Baroness Ritchie of Downpatrick
Peer
Connie Egan
MLA
Deborah Erskine
MLA
Baroness Featherstone
Peer
Luke Fletcher
MS
Heledd Fychan
MS
Harry Harvey
MLA
Mike Hedges
MS
Bill Kidd
MSP
Baroness Kidron
Peer
Lord Browne of Ladyton
Peer
Ben Lake
MP
The Lord Bishop of Leeds
Bishop
Naomi Long
MLA
Peter Martin
MLA
Sinéad McLaughlin
MLA
Stuart McMillan
MSP
Lord McNally
Peer
Llinos Medi
MP
Iqbal Mohamed
MP
The Lord Bishop of Oxford
Bishop
Baroness Prashar
Peer
Yasmin Qureshi
MP
Adrian Ramsay
MP
Jenny Rathbone
MS
Willie Rennie
MSP
Baroness Harris of Richmond
Peer
Liz Saville Roberts
MP
Baroness Kennedy of The Shaws
Peer
Lord Strasburger
Peer
Sir Desmond Swayne
MP
Carolyn Thomas
MS
Michelle Thomson
MSP
Baroness Uddin
Peer
Lee Waters
MS
Lord Knight of Weymouth
Peer
Elena Whitham
MSP
Lord Singh of Wimbledon
Peer
Baroness Morris of Yardley
Peer

Learn More
More detail about Google DeepMind's violation can be found in our background information document.
How You Can Help
PauseAI volunteers emailed their MPs asking them to sign this letter to Sir Demis Hassabis, calling for transparency in Google DeepMind's AI safety commitments.