
Email Builder

A web app to help you write an email to a politician. Convince them to Pause AI!

Why sending an email is awesome

  • Exit your filter bubble. If you're talking about AI risks or pausing AI in a Discord server or on Twitter, you're mostly preaching to the choir. With email, you can reach people who don't read about this stuff all day.
  • It's the medium for the pros. Politicians, journalists, and lobbyists all use email. If you want to be taken seriously, you should use email too.
  • No social pressure. If you post something publicly, a politician might be hesitant to respond on a topic they haven't yet made up their mind about.
  • Not many people actually do it. That means that your email will stand out.

Who to send to

  • Ideally, someone who might attend the next Summit. The next AI Safety Summit will be attended by many countries. Who is likely to represent your country? Maybe a minister of foreign affairs or science?
  • Someone who is likely to act. Is there a politician who's often at the forefront of discussing new digital / science topics? Perhaps even someone who's already shared concerns about AI? Or someone who's just good at pitching new, controversial topics and convincing others?
  • Someone who politically represents you. Maybe a politician in parliament from the party that you voted for.
  • Enter their name:

Pick a concern

  • What are you most concerned about? Don't be afraid of being judged for your concerns. It's the job of politicians to represent you - including the things that you worry about.
  • Consider the person you're writing to, and what they may already believe. If you're writing to someone who has already worked on IT and cybersecurity issues, consider focusing on that particular issue.
  • Select one:

Pick an action

  • What do you want the recipient to do after receiving your email? Prepare for the summit, organise a debate, have a meeting? As with every section, you can replace the suggested text if you have a better idea.
  • Select one:

Last steps

Before sending the email, you have to manually replace "__THING__" and "__COUNTRY__". It can also be effective to personalise the message further. Here are some tips:

  • Know your audience. Read up about the person you're sending a letter to. What are they working on? How do they think about AI? What has happened in their professional life in the last few weeks?
  • Share something about yourself. Why do you care about AI safety? Why did you take the time to send this letter?
  • Make it newsworthy. The email template is not always up to date, so make sure you mention recent AI policy developments (especially local ones).

For more information, you can take a look at our page on how to write a letter or email to someone in power.

Result

You can edit the message directly in the browser.

Dear __ENTER_NAME__,

First of all, thank you very much for everything you have done for __THING__. I am emailing you today to bring an issue to your attention, in which I believe __COUNTRY__ and you in particular can play a very important role. The issue is the existential threat of artificial intelligence.

Half of AI researchers believe that there is a 10% or greater chance that the invention of artificial superintelligence will mean the end of humanity. Among AI safety scientists, this chance is estimated to be an average of 30%. Notable examples of individuals sounding the alarm are Prof. Geoffrey Hinton and Prof. Yoshua Bengio, both Turing Award winners and pioneers of the deep learning methods that are currently achieving the most success. The existential risk of AI has been acknowledged by hundreds of scientists, the UN, the US and recently the EU.

To make a long story short: we don't know how to align AI with the complex goals and values that humans have. If a superintelligent system is built, there is a significant risk that it will pursue a misaligned goal without us being able to stop it. And even if such a superhuman AI remains under human control, whoever wields that power could use it to drastically and irreversibly change the world. Such an AI could be used to develop new technologies and weapons, manipulate masses of people, or topple governments.

AI has advanced much faster than anticipated. In 2020, it was estimated that an AI would pass university entrance exams by 2050. This goal was achieved in March 2023 by OpenAI's GPT-4. These massive, unexpected leaps have prompted many experts to request a pause in AI development through an open letter to major AI companies. The letter has been signed over 33,000 times so far, including by many AI researchers and tech figures.

Unfortunately, it seems that companies are not willing to jeopardise their competitive position by voluntarily halting development. A pause would need to be imposed by a government. Luckily, there seems to be broad support for slowing down AI development: a recent poll indicates that 63% of Americans support regulations to prevent AI companies from building superintelligent AI. At the national level, however, a pause is also challenging, because countries have an incentive not to fall behind in AI capabilities. That's why we need an international solution.

The UK organised an AI Safety Summit on November 1st and 2nd at Bletchley Park. We hoped that during this summit, leaders would work towards sensible solutions to prevent the very worst of the risks that AI poses. The Summit did not lead to any international agreement or policy. We have seen proposals written by the US Senate, and even among AI company CEOs there is “overwhelming consensus” that regulation is needed. Unfortunately, none of the existing proposals would do anything to slow down or prevent the creation of a superintelligent AI. I am afraid that lobbying efforts by AI companies to keep regulation to a minimum are proving highly effective.

I would like to ask you to work towards a treaty that prevents the worst of the risks that AI poses, and to propose a draft at the upcoming AI Safety Summit in Seoul in May. One country should take the lead, and at this moment, not a single one is doing so.

The most important part of such a treaty is a mechanism that can pause dangerous training runs. Ideally, this intervention happens before a model is trained, as AI accidents might already happen during lab tests.

Best regards,

__YOUR NAME__