(Top)

Why we might have superintelligence sooner than most think

Current state-of-the-art AI models are already superhuman in many domains, but luckily not in all. If we reach superintelligence before we solve the alignment problem, we face a risk of extinction. So having an estimated range for when we could have superintelligence is essential to making sure we don’t get caught off guard. If our predictions are too far off, we may not be able to prepare in time.

But how far off are we? When will we have superintelligence? It could be sooner than most think.

Compounding exponential growth

AI models require algorithms, data, and chips. Each of these components is rapidly improving due to huge investments in AI. The improvements compound, leading to exponential growth in AI capabilities.

  • More chips. ChatGPT was trained on 10,000 specialized chips. Meta has announced that they will have 600,000 next-gen chips to train their next AI models this year.
  • Faster chips. Every year chips get faster due to new architectures and lithography innovations. The chips that Meta is using are 10x faster than the chips used for ChatGPT. We’re also seeing highly specialized hardware like the Groq chips, which are 13x faster than the competition. On a longer timeline, ternary architectures or photonic chips could make chips even faster.
  • More data. GPT-3 was trained on 45TB of text; GPT-4 used about 20x as much. AI companies are starting to use video data, audio data, and even synthetic data to train these models.
  • Better data. The “Textbooks are all you need” paper showed that using high-quality synthetic data can drastically improve model performance, even when far less data and compute are used.
  • Better algorithms. The Transformer architecture enabled the current LLM revolution. New architectures can enable similar capability jumps. The Mamba model, for example, shows 5x higher throughput.
  • Better runtimes. Agentic runtimes, Retrieval Augmented Generation, or even simply clever prompting (through Graph of Thought, for example) can have a huge impact on the capabilities of these models.

It is entirely possible that simply scaling up will get us to dangerous capabilities in a year or two, but with all these compounding factors, it could be even sooner.
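To make the compounding point concrete, here is a minimal back-of-the-envelope sketch in Python. The yearly growth rates are purely illustrative assumptions chosen for this example, not measured figures; the point is only that several modest factors multiply into a large combined one.

```python
# Back-of-the-envelope sketch of how independent improvement factors compound.
# All growth rates below are illustrative assumptions, not measured values.

yearly_gains = {
    "more chips": 2.0,         # assumed: training clusters grow in size
    "faster chips": 1.5,       # assumed: per-chip speedup from new hardware
    "more data": 1.5,          # assumed: growth in usable training data
    "better data": 1.3,        # assumed: quality gains, e.g. synthetic data
    "better algorithms": 1.7,  # assumed: architecture and training improvements
}

compound = 1.0
for gain in yearly_gains.values():
    compound *= gain

print(f"Combined effective improvement per year: ~{compound:.1f}x")
# With these made-up numbers, the factors together give roughly 10x per year,
# far more than any single factor suggests on its own.
```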

We reached human-level performance in many domains in 2023

In 2022, AI researchers thought it would take 17 years before an AI could write a New York Times bestseller. A year later, a Chinese professor won a writing contest with an AI-written book.

On Metaculus, the community prediction for (weak) AGI was 2057 just three years ago; now it’s 2026.

Now, let’s dive into the definition of AGI used in that Metaculus question:

  • Score >90% on the Winograd Schema Challenge
  • Score >75% on the SAT
  • Pass a Turing test
  • Finish Montezuma’s Revenge

GPT-4 scores 94.4% on the Winograd Schema Challenge, 93% on the SAT reading exam, and 89% on the SAT math exam. It hasn’t passed the Turing test, but probably not because it lacks the capability: GPT-4 has been fine-tuned not to mislead people, because it’s not good for business if your AI tells people it’s actually a person. That only leaves Montezuma’s Revenge. It is not unthinkable that a clever setup of GPT-4 could finish it, using something like AutoGPT to analyze the screen and generate the correct inputs. In May 2023, GPT-4 was able to write code to get diamond gear in Minecraft. In short: GPT-4 meets two of the four criteria with certainty, with the other two within reach.

We’re there, folks. We already have (weak) AGI. It did not take us 35 years, it took us three. We were off by a factor of 10.

Why most underestimate the progress of AI

There are many reasons why people underestimate the progress of AI.

  • It’s hard to keep up. Almost daily we see new breakthroughs in AI. It’s almost impossible to keep up with the pace of progress. You’re not alone if you feel like you’re falling behind.
  • We keep moving the goalposts. In the 90s, people thought the holy grail of AI was something that could play chess. When AI beat Kasparov, the next challenge was Go. Now we have machines that score in the 99.9th percentile on IQ tests, can translate 26 languages, and win photography contests, yet we’re still asking questions like “When will AI reach human level?”. It already surpasses us in many areas, but we always focus on the increasingly small number of things we can still do better.
  • We like to think that we’re special. Humans like to feel that we are special. If an AI can do what we can do, we’re not special anymore. This is a hard pill to swallow, and the brain has many defense mechanisms to avoid this.
  • We’re really bad at exponential growth. We tend to structurally and predictably underestimate how exponential growth compounds over time. This has been shown in scientific studies; the toy calculation after this list shows how large the error can get.
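As a rough illustration of that last point, the sketch below assumes a quantity that doubles every year (the doubling time is an arbitrary choice for the example) and compares a linear extrapolation of the first year’s growth with the actual exponential outcome.

```python
# Toy illustration of why linear intuition underestimates exponential growth.
# Assumption: some capability doubles every year; the starting value is arbitrary.

years = 10
start = 1.0
first_year_gain = start  # it grew from 1 to 2, i.e. +1 in the first year

linear_extrapolation = start + first_year_gain * years  # 1 + 10 = 11
exponential_reality = start * 2 ** years                # 2^10 = 1024

print(f"Linear guess after {years} years:       {linear_extrapolation:.0f}")
print(f"Exponential result after {years} years: {exponential_reality:.0f}")
# After only ten doublings, the linear guess is off by roughly a factor of 100.
```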

Luckily there are still some things that an AI can’t do yet. It cannot hack better than the best hackers, and it cannot do AI research as well as the best AI researchers. When we reach either of these thresholds, we will be in a new regime of increased risk.

So when will we reach the point where an AI can do all these things at a superhuman level? When will we have a superintelligence?

The Ilya threshold

I think the crucial point we should consider is the point at which an AI can do AI research better than someone like Ilya Sutskever (former chief scientist at OpenAI). An AI that can make meaningful contributions to AI algorithms and architectures is likely to be able to improve itself. Let’s call this point of potential self-improvement the Ilya threshold. Once an AI reaches it, it might improve itself because it was explicitly instructed to do so, or because being smarter is a useful sub-goal for other goals (AIs are already creating their own sub-goals). These iterations might take weeks (training GPT-3 took 34 days), but it is also possible that some type of runtime improvement makes significant progress in a matter of minutes: an Intelligence Explosion.
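To see why the length of each iteration matters so much, here is a deliberately crude toy model of recursive self-improvement. Every parameter in it is an assumption chosen for illustration (the 34-day starting point is borrowed from the GPT-3 training figure above); it is a cartoon, not a forecast.

```python
# Toy model of recursive self-improvement. All parameters are illustrative assumptions.

capability = 1.0          # relative AI-research ability (1.0 = the Ilya threshold)
iteration_days = 34.0     # assumed first cycle length, roughly a GPT-3-scale training run
gain_per_iteration = 1.5  # assumed capability multiplier per self-improvement cycle

elapsed = 0.0
for i in range(1, 11):
    elapsed += iteration_days
    capability *= gain_per_iteration
    iteration_days /= gain_per_iteration  # assumed: a smarter system iterates faster
    print(f"iteration {i:2d}: day {elapsed:6.1f}, capability {capability:5.1f}x")

# With these invented parameters, each cycle is shorter than the last, so most of the
# capability gain arrives in a brief final burst. Different assumptions give very
# different curves, which is exactly why the iteration time is the crux.
```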

So how far off are we from the Ilya threshold? It’s fundamentally difficult to predict when certain capabilities will emerge as LLMs scale, but so far we’ve seen many capabilities emerge that were previously thought to be far off. GPT-4 is already an impressive programmer, and combined with AutoGPT it can do autonomous research on the internet. An AI that can autonomously do AI research and make meaningful improvements to a codebase does not seem impossible in the near future.

Better chips, more data, and better algorithms will all contribute to reaching the Ilya threshold. We have no idea how to align such an AI (even OpenAI admits this), and the consequences of having a misaligned superintelligence are likely to be catastrophic.

Policy implications

We could have a superintelligence within months. Even if the probability of that is small, a 1% risk of an outcome this catastrophic is unacceptably large. We can only conclude that we need to slow down AI development right now. It’s up to each of us to take action and make sure that we don’t get caught off guard.