State-of-the-art AI capabilities vs humans

Let’s take a look at how the most competent AI systems compare with humans in various domains. The list below is regularly updated to reflect the latest developments.

Last update: 2024-03

Superhuman (Better than all humans)

  • Games: For many games (Chess, Go, StarCraft, Dota, Gran Turismo, etc.) the best AI is better than the best human.
  • Memory: An average human can hold about 7 items (such as digits) in working memory at a time. Gemini 1.5 Pro can read a 7-million-word text and recall its contents with 99% accuracy.
  • Thinking speed: AI models can read thousands of words per second, and write at speeds far surpassing any human.
  • Learning speed: A model like Gemini 1.5 Pro can read an entire book in 30 seconds, and can pick up an entirely new language well enough to translate texts in half a minute.
  • Amount of knowledge: GPT-4 knows far more than any single human: its knowledge spans virtually every domain, down to details like URLs.
  • Storage efficiency: GPT-4 has about 1.7 trillion parameters, whereas humans have about 100 to 1000 times as many (in the form of synapses). However, GPT-4 knows thousands of times more, storing more information in a far smaller number of parameters.
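The storage-efficiency comparison above is simple enough to check with back-of-the-envelope arithmetic. The figures below are rough, illustrative assumptions: GPT-4's rumoured parameter count, and common estimates of 100 trillion to 1 quadrillion synapses in the human brain (treating a synapse as loosely analogous to a parameter):

```python
# Rough comparison of "parameter" counts (all figures are estimates).
gpt4_params = 1.7e12          # ~1.7 trillion parameters (rumoured)
human_synapses_low = 1e14     # ~100 trillion synapses (low estimate)
human_synapses_high = 1e15    # ~1 quadrillion synapses (high estimate)

# How many times more "parameters" a human brain has than GPT-4:
ratio_low = human_synapses_low / gpt4_params
ratio_high = human_synapses_high / gpt4_params

print(f"Humans have roughly {ratio_low:.0f}x to {ratio_high:.0f}x more 'parameters' than GPT-4")
```

Even under these crude assumptions, the ratio lands in the article's "100 to 1000 times" ballpark, which is what makes GPT-4's breadth of knowledge per parameter notable.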

Better than most humans

Worse than most humans

  • Saying “I don’t know”. Virtually all Large Language Models suffer from ‘hallucination’: making up information instead of admitting they do not know. This may seem like a relatively minor shortcoming, but it is a crucial one: it makes LLMs unreliable and strongly limits their applicability.
  • Movement. The Atlas robot can walk, throw objects and do somersaults, but it is still limited in its movements. Google’s RT-2 can turn objectives into actions in the real world, such as “move the cup to the wine bottle”.
  • Continuous learning. Current SOTA LLMs separate learning (‘training’) from doing (‘inference’). Humans, by contrast, learn and act at the same time, and can learn from a single example.
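The training/inference split can be sketched in a few lines of toy code. This is a deliberately simplified, hypothetical model (a single weight fitted with toy gradient steps), not any real LLM's training loop; the point is only that learning and doing happen in two separate phases, and nothing seen at inference time updates the weights:

```python
# Toy model: y ≈ w * x, with a single learnable weight.
weights = {"w": 0.0}

def train(examples, lr=0.1):
    # Phase 1 ('training'): weights are updated from data.
    for x, y in examples:
        weights["w"] += lr * (y - weights["w"] * x) * x  # gradient step on squared error

def infer(x):
    # Phase 2 ('inference'): weights are frozen. The model only applies
    # what it already learned; new inputs never change the weights.
    return weights["w"] * x

train([(1.0, 2.0)] * 50)   # learn that y is roughly 2 * x
print(infer(3.0))           # uses the frozen weight; no further learning happens
```

A human, by contrast, would keep updating after every single interaction; there is no frozen "inference-only" mode.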

The endpoint

As time progresses and capabilities improve, we move items from the lower sections to the top section. When certain dangerous capabilities are achieved, AI will pose new risks. At some point, AI will outcompete every human on every metric imaginable. When we have built this superintelligence, we will probably soon be dead.