
Demis Hassabis, CEO of Google DeepMind, has predicted that artificial general intelligence (AGI), AI that matches or surpasses human intelligence, will emerge within the next five to ten years. While he acknowledges that AI capable of competing directly with humans remains some way off, he believes it will become a reality before long.
Hassabis, who leads Google DeepMind, was awarded the Nobel Prize in Chemistry last year for his contributions to developing AlphaFold, an AI system that predicts protein structures.
On Monday, during a briefing at DeepMind’s headquarters in London, Hassabis stated that AGI, which matches or surpasses human intelligence, will appear within five to ten years. He explained, “We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that.”
Hassabis also discussed artificial superintelligence (ASI), which he said will emerge after AGI has been developed. ASI will exceed human intelligence, he noted, but the exact timing of such a groundbreaking event remains unknown.
His forecast places the arrival of AGI considerably later than several other recent predictions.
In contrast, Dario Amodei, CEO of AI startup Anthropic, OpenAI’s main competitor, stated at the Davos Forum in January that a form of AGI could emerge within two to three years. Tesla CEO Elon Musk has suggested that AGI might appear as early as next year, while OpenAI CEO Sam Altman has predicted that AGI will arrive in the “near future.” Cisco’s Chief Product Officer, Jeetu Patel, claimed that significant evidence of AGI functioning could be observed early this year.
Hassabis noted that the biggest challenge in developing AGI is enabling AI to reach a level where it can understand the context of the real world.
He elaborated that the key issues for AGI are how quickly it can plan and reason and how flexibly it can adapt to everyday situations. AI systems that can autonomously complete tasks in controlled environments, such as the board game Go, already exist. However, developing AI models that can comprehend and navigate the complexities of the real world, where numerous variables interact simultaneously, remains challenging.
Hassabis revealed that Google DeepMind is undertaking extensive research to reach this stage. One such project involves developing an AI agent capable of learning how to play the popular strategy game StarCraft.
Unlike simple chatbots that provide one-off answers, AI agents are designed to interact with humans and their environment and respond dynamically. Hassabis stated, “One of the things we are working on is enabling AI agents to communicate with one another and express themselves.”