Synopsis
Google DeepMind CEO Demis Hassabis says his original motivation for entering AI was to harness the world’s data to tackle fundamental questions, including the nature of reality and consciousness. But that long-term vision shifted dramatically with the arrival of ChatGPT in November 2022, whose viral success reset the industry and accelerated the pace at which everyone now operates. Compounding the pressure are AI risks that are still not fully understood, including misuse by bad actors for harmful purposes.

“The best use case of AI was to improve human health and accelerate scientific discovery,” Hassabis told YouTuber Cleo Abram in a recent interview.
“In fact, for me, I got into AI in the first place because I was interested in all the big questions in the world. The nature of reality, the nature of consciousness, these kinds of things. And I felt we needed a tool to help us, even the best scientists, to help us make sense of the amount of data and information out there and find insights,” he said.
That long-term approach shifted sharply after ChatGPT arrived in November 2022. Its sudden viral success, he said, changed the entire industry and the pace at which everyone now has to operate.
Hassabis described the current environment as highly intense. "We're in this sort of ferocious commercial pressure race that everyone's locked into currently," he said. "And then on top of that, there are the geopolitical issues like the US-China race. So, there are multiple levels of pressure to move fast."
As a result, he believes the entire industry was pushed into a faster, more competitive race than anyone originally planned, with little room left for a slower scientific approach.
He also talked about how many AI labs, including Google, had AI systems with capabilities similar to ChatGPT. "OpenAI scaled it and put it out. I think even they say it was kind of a research experiment," he said. "They didn't realise it would go so viral."
In his view, the irony is that researchers themselves underestimated the usefulness of their own systems.
“I think when you're building that technology, you are so close to it that you're very aware of the things it can't do — the flaws it has. You don't realise that, actually, people out there would find use even though it was hallucinating and doing other things that we're obviously all still trying to improve on now.”
Challenges
Looking ahead, Hassabis says attention is now shifting to risks that are still not fully addressed. One concern is misuse of AI by bad actors, from individuals to nation-states, for harmful purposes.
Even more worrying, he says, is the possibility of systems behaving unpredictably as they become more advanced and autonomous.
He said companies working at the frontier need to think carefully about guardrails and whether they are strong enough to ensure AI systems do exactly what they are told. That also means making sure goals are clearly defined and that there is no way for systems to bypass or accidentally break those safety limits.
“That's going to be an incredibly hard technical challenge if you think about how powerful and how smart and capable these systems are eventually going to get. So, I tend to worry about those are medium-term things I think people are perhaps not paying enough attention to at the moment (sic),” he said.