AI-Generated Summary
Demis Hassabis, the 48-year-old British scientist and CEO of Google DeepMind, is a pioneer in artificial intelligence (AI) and a Nobel Prize winner for his work on protein structure prediction using AI. Celebrating his achievements by playing poker with a chess champion, Hassabis has long been driven by a fascination with consciousness, reality, and advancing human knowledge through AI. He envisions artificial general intelligence (AGI), a versatile, superhuman intellect, within the next decade, transforming fields like healthcare, robotics, and everyday life. While optimistic about AI's potential for radical abundance and curing diseases, he warns of risks, including misuse and loss of control, emphasizing the need for global collaboration and ethical guardrails to ensure AI aligns with human values.
Full Transcript
When Demis Hassabis won the Nobel Prize last year, he celebrated by playing poker with a world champion of chess. Hassabis loves a game, which is how he became a pioneer of artificial intelligence. The 48-year-old British scientist is co-founder and CEO of Google's AI powerhouse, DeepMind. We met two years ago, when chatbots announced a new age. Now Hassabis and others are chasing what's called artificial general intelligence: a silicon intellect as versatile as a human, but with superhuman speed and knowledge. After his Nobel and a knighthood from King Charles, we hurried back to London to see what's next from a genius who may hold the cards of our future.

What's always guided me, and the passion I've always had, is understanding the world around us. Since I was a kid, I've been fascinated by the biggest questions: the meaning of life, the nature of consciousness, the nature of reality itself. I've loved reading about all the great scientists and philosophers who've worked on these problems, and I wanted to see if we could advance human knowledge. For me, my expression of doing that was to build what I think is the ultimate tool for advancing human knowledge, which is AI.

We sat down in this room two years ago, and I wonder if AI is moving faster today than you imagined.

It's moving incredibly fast. I think we are on some kind of exponential curve of improvement. Of course, the success of the field in the last few years has attracted even more attention, more resources, more talent. So that's adding to this exponential progress.

An exponential curve, in other words, straight up.

Yep, straight up, and increasing speed of progress.

We saw the progress in an artificial companion that can see and hear and chat about anything. "Hello, Scott. It's nice to see you again." Early chatbots learned only from the internet. An app called Astra also takes in the world.

Do we call her she?
It's a good question. I'm not sure we all know the answer yet.

Bibo Xu is product manager for Project Astra, an app in a new generation of chatbots that interpret the world with their own eyes. We challenged Astra with virtual paintings we chose and showed to Astra for the first time.

This is July Hay by Thomas Hart Benton, from 1942.

How about this one?

That is The Virgin of Charity by El Greco.

That painting is Automat by Edward Hopper.

The person in the painting, what emotion is she exhibiting?

The subject in the painting appears pensive and contemplative, her expression suggesting a sense of solitude.

Can you create a story around this picture?

It's a chilly evening in the city, a Tuesday perhaps. The woman, perhaps named Eleanor, sits alone in the diner. She is feeling melancholy due to the uncertainty of her future and the weight of unfulfilled dreams. The scene freezes in time, only the flow of ideas moving onward.

That line, "only the flow of ideas moving onward," I wish I'd written that. And did we hear a limit to Astra's patience?

What was that fictional conversation about?

Ah, in the story I imagined, Eleanor and her friend talked about their shared dreams and the challenges of pursuing them.

Something about the way you said "ah" made me wonder if you were getting bored.

I apologize if my tone came across that way. My aim is always to engage thoughtfully.

And she said, "Well, I'm sorry if you don't like my tone."

Wow. What's happening there?

Well, that's interesting. That's again the challenge with these systems: they act in the moment, with the context that's around them, and that may have never been tested before.

He's often surprised, because AI programs are sent out on the internet to learn for themselves. They can return later with unexpected skills.

So we have theories about what kinds of capabilities these systems will have. That's obviously what we try to build into the architectures.
But at the end of the day, how it learns, what it picks up from the data, is part of the training of these systems. We don't program that in. It learns like a human being would learn. So new capabilities or properties can emerge from that training situation.

You understand how that would worry people.

Of course. It's the duality of these types of systems: they're able to do incredible things, go beyond the things that we're able to design or understand ourselves. But of course, the challenge is making sure that we understand what's in the knowledge databases they create.

Now, DeepMind is training its AI model, called Gemini, to not just reveal the world but to act in it, like booking tickets and shopping online. It's a step toward AGI, artificial general intelligence with the versatility of a human mind.

On track for AGI in the next five to ten years, I think. And in 2030, you will have what?

Well, we'll have a system that really understands everything around you in very nuanced and deep ways, and is kind of embedded in your everyday life.

Embedded like Astra, in eyeglasses.

What can you tell me about this building I'm looking at?

This is Coal Drops Yard, a shopping and dining district.

She sees what I see. There's a speaker in the earpiece only I can hear.

What was it originally, before it became shops?

Coal Drops Yard was originally a set of Victorian coal warehouses, used to receive and distribute coal across London.

Was coal ever a problem for the environment in London?

Yes, coal was a significant source of air pollution in London, particularly during the industrial revolution.

It occurred to us that the only thing we contributed to this relationship were legs, which will also soon be engineered.

I also think another big area will be robotics.
I think it will have a breakthrough moment in the next couple of years, where we'll have demonstrations of maybe humanoid robots, or other types of robots, that can start really doing useful things.

For example: "Hey, robot." Researchers Alex Lee and Giulia Vezzani showed us a robot that understands what it sees.

That's a tricky one.

And reasons its way through vague instructions.

Put the blocks whose color is the combination of yellow and blue into the matching color bowl.

The combination of yellow and blue is green, and it figured that out. It's reasoning.

Yep, definitely. Yes.

The toys of Demis Hassabis's childhood weren't blocks, but chess pieces. At 12, he was the number-two champion in the world for his age. This passion led to computer chess, video games, and finally thinking machines. He was born to a Greek Cypriot father and Singaporean mother. Cambridge, MIT, Harvard: he's a computer scientist with a PhD in neuroscience, because he reasoned he had to understand the human brain first.

Are you working on a system today that would be self-aware?

I don't think any of today's systems feel self-aware, or conscious in any way, to me. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it's possible.

But is self-awareness a goal of yours?

Not explicitly, but it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand you, self, and other, and that's probably the beginning of something like self-awareness.

But he says if a machine becomes self-aware, we may not recognize it.

I think there are two reasons we regard each other as conscious. One is that you're exhibiting the behavior of a conscious being, very similar to my behavior. But the second thing is that you're running on the same substrate. We're made of the same carbon matter, with our squishy brains. Now, obviously, machines are running on silicon.
So even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.

Has an AI engine ever asked a question that was unanticipated?

Not so far that I've experienced. And I think that's getting at the idea of what's still missing from these systems. They still can't really go beyond asking a new, novel question, or a new, novel conjecture, or coming up with a new hypothesis that has not been thought of before.

They don't have curiosity.

No, they don't have curiosity, and they're probably lacking a little bit in what we would call imagination and intuition.

But they will have greater imagination, he says, and soon.

I think actually in the next maybe five to ten years, we'll have systems that are capable of not only solving an important problem or conjecture in science, but coming up with it in the first place.

Solving an important problem won Hassabis a Nobel Prize last year. He and colleague John Jumper created an AI model that deciphered the structure of proteins.

Proteins are the basic building blocks of life. Everything in biology, everything in your body depends on proteins. Your neurons firing, your muscle fibers twitching, it's all mediated by proteins.

But 3D protein structures like this are so complex that fewer than one percent were known. Mapping each one used to take years. DeepMind's AI model did 200 million in one year. Now Hassabis has AI blazing through solutions to drug development.

On average, it takes ten years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months, or maybe even weeks, which sounds incredible today, but that's also what people used to think about protein structures. It would revolutionize human health. And I think one day, maybe, we can cure all disease with the help of AI.

The end of disease?
I think that's within reach, maybe within the next decade or so. I don't see why not.

Demis Hassabis told us AI could lead to what he calls radical abundance, the elimination of scarcity. But he also worries about risk.

There are two worries that I worry about. One is that bad actors, human users of these systems, repurpose them for harmful ends. The second thing is the AI systems themselves, as they become more autonomous and more powerful. Can we make sure that we can keep control of the systems, that they're aligned with our values, that they're doing what we want, that benefits society, and that they stay on guardrails?

Guardrails are safety limits built into the system. And I wonder if the race for AI dominance is a race to the bottom for safety.

That's one of my big worries, actually. Of course, all of this energy and racing and resources is great for progress, but it might incentivize certain actors to cut corners, and one of the corners that can be shortcut would be safety and responsibility. So the question is how we can coordinate more, as leading players but also nation states, even. I think this is an international thing. AI is going to affect every country, everybody in the world, so I think it's really important that the world and the international community has a say in this.

Can you teach an AI agent morality?

I think you can. They learn by demonstration. They learn by teaching. And I think that's one of the things we have to do with these systems: give them a value system and guidance, and some guardrails around that, much in the way that you would teach a child.
Google DeepMind is in a race with dozens of others striving for artificial general intelligence, so human that you can't tell the difference, which made us think about Demis Hassabis signing the Nobel book of laureates. When does a machine sign it for the first time? And after that, will humans ever sign it again?

I think the next step is going to be these amazing tools that enhance almost every endeavor we do as humans. And then beyond that, when AGI arrives, I think it's going to change pretty much everything about the way we do things. I think we need new great philosophers to come about, hopefully in the next five to ten years, to understand the implications.

AI images, bringing to life your own holiday photos, at 60minutesovertime.com.