Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.
For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
Interviewee: DeepMind co-founder and CEO, Demis Hassabis
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
Thank you to everyone who made this season possible!
Further reading:
DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis
Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o
Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4
The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/
Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/
How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA
Professor Hannah Fry
Welcome back to the final episode in this season of the DeepMind podcast, and boy have we covered a lot of ground. From protein folding AIs, to sarcastic language models, sauntering robots, synthetic voices, and much more, it has been quite the journey. But we do have one more treat in store for you: a chance to hear from DeepMind’s CEO and co-founder Demis Hassabis.
Demis Hassabis
The outcome I've always dreamed of is AGI has helped us solve a lot of the big challenges facing society today, be that health, creating a new energy source. So that's what I see as happening, is this sort of amazing flourishing to the next level of humanity's potential with this very powerful technology.
Professor Hannah Fry
This was my opportunity to ask Demis all the things that have popped into my head during the making of the series. Well, most things. We'll see how far I can push it. As luck would have it, the day I sat down with Demis coincided with the opening of DeepMind's sparkling new premises in London's King's Cross. There weren't many people about yet, so it felt like an exclusive preview.
I feel like I'm in a high-end furniture catalogue.
Let me set the scene for you. This new building is rather beautifully appointed. It's got a double helix staircase running through the middle. There are fiddle leaf trees in practically every corner, and there are stylish fluted glass Crittall doors between offices. And yes, those meeting rooms christened after great scientists: Galileo, Ada Lovelace, Leonardo - they are all still a feature.
[Background noise as Hannah is offered a drink]
While sipping on my beverage of choice, some memorabilia outside Demis's office caught my eye, a nod to AlphaGo's famous victory over Lee Sedol in the game Go. There is, sitting underneath two extremely fancy black spotlights, a chessboard in a black frame, and if I go over to it, there's a picture of Garry Kasparov, the legendary chess player who was beaten by Deep Blue, the IBM computer. He's signed the chessboard and it says: “for the AlphaGo team, keep conquering new heights”. I mean, just a chessboard signed by Kasparov on the wall, perfectly standard. Oh, we're going in.
After settling down inside Demis's office, I started by asking him about DeepMind's long-term vision of building AGI, or Artificial General Intelligence. It's an ambition that has been baked into DeepMind's DNA from the very beginning.
I think it's fair to say that there are some people in the field who don't think that AGI is possible. They sort of say that it's a distraction from the actual work of building practical AI systems. What makes you so sure that this is something that's possible?
Demis Hassabis
I think it comes down to the definition of AGI. So if we define it as a system that's able to do a wide variety of cognitive tasks to a human level, that must be possible, I think, because the existence proof is the human brain. And unless you think there's something non-computable in the brain - which so far there's no evidence for - then it should be possible to mimic those functions on, effectively, a Turing machine, a computer.
And then the second part of that, which is “it's a distraction from building practical systems”.
Well, I mean, that may be true in the sense that what you're most interested in is the practical systems. AGI itself is a big research goal and a long-term one. It's not going to happen anytime soon. But our view is that if you try and shoot for the stars, so to speak, then any technologies that you build on the way can be broken off into components and then applied to amazing things. And so we think striving for the long-term, ambitious research goal is the best way to create technologies that you can apply right now.
Professor Hannah Fry
How will you recognize AGI when you see it? Will you know it when you see it?
Demis Hassabis
What I imagine is going to happen is some of these AI systems will start being able to use language and - I mean, they already are - but better. Maybe we'll start collaborating with them, say, scientifically. And I think more and more as you put them to use at different tasks, slowly that portfolio will grow. And then eventually we could end up with it controlling a fusion power station. And eventually I think one system or one set of ideas and algorithms will be able to scale across those tasks and everything in between.
And then once that starts being built out, there will of course be a philosophical argument about: is that covering all the space of what humans can do? And I think in some respects it will definitely be beyond what humans are able to do, which will be exciting, as long as that's done in the right way. And, you know, there will be cognitive scientists that look into: does it have all the cognitive capabilities we think humans have? Creativity? What about emotion, imagination, memory? And then there'll be the subjective feeling that these things are getting smarter.
But I think that's partly why this is the most exciting journey, in my opinion, that humans have ever embarked on: I'm sure that trying to build AGI with a sort of neuroscience inspiration is going to tell us a lot about ourselves and the human mind.
Professor Hannah Fry
The way you're describing it there is as if there’s this big goal in the future that you steadily approach. I'm wondering whether in your mind there's also, like, a day where this happens. Like, you know how children dream of lifting the world cup? Have you thought about the day when you walk away from the office and you're like “it happened today”?
Demis Hassabis
Yeah. I have dreamed about that for a very long time. I think it would be more romantic, in some sense, if that happened where you, you know, one day you're coming in and then this lump of code is just executing. Then the next day you come in and it sort of feels sentient to you. It would be quite amazing. From what we've seen so far, it will probably be more incremental and then a threshold will be crossed. But I suspect it will start feeling interesting and strange in this middle zone as we start approaching that. We're not there yet, I don't think. None of the systems that we interact with or have built have that feeling of sentience or awareness, any of those things. They’re just kind of programs that execute, albeit they learn. But I could imagine that one day that could happen. You know, there's a few things I look out for, like perhaps coming up with a truly original idea, creating something new, a new theory in science that ends up holding, maybe coming up with its own problem that it wants to solve. These kinds of things would be sort of activities that I'd be looking for on the way to maybe that big day.
Professor Hannah Fry
If you were a betting man, then when do you think that that will be?
Demis Hassabis
So I think the progress so far has been pretty phenomenal. I think it's coming relatively soon - you know, I wouldn't be super surprised if it were in the next decade or two.
Professor Hannah Fry
Shane said that he writes down predictions, and his confidence in them, and then checks back to see how well he did in the past. Do you do the same thing?
Demis Hassabis
I don't do that. No, I am not as methodical as Shane, and he hasn't shown me his recent predictions. I don't know where he's secretly putting them down - I'll have to ask him.
Professor Hannah Fry
It's just a drawer in his house.
Demis Hassabis
Exactly.
Professor Hannah Fry
Like Shane Legg, DeepMind’s co-founder and chief scientist, who we heard from in an earlier episode, Demis believes that there are certain abilities that humans have, but are missing from current AI systems.
Demis Hassabis
Today's learning systems are really good at learning in messy situations. So dealing with vision, or intuition in Go. So pattern recognition - they're amazing for that. But we haven't yet satisfactorily got them back up to being able to use symbolic knowledge. So doing mathematics, or language even - we have some coarse language models, but they don't have a deep understanding yet, still, of the concepts that underlie language. And so they can't generalize or write a novel or make something new.
Professor Hannah Fry
How do you test whether, say, a language model has a conceptual understanding of what it's coming out with?
Demis Hassabis
That's a hard question, and something that we're all wrestling with still. So we have our own large language model, just like most teams these days. And it's fascinating probing it, you know, at three in the morning. That's one of my favorite things to do - just have a little chat with the AI system.
Professor Hannah Fry
Does it ever tell you something interesting?
Demis Hassabis
Sometimes! But I'm generally trying to break it, to see exactly this: does it really understand what you're talking about? One of the things it's suspected they don't understand properly is quite basic real-world situations that rely on maybe experiencing physics or acting in the world. Because obviously these are passive language models, right? They just learn from reading the internet. So you can say things like: “Alice threw the ball to Bob, the ball flew back to Alice, Alice throws it over the wall, Bob goes and gets it. Who's got the ball?” And, you know, obviously in that case it's Bob. But it can get quite confused - sometimes it'll say Alice, or it'll say something random.
So it's those types of things - you know, things almost any kid would understand. And it's interesting to see: are there basic things like that that it can't get about the real world, because it sort of only knows it from words? But that in itself is a fascinating philosophical question. I think what we're doing is philosophy, actually, in the greatest tradition of that: trying to understand the philosophy of mind, the philosophy of science.
Professor Hannah Fry
When it's 3 a.m. and you're talking to a language model, do you ever ask it if it's an AGI?
Demis Hassabis
Yeah. Yeah. I think I must have done that. Yes, with varying answers.
Professor Hannah Fry
But it has responded “yes” at some point.
Demis Hassabis
Yeah, it does sometimes respond “yes” and, you know, “I'm an artificial system”, and it knows what AGI is to some level. I don't think it really knows anything, to be honest. That would be my conclusion. It knows some words.
Professor Hannah Fry
A clever parrot.
Demis Hassabis
Yes exactly.
Professor Hannah Fry
For the moment at least, AI systems like language models show no signs of understanding the world. But could they ever go beyond this, in future?
Do you think that consciousness could emerge as a sort of natural consequence of a particular architecture? Or do you think that it's something that has to be intentionally created?
Demis Hassabis
I'm not sure. I suspect that intelligence and consciousness are what's called double dissociable: you can have one without the other, both ways. My argument for that would be that if you have a pet dog, for example, I think they quite clearly have some consciousness. You know, they seem to dream, they're sort of self-aware of what they want to do, but they're not, you know - dogs are smart, but they're not that smart. Right? At least my dog isn't, anyway.
But on the other hand, if you look at the intelligent systems we currently build - OK, they're quite narrow, but they are very good at, say, games. I could easily imagine carrying on building those types of AlphaZero systems, and they're getting more and more general, more and more powerful, but they just feel like programs. So that's one path. And then the other path is that it turns out consciousness is integral to intelligence. At least in biological systems, they seem to increase together, which suggests that maybe there's a correlation. It could be that it's causative. So it turns out that if you have these general intelligence systems, they automatically have to have a model of their own conscious experience.
Personally I don't see why that's necessary. So I think by building AI and deconstructing it we might actually be able to triangulate and pin down what the essence of consciousness is. And then we would have the decision of do we want to build that in or not? My personal opinion is, at least in the first stages, we shouldn't if we have the choice because I think that brings in a lot of other complex, ethical issues.
Professor Hannah Fry
Tell me about some of those.
Demis Hassabis
Well, I mean, I think if an AI system was conscious and you believed it was, then you'd have to consider what rights it might have. And then the other issue is that conscious systems or beings have generally come with free will, and with wanting to set their own goals. And I think, you know, there are some safety questions about that as well. And so I think it would fit into a pattern that we are much more used to with the machines around us to view AI as a kind of tool or, if it's language-based, a kind of oracle. It's like the world's best encyclopedia, right? You ask a question and it has, like, you know, all the best research to hand. But not necessarily an opinion or a goal to do with that information. Right? Its goal would be to give that information in the most convenient way possible to the human.
Professor Hannah Fry
Wikipedia doesn't have a theory of mind and maybe it’s best to keep it like that.
Demis Hassabis
Maybe it's best to keep it like that. Exactly.
Professor Hannah Fry
Okay. How about a moral compass then? Can you impart a moral compass into AI? And should you?
Demis Hassabis
I mean, I'm not sure I would call it a moral compass, but it's definitely going to need a value system. Because whatever goal you give it, you're effectively incentivising that AI system to do something. And so, as that becomes more and more general, you can sort of think about that as almost a value system. What do you want it to do in its set of actions? What do you want to sort of disallow? How should it think about side effects versus its main goal? What's its top-level goal? If it's to keep humans happy, which set of humans? What does happiness mean?
We’re going to definitely need help from philosophers and sociologists and others about defining - and psychologists probably - about defining what a lot of these terms mean. And of course a lot of them are very tricky for humans to figure out, our collective goals.
Professor Hannah Fry
What do you see as the best possible outcome of having AGI?
Demis Hassabis
The outcome I've always dreamed of, or imagined, is AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer's. I would also imagine AGI helping with climate, creating a new energy source that is renewable. And then what would happen after those kind of first-stage things is you kind of have this - sometimes people describe it as radical abundance.
Professor Hannah Fry
If we're talking about radical abundance of, I don't know, water and food and energy, how does AI help to create that?
Demis Hassabis
So it helps to create that by unlocking key technological breakthroughs. Let's take energy, for example. We are looking, as a species, for renewable, cheap, ideally free, non-polluting energy. And to me, there are at least a couple of ways of doing that. One would be to make fusion work - it's much better than nuclear fission, it's much safer, and that's obviously the way the sun works. We're already working on one of the challenges for that, which is containing the plasma in a fusion reactor. And we already have the state-of-the-art way of doing that, sort of unbelievably.
The other way is to make solar power work much better. If we had solar panels just tiling something, you know, half the size of Texas, that would be enough to cover the whole world's energy use. It's just not efficient enough right now, but if you had superconductors - you know, room-temperature superconductors, which is obviously the holy grail in that area - if that was possible, suddenly that would make it much more viable.
And I could imagine AI helping with materials science. That's a big combinatorial problem: a huge search space, all the different compounds you can combine together. Which ones are the best? And of course Edison sort of did that by hand when he found tungsten for light bulbs, but imagine doing that at enormous scale, on much harder problems than a light bulb. Those are the sorts of things I'm thinking an AI could be used for.
Professor Hannah Fry
I think you probably know what I'm going to ask you next. If that is the fully optimistic utopian view of the future, it can't all be positive when you're lying awake at night. What are the things that you worry about?
Demis Hassabis
Well, to be honest with you, I do think that is a very plausible end state, the optimistic one I painted you. And of course, that's the reason I work on AI, because I hoped it would be like that. On the other hand, one of the biggest worries I have is what humans are going to do with AI technologies on the way to AGI. Like most technologies, they could be used for good or bad, and I think it's down to us as a society and governments to decide which direction they're going to go in.
Professor Hannah Fry
Do you think society is ready for AGI?
Demis Hassabis
I don't think so, yet. I think that's part of what this podcast series is about as well - to give the general public more of an understanding of what AGI is, what AI is, and what's coming down the road. And then we can start grappling with - as a society, not just the technologists - what we want to be doing with these systems.
Professor Hannah Fry
You said you've got this sort of 20-year prediction and then simultaneously where society is in terms of understanding and grappling with these ideas. Do you think that DeepMind has a responsibility to hit pause at any point?
Demis Hassabis
Potentially. I always imagined that as we got closer to the sort of grey zone you were talking about earlier, the best thing to do might be to pause the pushing of the performance of these systems, so that you can analyse, down to minute detail, and maybe even prove things mathematically about the system, so that you know the limits, and otherwise, of the systems that you're building. At that point I think all the world's greatest minds should probably be thinking about this problem. So what I would be advocating to, you know, the Terence Taos of this world, the best mathematicians - and I've even talked to him about this - is: I know you're working on the Riemann hypothesis or something, which is the best thing in mathematics, but actually this is more pressing. I have this sort of idea of, like, almost an Avengers assembled of the scientific world. That's a bit like my dream.
Professor Hannah Fry
Did Terence Tao agree to be one of your Avengers?
Demis Hassabis
I didn't quite tell him the full plan of that.
Professor Hannah Fry
I know that some quite prominent scientists have spoken in quite serious terms about this path towards getting AGI. I'm thinking about Stephen Hawking here. Do you ever have debates with those kind of people about what the future looks like?
Demis Hassabis
Yeah, I actually talked to Stephen Hawking a couple of times. I went to see him in Cambridge - it was only supposed to be a half-hour meeting, but we ended up talking for hours. He wanted to understand what was going on at the coalface of AI development, and I explained to him what we were doing, the kinds of things we've discussed today, what we're worried about. And he felt much more reassured that people were thinking about this in the correct way. And at the end he said, “I wish you the best of luck - but not too much,” and he looked right at me with a twinkle in his eye. It was just amazing.
That was literally his last sentence to me: “best of luck, but not too much”. Which I thought was perfect.
Professor Hannah Fry
Along the road to AGI, there have already been some significant breakthroughs with particular AI systems, or narrow AI as it's sometimes known. Not least the DeepMind system known as AlphaFold, which we heard about in episode 1.
AlphaFold has been shown to accurately predict the 3D structures of proteins, with implications for everything from the discovery of new drugs to pandemic preparedness. I asked Demis how a company known for getting computers to play games to a superhuman level was able to achieve success in some of the biggest scientific challenges in the space of just a few short years.
Demis Hassabis
The idea was always, from the beginning of DeepMind, to prove our general learning ideas - reinforcement learning, deep learning, combining those - on games: tackle the most complex games out there, so Go and StarCraft in terms of board games and computer games; and then the hope was we could start tackling real-world problems. Especially in science, which is my other huge passion.
And at least my personal reason for working on AI was to use AI as the ultimate tool really to accelerate scientific discovery in almost any field, because if it's a general tool then it should be applicable to many many fields of science. And I think AlphaFold, which is our program for protein folding, is our first massive example of that. And I think it's woken up the scientific world to the possibility of what AI could do.
Professor Hannah Fry
What impact do you hope that AlphaFold will have?
Demis Hassabis
I hope AlphaFold is the beginning of a new era in biology where computation or AI methods are used to help model all aspects of biological systems and therefore accelerate our discovery process in biology. So I'm hoping that it will have a huge effect on drug discovery, but also fundamental biology, understanding what these proteins do in your body.
And I think that if you look at machine learning, it's the perfect description language for biology, in the same way that maths was the perfect description language for physics. Many people, obviously, in the last 50 years have tried to apply mathematics to biology with some success, but I think biology is too complex to describe in a few equations. It's the perfect regime for machine learning to spot patterns. Machine learning is really good at taking weak, messy signals and making sense of them, which is, I think, the regime that we're in with biology.
Professor Hannah Fry
How could AI be used for a future pandemic?
Demis Hassabis
So one of the things we're actually looking at now is the top twenty pathogens that biologists are identifying as the ones that could cause the next pandemic. The idea is to fold all the proteins involved in all those viruses - you know, it's feasible - so that drug discovery and pharma can have a head start at figuring out what drugs or antidotes or antivirals they would make to combat them, if those viruses ended up mutating slightly and becoming the next pandemic.
I think in the next few years we will also have automated drug discovery processes as well. So we won't just be giving the structure of the protein, we might even be able to propose what sort of compound might be needed. So I think there's a lot of things AI can potentially do. And then on the other side of things, maybe on the analysis side, to track trends and predict how spreading might happen.
Professor Hannah Fry
Given how significant the advances are for science, that are being created by these AI systems, do you think that there will ever be a day where an AI wins a Nobel Prize?
Demis Hassabis
I would say that, just like with any tool, it's the human ingenuity that's gone into it. You know, it's sort of like asking who we should credit with spotting Jupiter's moons: is it his telescope? No, I think it's Galileo. And of course he also built the telescope, famously, as well as it being his eye that saw it. And then he wrote it up.
So I think it's a nice sort of science fiction story to say, well, the AI should win it. But at least until we get to full AGI - if it's sentient, it's picked the problem itself, it's come up with a hypothesis and then it's solved it - that's a little bit different. But for now, where it's effectively just a fairly automated tool, I think the credit should probably go to the humans.
Professor Hannah Fry
I don’t know, I quite like the idea of giving Nobels to inanimate objects. Like the Large Hadron Collider can have one, Regression can have one -
Demis Hassabis
The Hubble telescope can have one!
Professor Hannah Fry
Exactly, I just quite like that idea.
Even before AGI has been created, it's clear that AI systems like AlphaFold are already having a significant impact on real world problems. But, for all their positives, there are also some tricky ethical questions surrounding the deployment of AI which we've been exploring throughout this series. Things like the impact of AI on the environment and the problem of biased AI systems being used to help make decisions on things like access to healthcare or eligibility for parole.
What's your view on AI being used in those situations?
Demis Hassabis
I just think we have to be very careful that the hype doesn't get ahead of itself. There are a lot of people who think AI can just do anything already and actually if they understood AI properly, they’d know that the technology is not ready. And one big category of those things is very nuanced human judgment about human behavior. So a parole board hearing would be a good example of that. There's no way AI is ready yet to kind of model the balance of factors that an experienced, say, parole board member is balancing up across society.
How do you quantify those things mathematically or in data? And then if you add in a further thing, which is how critical that decision is either way, then all those things combined mean, to me, that it's not something AI should be used for. Certainly not to make the decision. At the level AI is at the moment, I think it's fine to use it as an analysis tool to triage, like, a medical image, but the doctor needs to make the decision.
Professor Hannah Fry
In our episode on language models, we talked about some of the more concerning potential uses of them. Is there anything that DeepMind can do to really prevent some of those nefarious purposes of language models? Like spreading disinformation?
Demis Hassabis
We're doing a bunch of research ourselves on, you know, the issues with language models. I think there's a long way to go in terms of building analysis tools to interpret what these systems are doing, and why they're doing it. I think this is a question of understanding: why are they putting this output out? And then how can you fix issues like bias and fairness, and what's the right way to do that?
Of course, you want truth at the heart of it, but then there are subjective things where people from, say, different political persuasions have a different view about something. What are you going to say is the truth at that point? So then it sort of impinges on, like, well, what does society think about that? And then which society are you talking about? These are really complex questions and, because of that, this is an area where I think we should be proceeding with caution in terms of deploying these systems in products and things.
Professor Hannah Fry
How do you mitigate the impact that AI is having on the environment? Is there just a danger of building larger and larger and larger energy hungry systems and having a negative impact?
Demis Hassabis
Yeah. I mean, we have to consider that AI systems are using a tiny sliver of the world's energy usage, even the big models, compared to, say, watching videos online - all of those things are using way more compute and bandwidth.
And the second thing is that actually most of the big data centers now, especially things like Google, are pretty much 100% carbon neutral. But we should continue that trend to become fully green data centers. And then, of course, you have to look at the benefits of what you're trying to build. So let's say a healthcare system or something like that, relative to energy usage, most AI models are hugely net positive.
And then the final thing, which is something we've proven, is that the AI models you build can then be used, you know, to optimize energy systems themselves. So, for example, one of the best applications we've had of AI systems is to control the cooling in data centers and save, like, 30% of the energy they use. You know, that saving is probably way more than we've ever used for all of our AI models put together. So it's an important thing to bear in mind, to make sure it doesn't get out of hand, but I think right now that particular worry is slightly over-hyped.
Professor Hannah Fry
While Demis and his colleagues at DeepMind are thinking hard about what could go wrong when AI is deployed in the real world, what really shone through during our conversation was Demis’s faith in the idea that ultimately building AI and AGI will be a net positive for the whole of society.
Demis Hassabis
If you look at the challenges that confront humanity today - climate, sustainability, inequality, the natural world - all of these things are, in my view, getting worse and worse. And there are going to be new ones coming down the line, like access to water, which I think are going to be really major issues in the next 50 years. And if there wasn't something like AI coming down the road, I would be extremely worried about our ability to actually solve these problems. But I'm optimistic we are going to solve those things, because I think AI is coming and I think it will be the best tool that we've ever created.
Professor Hannah Fry
In some ways, it's hard not to be drawn in by Demis's optimism, to be enthused by the tantalizing picture he paints of the future. And it's becoming clearer that there are serious benefits to be had as this technology matures.
But as research swells behind that single north star of AGI, it's also evident that this progress comes with its own serious risks too. There are technical challenges that need resolving, but ethical and social challenges too that can't be ignored, and much of that can't be resolved by AI companies alone. They require a broader societal conversation, one which I hope, at least in some small way, is fueled by this podcast. But I'm struck most of all by how far the field has come in such a short space of time.
At the end of the last season we were talking enthusiastically about AI playing Atari games and Go and chess. And now, all of a sudden, as these ideas have found their feet, we can reasonably look forward to AI making a difference in drug discovery and nuclear fusion and understanding the genome. And I do wonder what new discoveries might await when we meet again.
DeepMind: The Podcast has been a Whistledown production.
The series producer is Dan Hardoon, with production support from Jill Achineku. The editor is David Prest, sound design is by Emma Barnaby, and Nigel Appleton is the sound engineer.
The original music for this series was specially composed by Eleni Shaw, and what wonderful music it was.
I'm Professor Hannah Fry. Thank you for listening.