DeepMind: The Podcast

Demis Hassabis: The interview

Episode Notes

In this special extended episode, Hannah Fry meets Demis Hassabis, the CEO and co-founder of DeepMind. She digs into his former life as a chess player, games designer and neuroscientist and explores how his love of chess helped him to get start-up funding, what drives him and his vision, and why AI keeps him up at night.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:

Interviewee: DeepMind CEO and co-founder, Demis Hassabis

Credits:
Presenter: Hannah Fry
Editor: David Prest
Senior Producer: Louisa Field
Producers: Amy Racs, Dan Hardoon
Binaural Sound: Lucinda Mason-Brown
Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
Commissioned by DeepMind

Episode Transcription

Hannah: So here we are, the last episode in this series of the DeepMind podcast. My name is Hannah Fry. I am a mathematician, and someone who is deeply intrigued by artificial intelligence. Much like you, I imagine, since you made it this far. Now, we’ve been toying with the big questions in this series. What is intelligence? How does an algorithm learn? And what do we do with the AI future once we get there? And I have been asking the team of scientists and engineers here at DeepMind to give us their take on where things are and where they’re going. But now, for this final episode, we have a chance to catch up with Demis Hassabis, the co-founder and CEO of DeepMind, to hear what he has to say on these questions and more.

00:52

Hannah: Demis Hassabis grew up in North London in the 1970s. By the age of 13, he was ranked 2nd in the world at chess in his age group. At 16 he worked as a games designer - remember Theme Park? That was him. Then he went on to study computer science and then neuroscience before setting up DeepMind with his two co-founders, Shane Legg and Mustafa Suleyman. His accomplishments are ferociously intimidating. But as an article in the Times put it, Demis doesn’t even have the good grace to be socially deficient.

But if all of that makes it sound like Demis is a man with a plan, then you’d be right.

Demis: I had in mind creating a company like DeepMind to research AI from a long time ago, so I was sort of working back from the end state: what would I need, what skills would I need, what experiences would I need, to even stand a chance of building something like that?

Hannah: Because these different aspects of your life - the chess, the neuroscience, the games design - they’re not disconnected; I mean, they do build to a bigger picture.

Demis: They do. And it’s hard to say which way round it is. I picked those subjects and those things to study - say, computer science at Cambridge, and then cognitive neuroscience at UCL - because I wanted this component of computer science and neuroscience to come together, and obviously that’s what we do at DeepMind. But even the games stuff taught me about creative thinking, and also about massive engineering projects, and then of course it ended up that we used games as our main vehicle for proving out our AI algorithms. One other thing I’ve learnt from games is to use every scrap of asset that you have. In games, you always have a limited resource pool - if it’s a chess game, it’s the chess pieces you have left on the board - and one way to think about games is maximising the use of the assets you have left. Perhaps that’s why I was biased towards using games, but I also felt it was the logical way to go about building AI.

Hannah: What was your PhD in?

Demis: My PhD was in cognitive neuroscience, and I actually decided to study how memory and imagination work in the brain. The reason I chose cognitive neuroscience is I wanted to better understand how the brain does certain cognitive functions, so that perhaps we could be inspired towards new types of algorithms based on how the brain works - and it’s a good idea to pick functions that we don’t know how to do in AI. I went to study with Eleanor Maguire at UCL, and she’s one of the world’s leading experts on the hippocampus, which is critical for memory. But I told her that what I really wanted to look at was imagination, which you can also think of as simulating things in the future, in your mind. Obviously it’s useful for planning, but also for creativity. And the reason I was interested in that is, of course, that it’s an incredibly important part of human intelligence, and it’s also something I used a lot in my games design career. I used a lot of visualisation techniques, imagining how a player would viscerally play a game like Theme Park, and then you try and change something about it - all in your mind, or in sketches - before you went to the trouble of programming it all. And it felt to me that we were using a similar type of process to the way we vividly remember things that have happened to us. So I thought maybe there would be a connection - you could imagine a kind of simulation engine of the mind being used both for imagination and memory - and that’s what I wanted to work on during my PhD. And we ended up discovering something quite important: that in fact the hippocampus is at the core of both of those types of function. It’s critical for memory, which we already knew, but it’s also critical for imagination - you can’t really imagine vividly without your hippocampus. So we ended up discovering this important thing, and subsequently that’s been at the heart of a lot of what we try to do in AI: building memory and imagination abilities into our AI systems, and we’re still doing that now.

Hannah: When it comes to bringing those ideas across and trying to implement them in AI, where do you find the balance between directly copying what the brain’s doing and using it for inspiration?

05:02

Demis: So that’s a very important signpost. When you’re scrabbling around in the dark in the unknown of science, any signpost is really valuable, and the brain is the only existence proof we have in the universe that intelligence is possible. So it always felt to me that it would be crazy to ignore that as a source of information about how to build AI. We use neuroscience for two things. One is inspiration: new ideas about the algorithms, architectures or representations that the brain uses can inspire new types of algorithms. The second way we use neuroscience is what I call validation. We may already have some idea, coming from engineering or mathematics, about how to build a learning system - take reinforcement learning, which came from engineering disciplines and operations research first. But then in the 90s we discovered that the brain also implements a form of reinforcement learning, and what that means, from an AI perspective, is that you can be sure reinforcement learning could plausibly be a component part of an AI system, because it’s in the brain and we know the brain is a general intelligence. That’s really important if you’re thinking about where to put your engineering resources and effort: you know that if it doesn’t work right now - and things never work first time in research or engineering - it’s worthwhile pushing harder, because eventually this must work; the proof of concept is the brain. Having said that, there is another school of thought among AI practitioners and neuroscientists that we need to slavishly copy the brain completely, from the bottom up, on a neuronal level, and I think that’s also the wrong approach. What we’re after is what I call a systems neuroscience approach: you’re interested in the algorithms and the architectures that the brain is using, not necessarily the exact implementation details, because those are likely to be different in silicon-based systems like computers. There’s no reason to think we would use exactly the same implementation details in a silicon-based system, which is going to have different strengths and weaknesses than a carbon-based system like our minds.
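
For readers curious what the reinforcement learning mentioned above looks like at its simplest, here is a minimal sketch of tabular Q-learning - the textbook version of the idea, not anything from DeepMind’s systems. The toy corridor environment and all parameter values are illustrative assumptions. The temporal-difference prediction error computed in the update is the same quantity that, in the 90s, was found to be mirrored in the firing of the brain’s dopamine neurons.

```python
# A minimal sketch of tabular Q-learning. The toy corridor environment and
# the parameters are illustrative assumptions, not DeepMind's systems.
import random

n_states, n_actions = 5, 2                 # corridor of 5 cells, move left/right
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Action 1 moves right, action 0 moves left; reaching the last cell
    yields reward 1 and ends the episode."""
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    done = nxt == n_states - 1
    return nxt, (1.0 if done else 0.0), done

def pick_action(state):
    if random.random() < epsilon:          # explore occasionally
        return random.randrange(n_actions)
    best = max(Q[state])                   # otherwise exploit, breaking ties randomly
    return random.choice([a for a in range(n_actions) if Q[state][a] == best])

for _ in range(500):                       # episodes
    state, done = 0, False
    for _ in range(100):                   # cap episode length for safety
        action = pick_action(state)
        nxt, reward, done = step(state, action)
        # Temporal-difference update: nudge Q towards reward plus discounted
        # best future value. This prediction error is the signal dopamine
        # neurons appear to track in the brain.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
        if done:
            break

print(Q)  # values grow towards the rewarding right-hand end of the corridor
```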

07:15

Hannah: Normally tech startups have customers, they have products, but this is sort of more like a start-up research facility.

Demis: Yes.

Hannah: How do you get something like that off the ground?

Demis: It’s pretty hard. You’re right that it’s a very unusual company. What I tried to do is take the best from the start-up world - the kind of focus, energy and pace that you get in the best start-ups, say in Silicon Valley - and combine that with the best from academia: blue-sky thinking, incredibly bright people working on long-term, big research questions and stepping into the unknown all the time. Obviously I spent some time in academia myself, and there are great aspects to academia, but there are also some things that are frustrating, mostly around the organisational aspects, and the pace can sometimes be slower than you would like. It’s difficult to get momentum behind things in the way you can in a start-up when things are going well. By the time I was in academia I’d already started and been involved with a few start-ups, before going back to do my PhD, so I’d experienced both sides, and I didn’t feel there was any reason why these should be mutually exclusive environments, although they have generally been treated as very different, almost opposite, environments. There are a lot of things that do seem opposite, but I felt that if you were smart about it, you could extract the best of both those worlds and combine them into some kind of hybrid organisation, and I feel that’s what DeepMind is. I don’t think many people have ever done that, and that’s why it seems quite strange. As an organisation, I think we’ve shown with our scientific output - even measured by normal measures, Nature and Science papers, the kind of thing normal academic labs would measure themselves by - that we’ve been very successful. And on the other side, we’ve been able to produce really big breakthroughs that took a lot of engineering effort as well, like AlphaGo, which I think would have been very difficult to do in a normal, small academic lab.

09:17

Hannah: One of the things that makes DeepMind a bit unusual is that you publish your work - other companies don’t really do this. Aren’t you kind of giving away your competitive advantage, really?

Demis: Yeah, it’s an interesting issue actually. We’ve always published everything we’ve done, and lots of companies do wonder why we do that, because a lot of other companies don’t. We feel it’s part of the scientific discourse - that’s the right way to do science. We really believe in peer-reviewed journals, so that your work is scrutinised at the highest level by your peers, which is the gold standard in science. That’s why we publish in the top journals like Nature and Science. You also get more exposure for your ideas that way: some of our top papers have been cited more than 5,000 times in the last two to three years - some of the most cited papers in the world - so that’s great. And I think that if a community shares ideas like that, the whole field can advance much more quickly than if everyone were to keep their ideas secret. But there are some interesting aspects to that. One is the competitive side, and there I feel you just need to carry on innovating at a faster pace than anyone else. That’s the most important thing - not trying to keep hold of the ideas you’ve already had. By the time you’ve published something, you should be one or two ideas further down the line, if you’re continuing to work at the same pace and the same level of innovation. So I think the biggest protection against competitors is your pace of innovation.

Hannah: In the early days - back in 2009, 2010, when AI wasn’t the hot topic that perhaps it is today - did you find it difficult to get attention from people? I think I remember reading that you decided to go straight for billionaires rather than millionaires when it came to investment.

11:04

Demis: Yes. Well, that was incredibly tough. It’s really hard to remember now, even for me, but 10 years ago no one was investing in AI. It was an impossible thing to get money for, actually. No one would invest, and it’s still very difficult today, I think, for what I would call a deep-technology or science-based start-up with no clear product in mind. What we were basically saying is that we were going to build this incredible general-purpose technology, and that as it got more powerful there should be a myriad of things you could just apply it to. But that sounds pretty far-fetched to a normal kind of investor. The attitude was almost: well, that’s what academia is for, isn’t it? If it’s blue-sky research, if you don’t know when it’s going to work, if it’s pure research, then go and do that for another 10 years in academia, and come and talk to them when it’s working. But that would have been too slow, and I could see that from within academia. So that’s why I decided not to go after normal venture capitalists, who - certainly in Europe or in the UK - would want to make a 10x return, maybe within three to five years. That’s the kind of time horizon. Of course that’s no good for a research-based company; you’ve barely got going after three years, right? What you need is a profile of investor who is more interested in a 1,000x return but is willing to wait 10 years, maybe even 20 years - and that profile of investor basically just does not exist in Europe. It certainly didn’t back in 2010. Really you’re talking about Silicon Valley, self-made billionaires I guess, who have deep enough pockets to take that kind of bet, and if it doesn’t work, it’s okay. But they’re also personally interested in these types of topics, and they’ve seen incredibly ambitious things work, because usually that’s how they’ve made their money.

12:51

Hannah: Just going back to the idea of how everything sort of slots into place - all the different aspects, all the different passions that you have. Didn’t chess play a role in you getting your first funding?

Demis: Yes, it did. Chess has been a core part of my personality, I guess, because I’ve been playing it for so long, and I think a lot of my thought processes developed because of that - planning, thinking about problem-solving, all of these aspects which I think are useful for anything that you do in your life. But it also turned out to be useful directly, because one of the first investors we talked to was a chess player himself - a pretty strong junior chess player in the US - which we found out when we were doing our background reading. We spent almost a year preparing for this meeting, and it was an important meeting because we knew that not very many people would get what we were doing, and this was one of the people we felt would. So it was important, and we couldn’t get a meeting because we had no contacts in Silicon Valley. I didn’t know anyone over there in California - nor did Shane and Mustafa - so how do you break into that world? We finally managed to get asked to a conference this billionaire was sponsoring, and we knew we would meet him at some kind of after-conference party. But the problem is that there were hundreds of people all trying to pitch him their ideas, and if you’re just another one pitching another crazy idea, it’s very unlikely you’re going to get noticed, right? So I thought instead I’d take a calculated risk and talk to him about chess - but then you have to have something interesting to say about chess that maybe he hadn’t thought of. So I used my number one fact about chess, one that even surprises grandmasters, which is, thinking about it from a game designer’s point of view: why is chess such a great game? How did it evolve into such a great game? What is it that makes it so great? And my belief is that it’s actually because of the creative tension between the bishop and the knight. The bishop and knight are basically worth the same - they’re three points each - but they have completely different powers, and that creative asymmetry, with bishops and knights being swapped into various positions, I think is what makes chess a fascinating game. So I pretty much led with that line - I don’t know how I managed to crowbar it into a drinks party, but I did - and it made him stop and think, which is exactly what I was hoping. And then he invited us back the next day to do a proper pitch of our business idea, so we actually got half an hour with him, rather than one minute over some drinks. So that worked out. You could say chess worked on two levels: the meta level of planning for that meeting, and the fact that chess is an intriguing subject in itself.

15:28

Hannah: So actually getting funding from billionaires is really simple - spend a year studying their interests, come up with a genius idea that will catch their attention, and then away you go.

Demis: [laughs] And you have to do a good pitch after that. Easy.

Hannah: Simple! Very straightforward.

One thing that you hear time and time again in this building and actually throughout this podcast series is how DeepMind wants to use artificial intelligence to solve everything. So here is the chance to ask - what do they actually mean by that? Do they want to be the ones addressing every one of the world’s problems once intelligence is cracked?

Demis: I have a working list of a dozen to a couple of dozen scientific problems that I feel are these kinds of root-node problems, and if we could crack all of those, then I feel that would transform society for the better, and open up all sorts of areas in medicine and science for us to make breakthroughs in.

Hannah: Go on - give me the list.

Demis: [laughs] I can’t -

Hannah: Some of the list?

Demis: Well, some of the list. I think a key thing we need to crack is cheap, abundant energy that is renewable and clean. Whether that’s fusion, or just way better solar panels with way better batteries, or room-temperature superconductors - any of those would solve that problem. There are a number of possible solutions - some are materials science solutions, some are physics solutions - and we should have a go at cracking all of those. And if you crack that, it would open up all sorts of other issues. I’ll give you an example: water access - access to clean water. It’s going to become increasingly important as the world’s population grows, right? In some countries it’s already becoming more valuable than oil, because there’s just so little fresh, clean water around. For a lot of communities, especially poor communities, it’s an incredible problem. But we have a solution already: desalination. 70% of the Earth is covered in water, but it’s salt water, so how do we deal with that? Well, desalination technologies exist; the problem is they use too much energy - they’re too costly. Some rich countries can do it - I think Israel gets a lot of its water like this, and some other countries do too - but for poor countries it’s too expensive. So if you solved the renewable, cheap, clean energy problem, you would automatically solve the water access problem, almost straight away, because energy is actually the issue.

17:53

Hannah: So where do you want to be when the water desalination problem is solved? Where do you fit into that story?

Demis: I hope that we will have been integral to coming up with those solutions, by doing something in fusion, say, or more likely in materials science, where we’ve come up with - using an AlphaZero-like system - a battery that is 50% more efficient, costs a tenth of the price of current batteries and lasts 10 times longer. Or we would come up with a solar panel - a photovoltaic material - that is twice as efficient at converting heat energy into electrical energy. Something like that would then unlock the possibility of putting desalination within reach of every community. Perhaps there has to be some improvement in the desalination technology itself as well, and maybe we can be involved in that. But we’re a relatively small company, and we’re going to stay relatively small, so we have to be efficient with the solutions that we work on.

18:56

Demis: This all just comes, at least from my perspective, from rationally, logically thinking out what the best thing you can do is, and looking at what’s happened so far in civilisation. Maybe you could say it’s a slightly strange way of looking at things, but I think it’s the correct way, and I think most people just don’t think about questions in the right way. Maybe that’s what I’ve done my whole life: try to ask the right questions. And I feel like this is the obvious answer.

Hannah: This is DeepMind, the podcast. An introduction to AI - one of the most fascinating fields in science today.

Demis: Like I always say to people: whatever your question, the answer is AI. Because it sort of is, in the limit, right? I mean that’s a little bit flippant, but in the limit it must be, because the answer so far - why we’re here, why we’re talking, why we’re using these amazing computers and devices - is intelligence, human intelligence, and I think it’s miraculous. And the scientific method - another miraculous thing. I think the greatest discovery of all is that the scientific method works. You know, the Enlightenment. And why should it be so? You have to question things like that: why should the universe work like that, such that the scientific method works? It could be a little bit more random, and then it would be really confusing, right? If sometimes the sun rose and sometimes it didn’t, it would be quite hard to do science. Or if sometimes you repeated the experiment with exactly the same conditions and something different happened. But this world doesn’t seem to work like that; it seems to be repeatable, it seems to be consistent, so therefore knowledge is possible. And, incredibly strangely, our brains - even though they evolved for hunting and gathering - can somehow deal with it, which is kind of miraculous in itself. So how could you not want to a) work on those questions, and b) ask why there would be limits to what that is capable of doing?

Hannah: But the ultimate goal in all of this is to create artificial general intelligence. What exactly is meant by that?

Demis: Yeah - I mean, there’s no agreed definition of artificial general intelligence, but the way that Shane and I think about it is as a system that is capable of a wide range of tasks. And if we think about human-level artificial general intelligence, then we’re talking about a system that can do pretty much the full spectrum of cognitive tasks that humans can, at least as well as humans are able to do them. That’s one reasonable definition of artificial general intelligence.

21:21

Hannah: What’s the threshold for AGI, then? How will you know when you’re done?

Demis: That’s a philosophical issue: how do we know we’re done with building AGI? Certainly for me, I’m waiting to see a lot of key moments. For example, I think a really big moment will be when an AI system comes up with a new scientific discovery of Nobel prize-winning level. That, to me, would be a big watershed moment and an important step in the capabilities of these systems - it would be capable of some kind of true creativity, in some sense. Other big points will be when it can use language and converse with us in a naturalistic way, and when it’s capable of learning abstract concepts. These are all high-level cognitive abilities that we’re nowhere near yet, and I think they will be big signposts on the way.

Hannah: When were you convinced that all of this was possible?

Demis: Well, I’ve had this in mind since my early teens. I probably read way too much sci-fi, I’m guessing. Some of the really formative things for me were Asimov’s Foundation series - interestingly, not the robot books; I haven’t really read any of his robot books - but the Foundation series was this really amazing series of sci-fi novels. And then Iain Banks’ Culture series, which is his sort of space opera about how the universe would look after humanity has built AI and co-exists with it. And then a really big scientific book for me, when I was writing Theme Park - obviously I was working on AI and building AI for the game - was Gödel, Escher, Bach by Hofstadter, which I suppose is more of a philosophy book, but it’s an incredible piece of work, tying together Gödel’s incompleteness theorems in mathematics with Escher’s drawings and Bach’s fugues, and showing that they’re all related in some way - this repeating cycle of patterns, these infinite patterns that they all exhibit - and then he tied it to consciousness and intelligence. It was really inspiring for me and made me think about these deep questions. I was discussing this with a lot of my friends - we were writing games together, doing it 24/7 - and we would discuss what the limits of AI could be if we could not just use it for what we were doing in games, but actually advance it to the same level as humans, and it just felt like the sky was the limit. Maybe another way I can put it is this: if you look around us today, at modern civilisation, it’s incredible - and what built modern civilisation? Intelligence did, right? Human intelligence built it. If you were to take us back to our hunter-gatherer days - 10, 20, 30 thousand years ago - and you were to say that one day we’re going to build Manhattan, and then regularly fly from London to Manhattan on a 747, above the clouds, what would you have said? It would be mind-boggling, right? And yet humanity’s done that, incredibly, and I don’t think we stop often enough to think how amazing that is, because the other thing about the human brain is that it’s incredibly adaptable. As soon as we do something, it becomes kind of boring and mundane and trivial, right? But I always think about that when I’m taking a transatlantic flight: how have we, with our monkey brains, managed to come up with these types of technologies? It’s unbelievable - a hundred tonnes of metal flying through the sky above the clouds, so reliably. So if you think about that, and if we now build something like AGI and enhance our own capabilities with this amazing tool, then I feel like almost anything might be possible within the laws of physics - and perhaps even beyond the laws of physics, because with AI we might discover more about the laws of physics, or some holes or flaws in our understanding of them. If you extrapolate that a few hundred more years, with these kinds of technologies like AGI around, and what we might be able to build with AGI, I think where we’ll be could be truly incredible - and I feel it will be something like the realisation of the true potential of humanity.

25:36

Hannah: But however exciting the grand ambition of AGI is, there is also a need to proceed with caution.

Demis: We’re cognisant of some of the technical questions around AGI: making sure these systems do exactly what we want, how we programme in our values, how we specify our goals. These are the theoretical and technical questions around AGI that people like Nick Bostrom are worried about. Shane leads our safety team, which works on a lot of these questions from a research and technical perspective, and I think there’s a lot more work that needs to be done there. I think that’s what we’re going to see over the next decade or two.

26:18

Hannah: I mean even if you proceed with caution, even if you act as safely as you possibly can based on the information that you have in front of you at that moment, you can’t really mitigate against bad actors, you can’t stop someone else coming in and mucking it up for everyone. Or can you?

Demis: I think there are ways of minimising that, and we’ll have to think carefully about it. At the moment, these systems are still quite nascent. They can do impressive things like play Go, but they are not yet properly general purpose - you couldn’t use them for anything very dangerous in the real world. We’ve committed that we won’t do certain applications - things like military and surveillance - which we think would be bad for society; we don’t think AI should be applied to those things, and we certainly wouldn’t do it ourselves. But if you publish some great algorithms, you have to think about the indirect impact if other people - bad actors and so on around the world - potentially use your algorithms for things that you would not have agreed with. We have to use this time now to think about what principles need to be put in place, whether that’s carefully thought-out regulation, or technical solutions to these problems: mathematical proofs, more engineering solutions. For example, one of the projects we have here, which we call the virtual brain analytics project, is inspired by what we do in neuroscience with fMRI machines - brain-scanning people while they’re doing tasks to see which parts of the brain light up, so we can better understand what the brain is doing. We should be doing the equivalent of that with our virtual brains, our artificial neural networks. So there’s behavioural and experimental understanding, and then there’s mathematical understanding of these systems, and we should do all of those. What I’m hoping is that in the next few years we will have a much better understanding of these systems than we have today, and we may even have some mathematical proofs about what you need to do if you want to limit a system in a certain way: what components do you require? We also have to think about publications and the other things we talked about earlier - whether this free exchange of information and knowledge is okay in situations where there are potentially dangerous applications. And I would take our lead here from the fact that this is not the first time this has happened. It has happened a lot in biology - with synthetic biology, designing viruses, embryology, and now with CRISPR - so biology has long-standing, multi-decade experience of scientists and regulatory bodies coming together with wider society and figuring out the rules of the road that are safe for everybody, and I think that’s the kind of coordination we’re going to have to have over the next decade.
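
To give a concrete flavour of what “scanning” a virtual brain could involve, here is a minimal sketch, assuming PyTorch: a forward hook records a hidden layer’s activations while a toy network processes inputs, and we then ask which units “light up” most, by loose analogy with fMRI. The model, layer and task are hypothetical illustrations, not DeepMind’s virtual brain analytics tooling.

```python
# A minimal sketch of "fMRI for a neural network": record which hidden units
# activate while the model processes inputs. Purely illustrative - the model,
# layer and data are toy assumptions, not DeepMind's actual tooling.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

recorded = []  # activation "scans" collected during the task

def record_activations(module, inputs, output):
    # Forward hook: called on every forward pass through the hooked layer.
    recorded.append(output.detach().clone())

# Hook the hidden ReLU layer - the analogue of choosing a brain region to scan.
handle = model[1].register_forward_hook(record_activations)

with torch.no_grad():
    for _ in range(10):            # ten "task trials" with random inputs
        model(torch.randn(1, 16))

handle.remove()

scans = torch.cat(recorded)        # shape: (10 trials, 32 hidden units)
# Which hidden units "light up" most, averaged across trials?
print(scans.abs().mean(dim=0).topk(5).indices)
```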

29:05

Hannah: Where do you see DeepMind sitting within the broader tech industry in terms of ethics and safety? Are people following your lead?

Demis: We’re only one company, but we are probably the biggest group anywhere in the world, and we are definitely one of the world leaders, and acknowledged as such. That gives us a powerful platform to set an example. We can create initiatives - we helped create the Partnership on AI, which is a cross-industry initiative to talk about some of these issues in products. We’ve also sponsored a lot of academic groups - we’ve given them money for post-docs and other things, at arm’s length, to study this - and we have close contacts with a lot of the institutes that work on these things, like the Future of Humanity Institute, which is just down the road in Oxford; we talk with them all the time. And we ourselves, as leaders at DeepMind, have always talked about ethics. I’ve always had that in mind, and the reason we have, from the beginning of DeepMind, is that we plan for success: if we’re planning all these ambitious things for AI to do, then we also need to think carefully about what that would mean. So we’re doing a lot of things behind the scenes, and I’m hoping that by setting an example we will influence everybody else, given that we’re in the lead technologically - for the moment. That’s also why I think it’s very important that we keep a technical lead, because why would anyone listen to what you have to say on the ethics front if you’re not one of the leaders technologically? You could just be any group saying that. So I think to have a seat at the table - whether that’s a country, a company, or even an individual - for credibility purposes you have to be close to the forefront of the technical side itself.

30:41

Hannah: Where does the public fit into all of this? Do you need them to trust you as a technology company? Do they need to be on board at all - or can you kind of do it without them?

Demis: No, it’s really critical that this is discussed at large with society and the general public, and I think they need to engage. The problem is that a lot of these technical things are very complex - you need PhDs to understand them, and so on. But some of the fundamentals are quite easy to understand, and what you really need to understand are the consequences. Then, as I said before, society has to decide how these things get used and how the benefits accrue to different people in society, and make sure that is fair. I think the key thing is to engage with the public now and try to educate them about some of the complexities of the technology, but also its implications. That’s partly what things like this podcast series are about, but we’ve also done a lot of public engagement with the Royal Society - the ‘You and AI’ series that we did last year - and I think a lot more of that has to be done.

31:42

Hannah: Are you optimistic about the future?

Demis: Yeah, I’m very optimistic about the future, but the reason I’m optimistic is that I think AI is coming down the road, and I feel that if we build it in the right way, and deploy it in the right way for the benefit of everyone, it’s going to be the most amazing, transformative technology that humanity’s ever invented. I would be quite pessimistic about some of the problems we are facing as a society - like climate change and sustainability, or inequality; I think these are going to get exacerbated in the next few years - and I’d be pessimistic about our ability to solve them if there wasn’t something like AI on the way in the near future. To solve some of these big challenges society has, we either need an exponential improvement in human behaviour - more cooperation, less selfishness, more collaboration - or we need an exponential improvement in technology. Unfortunately, the way politics is going right now, I don’t really see much evidence of the former, and we don’t seem to be able to get our act together globally to do something about climate anywhere near fast enough to deal with the problem. So I think we have to double down on a technical solution. At least, we should try both, but I think we need some technology bets.

33:03

Hannah: It’s hard not to be inspired by Demis’s insatiable thirst for knowledge, and to find yourself drawn in by his positive view of the future. Because, if I’m honest, I’m normally quite sceptical. I don’t have a lot of time for overly optimistic marketing talk, and one of my favourite hobbies is rolling my eyes at self-titled futurists at tech conferences. But over the last 12 months that I’ve spent hanging around here, I’ve come to the conclusion that there really is something quite special going on at the cutting edge of AI. After 50 years of quite slow progress, it really feels as though the field is finally beginning to deliver. Problems that everyone thought were completely out of reach only a few short years ago are tumbling one by one, and the science is moving forward at a blistering pace, both here and at research labs all around the world. And the questions that people are working on are profound and important, including some of the potential pitfalls and ethical concerns of this kind of technology. So with all of that in mind, I think I’m going to join Demis in being optimistic about the future, and about the potential of AI to be a real force for good. But don’t just take my word for it - as we’ve said throughout this podcast, we hope to inspire you on your own AI journey, maybe even by finding the answers to some of the biggest questions there are.

34:37

Demis: I’m an entrepreneur second; I’m a scientist first. It’s just that this was the right vehicle to make this happen, and it seems to have been borne out - but if I could have made it happen in academia, I would have just done it in academia; it just wasn’t possible under the constraints academia has. That’s also why we sold the company to Google: it was about what accelerates the mission and the science. Ultimately, what I want to do with my life is understand what’s going on here in the universe - both inside here, in the brain, and externally, out there in the universe. I guess what’s always driven me is this deep desire to understand what seem to me incredibly interesting and fascinating mysteries going on all around us, and I don’t really understand why more people don’t think about that all the time. I can barely sleep because I’m just fascinated, and also troubled, by the things around us that we seemingly don’t understand - all the big questions: the meaning of life, how the universe started, what consciousness is. These questions feel like a blaring klaxon in my mind that I would like to understand, and my attempt at doing that is to build AI first.

35:51

Hannah: If you would like to find out more about some of the things that we’ve talked about in this episode, or explore the world of AI research beyond DeepMind, you’ll find plenty of useful links in the show notes for each episode. And if there are stories or resources that you think other listeners would find helpful, then let us know - you can message us on Twitter, or email the team at podcast@deepmind.com.

You can also use that address to send us your questions or feedback on the series.

DeepMind: The Podcast has been a Whistledown production. Binaural sound recordings were by Lucinda Mason-Brown, and the music for this series was specially composed by Eleni Shaw. Producers were Amy Racs and Dan Hardoon, the Senior Producer was Louisa Field, and the series editor was David Prest.

I’m Hannah Fry, thank you for listening.