AI won't get smarter than humans for at least another 1,000 years - technology futurist
The era of superintelligence may be just round the corner, raising fears that we could one day end up being the pets of our super-smart robots. What can we do to avoid it? We talked to Pablos Holman, inventor, hacker and technology futurist.
Sophie Shevardnadze: Pablos Holman, welcome to the show. It's great to have you on our programme. I'll just start off with something that you've said many times: that robots are coming for our jobs. So will this happen in my lifetime? Should I be afraid that some self-perfecting thing will be sitting where I'm sitting right now, doing my job better than I do?
Pablos Holman: Oh, you might not get so lucky. But yes, a lot of jobs, I think, robots are going to definitely take. I mean, that's pretty clear. But it's also been happening for your whole lifetime already. So it's not really new - it's just that the robots are getting better at doing things that they weren't able to do before. But we've been experiencing that for the last century.
SS: The core of the question is: is there going to be a time in the near future - I'm saying 10, maybe 20 years - where pretty much everything that we do will be done better by the robots and we will be jobless?
PH: No, I don't think we're anywhere close to that, because there are different kinds of things that humans are good for and different kinds of things that robots are good for. And so far, we've been good at making robots do very clearly structured, logical progressions. You can have a robot do the same thing over and over and over again. What they weren't good at is doing things where the logical progression was super complicated or difficult to define. Now that we have machine learning, we're able to have them do those kinds of things. But there are so many other things that robots aren't going to do. They're still not creative at all, and they're still not good at taking care of humans.
SS: Yes, talking about creative, a lot of theorists and scientists that I spoke to, like yourself, are saying that robots won't ever be able to replace humans in creative jobs because the work requires unpredictable thinking, inventing mindsets… Won't the AI be able to eventually produce creative stuff as well? Artificial intelligence can already write poetry, music.
PH: Sure. What I think is important to understand is that it would be a poor idea to say they'll never be able to do any particular thing. We don't know what they'll eventually be able to do. But what we can see is that there are things that they can do now, and it's easy to extrapolate that they could do those kinds of things faster and better and cheaper. What is irresponsible, I think, is to make these logical leaps that we're most certainly going to get these miraculous breakthroughs - and we need miraculous breakthroughs for robots to actually be creative. I mean, a lot of the things, like writing poetry, that you think of as creative are not inherently creative. Writing poetry is a craft, and a robot can learn to do a craft. That's what we see them doing. But the creativity, figuring out how to do something new the first time… When you have a system that runs on data and random numbers, that gives you a different thing - it sometimes might seem creative, might seem like it's just as good, but it's a different process. And so I don't think that it's reasonable to say that they're the same, or that this one's better or going to replace humans. And I think to get there, we need real breakthroughs. We need to invent something, and that might take a thousand years.
SS: OK, let's say that instead of creative we say craft. And then that also means that it's not only McDonald's jobs being taken over by robots that we're talking about. Artificial intelligence could be doing the job of a lawyer, of an accountant, even of a diagnosing doctor. Will people have to compete, to fight robots for livelihoods? Is this conflict inevitable?
PH: Well, maybe they compete in the same way that buggy drivers had to find something else to do because we have cars now. But in the sense that robots are taking over the job of a lawyer - it's probably because what the lawyer is doing is repetitive and monotonous and boring and low value. And we're actually going to be better off having a machine do it, because the machine is doing a better job of it.
And maybe it's true that we have too many lawyers. It's entirely possible: humans haven't always had this many lawyers. No offense to lawyers. But if you've got a kid getting into college right now, it might be good to advise them not to become a lawyer. Most of the lawyers we have probably are going to get to finish out their careers, because it takes a while to deploy these technologies. But yes, I think the progression has to be looked at on longer time horizons. And if you look at a horizon of more than a decade, going back or going forward, you can see that a lot of these jobs were made obsolete by machines a long time ago, and we've replaced them with new jobs. And that's what we want to do: free up humans to do what they're uniquely good at. In the last five or ten years, because of machine learning, we got to a point where we can really free up a lot of humans that we didn't have a way of freeing up before. And hopefully they can find something better to do than…
SS: Yes. Apart from freeing up humans, maybe the extra wealth created by artificial intelligence and machines will be used to just pay people money, like a universal basic income, so that they can finally enjoy life and not be forced to work?
PH: Yes, it's possible. I mean, in my experience anyway, not working turns out to be a pretty poor choice for a lot of humans. You know, it doesn't make them happy. They don't get a lot of fulfillment out of that. And so it's not that we want to put humans out of work. When I imagine freeing up humans, I certainly hope they don't all go to the beach. What I'm hoping is that they'll work on the things that the robots can't do yet. We're choosing to have humans do these things you describe — diagnostics and reading legal documents and these kinds of things — when what we really would probably rather have them do is teach kids or become nurses and take care of people who can't take care of themselves. These are not problems with the technologies, these are problems with humans choosing what to do with the technology. And right now, I feel like we're choosing a lot of the wrong things. And so if you get that opportunity because a robot is doing the work for you, you get some free time, you get your attention back, you get that freedom, let's spend some of it watching Netflix. But then let's spend some of it taking care of other humans. I mean, that's what the potential here really is.
SS: Then there is this other thing with technology: yes, it has already enabled our economy to grow like crazy for the past couple of decades. Yet the winners are fewer and fewer, and most of the people aren't really getting any richer. I mean, the gap is growing as we speak. And while technology will supposedly improve lives, the rich will keep getting richer and the poor will keep becoming poorer.
PH: Well, the poor keep getting richer, too. I think that there are some misunderstandings about how this works, right? Like when you calculate, say, the poverty level, the poverty level in the United States is about 31 times higher than the median poverty level for the world. That means a poor person in the US has 31 times the wealth of a poor person on average in the world. That's pretty significant. Every day, hundreds of thousands of people are coming out of extreme poverty. That's extraordinary progress. Now, we're not done. We have a lot of work to do. But humanity as a whole is on a rampage, doing better and better and better all the time. Now, those gaps for individuals - it can be hard when you feel them. I mean, even I, I'm doing pretty well, but I work with people who are a lot richer than I'll ever be. And what you have to understand is that those gaps are about how you feel. You're actually still doing better than you were, than your ancestors were. And I think that there are maybe measures we could take to reduce those gaps, especially in environments where people aren't getting a fair chance to participate in the economy. There's really a lot of work there for humans to do to change the way their societies work, to change the way civilizations work, so that everybody gets a fair chance to contribute, a fair chance to benefit, and that's sort of equally distributed. But those aren't technology problems. Those are actually problems with human decision-making. Those are problems with policy making. Those are problems with how we choose to set up and structure business and economies and those things. And so we have to go figure out how to improve on them. The wealth gap in the United States… I know less about other countries. So in the United States, where I know more, we do have a lot of people who are super wealthy because of technology - or working in the tech industry, I should say.
And if you look at what they're doing with their wealth - most of them, when you have more money than you know what to do with, a lot of it gets reinvested in companies to try and grow, and a lot of it becomes philanthropic wealth. You know, I spent some time working on projects for Bill Gates, and those are philanthropic in nature. We're taking the money that Bill earned and trying to use it to solve global health problems. So I think those kinds of things sound worse than they really are a lot of the time. And maybe it's not reasonable that someone should be able to get so wealthy. But I wouldn't say that's a problem of technology or the tech industry. That's a problem that comes from how we've set up incentives in our economy in the US.
SS: Compared to tech advances of the past, this time around it is super fast. This isn't the steam engine or the automobile, which arrived over one or two generations' lifespans, right? Will our society even be able to adapt to the change as quickly as it happens? Because right now technology is moving at least twice as fast as generations turn over.
PH: I think it's easier to adapt over a generation. So if you just look at the way your kids handle technology, they're pretty good at it. They internalise it really fast. They get something new and in two days it's normal to them. So in some sense, I think, a lot of these technologies in one generation - it's enough for people to accommodate them.
SS: What do you think? Are we ready right now to deal with a computer that surpasses the human mind?
PH: Well, we've had computers that surpassed the human mind for our whole lives. A calculator has always been better at math than you and me, right? That's what we're talking about here. And now computers are much better at memorizing phone numbers, too.
SS: But math is a very particular little thing that, you know, not every human can be quick at. If you're a mathematician or you're in that field, you're good at it.
PH: Even a mathematician.
SS: Yes, but then here we're talking about all-encompassing human mind. We're not just talking about one or two or three fields that are technical.
PH: I think it's not true that we're close to creating computers that outperform the all-encompassing human mind. We really don't know how brains work. The one thing we do know is that they don't work the way computers do. And so, trying to extrapolate that computers get faster every year on an exponential curve and at this point they'll be smarter than humans - it's like saying, "Cars get faster every year and at this point you'll be able to drive to Australia". It's just not how they work. That's not how brains work. And so even though the computers do get better and they can do more and more and more stuff, they're not on a course that replaces brains - in my mind, I don't see that happening, and I think it's kind of disingenuous to frame it that way. It's making people terrified and there's no reason for that. These are just better and better tools that we can use our brains to put to use however we want.
SS: Stephen Hawking said that artificial intelligence is very good at accomplishing its goals, and if its goals aren't aligned with ours, then we're in trouble. How do we make sure that artificial intelligence we create sticks to human values? And if we try to humanise it won’t that limit its cognitive superpowers?
PH: I think it's like every other tool: you have to express your values in how it gets used, right? And so when artificial intelligence is doing the wrong thing, it's because we didn't tell it what we cared about and what we wanted it to do. These are still tools. The same hammer I can use to smash someone's head, I can use to build them a home. But that's a human choosing what to do with it. That's me expressing my values with a tool. And that's what we need to do with artificial intelligence as well. And I think those warnings... I don't appreciate them so much, because I think they're getting people more scared than they need to be. But fundamentally, they're true. The truth is, when we build these tools, we need to express to them what we want them to care about.
SS: Yes. There are two camps. One camp talks like you. And then there is the other camp... Because you are saying that being afraid of artificial intelligence is like being afraid of the steam engine back in the day, or a tractor or something. And it's true that generally people are suspicious of technology. But hey, there are, with AI, voices of, I would say, smart men like Elon Musk or Stephen Hawking or even Bill Gates. And they're saying that we need to be alert and there is reason to be afraid of new technologies unless we start to somehow lead them onto the right path.
PH: Right. Those are all really smart guys; I have a massive amount of respect for all of them. None of them work in artificial intelligence - you might want to take note of that. All of them read Superintelligence, which is a terrifying dystopian book about how computers are going to get smarter than humans and make us all obsolete. I don't believe it. We have some agency in this, and we can choose how to use these tools to build a better future instead of a worse one. That's just one version of the story, right? That's just one story about how the whole world could go terribly wrong in the future.
SS: OK, so let's talk about you and your vision and your inventions. You invent solutions to global issues, and you come up with some amazingly ambitious projects, like hurricane suppression by cooling off the surface of the ocean, or tethering a giant helium balloon 14 miles above the earth to pump chemicals into the stratosphere and help hold back global warming. But considering that ideas like this are way too expensive, are they basically nothing more than a big sketch of the imagination? I mean, an exercise in daring?
PH: Well, actually, yes. Both of those inventions are things that I worked on at the Intellectual Ventures Lab. Those are called geoengineering inventions, and they are ways to ameliorate some of the effects of global warming - they're like band-aids for the environment. Both of them were explicitly worked on because they were a way to show that we could do cost-effective geoengineering projects. So the hurricane sink is just a giant tube made out of recycled truck tires that you stick in the ocean. That's all there is to it. But it causes the hot surface water to get pumped down and mixed with the cold water below. And that gets your Cat. 5 hurricanes down to Cat. 4 or Cat. 3. And so the cost of it is orders of magnitude less than the cost of the damage from a hurricane. So these are technologies that we could do - they are very simple - that could buy us some time. They're not solutions; they don't solve the problems of global warming. What they do is buy us some time to transition off carbon-emitting fuels and to transition to better technology for energy. But that's all they're good for. And what we do is try to show that they're possible. And it's going to take humans 10 or 20 years to figure out if that's the best idea we have.
SS: So the mode of Internet life we actually live in today's world is that we give up our privacy completely for the sake of using a free product like Gmail or Instagram. And the founder and leader of the Free Software movement, Richard Stallman, once told me that we have to sacrifice our comfort and convenience a little bit for the sake of our freedom and privacy. But if I have to give up free Internet products, that inconvenience in itself is un-freedom to me. So will it ever be possible for us to enjoy progress with privacy, without having to bring our daily comfort to the altar?
PH: Yes, and that's exactly one of the challenges right now: we need to take control of these things ourselves and do them ourselves. It turns out, in the 90s we had online services like AOL - America Online - which we used in the US for email and things. A centralised service, kind of like Facebook, where they controlled who could publish, who could transact, who you could talk to - all those kinds of things. And what happened is the Internet, which is a decentralised protocol - TCP/IP is a decentralised protocol - managed to become vastly more useful, vastly more ubiquitous. And it took over and practically put AOL out of business. The reason is no one can control it. We're all free to do what we want. All you've got to do is connect to the Internet, and I can publish, I can subscribe, I can transact, and you can do whatever you want - I can't stop you. That's the beauty of decentralised protocols. Now, because Facebook and Google gave us a lot of free stuff, we ran into a big walled garden. So now we're in that modality again, where we gave up our privacy and our freedom for all those free toys. But the truth is there's not a lot of real technology behind most of these things. Instagram isn't that hard to build. We could build our own decentralised Instagram that has no ads, where we control the knobs that decide what we get to see. There's no reason we can't build that and choose to use it. It's not expensive to do. It's just that people are too busy hanging out on Instagram and not busy enough building a replacement. But the technologies we use are all very democratised - the same software, the same toolkits, the same servers, all the stuff that I use for the projects I work on. Kids in elementary school in Latin America and Africa are getting access to that more and more. A lot of that stuff - the things that Stallman is referring to - those are open-source technologies. They're made to be free.
And everybody gets access to them. And that's what Google built their stuff on. That's what Facebook built their stuff on - open source technologies. So we can go build alternatives anytime we want. And I think that that's part of the job for humans right now.
SS: I mean, I think it's easier said for you than done for me, who has no idea how to build that technology. So, yes, that still remains somewhat of a problem for someone like me, who has no clue how to start building their own Instagram or Facebook.
PH: Well, actually, there are dozens of alternatives right now that have already been built. They don't have the user base of Instagram or Facebook, but you can go sign up for them and use them for free. You don't need to know anything about coding - other people built those for you. They're not in the news. They're not making billions of dollars. They're not making anybody rich. But they do the same thing, with no ads, for free. And that's available. People could choose to use them. People are choosing Instagram. And I think that's the thing we need to understand: there is personal responsibility here. It's not that Instagram is being pushed on you or Facebook is being pushed on you. You can choose something else. There are plenty of alternatives. They're just as good, they're free, and they don't take your privacy. But people are still choosing Instagram.
SS: I see your point. Pablos, unfortunately, that's all the time we have for today. It was really interesting talking to you. I wish we had a little more time to dig deeper into your brain.
PH: I feel so antagonistic today.
SS: It's good. It makes for a good interview. So, anyway, thanks a lot for your thoughts and your insight, and good luck with everything that you're doing. And hopefully we'll talk sometime soon. We were talking to Pablos Holman, an inventor, hacker and futurist, discussing the solutions that technology can offer to the global problems of today and what should be done so that technology doesn't become a problem in itself.