Where are we in artificial intelligence right now, what does the future look like, and how will the industry likely reach superintelligence? Will it help, hinder or threaten the survival of the human species? Caitlin Gardner, a South Australian student researching this field, recently interviewed Dr. John Flackett, who holds a PhD in AI, on this very topic.
Dev Diner’s editorial artist’s interpretation of the interview
Caitlin’s research focuses specifically on Artificial Superintelligence and its direct effect on the survival of the human species, attempting to answer the question: To what extent would the development of Artificial Superintelligence (ASI) result in the extinction of the human species? Here are Dr. John Flackett’s insights:
“In AI, at the moment, we’re really doing very well. We’ve made great strides in just the last few years, but it’s really important to understand that we’re still in the ‘narrow’ field of AI. We have computers that can play chess really well. Recently, a computer beat the world ‘Go’ champion, a very difficult game for a computer to learn how to play. Chess is comparatively easy (the rules suit searching through a lot of possible moves as quickly as possible and finding the best one), but ‘Go’ is quite difficult, which is why that was such a big thing. The thing about it is that if you asked that computer (the one that won the ‘Go’ competition) to tell you how to make a cup of tea, it would have no idea.”
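To make the chess point concrete: “searching a lot of possible moves and finding the best one” is what the classic minimax algorithm does. Below is a minimal, illustrative sketch for noughts and crosses rather than chess — it’s our own example, not code from the interview. Real chess engines add pruning and evaluation functions on top of this idea, and Go’s enormous branching factor defeats this brute-force approach entirely, which is why beating the Go champion was such a milestone.

```python
# A toy minimax search for noughts and crosses (tic-tac-toe).
# The board is a 9-character string; ' ' means an empty square.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Search every possible continuation and return (score, best move).

    Scores are from X's point of view: +1 = X wins, -1 = O wins, 0 = draw.
    """
    win = winner(board)
    if win == 'X':
        return 1, None
    if win == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None  # board full: draw

    best_move = None
    best_score = -2 if player == 'X' else 2
    for i, cell in enumerate(board):
        if cell != ' ':
            continue
        child = board[:i] + player + board[i + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, i
    return best_score, best_move

if __name__ == '__main__':
    # Searching the whole game from an empty board takes a few seconds.
    score, move = minimax(' ' * 9, 'X')
    print(score, move)  # prints "0 0": with perfect play the game is a draw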
“At the moment we’re in a situation where we’re building AI with a very narrow focus; it can only do one thing. So we don’t need to be very scared of it.”
“The next step up from Narrow Intelligence is what we call General AI, which is around about human-level intelligence. It’s about trying to get a computer to do lots of things fairly well. So that same computer that could beat people at chess could also [know how to] make a cup of tea, run down a flight of stairs, pick up objects, or find its way somewhere. That’s kind of what everyone in AI is really working towards at the moment.
A superintelligent computer, however, would reach our level of intelligence and very quickly surpass everything that we could do as humans.”
“If we can build a machine that has human-level intelligence, that’s known as strong AI. Most AI falls into two categories. Weak AI is about creating AI to do really cool stuff; driverless cars are a form of weak AI. It’s very good at one thing, but it’s not conscious and it can’t make other decisions. Strong AI is where we start getting into the stage of consciousness. I’m a proponent of strong AI; I think we will get to that point. We will build machines which will probably be conscious; however, what’s interesting is that we won’t really know how that came about. It will evolve as part of training the machine and allowing the machine to learn for itself, just in the way that we did through evolution, but it will happen a lot faster than the billions of years that we’ve had.”
“An analogy I always use is that we only really learnt to fly when we understood aerodynamics, lift, wings and airflow. Copying birds never really worked; it was only when we understood the physics that we could create [good] planes. Planes don’t look exactly like birds; although they work in a [fundamentally] similar way, they’re completely different. I think with AI, we need to fully understand the brain in order to be able to build something that is like the brain. We will use neural networks to get very clever machines to do really cool stuff, and in fact, do different stuff as well.
What’s interesting about neural networks is that they are very computationally expensive. In the late ’90s, some of the experiments I was doing would maybe use 3000 bits of data and it would take three weeks to learn. The reason people are using them so much now (you hear this term ‘deep learning’) is because we have all this computational power in the cloud, with huge data servers and much faster processors, so that same work can be done in a matter of minutes. We’ve gone from three weeks of computational learning to about six minutes, which is very impressive. [A rough code sketch of this kind of training appears after the quote.]
However, I think we will start looking more towards biology and implants, and if we get ASI, it will be because we’re augmenting ourselves.”
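For readers curious what the training Dr. Flackett describes actually involves, here is a minimal sketch of a tiny neural network trained by gradient descent, using only NumPy. The dataset (the XOR function) and the layer sizes are purely illustrative and nothing like his late-’90s experiments; the point is simply that this same underlying idea, scaled up enormously and run on modern cloud hardware with frameworks such as TensorFlow or PyTorch, is what ‘deep learning’ refers to.

```python
# A tiny feed-forward neural network trained with gradient descent.
# Everything here (dataset, layer sizes, learning rate) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the XOR function, four examples of two inputs each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: gradients of the squared error.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update of weights and biases.
    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_h
    b1 -= learning_rate * d_h.sum(axis=0, keepdims=True)

# After training, the outputs should be close to [[0], [1], [1], [0]].
print(np.round(out, 2))
```

On a laptop this trains in well under a second; the same loop with millions of parameters and examples is where the weeks-to-minutes difference Dr. Flackett mentions comes from.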
“It’s probably our only chance of survival. It’s really a question of whether we’re going to become immortal, because essentially that’s what augmenting ourselves with this technology will mean. We’ll be able to replace bits of our bodies that wear out, enhance bits of our bodies, but also hopefully slow down the process of aging. We’ll also get to a stage where our memories are actually uploaded and we can change bodies. When a body wears out completely, you take the implants out and plug them into another body, a clone. I think that’s probably where the future lies.”
“If we look down the other road, where we build a computer or a set of computers that become superintelligent, much more intelligent than us, then by its very definition that machine is going to be able to build other machines that are just as clever, probably more intelligent. That could quickly get out of control, and I’m not sure how you could stop bad things happening if that were the case.”
“If we look at it from the possibility of a biologically enhanced human, then I don’t think we’d have to worry about that too much, because we would just be enhancing our own intelligence, meaning ethics would remain intact within the biological system. Unless it takes over, which is another issue.
If we look down the purely machine-based route, that’s really what we’re starting to think about now. What kind of goals do we put into the system to make sure it doesn’t wipe us out if it becomes superintelligent? That’s a very difficult question, because if you’re building very intelligent AI that’s more intelligent than us, and you tell it its goal is to look after the world, or make the world a better place to live in, then obviously the best thing it can do is wipe out humans, because we just destroy the world. Even if we do build in a rule that says ‘don’t cause any harm to humans’ as its number one goal, that’s still very difficult, because what happens if you’re dealing with a superintelligence that is sentient, has consciousness and wants to live? If we then decide we’re not happy with it and we want to switch it off, will it ignore the rule that we’ve given it to not harm humans, in order to protect itself? There are some really big issues around that, and of course, if it’s a superintelligence and you’ve got a switch, a theoretical button you could use to turn it off, then surely it’s going to be able to override that and outsmart us as a race. At the moment I don’t think anyone has the answers to that, and it is a big ethical question. And the question then is: do we build it?”
“It really does worry me. The first paper I ever wrote on AI was about the ethical side of the argument. Especially now, when the tools to build AI are so readily available to everyone, not just to those pushing the boundaries of AI, there are more and more people getting into it and fewer people looking at that side of things. I think we really need to have the conversation, because what’s the reference point at the moment?
Apart from a few AI philosophers who have been in the field a long time, it’s really only films talking about how scary it is. Most of this science fiction is becoming science fact very quickly. There is obviously a lot of ethical talk around cloning: can we change things, switch things on and off in the foetus, and should we do that?”
“In the medical field there are a lot of committees and ethical considerations, but not so much in AI. I think we really do need to talk about it a lot more.”
“It really depends what form this takes. It may not even be one of the ways I’ve outlined. I could see ASI growing from a ‘baby’ computer into a superintelligent computer through years of training and exposing it to lots of different things. That computer may have a lot of empathy with us because we’ve brought it up, like a baby. That may well work as a way of controlling superintelligence, because there’s a bond there.
If we look at it from the area of biological enhancements, there are a lot of benefits there and we’re already seeing some of that. For example, with paraplegics, we’re now able to tap into some of the neural signals from the brain to allow people to walk again. We only have to take that to the next step. It could give us brilliant eyesight or fantastic hearing, or just regenerate parts of our bodies that can be regenerated. There are huge benefits from that point of view.
From developing a pure computing ASI, there would also be huge benefits. Just look at everyone who has a mobile phone and all the things you could do with that: an ASI able to connect everything up could shut things down, move people away from danger areas, predict earthquakes and other natural disasters, and get people out of there. There’s a huge benefit to having that kind of superintelligence and that kind of predictive functionality, which we would never see ourselves because we’re just not smart enough to see that big picture. If we get to that stage, that superintelligence could develop technology that we haven’t even thought about. We may be able to bend space and time, achieve time travel and get to other planets. It may well be that we’re just not smart enough to figure that out, but the ASI is.”
“I think education is the most important thing, and that’s changing too. Even three years ago, when I would tell people I work in AI, they would just assume that I build robots. That’s all changing now and we can have this conversation. I think more people being interested and learning to ask these kinds of questions is a big step forward.
One of the biggest dangers, of course, is the way that governments, and obviously the military, will use this. Narrow AI, which is the kind of thing we already have, is okay in a way, because you might use it to identify enemy troops in a tree line. When you have general intelligence or superintelligence on the battlefield, it becomes very scary, because its goals will be to search and destroy. Also, governments are probably the last people who will want to talk about it to the general public, because they have their own agenda. I see that as a danger and a large hurdle to get over, but a lot of people are very aware of it.”
“The tools and techniques being used at the moment to identify photos and process natural language, like Google and Siri, and even to plan routes in Google Maps, all rely on underlying techniques that have been around for 30 to 40 years. There’s not really anything new in terms of techniques; it’s just pure processing power that allows us to train these systems a lot quicker with a lot more data. I think we have a long way to go in developing new techniques, which is what we need in order to reach General AI. We’re probably 20 years away from General Intelligence.
That includes self-driving cars, because so many safeguards are going to be written into them that people will figure out that if you want to cross the road, you can probably just step out into it and the cars will stop. In order for this to work, everything will have to change: road rules, everything. We’re a long way even from having streets filled with self-driving cars.”
“To get to superintelligence, we’re probably 100 years off. Again, there’s an interesting question: if you ask people whether they’d like to be augmented, to have parts of their bodies replaced by machinery, I think most people will say, ‘no, I don’t want that’, but it’s fascinating that people don’t think that way about hip or knee replacements, which are essentially the same thing. When AI does get smarter than us, it becomes a question of whether we’re still important as part of its being and its existence, and whether it willingly wants to help us and enhance everything we want to do. You can look at it either way. You can say ‘this will be the end of us’, or you can look at it the other way and see that there’ll be no more wars, there’ll be world peace, there’ll be enough food for everyone and you won’t have to work. It could be a utopian world.”
“For centuries, what has defined us has been that constant search for knowledge. Just because we develop ASI, that only means it knows more than us; it doesn’t mean it knows everything, and it doesn’t become ‘God’. Everything it has learnt has come from us; it only needs to be smarter than us, not smarter than anything you’ll find anywhere. I don’t think we’ll lose the drive we have to interact with each other, and I think ASI will only give us more things to discover. I’d look at the future as less like Terminator and more like Star Trek, as that’s probably a lot more accurate.”
I’d like to give a big thank you to Caitlin Gardner for providing her interview for us to feature in Dev Diner, and to Dr. John Flackett, a long-time friend of Dev Diner, for sharing these insights!
Know other emerging tech enthusiasts who might want to read this too? Please like and share this post with them!