For decades, Hollywood movies have taught us two things about robots: First, they’ll someday walk on two feet, like people; and second, most of them will eventually turn on us! But in real life, humanoid robots like the ones we see in the movies have always seemed to be 20 years away. Well, don’t look now … Last weekend, robots faced off in a competition run by DARPA, the Pentagon’s advanced-research agency. You may have heard of some of its previous projects: self-driving cars, GPS, and a little thing called the Internet.
DARPA offered $3.5 million in prizes for robots that can navigate a disaster-rescue scenario. With only intermittent remote control by a human operator, the robots have to perform tasks like driving, turning off a valve, drilling out a wall, crossing a pile of rubble, and climbing stairs. “It is an extraordinary thing, isn’t it?” said Gill Pratt, who heads the DARPA Robotics Challenge. “When the robot does well and it scores a point, everyone cheers as if they’re the ones that are getting the points. And then of course when the robot teeters and then suddenly falls, everybody goes, ‘Oh,’ and they sympathize with it.”
Pratt described the competing robots as “Model Ts.” “I think that in coming years, first of all, the most important thing is reliability will go up, prices will go down, and we’ll find more and more reasons for [cheering].” Yes, the crowd was cheering — the robot walked. As evidenced by the number of falling robots, just walking is a major accomplishment. “We’re still a long way from science fiction,” said Russ Tedrake, a Massachusetts Institute of Technology professor who led MIT’s robot in the DARPA competition. His robot was spotted falling out of a car — Tedrake said it tried to drive while it was getting out of the vehicle.
But even with a broken right arm, the robot finished the course one-handed, earning a respectable 7 out of 8 points. “This competition, a few similar competitions have convinced the world that robots are capable of doing real things in the real world,” said Tedrake. “That has led to massive new investments from Google, Apple, Uber, Qualcomm. And that’s gonna mean an acceleration of technology. Things are gonna go really fast from here on out.” Alex Garland would agree. He’s the writer-director of “Ex Machina,” a movie that considers what technology might be like just a little bit in the future.
The film, about a thinking robot (played by Alicia Vikander), is just one example of such creations in recent popular culture. When asked to explain the resurgence of interest in robots, Garland said, “I think it may not actually be to do with AI. In some respects I think it’s more to do with technology, and a fear of technology. We all have cell phones and we all have tablets and laptops and computers. And we don’t really understand how these things work. But they seem to understand how we work. And that makes us feel uneasy.”
In most movies, said Pogue, “where there is a very smart robot, like yours, [they] turn out to be menacing or threatening in some way, if not pure evil.” “Actually, in the case of this film, ‘Ex Machina,’ I don’t think the robot is evil,” said Garland. “What I think is that the robot is like us. It’s sentient. And that robot has been unreasonably imprisoned and — like us — wants to get out of that prison.
“We have a bad history, humans, with not respecting sentients. And we don’t want to keep making the same kinds of mistakes.” Whether self-aware machines will ever exist is a question researchers debate endlessly. But getting there will require more than advances in robotics. It will also require breakthroughs in AI (artificial intelligence). We’ll have to teach machines how to think.
Most people think of the smartphone app Siri as a remarkable, human-like intelligence. If we ask it, “When’s the next Cleveland Indians game?” Siri will reply, “The Indians-White Sox game starts at 5:10 p.m.” If anyone knows how close we are to being able to talk to our machines, it’s Dag Kittlaus. He and his team created Siri, Apple’s personal assistant. (Siri, by the way, also began life as a DARPA project.)
Kittlaus explained the process by which Siri works: “The first thing that happens is to take the sounds that you said and turn them into words. So that’s the first step. And then the words need to be understood. So there’s an artificial intelligence inside that understands the context.” “Sometimes I’ve noticed Siri seems to have a sense of humor,” said Pogue.
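The two-stage process Kittlaus describes — sounds become words, then words become meaning — can be sketched in miniature. This is purely an illustration, not Siri’s actual code: the speech-recognition step is stubbed out, and the “understanding” step is a few keyword rules standing in for a real language model.

```python
def transcribe(audio):
    """Stage 1: turn sounds into words.
    Stubbed for illustration: pretend the audio decoded to this sentence."""
    return "when is the next cleveland indians game"

def understand(words):
    """Stage 2: turn words into a structured intent.
    A real assistant uses statistical models; simple rules stand in here."""
    if "next" in words and "game" in words:
        # Match the words against a small list of known team names.
        for team in ("cleveland indians", "white sox"):
            if team in words:
                return {"intent": "next_game", "team": team}
    return {"intent": "unknown"}

query = understand(transcribe(b"...raw audio bytes..."))
print(query)  # {'intent': 'next_game', 'team': 'cleveland indians'}
```

Once the assistant holds a structured intent like this, answering is a lookup — which is why, as the article notes, the hard part is the understanding, not the reply.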
Q: “Hey, Siri, what’s the best smartphone?”
SIRI: “Wait … there are other phones?”
“Well, we anticipated, originally, that people were gonna ask funny questions,” said Kittlaus. “And we spent quite a bit of time preparing Siri to be funny and have a little bit of a dry wit.” And so, as impressive as Siri is, she’s not actually thinking; everything she says was written in advance by a programmer. But after the Siri team left Apple, they began working on something much, much more ambitious, with much more intelligence. It’s called Viv.
With Viv, you would be able to say something like, “Find me a great place to go, take my kids, to the Caribbean in the last week of February.” In a split second, Viv would consult several different services on the Web — stores, travel agencies, databases — to execute much more complicated commands. According to Kittlaus, “The system would know who your kids are, the last five trips that you took, and approximately how much budget you’d like to spend on those types of vacations. It would begin a dialogue.”
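The orchestration Kittlaus describes — pulling the user’s profile and trip history, then querying outside travel services and filtering by budget — might look roughly like this. Every function and service here is a hypothetical stand-in; Viv’s real architecture was never published in this form.

```python
def family_profile(user):
    """Hypothetical personal-data service: who the kids are, usual budget."""
    return {"kids": 2, "typical_budget": 4000}

def trip_history(user):
    """Hypothetical history service: the user's recent vacations."""
    return ["Bahamas 2013", "Aruba 2014"]

def search_trips(region, week, travelers, budget):
    """Stand-in for querying travel agencies and databases on the Web."""
    return [
        {"destination": "St. Lucia", "price": 3800},
        {"destination": "Turks and Caicos", "price": 5200},
    ]

def plan_vacation(user, request):
    """Orchestrate the services, then filter results by the user's usual budget."""
    profile = family_profile(user)
    options = search_trips(
        region=request["region"],
        week=request["week"],
        travelers=1 + profile["kids"],   # the user plus the kids
        budget=profile["typical_budget"],
    )
    # Keep only trips the user can likely afford; a real assistant would
    # then "begin a dialogue" about the surviving options.
    return [o for o in options if o["price"] <= profile["typical_budget"]]

print(plan_vacation("dag", {"region": "Caribbean", "week": "last week of February"}))
# [{'destination': 'St. Lucia', 'price': 3800}]
```

The design point is that no single service answers the question; the assistant’s job is stitching several of them together and narrowing the result before talking to the user.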
It all sounds great … but not to everyone. “If we succeed in getting true AI that’s smarter than us, it will be the most powerful technology ever,” said MIT’s Max Tegmark. “And it’ll either be the best thing ever to happen to humanity, or the worst thing. And it’s up to us, now, to see which way it’s gonna go.” Tegmark is so concerned, he started a group called the Future of Life Institute to consider the dangers of AI.
“The basic concern is very simple: If you can make a machine which can out-compete us humans on all cognitive tasks, then by definition, it’s better than us also at programming AI,” he said. “So first thing it can do is improve its own software. Now it’s even smarter! Then it can do it again and again and again.” “You’re not saying they’re going to develop emotions and turn on us willingly, right?” asked Pogue.
“That’s right. There are a lot of misconceptions. And one of the most common ones is that somehow if you make your robot really, really smart, it’s suddenly gonna become sentient, and it’s going to become evil and it’s going to decide to kill all the people. That’s completely ridiculous. Being intelligent just means that you’re really good at accomplishing your goals, whatever they are: playing chess, getting rich, whatever. You just want to make sure that its goals are aligned with our human goals, and you’ll be fine.”
All the experts agree that recent leaps in robotics and AI should make us both excited and cautious. In the short term, Kittlaus said, we’re safe, “in terms of having to worry about super-intelligences taking over the world. We’re talking 50 or 100 years from now before we need to really, seriously get too worried about that.”
“Ex Machina” director Alex Garland warns, “Artificial intelligence contains dangers and benefits. And it’s not gonna be down to the AIs which of those we encounter. It’s gonna be down to us.” MIT professor Russ Tedrake expressed optimism about the future: “You know, robots, artificial intelligence are going to change the way we interact with technology. There’s no question. That’s good. That’s a good thing. And we’re gonna have to adapt. But we’re gonna love what happens.”