Artificial Intelligence: Should You Be Afraid of It?


While plenty of people commenting on sci-fi films on Facebook have strong opinions on whether robots will improve our lives or bring about the end of the human race, top AI researchers rarely weigh in. But earlier this year, leading artificial intelligence researchers met at a private conference in Puerto Rico to discuss whether the rise of intelligent machines will be good or bad for people. As Benjamin Wallace-Wells reports for New York Magazine, that question is often debated by the public but seldom addressed by AI experts themselves. The meeting was arranged by the Future of Life Institute, a think tank run by MIT cosmologist Max Tegmark, who became widely known for a book in which he hypothesized that the universe could be merely the articulation of a mathematical structure.

According to Wallace-Wells, researchers in the audience at the conference were presented with two propositions: the first, that they were “the stewards of an exceptional breakthrough,” through which machines will become better than humans at human tasks in the very near future; and the second, that the researchers needed to consider whether this breakthrough is, in fact, a very bad thing for the human race. Wallace-Wells writes that famed technologists and scientists, not just Elon Musk and Jaan Tallinn but also Steve Wozniak, Bill Gates, and Stephen Hawking, have warned about the threats that artificial intelligence could pose.

Hawking recently said that “the development of full artificial intelligence could spell the end of the human race,” and Musk warned that “with artificial intelligence, we are summoning the demon.” And in Wallace-Wells’ estimation, “Tegmark’s conference was designed to sketch that demon so that the researchers might begin to see the worries as more serious than science fiction.” Economists explained that economic inequality could worsen as machines grow more adept at a wider range of tasks. Academic and industry researchers noted that the “machine brain” can now comprehend and even generate concepts that could plausibly be described as “beliefs.” And law professors detailed the challenges of assigning legal responsibility to computers that suggest driving directions or identify a target for a bombing.

“AI is beginning to work at a level where it is coming out of the lab and into society,” Tegmark told New York Magazine. Robots can sense the world around them and perform physical tasks within it, and can both interpret what they see at a “near human” level and learn to comprehend the emotions behind human facial expressions. Wallace-Wells notes that the fact that robot ethics is even a question has much to do with the work of a group of IBM researchers on an intelligent machine called Watson, which was originally built to beat human champions at the game of Jeopardy!.

Since its victory in 2011, Watson has continued to evolve, and has so far been applied in 75 industries in 17 countries around the world. “In these experiences,” Wallace-Wells writes, “Watson has functioned as an early probe into the relationship between humans and intelligent machines — what we need from them, what gaps they fill, what fears they generate.” The Watson project forced researchers to grapple with language, a subject that traditionally required programmers to explain each concept mathematically to the machine.

But the researchers behind Watson took advantage of the efficiencies of big data technology and machine learning algorithms; Watson would depend on semantic context, proximity, and statistical patterns instead of formulating concepts behind the Jeopardy! clues. They uploaded a massive database of text and built bots to scrutinize different aspects of a clue and produce potential responses. While Watson had no comprehension, it could learn from experience. It got faster and more accurate as it scrutinized “the entire cultural corpus” of written human expertise, “making the same kind of mistakes that a child might, mispronouncing new words, mistaking the line between reality and myth, misinterpreting what adults could not express clearly.”
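To make that approach concrete, here is a minimal sketch in Python of such a candidate-and-scoring pipeline. The corpus, the candidates, and the cosine-overlap scoring are all invented for illustration and are not IBM’s actual system; the point is only that a purely statistical ranking can pick a plausible answer without any grasp of what a clue means.

```python
# A minimal, hypothetical sketch of the statistical approach described above:
# score candidate answers by how strongly the text surrounding them overlaps
# with the clue's words, with no model of what any of it means. The corpus,
# candidates, and scoring here are invented for illustration; this is not
# IBM's actual Watson pipeline.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

# Toy stand-in for the "massive database of text": candidate answers paired
# with passages that mention them.
CORPUS = {
    "Isaac Newton": "English physicist who formulated the laws of motion and "
                    "universal gravitation after an apple supposedly fell on him",
    "Albert Einstein": "physicist who developed the theory of relativity and "
                       "won the Nobel Prize for the photoelectric effect",
    "Charles Darwin": "naturalist whose theory of evolution by natural "
                      "selection appeared in On the Origin of Species",
}

def score(clue, passage):
    """Cosine similarity of word counts: a crude proxy for the 'semantic
    context, proximity, and statistical patterns' mentioned above."""
    a, b = Counter(tokenize(clue)), Counter(tokenize(passage))
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(clue):
    # Rank every candidate by statistical overlap and return the best match.
    return max(CORPUS, key=lambda cand: score(clue, CORPUS[cand]))

if __name__ == "__main__":
    clue = "He formulated laws of motion and gravitation after an apple fell"
    print(answer(clue))  # "Isaac Newton" wins on word overlap, not understanding
```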

After its victory on Jeopardy!, IBM’s researchers continued to fine-tune Watson’s capabilities and add new ones, and the computer went from generating “candidate answers” to generating “hypotheses.” And as Watson became famous, confusion over what, exactly, Watson is has escalated. While Watson can learn on its own and generate new ideas, it would likely pass the Turing Test only in limited circumstances. But the researchers behind Watson say that it feels very much as if Watson has “grown up and gone places by itself.”

Max Tegmark told Wallace-Wells that many of the scenarios people fear involve artificial intelligence becoming more capable and acting on its own. But long before that, he thinks, there will be powerful, expert machines. And expertise seems like a fundamental advantage for humans to cede. “If you think about it, why do human beings have more civilization than lions? It’s because we’re smarter. Because we have more expertise.” And Watson is becoming a “foot soldier” in the debate over how rational we really are, and over whether the ascent of artificial intelligence could compensate for human inefficiencies and limitations.

Another public fear, that dexterous robots will soon set about actively destroying the human race, seems removed from reality. John Markoff recently reported for The New York Times that in glossy sci-fi movies like Ex Machina and Chappie, robots move with “impressive — and frequently malevolent — dexterity,” seeming to confirm the worst fears of prominent technologists and scientists. But a preview of the work that engineers will showcase at the final competition of the Defense Advanced Research Projects Agency’s Robotics Challenge in June offers a reality check on the state of robotics.

Since the previous contest, held in Florida in December 2013, where the robots were “glacially slow” at tasks like opening doors, entering rooms, and climbing ladders, researchers seem to have made only limited progress. This year, the robots will be given an hour to complete a set of eight tasks that would take a human less than 10 minutes. None of the robots will be autonomous; researchers have instead turned their attention to creating “ensembles” of humans and robots to achieve “multiplicity,” in which groups of humans and machines solve problems collaboratively. That development lends credence to the idea, explained to Markoff by Sebastian Thrun, that technology will progress by complementing humans rather than replacing them.

We argue over whether artificial intelligence will effectively counter human weaknesses, or mold to them so perfectly that we no longer notice how it limits us. And Wallace-Wells reports that fears about artificial intelligence aren’t really about discomfort with technology (even senior technologists are alarmed) but depend “mainly upon what you think about people: of where you line up in the intellectual wars over human limitation and irrationality.” The risk of technologies that haven’t yet been developed is impossible to measure. So ultimately, your position comes down to whether you think we are rational enough to run businesses and societies on our own, or so irrational that the complex cognitive tasks of work and society would be better left to machines specifically equipped to complete them.
