2 Reasons Why People Are Worried About Artificial Intelligence

Superstar entrepreneurs Elon Musk and Bill Gates recently joined a chorus of notable voices calling for caution in AI development. An open letter listing research priorities for artificial intelligence, posted on the Future of Life Institute's website, outlines their concerns.

Their fears center on the creation of a superintelligence that outperforms the human brain in every significant cognitive domain. One route to such an intelligence is whole brain emulation: using computational models of how the brain's basic elements operate, an entire brain could be emulated on a computer large enough, and with enough memory, to handle the necessary computation.

The emulated brain would, in effect, be the last invention that biological humans would ever need to make since, by definition, it would be far better than humans at inventing. This could pose an existential risk for humanity. Hence, the Future of Life letter demands that a seed AI's architectural goals be articulated clearly.

Fear of artificial intelligence has pervaded institutions as well: Stanford University recently announced a century-long study of the effects of artificial intelligence on 18 areas of society, including the economy, war, and crime.

To be sure, this is not Musk’s first flirtation with artificial intelligence. The 43-year-old serial entrepreneur has invested in two AI-related startups: Vicarious and DeepMind (which was acquired by Google last year). Critics characterize his fears (and his subsequent donation to the Future of Life Institute) as a way to preempt criticism of his investments in AI. Musk, however, claims that his investments were merely a way to “keep an eye on what’s going on with AI.”

Controversies aside, destruction by AI machines has been the stuff of science fiction for decades. Recent developments in science and technology, however, have moved the technology from the page into everyday life and forced us to think about the consequences of AI in our immediate surroundings.

Here are two ways in which AI could spell the death of humanity.

1. The inexpensive wars of AI

Currently, warfare is a mix of analysis, strategy, and tactics. Machines are tactical instruments in a war, but overall strategy is always driven by human judgment. Human skill and risk analysis are also key to operating instruments such as remotely piloted systems.

AI-driven warfare would invert that relationship: machines would take over roles now reserved for humans. A story in The New York Times last year outlined the U.S. military’s efforts to test missiles that “rely on artificial intelligence, rather than human instruction” to select their targets. According to the article, the United States is not the only country testing autonomous missiles:

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control … Britain’s “fire and forget” Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets … Israel’s antiradar missile, the Harpy, loiters in the sky until an enemy radar is turned on. It then attacks and destroys the radar installation on its own.

Enabling autonomous missiles to judge their own targets significantly reduces the costs of operating and controlling them. War, in turn, becomes relatively cheap, and a lower opportunity cost for waging it could mean more tension and conflict.

There is also the more serious problem of accountability, because autonomous missiles (and weapons) are not beholden to human control and ethics. Who will be responsible if a software glitch results in misfiring and wanton destruction? Further refinement of such systems can only come from expanding their operational scope and precision. In effect, that would mean excising human agency from them completely, enabling them to conduct reconnaissance and launch at will, with destruction and carnage as the result.

2. A predictably inert life

In a Wired story more than a decade ago, Sun Microsystems co-founder Bill Joy outlined his fears about AI. In the article, he quoted a passage reproduced in The Age of Spiritual Machines, a book by Ray Kurzweil, now a director of engineering at Google, about humanity living with the effects of artificial intelligence. The passage below was written by Unabomber Theodore Kaczynski:

We are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems it faces become more complex and machines become more intelligent, people will let machines make more of the decisions for them, simply because machine-made decisions will bring better results than man-made ones.

Sound familiar?

This is already happening in our lives through the use of smart devices and the Internet. To achieve the desired results, artificial intelligence depends on a predictable set of outcomes; the possible outcomes belong to a finite event space. IBM’s Deep Blue computer was able to beat the world chess champion because the number and types of moves in chess can be enumerated. Similarly, although Jeopardy has a much larger event space than chess, IBM’s Watson was able to beat human champions because the game, too, consists of a predictable set of questions.

Human life and experience, however, consist of an unpredictable set of events. Our days and their outcomes are rarely the same, and even our interests change. But they are slowly becoming predictable, thanks to prediction algorithms. Prediction algorithms that “suggest” choices based on our interests are really offering choices from a finite set of outcomes. The ostensible reason for their existence is to help us manage the complexity of an increasingly complicated world.

But the notion of a complex world is a misleading one. Apart from sites such as Google and Facebook, very few sites can claim real complexity in their products or offerings. The number of choices available online comes from a finite event space. (There is a separate but interesting discussion to be had about Netflix’s algorithms, which purportedly create the impression of a vast library of movie choices when the selection is, in fact, quite limited.)

Prediction algorithms serve another purpose: they take away the human agency involved in evaluating circumstances and making decisions. Through a continual force-feeding of suggestions, your interests are defined for you and a walled garden of “customized interests” is created.
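To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is not based on any real product's code; the catalog, the click model, and the weighting rule are all assumptions. It simply shows how a suggestion engine that picks from a fixed catalog, weighted by past clicks, tends to converge on a few topics once the user keeps accepting what it suggests.

```python
# Toy illustration (not any real recommender) of suggestions drawn from a
# finite catalog, where accepted suggestions feed back into future picks.
import random
from collections import Counter

CATALOG = ["sci-fi", "romance", "history", "cooking", "sports", "jazz"]

def suggest(click_history, catalog=CATALOG):
    """Pick one topic, weighting each by past clicks (+1 keeps unseen topics possible)."""
    counts = Counter(click_history)
    weights = [counts[topic] + 1 for topic in catalog]
    return random.choices(catalog, weights=weights, k=1)[0]

clicks = []
for day in range(50):
    topic = suggest(clicks)
    # Assume the user accepts most suggestions, reinforcing the loop.
    if random.random() < 0.9:
        clicks.append(topic)

print(Counter(clicks))  # a handful of topics dominate: the "walled garden" effect
```

Run it a few times and the same pattern appears: whichever topics happen to get clicked early are suggested more often, clicked more often, and eventually crowd out the rest of the catalog.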

The beauty of human experience lies in its unpredictability. Our loves, our passions, and even our hates are in a constant state of flux, shaped by new information and outcomes in our lives. For now, those outcomes are infinite. In a walled garden of circumscribed, customized interests, the unpredictable nature of human existence becomes a bland and predictable set of mechanical tasks and, eventually, dissipates. That is when human identity is defined by machines.
