Will Siri Develop Emotions in the Future? AI Expert Says ‘Maybe’

Joaquin Phoenix in Spike Jonze's Her

Source: Herthemovie.com

Yann LeCun is one of the best people to ask about the future of artificial intelligence and where it will take us. LeCun is Facebook’s director of artificial intelligence research and the founding director of New York University’s Center for Data Science. He’s one of the world’s foremost experts on deep learning, an area of machine learning research that aims to move computers toward artificial intelligence. In an interview with Quentin Hardy of The New York Times, LeCun discussed the significance and the future of artificial intelligence, and projected that advanced computing techniques will create “digital partners” that will accompany us through life.

“A lot of interaction with each other and with the digital world will come from what you could call ‘digital companions,’ that will help us work through things,” LeCun said. While LeCun made it clear that he wasn’t making any claims about Facebook’s future products, he explained how that principle might look applied to the News Feed.

Depending on how many posts, photos, and news items your Facebook friends share, your News Feed could have as many as 2,000 candidate stories to show you on a given day. But because people have a limited amount of time to spend on the social network, Facebook can practically show you only somewhere between 100 and 150 items per day. The News Feed’s algorithm determines which of those 2,000 updates are the most useful by labeling images, recognizing faces, classifying text, and taking into account factors like your interests, what you want to do, and who your friends are in different situations.
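
To make the idea concrete, here is a minimal, hypothetical sketch of that kind of ranking in Python. It is not Facebook’s actual algorithm: the Story class, the feature names, and the weights are all invented, and they merely stand in for the signals the article describes (labeled images, recognized faces, classified text, your interests).

```python
# Hypothetical feed ranking: score each candidate story and keep the top N.
# Feature names and weights are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class Story:
    story_id: int
    features: dict = field(default_factory=dict)  # e.g. {"from_close_friend": 1.0}


# Invented weights expressing what this reader tends to engage with.
WEIGHTS = {"from_close_friend": 2.0, "matches_interests": 1.5, "contains_photo": 0.5}


def relevance(story: Story) -> float:
    """Score a story as a weighted sum of its feature values."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in story.features.items())


def rank_feed(candidates: list[Story], limit: int = 150) -> list[Story]:
    """Keep only the top `limit` stories out of the ~2,000 candidates."""
    return sorted(candidates, key=relevance, reverse=True)[:limit]


# Example: two candidate stories; the one from a close friend ranks first.
feed = rank_feed([
    Story(1, {"contains_photo": 1.0}),
    Story(2, {"from_close_friend": 1.0, "matches_interests": 1.0}),
])
print([s.story_id for s in feed])  # -> [2, 1]
```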

“I’m not saying this is a future product,” LeCun notes, “but a way to think about this if there is an intelligent digital companion that allows you to think about things in a new way, the way you interact with friends, expand your thinking. There will be a single point of control that knows and respects your private information.” This type of artificial intelligence, he says, will be created incrementally. “In some ways, parts of this are already there in the Facebook News Feed, in Apple’s Siri, or Microsoft Cortana. They are shallow now, in the kind of interactions you are having. They are somewhat scripted.” But in the future, deep learning — LeCun’s particular area of expertise — will enable more complex interactions.

In an AMA on Reddit last year, LeCun said that Facebook’s artificial intelligence research team focuses on a variety of areas: “learning methods and algorithms (supervised and unsupervised), deep learning + structured prediction, deep learning with sequential/temporal signals, applications in image recognition, face recognition, natural language understanding.” The team concentrates on theoretical work and on developing new methods with the potential to drive important advances in the future.

Deep learning enables a machine to learn on its own by completing a task, receiving feedback, and working out where it went wrong. LeCun says that “solving” artificial intelligence — an area of research that he thinks will benefit from Facebook’s culture of openness about its software and hardware — will require figuring out how to build machines with motivations and emotions.
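
Before turning to the question of emotions, here is a toy illustration of the feedback loop just described: predict, measure the error, and adjust. The data and the single-neuron linear model are invented for the example; real deep learning stacks many layers of such adjustments, but the principle is the same.

```python
# Toy learning loop: the model predicts, measures how wrong it was, and nudges
# its parameters in the direction that reduces the error. All data is invented.
import random

# Invented training data: y is roughly 3*x + 1 with a little noise.
data = [(x, 3 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0   # model parameters, starting from scratch
lr = 0.003        # learning rate: how big each correction is

for epoch in range(500):
    for x, y in data:
        pred = w * x + b      # 1. complete the task (make a prediction)
        error = pred - y      # 2. receive feedback (how wrong was it?)
        w -= lr * error * x   # 3. adjust in the direction that reduces
        b -= lr * error       #    the error (a gradient step)

print(f"learned w≈{w:.2f}, b≈{b:.2f}")  # should approach w=3, b=1
```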

Hardy is quick to put that notion to the test, telling LeCun that “you can’t render emotions in software.” But to LeCun, resolving that apparent paradox is a matter of choosing the appropriate machine learning technique. He breaks down what emotions are at their most basic level. People, he explains, are “prediction machines,” and emotions are “registers of things we like or don’t like,” things to which, in his estimation, you could assign values. People make choices to achieve or avoid specific outcomes, using predictions to do so, and oftentimes we’re conflicted about the choices we make. “There is no reason to think we can’t encode this in a machine,” LeCun says.
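
Purely as a speculative sketch of LeCun’s framing, and not an implementation of emotion, the snippet below assigns invented numeric values to invented predicted outcomes and chooses among them by expected value. Every outcome, probability, and number here is made up for illustration.

```python
# Speculative toy: predicted outcomes carry values ("registers of things we
# like or don't like"), and a choice is made by comparing those values.
# The choices, probabilities, and values are all invented.
choices = {
    "take the new job": [(0.6, +8.0),   # likely: interesting work
                         (0.4, -5.0)],  # possible: long commute
    "stay put":         [(0.9, +2.0),   # likely: comfort
                         (0.1, -1.0)],  # unlikely: boredom
}


def expected_value(outcomes):
    """Weigh each predicted outcome by how likely it is."""
    return sum(p * v for p, v in outcomes)


scores = {name: expected_value(outs) for name, outs in choices.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
# When two options score nearly the same, the "conflict" LeCun mentions shows
# up as choices with almost equal expected value.
```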

One of the biggest challenges to actually creating such a machine is working out an area of machine learning called unsupervised learning. Supervised learning involves teaching a computer to recognize labeled examples, such as images of dogs or cars. In reinforcement learning, you don’t tell the machine the correct answer but instead score its performance; the machine then figures out the rules by determining where it made mistakes. Unsupervised learning, on the other hand, is closer to what people do. AI Horizon explains how supervised and unsupervised learning differ:

Supervised learning is the type of learning that takes place when the training instances are labelled with the correct result, which gives feedback about how learning is progressing. This is akin to having a supervisor who can tell the agent whether or not it was correct. In unsupervised learning, the goal is harder because there are no pre-determined categorizations.
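
A small illustration of that contrast, assuming scikit-learn is available: the supervised model below is told the correct label for every training point, while the unsupervised model must group the same (invented) points on its own and never learns what the groups mean.

```python
# Supervised vs. unsupervised learning on the same invented 2-D points.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Invented data: two loose groups of points.
X = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4],   # group A
     [5.1, 5.0], [4.8, 5.3], [5.2, 4.9]]   # group B

# Supervised: the correct label (the article's "dog" vs. "car") is provided.
y = ["A", "A", "A", "B", "B", "B"]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.0, 0.0], [5.0, 5.0]]))  # -> ['A' 'B']

# Unsupervised: no labels. The algorithm groups points by similarity, and we
# only learn which cluster each point fell into, not what the clusters "mean".
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # e.g. [0 0 0 1 1 1] (arbitrary cluster ids)
```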

On Reddit, LeCun explained: “Unsupervised learning is about discovering the internal structure of the data, discovering mutual dependencies between input variables, and disentangling the independent explanatory factors of variations.” He characterizes unsupervised learning as a means to an end, such as creating a way for a machine to learn to understand the world the way a human does.

His explanation of the technique to Hardy highlights the differences between how humans and machines learn. “A baby learns that when you put a toy behind a box, the toy is still in the world,” LeCun explains. “Humans and animals have that capacity. Contrast that with machines, where most learning is still supervised. We don’t have a good grand model yet.”
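
A minimal, hypothetical sketch of what such a “grand model” would have to capture, using the toy-behind-the-box example: given the current state of a tiny invented world and an action (or the passage of time), predict the next state. The predict_next function and its hand-written rules are made up for illustration; a real world model would be learned, not hard-coded.

```python
# Hypothetical "forward model" of a tiny invented world: predict the next
# state from the current state and an action. Rules are hand-written toys.
def predict_next(state: dict, action: str) -> dict:
    """Return the predicted state after `action` happens."""
    nxt = dict(state)
    if action == "put toy behind box":
        nxt["toy_visible"] = False
        nxt["toy_exists"] = True   # object permanence: hidden, not gone
    elif action == "wait":
        nxt["time"] += 1
    return nxt


state = {"toy_visible": True, "toy_exists": True, "time": 0}
state = predict_next(state, "put toy behind box")
print(state)  # the model still "believes" the toy exists, just out of sight
```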

While some forms of deep learning are approaching something like unsupervised learning, LeCun says that “We are still missing a basic principle of what unsupervised learning should be built upon.” During his AMA on Reddit, LeCun told the community that “everyone agrees that the future is in unsupervised learning. Unsupervised learning is believed to be essential for video and language. Few of us believe that we have found a good solution to unsupervised learning.” And in another answer, he noted, “The theory of deep learning is a wide open field. Everything is up for the taking. Go for it.”

One of the most striking pop-culture depictions of an intelligent digital assistant in recent years was the character of an operating system who names herself Samantha in Spike Jonze’s film Her. When a Reddit user queried him on the challenges of engineering an intelligent operating system, like Samantha, LeCun responded that something like Samantha is “totally out of reach of current technology. We will need to invent new concepts, new principles, new paradigms, new algorithms.” As depicted in the film, the operating system had a deep understanding of human behavior and human nature — something that current computers are ill-equipped to match.

LeCun thinks that a major component of what we’d need to make that technology a possibility is an engine or a paradigm that can learn to represent and understand the world, “in ways that would allow it to predict what the world is going to look like following an event, an action, or the mere passage of time.” He notes that the human brain is “very good at learning to model the world and making predictions (or simulations). This may be what gives us ‘common sense.’” People can build mental pictures of the world that enable them to reason, predict, answer questions, and carry on intelligent dialogues. But LeCun thinks that emotions, too, will play a significant part in artificial intelligence systems that can interact with people.

“One interesting aspect of the digital character in Her is emotions,” LeCun wrote. “I think emotions are an integral part of intelligence. Science fiction often depicts AI systems as devoid of emotions, but I don’t think real AI is possible without emotions. Emotions are often the result of predicting a likely outcome. For example, fear comes when we are predicting that something bad (or unknown) is going to happen to us. Love is an emotion that evolution built into us because we are social animals and we need to reproduce and take care of each other. Future AI systems that interact with humans will have to have these emotions too.”
