Robots Need to Get Emotional to Be More Trusted, Say Expert Panelists

Anca Dragan, assistant professor of Electrical Engineering and Computer Sciences at UC Berkeley.

by Melissa Pandika, Science Writer

A panel of leaders in human-robot interaction challenged attendees at a recent TechCrunch session on Robotics + AI held at the University of California, Berkeley, to broaden their expectations of artificial intelligence in robots. As these devices become more integrated into our lives—already emerging in office, healthcare, and retail settings, to name a few—conventional AI capabilities such as object detection and speech recognition won’t suffice. If we are to coexist with robots and, ultimately, trust them, they need emotional and social intelligence, too.

Moderator and TechCrunch writer Lucas Matney opened the panel discussion with a fundamental question: Why can’t humans and robots just get along?

Anca Dragan, assistant professor of electrical engineering and computer sciences at UC Berkeley and head of the InterACT Lab, which works on human-robot interaction algorithms, sees the tension between humans and robots as twofold. First, today’s robots can’t interpret or respond to intent. Instead, they do exactly as we tell them—and we might find ourselves frustrated with them if what we tell them doesn’t reflect what we mean. Second, while robots can perform relatively straightforward tasks like avoiding obstacles in their path, engaging in the subtle negotiations required to, say, navigate a crowd of people has proven far trickier.

Matney then asked the panel whether a lack of emotional understanding has strained the human-robot relationship. “If we think of human intelligence, IQ is a very important piece of our intelligence, but our EQ, our emotional intelligence, is just as important,” said Rana el Kaliouby, co-founder and CEO of Affectiva. “I believe that that’s true for any AI or robotic system that interacts with humans on a day-to-day basis.” She noted that Affectiva focuses on building social and emotional intelligence into these systems, enabling them to detect facial expressions, gestures, and vocal cues in real time and adapt their behavior accordingly.

Rana el Kaliouby, co-founder and CEO of Affectiva.

El Kaliouby then described how Affectiva’s technology is incorporated into Pepper, SoftBank Robotics America’s humanoid robot, which provides customer service—by making product recommendations and locating items, for instance—in retail settings. Pepper allows customers to seek assistance without the pressure of speaking with a sales associate, making any future interactions with these employees more informed, explained Matt Willis, head of design and human-robot interaction at SoftBank. If Pepper detects confusion, it repeats itself; if it senses boredom, it stops the conversation.  
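As a rough sketch of the kind of rule Willis describes, the logic might look something like the following. The emotion labels, function names, and messages here are hypothetical illustrations, not SoftBank’s or Affectiva’s actual code.

```python
# Illustrative sketch only: a toy emotion-adaptive dialogue policy in the spirit
# of the Pepper behavior described above. The labels and messages are invented.

def respond(detected_emotion: str, last_utterance: str) -> str:
    """Pick the robot's next utterance based on a coarse emotion estimate."""
    if detected_emotion == "confusion":
        # Repeat the last prompt if the customer looks confused.
        return f"Let me say that again: {last_utterance}"
    if detected_emotion == "boredom":
        # Wind down the interaction rather than keep talking at a disengaged customer.
        return "I'll let you browse. Wave at me if you need anything!"
    # Default: continue the conversation as planned.
    return last_utterance


if __name__ == "__main__":
    print(respond("confusion", "This blender is on aisle 4."))
    print(respond("boredom", "Would you like to hear about our loyalty program?"))
```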

The panelists then delved deeper into how today’s robots interact with humans and where they see room for growth. Dragan reiterated robots’ limitations in responding to intent. As it stands, engineers train a robot to produce a behavior autonomously to perform a specific task—but what if someone shoves it away? Ideally, it should not only move, but understand that perhaps the correct task specification isn’t exactly what the person commanded. “If you’re mean to robots… I think robots need to be responding somehow,” Dragan said. She isn’t sure what an appropriate response would entail, “but if the robots change their behavior and seem more vulnerable or even sad…it’s a reaction that we can read into that then shapes our behavior.” In contrast, devices like the Amazon Echo respond the same way, regardless of the tone we use with them.
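One way to read Dragan’s point is that a shove is evidence about what the person actually wants. The toy illustration below, with invented candidate objectives and made-up numbers rather than the InterACT Lab’s actual formulation, shows how such a correction could shift a robot’s belief about its task.

```python
# Toy sketch: treat a human correction (a shove) as evidence about the intended
# objective, rather than blindly resuming the literal command. All values invented.

# Prior belief over candidate task specifications.
belief = {
    "commanded_path": 0.8,   # the objective exactly as the person specified it
    "keep_distance": 0.2,    # a variant that stays farther from the person
}

# Likelihood of observing a shove under each hypothesis: a shove is unlikely
# if the commanded path were really what the person wanted.
likelihood_of_shove = {
    "commanded_path": 0.1,
    "keep_distance": 0.9,
}

# Bayes update after the robot is shoved away.
unnormalized = {h: belief[h] * likelihood_of_shove[h] for h in belief}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)
# The posterior shifts toward "keep_distance", so the robot should adapt its
# behavior instead of simply returning to the literal command.
```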

Dragan believes that even robots that don’t explicitly interact with people should be able to sense and respond to their emotions. For instance, even if a self-driving car never engages in dialogue with a passenger, it should adjust how it drives if it senses that the passenger is anxious.

El Kaliouby added that a self-driving car should also be able to reassure the passenger. If a pedestrian crosses in front of the car and the passenger gets scared, “maybe the vehicle needs to say, ‘It’s ok, I got this. I can see the pedestrian,’” she said. Willis agreed. “People aren’t just going to jump straight in a [self-driving] car and go, ‘Ok, cool, I trust this thing.’” As our trust in these devices grows, engineers can eventually scale back the models that offer this kind of reassurance.
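A minimal sketch of what this kind of passenger-aware adjustment could look like, with hypothetical thresholds, fields, and messages rather than any real vehicle’s interface:

```python
# Illustrative only: soften the driving style and offer a reassurance message
# when a passenger-state estimator reports high anxiety. Values are invented.

from dataclasses import dataclass
from typing import Optional


@dataclass
class DrivingPlan:
    max_speed_kph: float
    following_distance_m: float
    announcement: Optional[str] = None


def adjust_for_passenger(anxiety: float, pedestrian_detected: bool) -> DrivingPlan:
    """Return a more conservative plan, plus a spoken reassurance, for an anxious passenger."""
    plan = DrivingPlan(max_speed_kph=50.0, following_distance_m=20.0)
    if anxiety > 0.7:  # hypothetical threshold for "anxious"
        plan.max_speed_kph = 35.0
        plan.following_distance_m = 30.0
        if pedestrian_detected:
            plan.announcement = "It's okay, I can see the pedestrian and I'm slowing down."
    return plan


if __name__ == "__main__":
    print(adjust_for_passenger(anxiety=0.85, pedestrian_detected=True))
```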

Matney then noted that while we’re several years out from Level 5, or fully autonomous, cars, we can’t expect people to trust them immediately once they do roll out. How can we gradually build trust in robots in the meantime?

That trust has to go both ways, el Kaliouby said. “We talk about this idea of reciprocal trust, so it’s not just about humans trusting in the AI. The AI has to trust the humans back.” For example, a semi-autonomous vehicle may need to transfer its control back to a passenger—but first, it needs to ensure that he or she is awake and alert. “It’s very interesting, this partnership between human and machine,” el Kaliouby said. “How do you quantify that level of trust?”

Dragan, on the other hand, is concerned about placing too much trust in robots. “I worry about the fact that we tend to anthropomorphize these agents, which means that if they do a little bit, we tend to think that they can actually do a lot,” she said. “I don’t want to be relying on the robot to be doing things that are outside of its capability.” Her lab has begun exploring what steps robots can take to establish an appropriate level of trust, which may even include exposing their limitations.

“Especially the humanoid robot, we’ll expect it to act like a human and have all of these capabilities that a human has, and the reality is, they don’t,” Willis said. He explained that Pepper has a tablet mounted on its chest, designed to not only compensate for its limitations, but also lessen the pressure on users who may have never interacted with a robot before. Chances are, though, they’ve used a tablet. As the technology improves, and people become more used to social robots, “we can start to pare back some of that,” Willis said. But Dragan believes that ideally, robots would understand human behavior so well, they would generate highly compatible responses to them. “I don’t know how much we want to be relying on, ‘Oh, but over time, people will adapt, and they’ll figure it out.’ I want it to click from the start.”

To close out the session, the panelists shared what lies ahead for them. “On the horizon, looking at emotional cues as an additional form of input into the behavior that you generate and what you’re optimizing for definitely makes sense,” Dragan said. But “there’s just so many hurdles before I even think about the emotional expressions that people have.” Currently, she and her lab are tackling the challenge of a robot not having rich enough data to draw from to capture what a user wants. On the theoretical side, they want to formalize the problem of a robot having to coordinate and interact with people, as well as ask fundamental questions regarding what assumptions about human behavior need to be made for robots to be able to assist people.

Meanwhile, el Kaliouby and colleagues are looking into mitigating data bias in their algorithms. While these algorithms are powered by huge volumes of data, she expressed concern about inadvertently incorporating bias and highlighted the importance of drawing training and validation data from diverse populations, as well as building diverse teams to design these systems.

“I think we’re just scratching the surface about how these robots can be valuable in our lives,” Willis said. He’s excited by the possibility of integrating robotic systems, such as Pepper with Simbe Robotics’ Tally, which scans shelves for out-of-stock items and other inventory issues. “By virtue of connecting these systems, we can provide more value to people.” Willis predicted that as we expand our knowledge bases by integrating these and other systems, including those that generate emotional responses, human-robot interactions will only yield more value.

Learn more at TechCrunch Sessions: Robotics + AI.

Melissa Pandika writes about science and health.  She holds a B.A. in molecular and cell biology from the University of California, Berkeley. She can be reached at mmpandika@gmail.com.