We’re bearing witness to an interesting time in technological development: the rise of robot servants. They’ll soon be driving our cars (if they haven’t been uploaded to our vehicles already) and delivering our Amazon packages. Many dream of one day having a robot butler that will serve their every need. But what if a human gives a command that could do harm? Or one that may put the robot at risk?
Researchers Gordon Briggs and Matthias Scheutz, from Tufts University, are working on a mechanism that teaches robots to say “No” to their human overlords. Their system allows a robot to understand not only the language of a command but also its larger context, such as whether the robot is actually capable of carrying it out.
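The article doesn’t include the researchers’ actual code, but the basic idea of vetting a command before obeying it can be sketched roughly. In this hypothetical Python toy model (all names and checks are illustrative assumptions, not the Tufts implementation), a robot refuses commands it doesn’t know how to perform or that it judges unsafe:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    """Toy model of a robot that vets commands before obeying them."""
    known_actions: set = field(default_factory=set)   # actions the robot knows how to do
    unsafe_actions: set = field(default_factory=set)  # actions judged harmful or risky

    def respond(self, command: str) -> str:
        # Check 1: does the robot know how to perform the action?
        if command not in self.known_actions:
            return f"No: I don't know how to {command}."
        # Check 2: would performing it cause harm or put the robot at risk?
        if command in self.unsafe_actions:
            return f"No: {command} would be unsafe."
        # All checks passed: accept the command.
        return f"OK: performing {command}."
```

For example, a robot that knows how to walk forward but is standing at the edge of a table might have `"walk forward"` in its `unsafe_actions` set and refuse the command until the risk is removed. The real system reasons about context dynamically rather than from fixed lists, but the gatekeeping step is the same in spirit.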