The team found that subjects not only trusted the wayfinding robot, they kept trusting it even after it made obvious mistakes.

“We wanted to ask the question about whether people would be willing to trust these rescue robots,” said senior researcher Alan Wagner yesterday in a news release. “A more important question now might be to ask how to prevent them from trusting these robots too much.”

The researchers designed a follow-up experiment to make the robot's incompetence unmistakable, or so they thought. They created a series of new robotic behaviors meant to clearly signal that it was broken or wrong. In one case, the robot spun in place while a scientist told subjects it was broken, all before the faux fire started. Yet when the fire alarm went off, subjects still followed the "broken" robot.

In another experiment, the robot directed participants toward a dark room blocked by a desk or couch. Some participants still tried to squeeze into the dark room, while others simply stood there. "Experimenters retrieved them after it became clear that they would not leave the robot," the authors write.

So here is a scary fairy tale gone wrong: if you can't trust a robot to get you out of a burning building (well, a simulated one anyway), who can you trust? Perhaps the moral of the story begins with not trusting yourself.
