People Prefer Their Robots To Be Less Than Perfect

Interview with:


Mag. Nicole Mirnig
Research Fellow
Center for Human-Computer Interaction
University of Salzburg
Salzburg, Austria

What is the background for this study? What are the main findings?

Response: From our previous research on social robots, we know that humans show observable reactions when a robot makes an error. These findings come from a video analysis we performed on a large data corpus from different human-robot interaction studies. With the study at hand, we wanted to replicate this effect in the lab in order to explore in more detail how humans react to, and what they think about, a robot that makes a mistake.

Our main findings made us quite excited. First, we were able to show that humans respond to faulty robot behavior with social signals. Second, we found that the error-prone robot was perceived as significantly more likeable than the flawless robot.

One possible explanation for this finding is the following. Research has shown that people form their opinions and expectations about robots to a substantial extent based on what they learn from the media, including movies, in which robots are often portrayed as perfectly functioning entities (good or evil). Upon interacting with a social robot themselves, people adjust their opinions and expectations based on that interaction experience. We assume that interacting with a robot that makes mistakes makes us feel closer to, and less inferior to, technology.

What should clinicians and patients take away from your report?

Response: Social robots do not necessarily need to perform perfectly. It remains to be studied to what extent, and in which situations, errors will make social robots more likeable and turn them into more believable social actors.

What recommendations do you have for future research as a result of this study?

Response: It is advisable for any robot research to keep potential shortcomings in mind. Designing robots that can account for potential mishaps will ultimately lead to an improved interaction experience.

Is there anything else you would like to add?

Response: Our long-term goal at the Center for Human-Computer Interaction, University of Salzburg, Austria, is to help robots understand that they have made an error. We envision enabling this capability by making robots able to perceive and make sense of the human's reactions when an error occurs. If a robot can recognize that an error has occurred, it can actively deploy error recovery strategies. We believe that this will result in more likeable and better-accepted robots.

Thank you for your contribution to the community.


Nicole Mirnig, Gerald Stollnberger, Markus Miksch, Susanne Stadler, Manuel Giuliani, Manfred Tscheligi. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot. Frontiers in Robotics and AI, 2017; 4. DOI: 10.3389/frobt.2017.00021

Note: Content is not intended as medical advice. Please consult your health care provider regarding your specific medical condition and questions.