Author Interviews, Technology / 21.01.2018
Machines Learn To Cooperate With Human Partners, Who Often Cheat or Become Disloyal
MedicalResearch.com Interview with:
Jacob Crandall PhD
Associate Professor, Computer Science
Brigham Young University
MedicalResearch.com: What is the background for this study?
Response: As autonomous machines become increasingly prevalent in society, they must be able to forge cooperative relationships with people who do not share all of their preferences. Unlike the zero-sum scenarios (e.g., Checkers, Chess, Go) that artificial intelligence has traditionally addressed, cooperation is not achieved by sheer computational power. Instead, it is facilitated by intuition, emotions, signals, cultural norms, and pre-evolved dispositions. To understand how to create machines that cooperate with people, we developed an algorithm (called S#) that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling.
We then compared the performance of S# with that of people across a variety of repeated games.
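The interview describes S# only at a high level, so as a rough illustration of what "learning plus signaling" means in a repeated game, here is a minimal Python sketch: a myopic epsilon-greedy learner in a repeated Prisoner's Dilemma that announces its intended move each round, paired against a scripted partner who eventually cheats. Everything here (the `SignalingAgent` and `disloyal_partner` names, the payoffs, the learning rule) is an illustrative assumption, not the authors' S# implementation.

```python
import random

# One-shot payoffs for the repeated Prisoner's Dilemma, a standard
# testbed for cooperation: (my_payoff, partner_payoff), where
# action 0 = cooperate and 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3),   # mutual cooperation
    (0, 1): (0, 5),   # I cooperate, partner cheats
    (1, 0): (5, 0),
    (1, 1): (1, 1),   # mutual defection
}

class SignalingAgent:
    """Toy epsilon-greedy learner that announces its intended move.

    Illustrative only: the real S# selects among expert strategies and
    exchanges richer cheap talk with its partner.
    """

    def __init__(self, epsilon=0.1, lr=0.2):
        self.epsilon = epsilon      # exploration rate
        self.lr = lr                # learning rate
        self.values = [0.0, 0.0]    # running payoff estimate per action

    def choose(self):
        if random.random() < self.epsilon:
            action = random.randrange(2)
        else:
            action = 0 if self.values[0] >= self.values[1] else 1
        signal = "I plan to cooperate" if action == 0 else "I plan to defect"
        return action, signal

    def update(self, action, reward):
        # Move the estimate for the chosen action toward the observed payoff.
        self.values[action] += self.lr * (reward - self.values[action])

def disloyal_partner(round_no):
    """Scripted partner: cooperates for 20 rounds, then cheats."""
    return 0 if round_no < 20 else 1

agent = SignalingAgent()
betrayals = 0
for t in range(60):
    action, signal = agent.choose()   # signal is announced but, in this
    partner = disloyal_partner(t)     # toy, the scripted partner ignores it
    reward, _ = PAYOFFS[(action, partner)]
    agent.update(action, reward)
    if action == 0 and partner == 1:
        betrayals += 1                # cooperated and got cheated
print(f"times cheated: {betrayals}, learned values: {agent.values}")
```

Because this toy learner is purely myopic, it converges on defection; the point of S#'s signaling machinery is precisely to escape that trap and establish sustained cooperation, even with partners who cheat or become disloyal.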
MedicalResearch.com Interview with:
Mag. Nicole Mirnig
Research Fellow
Center for Human-Computer Interaction
University of Salzburg
Salzburg, Austria
MedicalResearch.com: What is the background for this study? What are the main findings?
Response: From our previous research on social robots, we know that humans show observable reactions when a robot makes an error. These findings come from a video analysis we performed on a large data corpus from different human-robot interaction studies. With the present study, we wanted to replicate this effect in the lab in order to explore in more detail how humans react to, and what they think about, a robot that makes a mistake.
Our main findings made us quite excited. First, we were able to show that humans respond to faulty robot behavior with social signals. Second, we found that the error-prone robot was perceived as significantly more likeable than the flawless robot.
One possible explanation for this finding is the following. Research has shown that people form their opinions and expectations about robots to a substantial degree from what they learn from the media, including movies, in which robots are often portrayed as perfectly functioning entities (good or evil). Upon interacting with a social robot themselves, people adjust their opinions and expectations based on that experience. We assume that interacting with a robot that makes mistakes makes us feel closer to technology, and less inferior to it.