Summary
Many technical and psychological challenges make it difficult to design machines that effectively cooperate with people. To better understand these challenges, we conducted a series of studies investigating human-human, robot-robot, and human-robot cooperation in a strategically rich resource-sharing scenario that required players to balance efficiency, fairness, and risk. In these studies, both human-human and robot-robot dyads typically learned efficient but risky cooperative solutions when they could communicate. In the absence of communication, robot dyads still often learned the same efficient solution, but human dyads achieved a less efficient (and less risky) form of cooperation. This difference in how people and machines treat risk appeared to discourage human-robot cooperation: human-robot dyads frequently failed to cooperate without communication. These results indicate that machine behavior should better align with human behavior, promoting efficiency while simultaneously accounting for human tendencies toward risk and fairness.

Highlights
•Experiments show that people learned risk-averse solutions without communication
•With and without communication, robot pairs learned risky but efficient outcomes
•Human-robot pairs often learned risky but efficient solutions with communication
•Without communication, behavioral asymmetries inhibited human-robot cooperation

Subject areas: Human-Computer Interaction; Social Sciences; Psychology