In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning-based policy gradient method for adaptation in human-robot interaction and investigate its role as a mechanism for trust modelling. Using a mixed-reality escape room scenario with a robot, we test the hypothesis that different adaptation algorithms can influence bi-directional trust. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants reported that the robot perceived them as more trustworthy during interactions with the meta-learning-based adaptation than with a previously studied statistical adaptation model.
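Since the abstract only names the method, the following is a minimal, hypothetical sketch of a generic MAML-style meta-learning policy-gradient loop (inner-loop adaptation on a task's support trajectories, outer-loop meta-update from post-adaptation performance). It is not the authors' implementation: the network sizes, learning rates, REINFORCE objective, and the dummy task sampler are all illustrative assumptions.

```python
# Hypothetical MAML-style policy-gradient sketch; not the paper's algorithm.
import torch
import torch.nn as nn
from torch.func import functional_call

STATE_DIM, N_ACTIONS, INNER_LR = 4, 2, 0.1  # assumed toy dimensions

policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(),
                       nn.Linear(32, N_ACTIONS))
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)


def reinforce_loss(params, states, actions, returns):
    """REINFORCE objective -E[log pi(a|s) * G] under explicit parameters."""
    logits = functional_call(policy, params, (states,))
    logp = torch.log_softmax(logits, -1).gather(1, actions[:, None]).squeeze(1)
    return -(logp * returns).mean()


def dummy_task(n=16):
    """Stand-in for trajectories collected from one task (e.g., one user)."""
    return (torch.randn(n, STATE_DIM),
            torch.randint(N_ACTIONS, (n,)),
            torch.randn(n))


for step in range(100):  # meta-training iterations
    support, query = dummy_task(), dummy_task()
    params = dict(policy.named_parameters())
    # Inner loop: one adaptation step on the task's support trajectories.
    grads = torch.autograd.grad(reinforce_loss(params, *support),
                                params.values(), create_graph=True)
    adapted = {k: p - INNER_LR * g
               for (k, p), g in zip(params.items(), grads)}
    # Outer loop: meta-update from post-adaptation performance on query data.
    meta_opt.zero_grad()
    reinforce_loss(adapted, *query).backward()
    meta_opt.step()
```

The intent of such a scheme in this setting would be that the meta-trained initialization adapts quickly to a new user from only a few interaction episodes; how adaptation speed relates to trust is the empirical question the study investigates.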
In this paper, we present an exploratory study of the use of tangible implicit probes to gauge a user's social engagement with a robot. Our results show that users' attention to the robot's implicit probes is associated with higher social engagement, and that introducing implicit probes can lead to a more positive interaction with the robot. We also observed that users began paying more attention to the implicit probes after first encountering them, which underscores the need for careful design when capturing changes in social engagement over time. Finally, we discuss user recommendations for designing better implicit probes.