The next IPAB workshop will take place on October 29th at 1pm. The weblink can be found in the original email.
Speaker: Mriganka Biswas
Abstract: In this presentation I will describe a model for aiding human-robot interaction based on the principle of a robot displaying behaviours derived from 'human' cognitive biases. Currently, most human-robot interactions are based on a set of well-ordered and structured rules, which repeat regardless of the person or social situation. This tends to produce unrealistic interactions, which can make it difficult for humans to relate 'naturally' to the social robot after a number of encounters. The main shortcoming of these interactions is that the social robot shows a very structured set of behaviours and, as such, acts in an unnatural and mechanical way in social terms. Fallible behaviours (e.g. forgetfulness, inability to understand others' emotions, bragging, blaming others), on the other hand, are common in humans and can be seen in everyday social interactions. Some of these fallible behaviours are caused by various cognitive biases. Researchers have studied and developed various humanlike skills (e.g. personality, emotional expression, traits) in social robots to make their behaviour more humanlike, and as a result, social robots can perform various humanlike actions, such as walking, talking, gazing or expressing emotion. But common human behaviours such as forgetfulness, inability to understand others' emotions, bragging or blaming are absent from current social robots; such behaviours, which exist in and influence people, have not been explored in social robots.
In this study I developed five cognitive biases in three different robots across four separate experiments to understand the influence of such cognitive biases on human-robot interaction. The results show that participants initially preferred interacting with the robot displaying cognitively biased behaviours over the robot without such behaviours. In my first two experiments, the robots (ERWIN and MyKeepon) each interacted with participants using a single cognitive bias (misattribution and the empathy gap, respectively), and participants enjoyed interactions featuring such bias effects, for example forgetfulness, source confusion, or exaggerated displays of happiness or sadness. In my later experiments, participants interacted with the robot (MARC) three times, with a time interval between interactions, and the results show that liking for the interactions in which the robot showed biased behaviours declined less over time than liking for the interactions in which it showed no biased behaviours.
Speaker: Michael Burke
Title: Temporal modelling with latent permutations
Abstract: Humans can easily reason about the sequence of high-level actions needed to complete tasks, but it is particularly difficult to instil this ability in neural networks trained from few examples. In this talk, I will discuss recent work exploring soft, differentiable latent permutations. Latent permutations provide a powerful tool to force inductive biases around both permutations and ordering concepts into neural models. I will illustrate this using the task of neural action sequencing conditioned on a single reference visual state (to speed up robot task planning) alongside ongoing work using latent permutations for unsupervised data association in multi-object tracking applications.
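For readers unfamiliar with soft permutations, a minimal sketch of one common construction (Sinkhorn normalisation) follows; this is an illustration of the general idea, not necessarily the exact formulation used in the talk, and the function name and parameters are assumptions for the example. Repeatedly normalising the rows and columns of a score matrix in log space yields a near-doubly-stochastic matrix, a relaxed permutation that stays differentiable and so can be trained inside a neural network.

    import numpy as np
    from scipy.special import logsumexp

    def sinkhorn(log_scores, n_iters=20, tau=1.0):
        # Alternate row and column normalisation in log space; as
        # n_iters grows (and the temperature tau shrinks) the result
        # approaches a doubly stochastic matrix, i.e. a soft permutation.
        log_p = log_scores / tau
        for _ in range(n_iters):
            log_p = log_p - logsumexp(log_p, axis=1, keepdims=True)  # rows sum to ~1
            log_p = log_p - logsumexp(log_p, axis=0, keepdims=True)  # columns sum to ~1
        return np.exp(log_p)

    # Hypothetical example: scores for ordering 4 actions. Every row and
    # column of the output sums to ~1, and each step is differentiable,
    # so the operator can sit inside an end-to-end trained model.
    rng = np.random.default_rng(0)
    P = sinkhorn(rng.normal(size=(4, 4)), n_iters=50, tau=0.3)
    print(P.round(2))

Lowering tau pushes the output towards a hard permutation matrix, which is how such relaxations let a network effectively "choose an ordering" while remaining trainable by gradient descent.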