This Thursday the IPAB workshop will feature the following two talks, with pastries provided.
Speaker: Chenyang Zhao
Title: Tensor Based Knowledge Transfer Across Skill Categories for Robot Control
Abstract: Advances in hardware and learning for control are enabling robots to perform increasingly dextrous and dynamic control tasks. These skills typically require a prohibitive amount of exploration for reinforcement learning, and so are commonly achieved by imitation learning from manual demonstration. The costly non-scalable nature of manual demonstration has motivated work into skill generalisation, e.g., through contextual policies and options. Despite good results, existing work along these lines is limited to generalising across variants of one skill such as throwing an object to different locations. In this paper we go significantly further and investigate generalisation across qualitatively different classes of control skills. In particular, we introduce a class of neural network controllers that can realise four distinct skill classes: reaching, object throwing, casting, and ball-in-cup. By factorising the weights of the neural network, we are able to extract transferrable latent skills that enable dramatic acceleration of learning in cross-task transfer. With a suitable curriculum, this allows us to learn challenging dextrous control tasks like ball-in-cup from scratch with pure reinforcement learning.
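To give a rough intuition for the kind of weight factorisation the abstract refers to, here is a minimal sketch (my own illustration, not the speaker's code): the weights of several task-specific policy layers are stacked and decomposed into a shared latent basis plus small task-specific coefficients, so a new task only has to learn the coefficients. The function names, the use of a plain SVD, and the toy dimensions are all assumptions for illustration only.

    # Minimal sketch, not the authors' method: extract a shared latent factor
    # from the weights of several trained task policies via a low-rank
    # decomposition, so a new task is parameterised by a small coefficient
    # vector instead of a full weight matrix.
    import numpy as np

    def extract_latent_skills(task_weights, rank):
        """task_weights: list of (out_dim, in_dim) matrices, one per trained task.
        Returns a shared basis and per-task coefficients in that basis."""
        flat = np.stack([w.ravel() for w in task_weights])   # (num_tasks, out_dim*in_dim)
        _, _, vt = np.linalg.svd(flat, full_matrices=False)
        basis = vt[:rank]                                     # shared latent "skills"
        coeffs = flat @ basis.T                               # task-specific factors
        return basis, coeffs

    def compose_new_task(basis, new_coeffs, shape):
        """A new task only needs `new_coeffs` (rank-dimensional), which is far
        cheaper to learn with reinforcement learning than the full matrix."""
        return (new_coeffs @ basis).reshape(shape)

    # Toy usage: four task policies with 8x4 layers, shared rank-2 structure.
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(8, 4)) for _ in range(4)]
    basis, coeffs = extract_latent_skills(weights, rank=2)
    w_new = compose_new_task(basis, rng.normal(size=2), shape=(8, 4))
    print(w_new.shape)  # (8, 4)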
Speaker: Todor Davchev
Title: Incorporating Semantically Meaningful Predictions into Motion Planning
Abstract: Motion planning in dynamic environments, with a number of other goal-directed agents, introduces several challenges. On the one hand, there is the problem of predicting how the agents will respond to various features of the environment, which requires an understanding of the task-relevant semantics of the scene. On the other hand, these predictions must be incorporated into motion synthesis. We propose a formulation wherein the motion of these other agents is conceptualised in an optimal control framework, enabling a structured approach to incorporating semantic context in a changing environment and to planning one's own behaviour given these models. This also opens up further possibilities, such as grouping agent motion into semantically meaningful categories. In this talk, I will present my problem formulation and results from initial experiments carried out in the first year of my PhD programme.
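As a rough illustration of treating another agent as an optimal controller over semantic scene features (a sketch of my own, not the speaker's formulation; the feature set, weights, and greedy one-step roll-out are all hypothetical simplifications):

    # Minimal sketch: predict another agent's motion by assuming it greedily
    # minimises a cost built from semantically meaningful features of the
    # scene (progress to its goal, proximity to an obstacle), then roll that
    # model forward; an ego planner could plan around the predicted trajectory.
    import numpy as np

    def agent_cost(pos, goal, obstacle, weights):
        """Weighted sum of semantic features for the other agent."""
        to_goal = np.linalg.norm(pos - goal)
        obstacle_penalty = 1.0 / (np.linalg.norm(pos - obstacle) + 1e-3)
        return weights[0] * to_goal + weights[1] * obstacle_penalty

    def predict_trajectory(start, goal, obstacle, weights, steps=20, step_size=0.1):
        """Choose, at each step, the displacement that minimises the semantic
        cost (a crude one-step optimal controller for the other agent)."""
        candidates = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        traj = [np.array(start, dtype=float)]
        for _ in range(steps):
            pos = traj[-1]
            nxt = min((pos + step_size * c for c in candidates),
                      key=lambda p: agent_cost(p, goal, obstacle, weights))
            traj.append(nxt)
        return np.stack(traj)

    # Toy usage: an agent heading to (1, 1) while skirting an obstacle at (0.5, 0.5).
    traj = predict_trajectory(start=(0.0, 0.0), goal=np.array([1.0, 1.0]),
                              obstacle=np.array([0.5, 0.5]), weights=(1.0, 0.05))
    print(traj[-1])  # predicted end position, available to the ego planner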