Abstract:
Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry on chit-chat conversations with humans. However, these models are often trained on large datasets from the internet and, as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language. In this talk, I will discuss the problem landscape of safety for end-to-end conversational AI, including recent and related work. I will highlight tensions between values, potential positive impact, and potential harms, and describe a possible path forward.
Bio:
Emily Dinan is a Research Engineer at Facebook AI Research in New York. Her research interests include conversational AI, natural language processing, and fairness and responsibility in these fields. Recently, she has focused on methods for preventing conversational agents from reproducing biased, toxic, or otherwise harmful language. Prior to joining FAIR, she received her master's degree in Mathematics from the University of Washington.