Abstract
The importance of explainable AI is now widely recognized. This is also the case for personalized online content, which influences decision-making at individual, business, and societal levels. Filtering and ranking algorithms, such as those used in recommender systems, support these decisions. However, we often lose sight of the purpose of these explanations and of whether understanding is an end in itself. This talk addresses why we may want to develop decision-support systems that can explain themselves, and how we may assess whether we are successful in this endeavor. It will describe some of the state-of-the-art explanations in several domains that help link the mental models of systems and people. However, it is not enough to generate rich and complex explanations; more is required to support effective decision-making. This entails decisions about which information to select to show to people and how to present it, often depending on the target users and contextual factors.
Bio
Nava Tintarev is a Full Professor of Explainable Artificial Intelligence in the Department of Advanced Computing Sciences (DACS), where she is also the Director of Research. She leads or contributes to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems; specifically, she develops the state of the art for automatically generated explanations (transparency) and explanation interfaces (recourse and control). Prof. Tintarev has developed and evaluated explanation interfaces in a wide range of domains (music, human resources, legal enforcement, nature conservation, logistics, and online search), machine learning tasks (recommender systems, classification, and search ranking), and modalities (text, graphics, and interactive combinations thereof). She currently represents Maastricht University as a Co-Investigator in the ROBUST consortium, pre-selected for a national (NWO) grant with a total budget of 95M (25M from NWO) to carry out long-term (10-year) research into trustworthy artificial intelligence. She has published over 100 peer-reviewed papers in top human-computer interaction and artificial intelligence journals and conferences such as UMUAI, TiiS, ECAI, IUI, RecSys, and UMAP. These include best paper awards at the following conferences: CHI, Hypertext, HCOMP, UMAP, and CHIIR. Webpage: http://navatintarev.com
Ioannis Konstas is the host.