Date:
29 November 2018
Location:
A1
Time:
10:00 am
Duration:
4 hours
Cost:
Free
Abstract:
While a formal definition of explanation remains elusive, Explainable AI is receiving substantial attention in many sub-areas of AI, with increasing focus on the importance of human-centred evaluation in controlled task settings. Drawing on various endeavours to build cognitive agents able to fluidly operate in dynamic environments with humans, I will reflect on progress and limitations regarding the generation of useful explanations, with a perhaps surprising detour to consider potential contributions from the science of magic.
Bio:
Liz Sonenberg is a Professor in the Faculty of Engineering and Information Technology at the University of Melbourne and holds the Chancellery role of Pro Vice-Chancellor (Systems Innovation). She is a member of the Standing Committee of the One Hundred Year Study on Artificial Intelligence (AI100) and a member of the Advisory Board of AI Magazine. For many years one theme of her research has been agent technology, particularly agent collaboration and teamwork, with a complementary theme being the human-automation interface and its implications for the design of computational mechanisms to support human decision-making. Liz was the recipient of the 2020 Australasian AI Award for Distinguished Research Contributions.