Explainable AI

Mark Riedl, Upol Ehsan, Brent Harrison
Pradyumna Tambwekar, Cheng Hann Gan, Jiahong Sun

In the near future, autonomous and semi-autonomous systems will interact with us with greater frequency. When they fail or behave unexpectedly, non-experts must be able to determine what went wrong. We introduce "rationalization", a technique for automatically generating natural language explanations as if another human were describing what the autonomous system was doing. We demonstrate rationalization in the test-bed domain of the game Frogger and report results from a human-subjects evaluation comparing satisfaction with explanations generated by our system against several baselines.
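To make the idea concrete, here is a minimal toy sketch of what a rationalizer's interface might look like: a function that maps an agent's observed state and chosen action to a first-person, human-sounding explanation. The Frogger states, actions, and template sentences below are hypothetical illustrations invented for this sketch, not the actual system described above, which learns to generate such language rather than looking it up.

```python
# Toy sketch of a rationalizer: given the agent's current state and the
# action it took, produce an explanation phrased the way a human player
# might describe their own behavior. All states, actions, and wordings
# here are hypothetical examples, not the trained system's output.

def rationalize(state: str, action: str) -> str:
    """Return a first-person, natural-language rationale for an action."""
    templates = {
        ("car_ahead", "wait"): "I'm waiting because a car is passing in front of me.",
        ("car_ahead", "move_left"): "I moved left to dodge the oncoming car.",
        ("lane_clear", "move_up"): "The lane is clear, so I'm hopping forward toward the goal.",
    }
    # Fall back to an honest non-answer for unseen state-action pairs.
    return templates.get((state, action), "I'm not sure why I did that.")

print(rationalize("lane_clear", "move_up"))
```

A learned system replaces the hand-written table with a model trained on human play-and-explain data, but the input-output contract (state and action in, human-like rationale out) is the same.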

Video: https://www.youtube.com/watch?v=vXcuLEBwXsQ

Mark Riedl

The Entertainment Intelligence Lab focuses on computational approaches to creating engaging and entertaining experiences. Some of the problem domains they work on include computer games, storytelling, interactive digital worlds, adaptive media, and procedural content generation. They expressly focus on computationally "hard" problems that require automation, just-in-time generation, and scalability of personalized experiences.