Explainable AI - Rationale Generation with Visual Augmentation

Faculty: 
Mark Riedl
Students: 
Upol Ehsan, Shukan Shah, Pradyumna Tambwekar

Building on our prior work on making AI agents think out loud in plain English, this demo takes the next step. Here, we follow our friend Frogger as he plays the game. You will not only see Frogger think out loud (verbalizing his inner monologue) in plain English, you will also be able to visually connect which parts of his language correspond to which parts of the game. For example, when Frogger says "I am trying to avoid the red truck to the left," you might see the red truck in the game state light up, showing a visual correlation with the generated language. Why is this important? While language is instrumental in making black-box AI systems explainable to lay users, adding a layer of visual correlation makes our approach even more powerful. Drop by if you are interested in learning about the state of the art in Explainable AI!
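
To make the idea of visual correlation more concrete, below is a minimal, hypothetical Python sketch of how generated language could be tied back to objects in the game state. It simply matches object names mentioned in a rationale and flags them for highlighting; the demo itself may use learned attention or saliency rather than string matching, and every name in the snippet (GameObject, link_rationale_to_objects, etc.) is illustrative, not the demo's actual code.

from dataclasses import dataclass

@dataclass
class GameObject:
    name: str          # e.g. "red truck"
    x: int
    y: int
    highlighted: bool = False

def link_rationale_to_objects(rationale: str, objects: list[GameObject]) -> list[GameObject]:
    """Mark every game object whose name appears in the rationale text."""
    text = rationale.lower()
    for obj in objects:
        obj.highlighted = obj.name.lower() in text
    return [obj for obj in objects if obj.highlighted]

# Example: the agent verbalizes its reasoning, and the matching object lights up.
state = [GameObject("red truck", 3, 5), GameObject("log", 7, 2)]
rationale = "I am trying to avoid the red truck to the left"
for obj in link_rationale_to_objects(rationale, state):
    print(f"highlight {obj.name} at ({obj.x}, {obj.y})")

In a real game loop, the highlighted objects would be rendered with an overlay each frame, so the viewer sees the visual cue at the same moment the rationale is displayed.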

Lab: 
Faculty: 
Mark Riedl

The Entertainment Intelligence Lab focuses on computational approaches to creating engaging and entertaining experiences. Some of the problem domains they work on include computer games, storytelling, interactive digital worlds, adaptive media, and procedural content generation. They expressly focus on computationally "hard" problems that require automation, just-in-time generation, and scalability of personalized experiences.