Overcoming sensor ambiguity for RGB action recognition

About the Project

In collaborative human-robot assembly tasks, a robot must identify the important elements of a scene (humans, objects) and understand their behavior and interactions. Estimated skeleton and object-interaction information is often used for video-based human activity recognition, but most research relies on depth sensors. Standard RGB cameras and videos are far more widespread, yet the unreliability of pose and object estimates computed from RGB data has hindered adoption in this domain. We present novel techniques for handling such unreliability, aiding the adoption of these methods for RGB sensors.
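To make the unreliability concrete: RGB pose estimators typically return a per-joint confidence score alongside each keypoint, and occluded or ambiguous joints get low scores. The sketch below shows one simple, generic way such scores can be used, masking low-confidence joints before a skeleton reaches a recognition model. This is an illustrative example only, not the project's technique; the array layout and the `CONF_THRESHOLD` cutoff are assumptions.

```python
import numpy as np

# Assumed cutoff below which a joint estimate is treated as untrusted.
CONF_THRESHOLD = 0.3

def mask_unreliable_joints(keypoints: np.ndarray) -> np.ndarray:
    """Zero out coordinates of joints whose confidence is too low.

    keypoints: array of shape (num_joints, 3) with columns (x, y, conf).
    Returns a copy in which unreliable joints' (x, y) are set to 0 and the
    confidence column is replaced by a validity flag (1.0 keep, 0.0 masked).
    """
    out = keypoints.copy()
    reliable = out[:, 2] >= CONF_THRESHOLD
    out[~reliable, :2] = 0.0             # drop untrusted coordinates
    out[:, 2] = reliable.astype(float)   # validity flag for downstream model
    return out

# Hypothetical single-frame skeleton with one confident and one occluded joint.
frame = np.array([
    [120.0, 85.0, 0.95],   # head: high confidence, kept
    [118.0, 140.0, 0.10],  # occluded shoulder: low confidence, masked
])
print(mask_unreliable_joints(frame))
```

Hard masking like this loses information; the downstream model only sees that a joint was dropped, which is one reason more sophisticated handling of estimation uncertainty is worthwhile.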
Faculty: 
Thomas Ploetz, Irfan Essa
Students: 
Dan Scarafoni