Over a trillion dollars' worth of goods is picked from warehouses each year, yet most warehouse workers still rely on paper pick lists to determine which items to pick. Although Augmented Reality-based solutions have been shown to both increase the speed of order picking and reduce error rates, little work has been done on actively warning pickers of errors.
Current methods for pick confirmation require a tedious barcode-scanning process: picking the object, finding its barcode, grasping a scanner gun, orienting the object to the scanner, replacing the scanner gun, and placing the object in the proper receiving bin. Our goal is to confirm the pick as it happens, using a head-mounted camera and computer vision to identify the object whether or not the barcode is visible. Since human pickers already achieve an accuracy of 98% or higher, our strategy is to learn the visual appearance of each object as it is picked repeatedly. Our online unsupervised learning model will learn in real time to visually distinguish between patterns of correct order picking and abnormal patterns indicating an error, all without the need for any data labeling. We aim to drive the picking error rate effectively to zero using a method that learns automatically from current picking processes. We call this process, by which the computer learns a task from human workers and then assists those workers with that task, Symbiotic AI. Over time, we expect our Symbiotic AI techniques to generalize to many domains, including medicine, home health care, and manufacturing.
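The abstract does not specify the model, but the core idea of online, label-free learning of "normal" picking patterns can be sketched as incremental anomaly detection over per-pick feature vectors (for example, embeddings computed from head-mounted camera frames). The sketch below maintains a running per-dimension mean and variance with Welford's algorithm and flags a pick whose features deviate too far from what has been seen so far; the class name, threshold, and warm-up count are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

class OnlinePickMonitor:
    """Unsupervised online monitor: tracks a running mean/variance of
    pick feature vectors (Welford's algorithm) and flags picks whose
    per-dimension z-score exceeds a threshold. Illustrative only."""

    def __init__(self, dim, z_threshold=4.0, warmup=10):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)      # running sum of squared deviations
        self.z_threshold = z_threshold
        self.warmup = warmup         # picks observed before flagging starts

    def observe(self, features):
        """Return True if this pick looks anomalous, then update stats."""
        features = np.asarray(features, dtype=float)
        anomalous = False
        if self.n >= self.warmup:
            std = np.sqrt(self.m2 / (self.n - 1)) + 1e-8
            z = np.abs(features - self.mean) / std
            anomalous = float(np.max(z)) > self.z_threshold
        # Welford's incremental update of mean and variance
        self.n += 1
        delta = features - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (features - self.mean)
        return anomalous
```

In use, each completed pick would contribute one feature vector: early picks build the model of normal appearance, and a later pick whose features fall far outside that distribution is flagged for the worker, with no labels required.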
The Contextual Computing Group (CCG) creates wearable and ubiquitous computing technologies using techniques from artificial intelligence (AI) and human-computer interaction (HCI). We focus on giving users superpowers through augmenting their senses, improving learning, and providing intelligent assistants in everyday life. Members' long-term projects have included creating wearable computers (Google Glass), teaching manual skills without attention (Passive Haptic Learning), improving hand sensation after traumatic injury (Passive Haptic Rehabilitation), educational technology for the Deaf community, and communicating with dogs and dolphins through computer interfaces (Animal Computer Interaction).