Note: This is a virtual presentation. Here is the link to where the presentation will take place.
Title: Fine-grained activity recognition for assembly videos
Abstract: When a collaborative robot works with a human partner to build a piece of furniture or an industrial part, the robot must be able to perceive which parts are connected and where, and it must be able to reason about how these connections can change as a result of its partner’s actions. The same need arises in industrial process monitoring and manufacturing applications, where an automated system verifies a product as it progresses through the assembly line. These assembly processes require systems that can reason geometrically and temporally, relating the structure of an assembly to the manipulation actions that created it.
Grounded in a behavioral study of spatial cognition, this proposal combines methods for physical and temporal reasoning to enable the analysis and automated perception of assembly actions. We develop a temporal model that relates manipulation actions to the structures they produce and describe its use in enabling fine-grained behavioral analyses. Then, we apply our sequence model to recognize assembly actions in a variety of assembly scenarios. Finally, we describe a method for part-based reasoning that makes our approach robust to occluded and previously unseen assemblies.
Sanjeev Khudanpur, Department of Electrical and Computer Engineering
Greg Hager, Department of Computer Science
Vishal Patel, Department of Electrical and Computer Engineering