10:00 am to 12:00 pm
Event Location: GHC 4405
Abstract: As social and collaborative robots move into everyday life, the need for algorithms that enable their acceptance becomes critical. People intuitively parse non-verbal communication, even from machines that do not look like people; expressive motion is therefore a natural and efficient way for robots to communicate with people. This work presents a computational Expressive Motion framework that allows simple robots to modify their task motions to communicate varying internal states, such as task status, social relationships, mood (i.e., emotional state), and/or attitude (e.g., rushed, confident). By tuning robot motion features with humans in the loop, future robot designers can use this approach to parametrize how a robot generates its task motions.
The hypothesis of this Thesis is that robots can modify the motion features of their task behaviors so as to legibly communicate a variety of states. Typically, researchers either build instances of expressive motion into individual robot behaviors (which is not scalable), or use an independent channel, such as lights or facial expressions, that does not interfere with the robot’s task. What is unique about this work is that we use the same modality for both task and expression: the robot’s joint and whole-body motions. While this is not the only way for a robot to communicate expression, Expressive Motion is a channel available to all moving machines, and it can work in tandem with additional communication modalities.
Our methodological approach is to operationalize the Laban Effort System, a well-known technique from actor training that describes a four-dimensional state space of Time, Weight, Space, and Flow. Our Computational Laban Effort (CLE) framework thus uses four values, the Laban Effort Setting, to represent a robot’s current state. Each value is reflected in the motion characteristics of the robot’s movements. For example, a Time Effort value of ‘sudden’ might produce abrupt accelerations and high velocities, while a value of ‘sustained’ might produce gentler accelerations and low velocities. In our experiments, we find that varying these four Effort values results in complex communications of robot state to the people around the robot, even for robots with low degrees of freedom.
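As a minimal illustrative sketch (not the thesis’s actual parametrization), the mapping from a Laban Effort Setting to low-level motion parameters could look like the following Python fragment. The class name, the [-1, 1] value ranges, the base velocity/acceleration values, and the scaling formulas are all assumptions made for illustration; only the qualitative relationship (‘sudden’ implies faster, more abrupt motion; ‘sustained’ implies slower, gentler motion) comes from the abstract above.

from dataclasses import dataclass

# Hypothetical sketch: each Effort dimension is a value in [-1.0, 1.0],
# e.g. Time ranges from -1.0 (sustained) to +1.0 (sudden).
# Ranges and scaling formulas are illustrative assumptions.

@dataclass
class LabanEffortSetting:
    time: float    # sustained (-1) .. sudden (+1)
    weight: float  # light (-1) .. strong (+1)
    space: float   # indirect (-1) .. direct (+1)
    flow: float    # free (-1) .. bound (+1)

def time_effort_to_motion_params(time_effort: float,
                                 base_velocity: float = 0.2,
                                 base_acceleration: float = 0.5):
    """Map the Time Effort value onto velocity/acceleration limits.

    A 'sudden' setting (time_effort near +1) yields higher velocity and
    more abrupt acceleration; 'sustained' (near -1) yields the opposite.
    """
    scale = 1.0 + 0.8 * time_effort                     # assumed linear scaling
    max_velocity = base_velocity * scale                # e.g. m/s for a mobile base
    max_acceleration = base_acceleration * scale ** 2   # sharper accelerations when sudden
    return max_velocity, max_acceleration

# Example: a rushed, 'sudden' setting versus a calm, 'sustained' one.
rushed = LabanEffortSetting(time=0.9, weight=0.3, space=0.5, flow=-0.2)
calm = LabanEffortSetting(time=-0.8, weight=-0.4, space=0.0, flow=0.6)
print(time_effort_to_motion_params(rushed.time))
print(time_effort_to_motion_params(calm.time))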
The technical contributions of this work include:
1. A Computational Laban Effort framework for layering Expressive Motion features onto robot task behaviors, fully specified for low degree of freedom robots.
2. Specifications for selecting, exploring, and generalizing mappings from these motion features to particular robot state communications.
3. Experimental studies of human-robot interaction to evaluate the legibility, attributions, and impact of these technical components.
4. Sample evaluations of approaches to establish mappings between CLE features and state communications.
Committee: Reid Simmons, Chair
Manuela Veloso
Aaron Steinfeld
Guy Hoffman, Cornell University