A Gesture-Centric Android System for Multi-Party Human-Robot Interaction

Yutaka Kondo, Kentaro Takemura, Jun Takamatsu, Tsukasa Ogasawara

Abstract


Natural body gesturing and speech dialogue are crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction involves not only one-to-one communication but also communication among multiple people. We have therefore developed a system that adjusts gestures and facial expressions according to a speaker's location and situation for multi-party communication. By extending our previously developed real-time gesture planning method, we propose gesture adjustment suited to human demands through motion parameterization, together with gaze motion planning that enables communication through eye-to-eye contact. We implemented the proposed motion planning method on the android Actroid-SIT and adopted a Key-Value Store to connect the components of our system. A Key-Value Store is a high-speed, lightweight dictionary database that provides parallelism and scalability. We conducted multi-party HRI experiments with a total of 1,662 subjects. With our HRI system, over 60 percent of the subjects started speaking to the Actroid, and the duration of their interactions also increased. In addition, we confirmed that our system gave people a more sophisticated impression of the Actroid.
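
The abstract describes connecting system components through a Key-Value Store but does not specify its implementation or API. The following is a minimal, illustrative Python sketch (not the authors' actual code) of how a perception component and a gesture/gaze planner might exchange state through shared keys; the key names and data layout are assumptions made for illustration only.

    import threading

    class KeyValueStore:
        # Minimal thread-safe in-memory key-value store, used here only to
        # illustrate the component-coupling idea described in the abstract.
        def __init__(self):
            self._data = {}
            self._lock = threading.Lock()

        def put(self, key, value):
            # Store or overwrite a value under the given key.
            with self._lock:
                self._data[key] = value

        def get(self, key, default=None):
            # Read the latest value for a key, or a default if unset.
            with self._lock:
                return self._data.get(key, default)

    # Hypothetical usage: a perception component publishes the current
    # speaker's position, and the gaze/gesture planner reads it to adjust motion.
    kvs = KeyValueStore()
    kvs.put("speaker/position", {"x": 1.2, "y": -0.4, "z": 1.6})  # example values, meters
    kvs.put("speaker/id", 2)

    target = kvs.get("speaker/position")
    if target is not None:
        print("Plan gaze toward", target)

In such a design, each component only reads and writes named keys rather than calling other components directly, which is one way the loose coupling, parallelism, and scalability mentioned in the abstract could be obtained.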

Keywords


Human-Robot Interaction; Body Gesture; Facial Expression; Multi-party; Android





