Regular Session #1

Expressing Emotions through Color, Sound, and Vibration with an Appearance-Constrained Social Robot

Sichao Song, Seiji Yamada

Many researchers are now studying interactive modalities such as facial expressions, natural language, and gestures, which make communication between robots and people more natural. However, many robots currently in use are appearance-constrained and cannot perform facial expressions or gestures. Moreover, although humanoid-oriented techniques are promising, they are time-consuming and costly, which poses technical difficulties for most research studies. To increase interactive efficiency and decrease cost, we instead focus on three interaction modalities and their combinations, namely color, sound, and vibration. We conducted a structured study to evaluate the effects of these three modalities on people’s emotional perception of our simple-shaped robot “Maru”. We found that these modalities can offer a basis for intuitive emotional interaction between humans and robots, making them particularly suitable for appearance-constrained social robots. The contribution of this work is not so much the explicit parameter settings as a deeper understanding of how to express emotions through the simple modalities of color, sound, and vibration, together with a set of recommended expressions that HRI researchers and practitioners can readily employ.
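As a rough illustration of how such an emotion-to-modality mapping might look, the Python sketch below pairs emotion labels with color, sound, and vibration patterns. The parameters are hypothetical and are not the settings evaluated in the study.

```python
from dataclasses import dataclass

@dataclass
class Expression:
    """One multimodal emotional expression for a faceless robot."""
    rgb: tuple           # LED color as (R, G, B)
    tone_hz: float       # pitch of a simple beep
    beep_period_s: float # time between beeps
    vib_amplitude: float # 0.0 (off) to 1.0 (strongest)
    vib_period_s: float  # time between vibration pulses

# Hypothetical mapping for four basic emotions; the paper's actual
# parameter settings are not reproduced here.
EXPRESSIONS = {
    "joy":     Expression((255, 200, 0), tone_hz=880.0, beep_period_s=0.3,
                          vib_amplitude=0.4, vib_period_s=0.3),
    "sadness": Expression((0, 80, 255),  tone_hz=220.0, beep_period_s=1.2,
                          vib_amplitude=0.2, vib_period_s=1.2),
    "anger":   Expression((255, 0, 0),   tone_hz=440.0, beep_period_s=0.15,
                          vib_amplitude=0.9, vib_period_s=0.15),
    "fear":    Expression((180, 0, 255), tone_hz=660.0, beep_period_s=0.5,
                          vib_amplitude=0.6, vib_period_s=0.5),
}

def express(emotion: str) -> Expression:
    """Look up the color/sound/vibration pattern for an emotion label."""
    return EXPRESSIONS[emotion]

if __name__ == "__main__":
    print(express("joy"))
```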


Making Sound Intentional: A Study of Servo Sound Perception

Dylan Moore, Hamish Tennent, Nik Martelaro, Wendy Ju

How do sounds shape interaction with robots? The present study explores aural impressions associated with the servo motors commonly used to prototype robotic motion. This exploratory analysis constructs a framework for characterizing sound both objectively and subjectively, using acoustic analyses and novice evaluators on Amazon Mechanical Turk. Participants evaluated unfamiliar sounds through pairwise comparison, yielding subjective ratings of servo motor sounds. In this study, subjective measures of sound correlated well with one another but only weakly with the objective measures. Moreover, qualitative commentary from participants suggests both anthropomorphic associations with the sounds and negative impressions of the sounds overall. We conclude with a roadmap for exploration of the field of consequential sonic interaction design.
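The abstract does not say how the pairwise comparisons were converted into per-sound ratings; a Bradley-Terry fit is one standard way to do this, sketched here in Python with toy win counts.

```python
def bradley_terry(wins, n_items, iters=200):
    """Estimate a latent score per item from pairwise win counts.

    wins[i][j] = number of times item i was preferred over item j.
    Returns scores normalized to sum to 1 (higher = preferred more often).
    Standard minorization-maximization update for the Bradley-Terry model.
    """
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins[i][j] for j in range(n_items) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n_items) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]
    return p

# Toy example: 3 servo sounds, wins[i][j] = times sound i was preferred over j.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
print(bradley_terry(wins, 3))
```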


Expressive Robot Motion Timing

Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, Anca Dragan

Our goal is to enable robots to time their motion in a way that is purposefully expressive of their internal states, making them more transparent to people. We start by investigating what types of states motion timing is capable of expressing, focusing on robot manipulation and keeping the path constant while systematically varying the timing. We find that users naturally pick up on certain properties of the robot (like confidence), of the motion (like naturalness), or of the task (like the weight of the object that the robot is carrying). We then conduct a hypothesis-driven experiment to tease out the directions and magnitudes of these effects, and use our findings to develop candidate mathematical models for how users make these inferences from the timing. We find a strong correlation between the models and real user data, suggesting that robots can leverage these models to autonomously optimize the timing of their motion to be expressive.
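As a minimal, hypothetical illustration of varying timing while holding the path constant (this is not one of the models developed in the paper), the following sketch warps the time parameterization of a fixed waypoint path so that a larger share of the duration is spent near the end, which might read as a slow, cautious approach.

```python
import numpy as np

def retime_path(waypoints, duration_s, hesitation=0.0):
    """Assign timestamps to a fixed geometric path.

    hesitation = 0 gives constant speed; larger values spend a larger
    share of the total duration near the end of the path, without
    changing the path itself.
    """
    waypoints = np.asarray(waypoints, dtype=float)
    seg = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]                                  # normalized arc length in [0, 1]
    t = duration_s * s ** (1.0 + hesitation)    # time warp: slower near s = 1
    return list(zip(t, waypoints))

path = [(0.0, 0.0), (0.2, 0.1), (0.5, 0.1), (0.8, 0.0)]
for t, p in retime_path(path, duration_s=4.0, hesitation=1.5):
    print(f"t = {t:4.2f} s  ->  {p}")
```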


Using Facially Expressive Robots to Calibrate Clinical Pain Perception

Maryam Moosaei, Sumit Das, Dan Popa, Laurel Riek

In this paper, we introduce a novel application of social robotics in healthcare: high-fidelity, facially expressive robotic patient simulators (RPSs), and explore their use within a clinical experimental context. Current commercially available RPSs, the most commonly used humanoid robots worldwide, are substantially limited in usability and fidelity because they lack one of the most important clinical interaction and diagnostic tools: an expressive face. Using autonomous facial synthesis techniques, we synthesized pain on both a humanoid robot and a comparable virtual avatar. We conducted an experiment with 51 clinicians and 51 laypersons (n = 102) to explore differences in pain perception across the two groups, as well as the effects of embodiment (robot or avatar) on pain perception. Our results suggest that clinicians have lower overall accuracy in detecting synthesized pain than lay participants. We also found that all participants were overall less accurate at detecting pain from a humanoid robot than from a comparable virtual avatar, lending support to other recent findings in the HRI community. This research ultimately reveals new insights into the use of RPSs as a training tool for calibrating clinicians’ pain detection skills.


Towards Robot Autonomy in Group Conversations: Understanding the Effects of Body Orientation and Gaze

Marynel Vázquez, Elizabeth Carter, Braden McDorman, Jodi Forlizzi, Aaron Steinfeld, Scott Hudson

We conducted an experiment to examine the effects of varying orientation and gaze behaviors on interactions between a mobile robot and groups of people. For this experiment, we designed a novel protocol to induce changes in the robot’s conversational group and study different social contexts. In addition, we implemented a perception system to track participants and control the robot’s orientation and gaze with little human intervention. The results showed that the gaze behaviors under consideration affected the participants’ perception of the robot’s motion, and this motion affected the perception of gaze as well. This mutual dependency implied that gaze and body motion must be designed and controlled jointly, rather than independently of each other. We also found that the two orientation behaviors we studied led to similar feelings of inclusion and sense of belonging to the robot’s group. These outcomes suggested that both can be used as primitives for more complex orientation behaviors.
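One simple orientation primitive of the kind such behaviors could build on is sketched below in Python; this is an illustrative assumption, not either of the two behaviors compared in the study.

```python
import math

def face_group_centroid(robot_xy, member_positions):
    """Return the body heading (degrees) that points the robot at the
    centroid of its current conversational group members."""
    cx = sum(p[0] for p in member_positions) / len(member_positions)
    cy = sum(p[1] for p in member_positions) / len(member_positions)
    return math.degrees(math.atan2(cy - robot_xy[1], cx - robot_xy[0]))

# Three tracked group members around the robot at the origin.
print(face_group_centroid((0.0, 0.0), [(1.0, 1.0), (1.5, -0.5), (2.0, 0.5)]))
```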


Transgazer: Improving Impression by Switching Direct and Averted Gaze Using Optical Illusion

Yuki Kinoshita, Masanori Yokoyama, Shigeo Yoshida, Takayoshi Mochizuki, Tomohiro Yamada, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose

Both direct gaze and averted gaze have important effects in one-to-many communication. The purpose of this study is to determine the gaze cone and the precision of gaze direction for each type of gaze, and to improve listeners’ impressions of robots. We propose robotic eyes that control the gaze cone, the area within which observers feel they are being looked at. The robotic eyes can direct averted and direct gaze at multiple people simultaneously by changing the shape of the eyes, using an optical illusion. We developed a system based on this concept, “Transgazer”, which can switch between convex and hollow eyes. We measured the breadth of the gaze cone and the accuracy of conveying gaze direction for each eye type. The results showed that hollow eyes have a broader gaze cone, whereas convex eyes convey gaze direction more accurately. In the main experiment, Transgazer gave a lecture to two participants simultaneously using convex and hollow eyes, and we evaluated impression improvement, one of the effects of direct and averted gaze. The results showed that Transgazer can improve the impressions of multiple listeners simultaneously without precisely tracking the listeners’ directions. We believe that this concept will improve one-to-many communication and make a significant contribution to human-robot communication in the future.
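The gaze-cone idea can be sketched geometrically: an observer feels looked at when the angle between the gaze direction and the bearing to the observer falls within the cone's half-angle. The half-angles below are hypothetical, not the measured values.

```python
import math

def inside_gaze_cone(robot_xy, gaze_heading_deg, listener_xy, half_angle_deg):
    """Return True if a listener falls inside the robot's gaze cone."""
    dx = listener_xy[0] - robot_xy[0]
    dy = listener_xy[1] - robot_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))
    offset = (bearing - gaze_heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= half_angle_deg

# Hypothetical half-angles: hollow eyes cover a wider region than convex eyes.
for eye, half_angle in [("convex", 10.0), ("hollow", 35.0)]:
    hit = inside_gaze_cone((0.0, 0.0), gaze_heading_deg=0.0,
                           listener_xy=(2.0, 1.0), half_angle_deg=half_angle)
    print(eye, "->", "direct gaze felt" if hit else "averted gaze felt")
```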

Event Timeslots (1)

Tue, Mar 7
Creating Expressive Robots