Regular Session #5


Marionette: Enabling On-Road Wizard-of-Oz Autonomous Driving Studies

Peter Wang, Srinath Sibi, Brian Mok, Wendy Ju

There is a growing need to study the interactions between drivers and their increasingly autonomous vehicles. This paper describes a low-cost, portable, and versatile driver interaction system that can be used in conjunction with commercial passenger vehicles for on-road partial and fully autonomous driving interaction studies. By conducting on-road Wizard-of-Oz studies in naturalistic settings, we can explore a range of driving conditions and scenarios far beyond what can be conducted in a laboratory simulator environment. The Marionette system uses off-the-shelf components to create bidirectional communication between the driving controls of a Wizard-of-Oz vehicle operator and a driving study participant. It signals to the study participant what the car is doing and enables researchers to study participant intervention in driving activity. Marionette is designed to be easily replicated by researchers studying partially autonomous driving interaction. This paper describes the design and evaluation of this system.

Steps Toward Participatory Design of Social Robots: Mutual Learning with Older Adults with Depression

Hee Rin Lee, Selma Sabanovic, Wan-ling Chang, Shinichi Nagata, Jennifer A. Piatt, Casey Bennett, David Hakken

Here we present the results of research aimed at developing a methodology for the participatory design of social robots, which are meant to be incorporated into social contexts (e.g. home, work) and to establish social relations with humans. In contrast to the dominant technologically driven robot development process, we aim to develop a socially meaningful and responsible approach to robot design using Participatory Design (PD), which starts with participants’ issues and concerns and develops robot concepts based on their socially constructed interpretations of the capabilities and applications of robotic technologies. We present the methodological insights from our ongoing PD field study aimed at developing design concepts for socially assistive robots with older adults diagnosed with depression and their therapists, and also identify remaining challenges in this project. In particular, we discuss how to support mutual learning between researchers and participants, as well as how to bring out more active participation of older adults as “designers,” as foundational aspects of the PD process. We conclude with our thoughts regarding how work in this application area can contribute to the further development of social robots and of PD methodologies for developing technologies for domestic environments.

The Robotic Social Attributes Scale (RoSAS): Development and Validation

Colleen Carpinella, Alisa Wyman, Michael Perez, Steven Stroessner

Accurately measuring perceptions of robots has become increasingly important as technological progress permits more frequent and extensive interaction between people and robots. Across four studies, we develop and validate a scale to measure social perception of robots. Drawing from the Godspeed Scale (Bartneck et al., 2009) and from the psychological literature on social perception, we develop an 18-item scale (the Robotic Social Attributes Scale; RoSAS) to measure people’s judgments of the social attributes of robots. Factor analyses reveal three underlying scale dimensions—warmth, competence, and discomfort. We then validate the RoSAS and show that the discomfort dimension does not reflect a concern with unfamiliarity. Using images of robots that systematically vary in their machineness and gender-typicality, we show that the application of these social attributes to robots varies based on their appearance.

Affective Grounding in Human-Robot Interaction

Malte Jung

Participating in interaction requires coordination not only on content and process, as previously proposed, but also on affect. The term affective grounding is introduced to refer to the coordination of affect in interaction with the purpose of building shared understanding about what behavior can be exhibited, and about how behavior is interpreted emotionally and responded to. Affective grounding is achieved when interactants have reached shared understanding about how behavior should be interpreted emotionally. The paper contributes a review and critique of current perspectives on emotion in HRI. It further outlines how research on emotion in HRI can benefit from taking an affective grounding perspective, and discusses implications for the design of robots capable of participating in the coordination of affect in interaction.

It's Not What You Do, It's How You Do It: Grounding Uncertainty for a Simple Robot

Julian Hough, David Schlangen

For effective HRI, robots must not only exhibit good legibility of their intentions through their actions, but also ground the degree of uncertainty they have. We show how, in simple robots with spoken language understanding capacities, uncertainty can be made common ground through principles of grounding in dialogue interaction, even without the need for natural language generation. We present a model which makes this possible for simple robots with limited communication channels beyond the execution of task actions themselves. We implement our model in a simple pick-and-place robot and experiment with two simple strategies for grounding uncertainty. In an observer study, we show that participants observing interactions with the robot run by the two different strategies were able to infer the degree of understanding the robot had internally and, in the more uncertainty-expressive system, the internal uncertainty the robot had.

Implicit Communication in a Joint Action

Ross Knepper, Christoforos Mavrogiannis, Julia Proft, Claire Liang

Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor’s partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Much of human communication is performed through this implicit mechanism, and humans cannot help but infer some meaning — whether or not it was intended by the actor — from most actions. Robots must be cognizant of how their actions will be interpreted in context. We present a framework for robots to utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We consider the role of the actor and the observer, both individually and jointly, in implicit communication, as well as the effects of timing. We also show how the framework maps onto various modes of action, including natural language and motion. We consider these modes of action in various human-robot interaction domains, including social navigation and collaborative assembly.

Event Timeslots (1)

Wed, Mar 8
New Methodologies and Techniques