Regular Session #8


Evaluating Effects of User Experience and System Transparency on Trust in Automation

Xi Yang, Vaibhav Unhelkar, Julie Shah

Existing research assessing human operators’ trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human’s entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of “area under the trust curve” than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the “cry wolf” effect — wherein human operators begin to reject an automated system due to repeated false alarms.
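The "area under the trust curve" measure mentioned above can be thought of as a time-averaged summary of momentary trust ratings. Below is a minimal sketch of one way to compute such a measure; the sampling format, function name, and example numbers are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): estimating "trust of entirety"
# as the trapezoidal area under a trust curve built from momentary trust ratings,
# normalized by total duration so it reads as an average trust level.

def average_trust_auc(times, ratings):
    """Trapezoidal area under the trust curve, divided by total duration."""
    if len(times) != len(ratings) or len(times) < 2:
        raise ValueError("need at least two (time, rating) samples")
    area = 0.0
    for (t0, r0), (t1, r1) in zip(zip(times, ratings), zip(times[1:], ratings[1:])):
        area += 0.5 * (r0 + r1) * (t1 - t0)   # trapezoid between consecutive samples
    return area / (times[-1] - times[0])

# Example: 7-point trust ratings collected after each of five trials,
# with the trial index standing in for the time axis.
times = [0, 1, 2, 3, 4]
ratings = [4, 3, 5, 6, 6]
print(average_trust_auc(times, ratings))  # time-averaged trust, as opposed to
                                          # relying only on the final rating of 6
```

This contrasts the averaged measure with a single post-experiment rating, which is the comparison the abstract highlights.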


Do you want your autonomous car to drive like you?

Chandrayee Basu, Qian Yang, David Hungerman, Anca Dragan, Mukesh Singhal

With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should be adopting their users’ driving style, which makes the assumption that users want their cars to drive like they do – aggressive drivers want aggressive cars, defensive drivers want defensive cars for the sake of comfort. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. These results open the door for learning what the user’s preferred style will be, by potentially learning their driving style but then purposefully deviating from it.
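The closing idea, learning a user's own driving style and then purposefully deviating from it toward a more defensive style, could be sketched as a simple parameter blend. The representation, feature names, and blending rule below are assumptions for illustration only, not the authors' method.

```python
# Toy sketch (an assumption, not the paper's approach): represent driving style as
# a few normalized feature weights, estimate them from the user's own driving,
# then deviate toward a more defensive target style.

DEFENSIVE_STYLE = {"speed": 0.3, "headway": 0.9, "lane_changes": 0.2}

def estimate_user_style(demonstrations):
    """Average observed feature values as a crude stand-in for style learning."""
    n = len(demonstrations)
    return {k: sum(d[k] for d in demonstrations) / n for k in DEFENSIVE_STYLE}

def preferred_style(user_style, defensiveness=0.5):
    """Blend the learned style toward the defensive baseline.

    defensiveness=0 reproduces the user's own style; 1 is fully defensive.
    """
    return {
        k: (1 - defensiveness) * user_style[k] + defensiveness * DEFENSIVE_STYLE[k]
        for k in user_style
    }

# Example: two recorded drives from a fairly aggressive user.
demos = [
    {"speed": 0.9, "headway": 0.2, "lane_changes": 0.8},
    {"speed": 0.8, "headway": 0.3, "lane_changes": 0.7},
]
print(preferred_style(estimate_user_style(demos), defensiveness=0.5))
```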


Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security

Serena Booth, James Tompkin, Krzysztof Gajos, Jim Waldo, Hanspeter Pfister, Radhika Nagpal

Can overtrust in robots compromise physical security? We conducted a series of experiments in which a robot positioned outside a secure-access student dormitory asked passersby to help it gain access. We found individual participants were comparably likely to assist the robot in exiting (40% assistance rate) as in entering (19%). When the robot was disguised as a food delivery agent for the fictional start-up Robot Grub, individuals were more likely to assist the robot in entering (76%). Groups of people were more likely than individuals to assist the robot in entering (71%). Lastly, we found participants who identified the robot as a bomb threat were just as likely to open the door (87%) as those who did not. Thus, we demonstrate that overtrust—the unfounded belief that the robot does not intend to deceive or carry risk—can represent a significant threat to physical security.


Framing Effects on Privacy Concerns about a Home Telepresence Robot

Matthew Rueben, Frank J. Bernieri, Cindy M. Grimm, William D. Smart

Privacy-sensitive robotics is an emerging area of HRI research. Judgments about privacy would seem to be context-dependent, but none of the promising work on contextual “frames” has focused on privacy concerns. This work studies the impact of contextual “frames” on local users’ privacy judgments in a home telepresence setting. Our methodology consists of using an online questionnaire to collect responses to animated videos of a telepresence robot after framing people with an introductory paragraph. The results of four studies indicate a large effect of manipulating the robot operator’s identity between a stranger and a close confidante. It also appears that this framing effect persists throughout several videos and even after subjects are re-framed. These findings serve to caution HRI researchers that a change in frame could cause their results to fail to replicate or generalize. We also recommend that robots be designed to encourage or discourage certain frames. Researchers should extend this work to different types of frames and over longer periods of time.


Staking the Ethical Limits of HRI

Thomas Arnold, Matthias Scheutz

HRI research has yielded intriguing empirical results connected to ethics and how we react in social contexts with robots, even though much of this work has focused on short-term, one-on-one interaction. In this paper, we point to the need to investigate the longer-term effects of ongoing interactions with robots — individually and in groups, with a single robot or more. We specifically examine three areas: 1) the primacy and implicit dynamics of bodily perception, 2) the competing interests at work in a single robot-human interaction, and 3) the social intricacy of multiple agents — robots and human beings — communicating and making decisions. While these areas are not exhaustive by any means, we find they yield concrete directions for how HRI can contribute to a widening, intensifying set of ethical debates with critical empirical insight, starting to stake out more of the ethical landscape in HRI.

Event Timeslots (1)

Thu, Mar 9: Trust and Privacy