Contents
HRI ’23: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
SESSION: Keynote Talks
Interaction Techniques with a Navigation Robot for the Visually Impaired
Robotic technology has long been seen as a potential mobility aid for the visually impaired, and the latest advancements in sensing and artificial intelligence have made it a reality. Before starting the project, we researched the capabilities of, and user interactions with, guide dogs. We discovered that the rich haptic interaction with the “handle” allows users to comfortably follow a guide dog.
We then began designing a navigation robot with a handle that is small and slim enough to be followed in a similar body posture. Our goal was to make the robot natural and seamless in an urban environment, leading us to the concept of the AI Suitcase, a suitcase-shaped robot. In this presentation, after reviewing the capabilities of a guide dog, we will explain the concept, design, and implementation of the AI Suitcase and interaction techniques with the robot. We will also discuss the challenges of implementing such new robotic technologies in the real world, including technical challenges, infrastructure challenges, business models, and social acceptance.
BIO: Dr. Chieko Asakawa is an IBM Fellow, working in the area of accessibility. Her initial contribution to the field started with braille digitalization and moved on to Web accessibility, including the world’s first practical voice browser. Since 2010, Chieko has been focusing on real-world accessibility to help the visually impaired better comprehend their surroundings and navigate the world with the power of AI. Her latest project is the development of the AI Suitcase, a navigation robot for the visually impaired. She has been serving as an IBM Distinguished Service Professor at Carnegie Mellon University since 2014, and has concurrently served as Chief Executive Director of the Japanese National Museum of Emerging Science and Innovation (Miraikan) since April 2021. In 2013, the government of Japan awarded the Medal of Honor with Purple Ribbon to Dr. Asakawa for her outstanding contributions to accessibility research. She was elected as a foreign member of the US National Academy of Engineering in 2017 and inducted into the National Inventors Hall of Fame (NIHF) in 2019. She also received the American Foundation for the Blind’s 2020 Helen Keller Achievement Award.
Robotics Research and Teaching with a Feminist Lens
Feminism is more than (and often not even) an interest in women’s issues. For our robotics research, we use feminist theory as an analytical toolbox, filled with terms and insights to make visible and probe questions of power, representation, and expectations about and between humans and robots in the entangled encounters produced by social robots. Some of these questions are related to gender. Feminist theory gives us a vocabulary to talk about the materiality of robots, but also their positioning in our social encounters, real and imaginary, and how they position us, the users, in those encounters. This keynote will present some of the theoretical insights from feminism and intersectionality that we have found useful and generative; discuss how and where we apply them to our studies of social robots; and reflect on our experiences using these concepts to teach engineering students.
BIO: Ericka Johnson is a professor of gender and society at Linköping University, Sweden, and a member of the Royal Swedish Academy of Sciences. She has an interdisciplinary background in sociology, gender studies, and science & technology studies. Her work explores how technologies of the body refract discourses, articulate silent understandings, highlight cultural values, and make tangible social norms, with a particular interest in technologies of the digital body, from medical simulators to care robots. She is the author of several monographs and anthologies, including A Cultural Biography of the Prostate (MIT Press 2021), Refracting through Technology (Routledge 2019), and Gendering Drugs: Feminist Studies of Pharmaceuticals (Palgrave 2018). Together with Dr. Katherine Harrison and Professor Ginevra Castellano, she is leading an interdisciplinary research project on the ethics and social consequences of AI and caring robots, funded by WASP-HS.
Robots in Real Life: Putting HRI to Work
This talk focuses on the unique challenges in deploying a mobile manipulation robot into an environment where the robot works closely with people on a daily basis. Diligent Robotics’ first product, Moxi, is a mobile manipulation service robot that is at work in hospitals today, assisting nurses and other front-line staff with materials management tasks. The talk dives into the computational complexity of developing a mobile manipulator with social intelligence. Dr. Thomaz will focus on how human-robot interaction theories and algorithms translate into the real world, and on the impact on functionality and perception of robots that perform delivery tasks in a busy human environment. The talk will include many examples and data from the field, with commentary and discussion around both the expected and unexpected hard problems in building robots that operate 24/7 as reliable teammates.
BIO: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics. Her accolades include being recognized by the National Academy of Sciences as a Kavli Fellow and by the US President’s Council of Advisors on Science and Technology (PCAST), inclusion in the MIT Technology Review TR35 list, and being a featured TEDx keynote speaker on social robotics. Dr. Thomaz has received numerous research grants, including the NSF CAREER award and the Office of Naval Research Young Investigator Award.
Andrea has published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning and interaction, in order to build social robots and other machines that are intuitive for everyday people to teach. She earned her Ph.D. from MIT and B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab). Andrea co-founded Diligent Robotics in 2018, to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores so humans can have more time for the work they care most about.
SESSION: Inclusive Design, Accessibility, Assistive Robots
Exploring Levels of Control for a Navigation Assistant for Blind Travelers
Only a small percentage of blind and low-vision people use traditional mobility aids such as a cane or a guide dog. Various assistive technologies have been proposed to address the limitations of traditional mobility aids. These devices often give either the user or the device the majority of the control. In this work, we explore how varying levels of control affect users’ sense of agency, trust in the device, confidence, and navigation success. We present Glide, a novel mobility aid with two modes of control: Glide-directed and User-directed. We employed Glide in a study (N=9) in which blind or low-vision participants used both modes to navigate through an indoor environment. Overall, participants found Glide easy to use and learn. Most participants trusted Glide despite its current limitations, and their confidence and performance increased as they continued to use it. Users’ control mode preferences varied across situations; no single mode “won” in all situations.
The Robot Made Us Hear Each Other: Fostering Inclusive Conversations among Mixed-Visual Ability Children
Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairment. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom experienced visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediation strategy (directive strategy). However, children feel more heard by their peers when the robot is less intervening (organic strategy). We extend prior work on social robots that assist group work and contribute a mediator robot that enables children with visual impairments to engage equally in group conversations. We finish by discussing design implications for inclusive social robots.
Design Principles for Robot-Assisted Feeding in Social Contexts
Social dining, i.e., eating with or in company, is replete with meaning and cultural significance. Unfortunately, for the 1.8 million Americans with motor impairments who cannot eat without assistance, challenges restrict them from enjoying this pleasant social ritual. In this work, we identify the needs of participants with motor impairments during social dining and how robot-assisted feeding can address them. Using speculative videos that show robot behaviors within a social dining context, we interviewed participants to understand their preferences. Following a community-based participatory research method, we worked with a community researcher with motor impairments throughout this study. We contribute (a) insights into how a robot can help overcome challenges in social dining, (b) design principles for creating robot-assisted feeding systems, and (c) an implementation guide for future research in this area. Our key finding is that robots’ unique assistive qualities can address challenges people with motor impairments face during social dining, promoting empowerment and belonging.
Multi-Purposeful Activities for Robot-Assisted Autism Therapy: What Works Best for Children’s Social Outcomes?
This research designed and applied 24 multi-purposeful robot activities of varying social mediation levels in a multiple-session experiment with 34 children of diverse autistic characteristics in a rehabilitation setting. This paper explores what types of robot activities can meet individual needs to bring about more socio-behavioral progress and juxtaposes child characteristics to identify behavioral outcomes in each activity. This knowledge helps us respond to the question of which activity types suit specific subgroups of Autism Spectrum Disorder (ASD). Our data analysis included coding 48.5 hours of video data for a total of 14 measures to fully capture children’s activity-based socio-emotional outcomes. Overall, the activities at varying social mediation levels brought positive social outcomes to all children, to a greater or lesser extent. However, children showed somewhat different behavioral outcomes as mediated by core autism-related and age-specific characteristics. This study provides in-depth accounts of what might be helpful in designing and applying multi-purposeful activities responsive to the diverse needs of children.
Get SMART: Collaborative Goal Setting with Cognitively Assistive Robots
Many robot-delivered health interventions aim to support people longitudinally at home to complement or replace in-clinic treatments. However, there is little guidance on how robots can support collaborative goal setting (CGS). CGS is the process in which a person works with a clinician to set and modify their goals for care; it can improve treatment adherence and efficacy. However, for home-deployed robots, clinicians will have limited availability to help set and modify goals over time, which necessitates that robots support CGS on their own. In this work, we explore how robots can facilitate CGS in the context of our robot CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation), which delivers neurorehabilitation to people with mild cognitive impairment (PwMCI). We co-designed robot behaviors for supporting CGS with clinical neuropsychologists and PwMCI, and prototyped them on CARMEN. We present feedback on how PwMCI envision these behaviors supporting goal progress and motivation during an intervention. We report insights on how to support this process with home-deployed robots and propose a framework to support HRI researchers interested in exploring this both in the context of cognitively assistive robots and beyond. This work supports designing and implementing CGS on robots, which will ultimately extend the efficacy of robot-delivered health interventions.
Expanded Situational Awareness Without Vision: A Novel Haptic Interface for Use in Fully Autonomous Vehicles
This work presents a novel ultrasonic haptic interface to improve nonvisual perception and situational awareness in applications such as fully autonomous vehicles. User study results (n=14) suggest comparable performance with the dynamic ultrasonic stimuli versus a control using static embossed stimuli. The utility of the ultrasonic interface is demonstrated with a prototype autonomous small-scale robot vehicle using intersection abstractions. These efforts support the application of ultrasonic haptics for improving nonvisual information access in autonomous transportation with strong implications for people who are blind and visually impaired, accessibility, and human-in-the-loop decision making.
“Being in on the Action” in Mobile Robotic Telepresence: Rethinking Presence in Hybrid Participation
Mobile Robotic Telepresence (MRP) systems afford remote communication with an embodied physicality and autonomous mobility, which is thought to be useful for creating a sense of presence in hybrid activities. In this paper, drawing on phenomenology, we interviewed seven long-term users of MRP to understand the lived experience of participating in hybrid spaces through a telepresence robot. The users’ accounts show how the capabilities of the robot impact interactions, and how telepresence differs from in-person presence. Whilst not feeling as if they were really there, users felt present when they were able to participate in local action and be treated as present. They also report standing out and being subject to behaviour amounting to ‘othering’. We argue that these experiences point to a need for future work on telepresence to focus on giving remote users the means to exercise autonomy in ways that enable them to participate — to be ‘in on the action’ — rather than in ways that simply simulate being in-person.
Feminist Human-Robot Interaction: Disentangling Power, Principles and Practice for Better, More Ethical HRI
Human-Robot Interaction (HRI) is inherently a human-centric field of technology. The role of feminist theories in related fields (e.g., Human-Computer Interaction, Data Science) is taken as a starting point to present a vision for Feminist HRI, which can support better, more ethical everyday HRI practice as well as a more activist research and design stance. We first define feminist design for an HRI audience and use a set of feminist principles from neighboring fields to examine existing HRI literature, showing the progress that has already been made alongside some additional potential ways forward. Following this, we identify a set of reflexive questions to be posed throughout the HRI design, research, and development pipeline, encouraging a sensitivity to power and to individuals’ goals and values. Importantly, we do not look to present a definitive, fixed notion of Feminist HRI, but rather demonstrate the ways in which bringing feminist principles to our field can lead to better, more ethical HRI, and discuss how we, the HRI community, might do this in practice.
Implications of AI Bias in HRI: Risks (and Opportunities) when Interacting with a Biased Robot
Social robotic behavior is commonly designed using AI algorithms which are trained on human behavioral data. This training process may result in robotic behaviors that echo human biases and stereotypes. In this work, we evaluated whether an interaction with a biased robotic object can increase participants’ stereotypical thinking. In the study, a gender-biased robot moderated debates between two participants (a man and a woman) in three conditions: (1) the robot’s behavior matched gender stereotypes (Pro-Man); (2) the robot’s behavior countered gender stereotypes (Pro-Woman); (3) the robot’s behavior neither reflected nor countered gender stereotypes (No-Preference). Quantitative and qualitative measures indicated that the interaction with the robot in the Pro-Man condition increased participants’ stereotypical thinking. In the No-Preference condition, stereotypical thinking was also observed, but to a lesser extent. In contrast, when the robot displayed counter-biased behavior in the Pro-Woman condition, stereotypical thinking was eliminated. Our findings suggest that HRI designers must be conscious of AI algorithmic biases, as interactions with biased robots can reinforce implicit stereotypical thinking and exacerbate existing biases in society. On the other hand, counter-biased robotic behavior can be leveraged to support present efforts to address the negative impact of stereotypical thinking.
SESSION: Human-robot Communication
No, to the Right: Online Language Corrections for Robotic Manipulation via Shared Autonomy
Systems for language-guided human-robot interaction must satisfy two key desiderata for broad adoption: adaptivity and learning efficiency. Unfortunately, existing instruction-following agents cannot adapt, lacking the ability to incorporate online natural language supervision, and even if they could, they would require hundreds of demonstrations to learn even simple policies. In this work, we address these problems by presenting Language-Informed Latent Actions with Corrections (LILAC), a framework for incorporating and adapting to natural language corrections (“to the right”, or “no, towards the book”) online, during execution. We explore rich manipulation domains within a shared autonomy paradigm. Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot: language is an input to a learned model that produces a meaningful, low-dimensional control space that the human can use to guide the robot. Each real-time correction refines the human’s control space, enabling precise, extended behaviors – with the added benefit of requiring only a handful of demonstrations to learn. We evaluate our approach via a user study where users work with a Franka Emika Panda manipulator to complete complex manipulation tasks. Compared to existing learned baselines covering both open-loop instruction following and single-turn shared autonomy, we show that our corrections-aware approach achieves higher task completion rates and is subjectively preferred by users for its reliability, precision, and ease of use.
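As a rough illustration of the shared-autonomy idea above (a minimal sketch, not the authors’ LILAC implementation), the snippet below shows how a language embedding could condition a learned low-dimensional control space that a human refines online; embed_language, decode_action, and the weight matrix are hypothetical stand-ins for the learned components described in the abstract.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned components; the real system learns these from demonstrations.
LANG_DIM, STATE_DIM, LATENT_DIM, ACTION_DIM = 8, 7, 2, 7
W_decode = rng.normal(size=(ACTION_DIM, STATE_DIM + LANG_DIM + LATENT_DIM))

def embed_language(utterance: str) -> np.ndarray:
    """Placeholder language encoder: hash the utterance into a fixed-size vector."""
    seed = int.from_bytes(hashlib.sha256(utterance.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).normal(size=LANG_DIM)

def decode_action(state, lang_emb, z):
    """Map robot state, language context, and the human's low-dimensional input z
    to a full joint-velocity command (the 'latent actions' decoder)."""
    x = np.concatenate([state, lang_emb, z])
    return np.tanh(W_decode @ x)

# Shared-autonomy loop: the human steers with a 2-DoF input (e.g., a joystick),
# and a spoken correction re-conditions the control space mid-execution.
state = np.zeros(STATE_DIM)
lang_emb = embed_language("pick up the mug")
for step in range(20):
    z = rng.uniform(-1.0, 1.0, size=LATENT_DIM)        # human's low-dim control input
    if step == 10:
        lang_emb = embed_language("no, to the right")  # online language correction
    state = state + 0.1 * decode_action(state, lang_emb, z)
```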
Do You Follow?: A Fully Automated System for Adaptive Robot Presenters
An interesting application for social robots is to act as a presenter, for example as a museum guide. In this paper, we present a fully automated system architecture for building adaptive presentations for embodied agents. The presentation is generated from a knowledge graph, which is also used to track the grounding state of information, based on multimodal feedback from the user. We introduce a novel way to use large-scale language models (GPT-3 in our case) to lexicalise arbitrary knowledge graph triples, greatly simplifying the design of this aspect of the system. We also present an evaluation in which 43 participants interacted with the system. The results show that users prefer the adaptive system and consider it more human-like and flexible than a static version of the same system, but only partial effects are seen on their learning of the facts presented by the robot.
Fresh Start: Encouraging Politeness in Wakeword-Driven Human-Robot Interaction
Deployed social robots are increasingly relying on wakeword-based interaction, where interactions are human-initiated by a wakeword like “Hey Jibo”. While wakewords help to increase speech recognition accuracy and ensure privacy, there is concern that wakeword-driven interaction could encourage impolite behavior because wakeword-driven speech is typically phrased as commands. To address these concerns, companies have sought to use wakeword design to encourage interactant politeness, through wakewords like “<Name>, please”. But while this solution is intended to encourage people to use more “polite words”, researchers have found that these wakeword designs actually decrease interactant politeness in text-based communication, and that other wakeword designs could better encourage politeness by priming users to use Indirect Speech Acts. Yet there has been no previous research directly comparing these wakeword designs in in-person, voice-based human-robot interaction experiments, and previous in-person HRI studies could not effectively study carryover of wakeword-driven politeness and impoliteness into human-human interactions. In this work, we conceptually reproduced these previous studies (n=69) to assess how the wakewords “Hey <Name>”, “Excuse me <Name>”, and “<Name>, please” impact robot-directed and human-directed politeness. Our results demonstrate how different types of linguistic priming interact in nuanced ways to induce different types of robot-directed and human-directed politeness.
I Need Your Help… or Do I?: Maintaining Situation Awareness through Performative Autonomy
Interactive intelligent systems are increasingly being deployed in safety-critical contexts like space exploration. For humans to safely and successfully complete collaborative tasks with robots in these contexts, they must maintain Situational Awareness of their task context without being cognitively overloaded — regardless of whether they are co-located with robots or interacting with them from a distance of thousands or millions of miles. In this paper, we present a novel autonomy design strategy we term Performative Autonomy, in which robots behave as if they have a lower level of autonomy than they are truly capable of (i.e., asking for advice they do not believe they truly need), for the sole purpose of maintaining interactants’ Situational Awareness. In our first experiment (n=264), we begin by demonstrating that Performative Autonomy can increase Situational Awareness (SA) without overly increasing workload, and that this is true across tasks with different baseline levels of Mental Workload. In our second experiment (n=318), we consider cases where robots do not believe they need advice, but in fact have faulty perception or decision-making capabilities. In this experiment, we only observed benefits of Performative Autonomy for specific types of questions, and only when there was significant cognitive load imposed by a secondary task; yet we observed uniform benefit on task performance for asking these types of questions regardless of task-imposed Mental Workload. Our results from these two studies (total n=582) thus provide strong support for using this autonomy design strategy in future safety-critical missions as humanity explores the Moon, Mars, and beyond.
Communicative Robot Signals: Presenting a New Typology for Human-Robot Interaction
We present a new typology for classifying signals from robots when they communicate with humans. For inspiration, we use ethology, the study of animal behaviour, and previous efforts from the literature as guides in defining the typology. The typology is based on communicative signals that consist of five properties: the origin of the signal, the deliberateness of the signal, the signal’s reference, the genuineness of the signal, and its clarity (i.e., how implicit or explicit it is). Using the accompanying worksheet, the typology is straightforward to apply when examining communicative signals from previous human-robot interactions, and it provides guidance for designers when designing new robot behaviours.
Hmm, You Seem Confused! Tracking Interlocutor Confusion for Situated Task-Oriented HRI
Our research seeks to develop a long-lasting and high-quality engagement between the user and the social robot, which in turn requires a more sophisticated alignment of the user and the system than is currently commonly available. Close monitoring of interlocutors’ states, their confusion state in particular, and adjusting dialogue policies based on that state are needed for successful joint activity. In this paper, we present an initial study of human-robot conversation scenarios using a Pepper robot to investigate the confusion states of users. A Wizard-of-Oz (WoZ) HRI experiment is described in detail, together with stimulus strategies designed to trigger confused states in interlocutors. For the collected data, we estimated emotions, head pose, and eye gaze, and analysed these features against the silence duration in the speech data and the confusion states participants self-reported after the study. Our analysis found a significant relationship between confusion states and most of these features. We see these results as particularly significant for multimodal situated dialogue in human-robot interaction and beyond.
Crossing Reality: Comparing Physical and Virtual Robot Deixis
Augmented Reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically-limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.
The Effect of Simple Emotional Gesturing in a Socially Assistive Robot on Child’s Engagement at a Group Vaccination Day
Children encounter high levels of stress and anxiety before receiving medical treatment, such as a vaccination. This paper explores the effect of emotional gesturing in socially assistive robots (SARs) on children’s observed and self-reported engagement, as well as self-reported anxiety, fear, and trust during a group vaccination. A total of 249 children interacted with the social robot iPal before and after receiving the vaccine. Our results show an overall positive effect of adding emotional gestures to a SAR’s interaction behavior, leading to increased engagement and lower anxiety, while increased engagement also resulted in trusting the robot more. Thus, adding emotional gestures during child-robot interaction is a powerful way to improve the child’s experience during a group vaccination day.
Designing Robot Sound-In-Interaction: The Case of Autonomous Public Transport Shuttle Buses
Horns and sirens are important tools for communicating on the road, which are still understudied in autonomous vehicles. While HRI has explored different ways in which robots could sound, we focus on the range of actions that a single sound can accomplish in interaction. In a Research through Design study involving autonomous shuttle buses in public transport, we explored sound design with the help of voice-overs to video recordings of the buses on the road and Wizard-of-Oz tests in live traffic. The buses are slowed down by (unnecessary) braking in response to people getting close. We found that prolonged jingles draw attention to the bus and invite interaction, while repeated short beeps and bell sounds can instruct the movement of others away from the bus. We highlight the importance of designing sound in sequential interaction and describe a new method for embedding video interaction analysis in the design process.
Coffee, Tea, Robots?: The Performative Staging of Service Robots in ‘Robot Cafes’ in Japan
We present an ethnographic observational study of six robot cafes in Japan to understand how service robots are performatively staged and presented to the public. We particularly attend to the diverse ways in which the physical setting and ambience of the cafes, the verbal characterization of and staff behaviors toward robots, explicit and implicit instructions on appropriate interactions with robots, and handling of robot malfunctions constitute robots as socially acceptable and useful in daily life. Such scaffolding enables robots to provide material and affective services to cafe visitors, and visitors to explore various interaction possibilities with robots. Our work contributes to the critical study of the ongoing construction of “robot cultures” in Japan, and calls attention to public interactions with robots and the importance of contextual staging beyond individual robot features in human-robot interaction design.
Investigating the Integration of Human-Like and Machine-Like Robot Behaviors in a Shared Elevator Scenario
This paper examines the advantages and disadvantages of combining Human-Like and Machine-Like behaviors for a robot taking a shared elevator with a bystander as part of an office delivery service scenario. We present findings from an in-person Wizard-of-Oz experiment that builds on and implements behavior policies developed in a previous study. In this experiment, we found that the combination of Machine-Like and Human-Like behaviors was perceived as better than Human-Like behaviors alone. We discuss possible reasons and point to key capabilities that a socially competent robot should have to achieve better Human-Like behaviors in order to seamlessly negotiate a social encounter with bystanders in a shared elevator or similar scenario. We found that establishing and maintaining a shared transactional space is one of these key requirements.
Studying Mind Perception in Social Robotics Implicitly: The Need for Validation and Norming
The recent shift towards incorporating implicit measurements into mind perception studies in social robotics has come with both promises and challenges. Implicit tasks can go beyond the limited scope of explicit tasks and increase the robustness of empirical investigations in human-robot interaction (HRI). However, designing valid and reliable implicit tasks requires norming and validating all stimuli to ensure that no confounding factors interfere with the experimental manipulations. We conducted a lexical norming study to systematically explore the concepts suitable for an implicit task that measures mind perception induced by social robots. Two hundred seventy-four participants rated an expanded and strictly selected list of forty mental capacities in two categories, Agency and Experience, and at two levels of capacity, High and Low. We used the partitioning around medoids algorithm as an objective way of revealing the clusters and discuss the different clustering solutions in light of previous findings. We also conducted frequency-based natural language processing (NLP) analyses of the answers to the open-ended questions. The NLP analyses verified the importance of clear instructions and the presence of some common conceptualizations across dimensions. We propose a systematic approach that encourages validation and norming studies, which will further improve the reliability and reproducibility of HRI studies.
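For readers unfamiliar with the clustering step mentioned above, here is a small, self-contained sketch of partitioning around medoids on fabricated rating data; it illustrates the generic algorithm only and is not the authors’ analysis pipeline or their stimuli.

```python
import numpy as np

def pam(X, k, n_iter=100, seed=0):
    """Basic partitioning around medoids: pick k data points as medoids and
    greedily swap them to minimise total distance of points to their medoid."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            # Choose the member that minimises within-cluster distance as the new medoid.
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Toy stand-in for rated mental-capacity items
# (rows = items, columns = rating dimensions such as Agency / Experience).
ratings = np.random.default_rng(1).normal(size=(40, 2))
medoids, labels = pam(ratings, k=4)
print("medoid items:", medoids, "cluster sizes:", np.bincount(labels))
```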
Your Way Or My Way: Improving Human-Robot Co-Navigation Through Robot Intent and Pedestrian Prediction Visualisations
As mobile robots enter shared urban spaces, operating in close proximity to people, new challenges arise in terms of how these robots communicate with passers-by. Following an iterative process involving expert focus groups (n=8), we designed an augmented reality concept that visualises the robot’s navigation intent and the pedestrian’s predicted path. To understand the impact of path visualisations on trust, sense of agency, user experience, and robot understandability, we conducted a virtual reality evaluation (n=20). We compared visualising both robot intent and pedestrian path prediction against visualising robot intent alone and a baseline without augmentation. The presence of path visualisations resulted in a significant improvement in trust. Triangulation of quantitative and qualitative results further highlights the impact of pedestrian path prediction visualisation on robot understandability, as it allows for exploratory interaction.
On Using Social Signals to Enable Flexible Error-Aware HRI
Prior error management techniques often do not possess the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit, manual error management and implicit, domain-specific-information-driven error management, tailoring their response to specific interaction contexts. We present a framework for approaching error-aware systems by adding implicit social signals as another information channel to create more flexibility in application. To support this notion, we introduce a novel dataset (composed of three data collections) focused on understanding natural facial action unit (AU) responses to robot errors during physical human-robot interactions, varying across task, error, people, and scenario. Analysis of the dataset reveals that, through the lens of error detection, using AUs as input to error management affords flexibility to the system and has the potential to improve the error detection response rate. In addition, we provide an example real-time interactive robot error management system using the error-aware framework.
Illustrating Robot Movements
In efforts to disseminate research on human-robot interaction, many researchers use illustrations in the form of sketches, photographs, and 3D models of robot movements. These illustrations are not only useful for building on the research, but they also capture ways researchers think about robot movement. In this paper, we review papers from the ACM/IEEE International Conference on Human-Robot Interaction in which such illustrations are presented supplementary to the text. We analyse a total of 181 illustrations from 137 papers to understand the diverse ways in which robot movements are illustrated, as well as how each style supports and limits information about the movements. We identify 10 basic styles that are used. Based on a visual analysis of these styles, we provide a detailed examination of each. This paper contributes an overview that can be used to support future dissemination within the HRI research community. We present four aspects to consider for future illustrations and a discussion of how our findings could be utilised in early design processes.
SESSION: Human-robot Collaboration
Resolving Conflicts During Human-Robot Co-Manipulation
This paper proposes a machine learning (ML) approach to detect and resolve motion conflicts that occur between a human and a proactive robot during the execution of a physically collaborative task. We train a random forest classifier to distinguish between harmonious and conflicting human-robot interaction behaviors during object co-manipulation. Kinesthetic information generated through the teamwork is used to describe the interactive quality of collaboration. As such, we demonstrate that features derived from haptic (force/torque) data are sufficient to classify if the human and the robot harmoniously manipulate the object or they face a conflict. A conflict resolution strategy is implemented to get the robotic partner to proactively contribute to the task via online trajectory planning whenever interactive motion patterns are harmonious, and to follow the human lead when a conflict is detected. An admittance controller regulates the physical interaction between the human and the robot during the task. This enables the robot to follow the human passively when there is a conflict. An artificial potential field is used to proactively control the robot motion when partners work in harmony. An experimental study is designed to create scenarios involving harmonious and conflicting interactions during collaborative manipulation of an object, and to create a dataset to train and test the random forest classifier. The results of the study show that ML can successfully detect conflicts and the proposed conflict resolution mechanism reduces human force and effort significantly compared to the case of a passive robot that always follows the human partner and a proactive robot that cannot resolve conflicts.
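To make the classification step concrete, the sketch below trains a random forest on summary features of windowed force/torque data to label interaction segments as harmonious or conflicting; the synthetic data and feature choices are placeholders for illustration, not the authors’ dataset or exact feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def haptic_features(window):
    """Summarise a window of 6-axis force/torque samples (shape: N x 6)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           np.abs(window).max(axis=0)])

def make_window(conflict):
    """Fabricated data: 'conflict' windows carry larger, noisier interaction forces."""
    return rng.normal(loc=3.0 * conflict, scale=1.0 + 2.0 * conflict, size=(50, 6))

labels = rng.integers(0, 2, size=400)                    # 0 = harmony, 1 = conflict
X = np.array([haptic_features(make_window(c)) for c in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("conflict-detection accuracy:", clf.score(X_test, y_test))

# In the pipeline described above, a detected conflict would switch the
# admittance-controlled robot from proactive trajectory planning to
# passively following the human lead.
```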
Crafting with a Robot Assistant: Use Social Cues to Inform Adaptive Handovers in Human-Robot Collaboration
We study human-robot handovers in a naturalistic collaboration scenario, where a mobile manipulator robot assists a person during a crafting session by providing and retrieving objects used for wooden piece assembly (functional activities) and painting (creative activities). We collect quantitative and qualitative data from 20 participants in a Wizard-of-Oz study, generating the Functional And Creative Tasks Human-Robot Collaboration dataset (the FACT HRC dataset), available to the research community. This work illustrates how social cues and task context inform the temporal-spatial coordination in human-robot handovers, and how human-robot collaboration is shaped by and in turn influences people’s functional and creative activities.
The Effect of Robot Skill Level and Communication in Rapid, Proximate Human-Robot Collaboration
As high-speed, agile robots become more commonplace, these robots will have the potential to better aid and collaborate with humans. However, due to the increased agility and functionality of these robots, close collaboration with humans can create safety concerns that alter team dynamics and degrade task performance. In this work, we aim to enable the deployment of safe and trustworthy agile robots that operate in proximity to humans. We do so by 1) proposing a novel human-robot doubles table tennis scenario to serve as a testbed for studying agile, proximate human-robot collaboration and 2) conducting a user study to understand how attributes of the robot (e.g., robot competency or capacity to communicate) impact team dynamics, perceived safety, and perceived trust, and how these latent factors affect human-robot collaboration (HRC) performance. We find that robot competency significantly increases perceived trust (p < .001), extending skill-to-trust assessments in prior studies to agile, proximate HRC. Furthermore, interestingly, we find that when the robot vocalizes its intention to perform a task, it results in a significant decrease in team performance (p = .037) and perceived safety of the system (p = .009).
“What If It Is Wrong”: Effects of Power Dynamics and Trust Repair Strategy on Trust and Compliance in HRI
Robotic systems designed to work alongside people are susceptible to technical and unexpected errors. Prior work has investigated a variety of strategies aimed at repairing people’s trust in the robot after its erroneous operations. In this work, we explore the effect of post-error trust repair strategies (promise and explanation) on people’s trust in the robot under varying power dynamics (supervisor and subordinate robot). Our results show that, regardless of the power dynamics, promise is more effective at repairing user trust than explanation. Moreover, people found a supervisor robot with verbal trust repair to be more trustworthy than a subordinate robot with verbal trust repair. Our results further reveal that people are prone to complying with the supervisor robot even if it is wrong. We discuss the ethical concerns in the use of a supervisor robot and potential interventions to prevent improper compliance in users for more productive human-robot collaboration.
Trust-Aware Planning: Modeling Trust Evolution in Iterated Human-Robot Interaction
Trust between team members is an essential requirement for any successful cooperation. Thus, engendering and maintaining fellow team members’ trust becomes a central responsibility for any member trying not only to participate successfully in the task but to ensure the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus different expectations regarding the current course of action, forcing the robot to fall back on costly explicable behavior. We propose a computational model for capturing and modulating trust in such iterated human-robot interaction settings, where the human adopts a supervisory role. In our model, the robot integrates the human’s trust and their expectations about the robot into its planning process to build and maintain trust over the interaction horizon. By establishing the required level of trust, the robot can focus on maximizing the team goal, eschewing explicit explanatory or explicable behavior, without worrying about the human supervisor monitoring and intervening to stop behaviors they may not understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human subject experiment.
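A toy rendering of the idea, under strong simplifying assumptions and not the authors’ formal model: the robot tracks a scalar trust estimate that rises when its behavior matches the supervisor’s expectations and falls otherwise, and it switches from explicable plans to optimal-but-opaque plans only once trust clears a threshold.

```python
def update_trust(trust, behavior_matched_expectation, gain=0.2, loss=0.4):
    """Simple bounded trust dynamics over repeated interactions (illustrative only)."""
    delta = gain if behavior_matched_expectation else -loss
    return min(1.0, max(0.0, trust + delta))

def choose_plan(trust, threshold=0.7):
    """Below the threshold, pay the cost of explicable behavior to build trust;
    above it, pursue the plan that best serves the team goal."""
    return "optimal_plan" if trust >= threshold else "explicable_plan"

trust = 0.3
for episode in range(8):
    plan = choose_plan(trust)
    # Explicable plans match the supervisor's expectations by construction;
    # here we assume optimal plans are tolerated once trust has been earned.
    matched = (plan == "explicable_plan") or trust >= 0.7
    trust = update_trust(trust, matched)
    print(episode, plan, round(trust, 2))
```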
Verbally Soliciting Human Feedback in Continuous Human-Robot Collaboration: Effects of the Framing and Timing of Reminders
Humans expect robots to learn from their feedback and adapt to their preferences. However, there are limitations with how humans provide feedback to robots, e.g., humans may give less feedback as interactions progress. Therefore, it would be advantageous if robots could influence humans to provide more feedback during interactions. We conducted a 2×2 between-subjects user study (N=71) to investigate whether the framing and timing of a robot’s reminder to provide feedback could influence human interactants. Human-robot interactions took place in the context of Space Invaders, a fast-paced and continuous collaborative environment. Our results suggest that reminders can influence the amount of feedback humans provide to robots, how participants feel about the robot, and how they feel about providing feedback during the interaction.
SESSION: Robot Teachers, Learning with Robots
Robotic Mental Well-being Coaches for the Workplace: An In-the-Wild Study on Form
The World Health Organization recommends that employers take action to protect and promote mental well-being at work. However, the extent to which these recommended practices can be implemented in the workplace is limited by the lack of resources and personnel availability. Robots have been shown to have great potential for promoting mental well-being, and the gradual adoption of such assistive technology may allow employers to overcome the aforementioned resource barriers. This paper presents the first study that investigates the deployment and use of two different forms of robotic well-being coaches in the workplace in collaboration with a tech company whose employees (26 coachees) interacted with either a QTrobot (QT) or a Misty robot (M). We endowed the robots with a coaching personality to deliver positive psychology exercises over four weeks (one exercise per week). Our results show that the robot form significantly impacts coachees’ perceptions of the robotic coach in the workplace. Coachees perceived the robotic coach in M more positively than in QT (both in terms of behaviour appropriateness and perceived personality), and they felt more connection with the robotic coach in M. Our study provides valuable insights for robotic well-being coach design and deployment, and contributes to the vision of taking robotic coaches into the real world.
A Drone Teacher: Designing Physical Human-Drone Interactions for Movement Instruction
Drones (micro unmanned aerial vehicles) are becoming more prevalent in applications that bring them into close human spaces. This is made possible in part by clear drone-to-human communication strategies. However, current auditory and visual communication methods only work in strict environmental settings. To continue expanding the possibilities for drones to be useful in human spaces, we explore ways to overcome these limitations through physical touch. We present a new application for drones: physical instructive feedback. To do this, we designed three different physical interaction modes for a drone. We then conducted a user study (N=12) to answer fundamental questions of where and how people want to physically interact with drones, and what people naturally infer the physical touch is communicating. We then used these insights to conduct a second user study (N=14) to understand the best way for a drone to communicate instructions to a human in a movement task. We found that continuous physical feedback is both the preferred mode and more effective at providing instruction than incremental feedback.
Design Specifications for a Social Robot Math Tutor
To benefit from the social capabilities of a robot math tutor, instead of being distracted by them, a novel approach is needed where the math task and the robot’s social behaviors are better intertwined. We present concrete design specifications of how children can practice math via a personal conversation with a social robot and how the robot can scaffold instructions. We evaluated the designs with a three-session experimental user study (n = 130, 8-11 y.o.). Participants got better at math over time when the robot scaffolded instructions. Furthermore, the robot felt more like a friend when it personalized the conversation.
Robocamp at Home: Exploring Families’ Co-Learning with a Social Robot: Findings from a One-Month Study in the Wild
Social robots are becoming important agents in several sectors of people’s lives. They can act in different contexts, e.g., public spaces, schools, and homes. Operating, programming, and interacting with these robots will be an essential skill in the future. We present a qualitative and explorative study on how family members collaboratively learn (co-learn) about social robots in their homes. Our one-month in-the-wild study took place in the homes of eight families (N=32) in Finland. We defined a novel model for co-learning about and with a social robot at home, Robocamp. In Robocamp, an Alpha Mini robot was introduced to and left with the families, who were then provided with weekly robotic challenges to be conducted with the robot. The research data was collected through semi-structured interviews and online diaries. This study provides novel insights about family-based co-learning with social robots in the home context. It also offers recommendations for implementing family-based co-learning with social robots at home.
A Social Robot Reading Partner for Explorative Guidance
Pedagogical agent research has yielded fruitful results in both academic skill learning and meta-cognitive skill acquisition, often studied in instructional or peer-to-peer paradigms. In the past decades, child-centric pedagogical research, which emphasizes the learner’s active participation in learning with self-motivation, curiosity, and exploration, has attracted scholarly attention. Studies show that combining child-driven pedagogy with appropriate adult guidance leads to efficient learning and a strengthened feeling of self-efficacy. However, research on using social robots for guidance in child-driven learning remains open and under-explored. In our study, we focus on children’s exploration as the vehicle for literacy learning and develop a social robot companion that provides guidance to encourage and motivate children to explore during a storybook reading interaction. To investigate the effect of the robot’s explorative guidance, we compare it against a control condition in which children have full autonomy to explore and read the storybooks. We conducted a between-subjects study with 31 children aged 4 to 6, and the results show that children who receive explorative guidance from the social robot exhibit a growing trend of self-exploration. Further, children’s self-exploration in the explorative guidance condition is found to be correlated with their learning outcomes. We conclude the study with recommendations for designing social agents to guide children’s exploration and future research directions in child-centric AI-assisted pedagogy.
Towards Modeling and Influencing the Dynamics of Human Learning
Humans have internal models of robots (like their physical capabilities), the world (like what will happen next), and their tasks (like a preferred goal). However, human internal models are not always perfect: for example, it is easy to underestimate a robot’s inertia. Nevertheless, these models change and improve over time as humans gather more experience. Interestingly, robot actions influence what this experience is, and therefore influence how people’s internal models change. In this work we take a step towards enabling robots to understand the influence they have, leverage it to better assist people, and help human models more quickly align with reality. Our key idea is to model the human’s learning as a nonlinear dynamical system which evolves the human’s internal model given new observations. We formulate a novel optimization problem to infer the human’s learning dynamics from demonstrations that naturally exhibit human learning. We then formalize how robots can influence human learning by embedding the human’s learning dynamics model into the robot planning problem. Although our formulations provide concrete problem statements, they are intractable to solve in full generality. We contribute an approximation that sacrifices the complexity of the human internal models we can represent, but enables robots to learn the nonlinear dynamics of these internal models. We evaluate our inference and planning methods in a suite of simulated environments and an in-person user study, where a 7DOF robotic arm teaches participants to be better teleoperators. While influencing human learning remains an open problem, our results demonstrate that this influence is possible and can be helpful in real human-robot interaction.
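To ground the dynamical-systems framing in a concrete, fabricated example: suppose the human’s internal estimate of a robot parameter (say, its inertia) updates with an unknown scalar learning rate; that rate can then be recovered from a sequence of estimates by least squares. This is only an illustrative approximation that assumes the target value and the estimates are directly observed, whereas the paper infers internal-model dynamics from demonstrations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth setup (unknown to the fitting procedure below).
true_value, true_alpha = 2.0, 0.3
theta = np.empty(21)
theta[0] = 0.5                                          # initial (mistaken) internal model
for t in range(20):
    obs = true_value + rng.normal(scale=0.05)           # what the human observes
    theta[t + 1] = theta[t] + true_alpha * (obs - theta[t])

# Infer the learning rate from the trajectory of internal estimates:
# theta[t+1] - theta[t] = alpha * (target - theta[t])  =>  least squares in alpha.
residual_target = true_value - theta[:-1]
step = theta[1:] - theta[:-1]
alpha_hat = float(residual_target @ step / (residual_target @ residual_target))
print("recovered learning rate:", round(alpha_hat, 3))
```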
Limitations of Audiovisual Speech on Robots for Second Language Pronunciation Learning
The perception of audiovisual speech plays an important role in infants’ first language acquisition and continues to be important for language understanding beyond infancy. Beyond that, the perception of speech and congruent lip motion supports language understanding for adults, and it has been suggested that second language learning benefits from audiovisual speech, as it helps learners distinguish speech sounds in the target language. In this paper, we study whether congruent audiovisual speech on a robot facilitates the learning of Japanese pronunciation. Twenty-seven native Dutch-speaking participants were trained in Japanese pronunciation by a social robot. The robot demonstrated 30 Japanese words of varying complexity using either congruent audiovisual speech, incongruent visual speech, or computer-generated audiovisual speech. Participants were asked to imitate the robot’s pronunciation, recordings of which were rated by native Japanese speakers. Against expectations, the results showed that congruent audiovisual speech resulted in lower pronunciation performance than low-fidelity or incongruent speech. We show that our learners, being native Dutch speakers, are only very weakly sensitive to audiovisual Japanese speech, which possibly explains why learning performance does not seem to benefit from audiovisual speech.
I Learn Better Alone!: Collaborative and Individual Word Learning With a Child and Adult Robot
The use of social robots as a tool for language learning has been studied quite extensively in recent years. Although their effectiveness and comparison with other technologies are well studied, the effects of the robot’s appearance and the interaction setting have received less attention. As educational robots are envisioned to appear in household or school environments, it is important to investigate how their designed persona or interaction dynamics affect learning outcomes. In such environments, children may do the activities together or alone, or perform them in the presence of an adult or another child. In this regard, we have identified two novel factors to investigate: the robot’s perceived age (adult or child) and the number of learners interacting with the robot simultaneously (one or two). We designed an incidental word learning card game with the Furhat robot and ran a between-subjects experiment with 75 middle school participants. We investigated the effects on children’s word learning outcomes, speech activity, and perception of the robot’s role. The results show that children who played alone with the robot had better word retention and anthropomorphized the robot more, compared to those who played in pairs. Furthermore, unlike previous findings from human-human interactions, children did not show different behaviors in the presence of a robot designed as an adult or a child. We discuss these factors in detail and make a novel contribution with a direct comparison of collaborative versus individual learning and the new concept of the robot’s age.
“Off Script:” Design Opportunities Emerging from Long-Term Social Robot Interactions In-the-Wild
Social robots are becoming increasingly prevalent in the real world. Unsupervised user interactions in a natural and familiar setting, such as the home, can reveal novel design insights and opportunities. This paper presents an analysis and key design insights from family-robot interactions, captured via on-robot recordings during an unsupervised four-week in-home deployment of an autonomous reading companion robot for children. We analyzed interviews and 160 interaction videos involving six families who regularly interacted with a robot for four weeks. Throughout these interactions, we observed how the robot’s expressions facilitated unique interactions with the child, as well as how family members interacted with the robot. In conclusion, we discuss five design opportunities derived from our analysis of natural interactions in the wild.
SESSION: Human Perception of Robots
Would You Help Me?: Linking Robot’s Perspective-Taking to Human Prosocial Behavior
Despite the growing literature on human attitudes toward robots, particularly prosocial behavior, little is known about how robots’ perspective-taking, the capacity to perceive and understand the world from other viewpoints, could influence such attitudes and perceptions of the robot. To make robots and AI more autonomous and self-aware, researchers have increasingly focused on developing cognitive skills such as perspective-taking and theory of mind in robots and AI. The present study investigated whether a robot’s perspective-taking choices could influence the occurrence and extent of prosocial behavior toward the robot. We designed an interaction consisting of a perspective-taking task, in which we manipulated how the robot instructs the human to find objects by changing its frame of reference, and measured the human’s exhibition of prosocial behavior toward the robot. In a between-subjects study (N=70), we compared the robot’s egocentric and addressee-centric instructions against a control condition, where the robot’s instructions were object-centric. Participants’ prosocial behavior toward the robot was measured using a voluntary data collection session. Our results imply that the occurrence and extent of prosocial behavior toward the robot were significantly influenced by the robot’s visuospatial perspective-taking behavior. Furthermore, we observed, through questionnaire responses, that the robot’s choice of perspective-taking could potentially influence the humans’ perspective choices, were they to reciprocate the instructions to the robot.
Self-Annotation Methods for Aligning Implicit and Explicit Human Feedback in Human-Robot Interaction
Recent research in robot learning suggests that implicit human feedback is a low-cost approach to improving robot behavior without the typical teaching burden on users. Because implicit feedback can be difficult to interpret, though, we study different methods to collect fine-grained labels from users about robot performance across multiple dimensions, which can then serve to map implicit human feedback to performance values. In particular, we focused on understanding the effects of annotation order and frequency on human perceptions of the self-annotation process and the usefulness of the labels for creating data-driven models to reason about implicit feedback. Our results demonstrate that different annotation methods can influence perceived memory burden, annotation difficulty, and overall annotation time. Based on our findings, we conclude with recommendations to create future implicit feedback datasets in Human-Robot Interaction.
Out for In!: Empirical Study on the Combination Power of Two Service Robots for Product Recommendation
Service robots have increasingly been investigated in retailing. Previous studies mainly focused on the effectiveness of recommendation by a single robot, and whether and how the combined use of two robots can achieve better performance remains unclear. In this study, we address this by exploring the combination power of two service robots for product recommendation in a bakery. We placed one robot inside the store for product recommendation and the other robot outside to promote the inside robot. In particular, we are interested in the effects of the outside robot on the inside robot's performance in product recommendation. Our results indicate that using the outside robot to promote the inside robot achieved more purchases than using the inside robot alone. Specifically, we discovered that the outside robot increased customers' attention toward the inside robot; hence, more customers checked and purchased the products. Based on the findings, we discuss the important points for the effective use of service robots.
Hey?! What did you think about that Robot? Groups Polarize Users' Acceptance and Trust of Food Delivery Robots
As food delivery robots are spreading onto streets and college campuses worldwide, users' views of these robots will depend on their direct and indirect interactions with the robots and their conversations with other people, such as those with whom they are ordering food via a robot. We examined whether being in a group of 2 to 3 people affects the acceptance and trust of the robot compared to being an individual user. First-time users of the food delivery robot service (N = 60) ordered food either as an Individual or in a Group. We measured the acceptance and trust of the robots after three Exposures (pre-exposure, after ordering food on the app, and after the robots delivered the food). Results indicated that Individual users had more acceptance and trust compared to Group users. Further, as hypothesized, groups had more variation in acceptance and trust compared to individual users, consistent with patterns of group polarization, i.e., group members influencing each other's perceptions to become more positive or negative. Further analysis demonstrated that group members were highly influenced by their groupmates. Designers and restaurant operators should consider how to enhance group members' experience of delivery robots.
Models of (Often) Ambivalent Robot Stereotypes: Content, Structure, and Predictors of Robots’ Age and Gender Stereotypes
This study investigated the content, structure, and predictors of robot stereotypes. We involved 120 participants in an online study and asked them to rate 80 robots on communion, agency, and suitability for female and male tasks. In line with the stereotype content model, we discovered that robot stereotypes are described by two dimensions, communion and agency, which combine to form univalent (e.g., low communion/low agency) as well as ambivalent clusters (e.g., low communion/high agency). Moreover, we found that a robot's stereotypical appearance plays a role in activating stereotypes. Indeed, in our study, female robots featuring appearance cues socio-culturally associated with femininity (e.g., eyelashes or apparel) were perceived as more communal, and juvenile robots featuring appearance cues tapping into the baby schema (e.g., cartoony eyes) were perceived as more communal, less agentic, and less suited to perform tasks. Given the well-known relationship between stereotyping, prejudice and discrimination, the causal link between appearance and stereotyping we establish in this paper can help HRI researchers disentangle the relation between robots' design and people's behavioral tendencies towards them, including proneness to harm.
A Picture Might Be Worth a Thousand Words, But It’s Not Always Enough to Evaluate Robots
Evaluation of robots commonly occurs using various stimuli, including photos, videos, and live interaction. However, a better understanding is needed of how and why chosen stimuli affect perceptions, and how evaluations using lower-fidelity media (e.g., photos) compare to evaluations using higher-context stimuli (e.g., videos). Through a survey of 599 M-Turk participants, we compare robot evaluations based on exposure to three types of media – photos, GIFs, and promotional videos. We analyze nine perception and behavioral intention measures of three home robots with varying levels of anthropomorphism (Olly, Jibo, and Liku): overall liking, liking of appearance, liking of intended use, eeriness, human-likeness, performance expectations, privacy concerns, information seeking intention, and purchase intention. We find that ratings based on photos consistently differ from ratings based on videos for all measures except liking of the robots' intended use. Use of GIFs led to measurements in line with videos for seven of the nine measures, owing to the importance of movement in perceptual assessments and character judgments (e.g., friendly, creepy). Except for the most human-like robot, neither photos nor GIFs captured human-likeness to a similar degree as videos, owing to the importance of speech in such assessments. Though GIFs captured informational and overall privacy concerns well, they did not adequately capture physical privacy concerns.
Increasing Perceived Safety in Motion Planning for Human-Drone Interaction
Safety is crucial for autonomous drones to operate close to humans. Besides avoiding unwanted or harmful contact, people should also perceive the drone as safe. Existing safe motion planning approaches for autonomous robots, such as drones, have primarily focused on ensuring physical safety, e.g., by imposing constraints on motion planners. However, studies indicate that ensuring physical safety does not necessarily lead to perceived safety. Prior work in Human-Drone Interaction (HDI) shows that factors such as the drone's speed and distance to the human are important for perceived safety. Building on these works, we propose a parameterized control barrier function (CBF) that constrains the drone's maximum deceleration and minimum distance to the human, and we update its parameters based on people's ratings of perceived safety. We describe an implementation and evaluation of our approach. Results of a within-subject user study (N=15) show that we can improve the perceived safety of a drone by adapting its parameters to people individually.
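The abstract does not give the exact formulation, but the general idea of a distance-keeping control barrier function whose parameters are personalized from perceived-safety ratings can be sketched roughly as follows (the function and parameter names, the static-human assumption, and the rating-based update rule are our own illustration, not the paper's implementation):

```python
import numpy as np

def cbf_constraint(p_drone, v_drone, p_human, d_min, alpha):
    """Distance-keeping control barrier function (illustrative sketch).

    h(x) >= 0 encodes "drone stays at least d_min away from the human";
    the returned value must be kept non-negative by the motion planner,
    e.g. as a constraint on the commanded velocity. A static human is assumed.
    """
    diff = p_drone - p_human
    h = diff @ diff - d_min ** 2           # barrier value
    h_dot = 2.0 * (diff @ v_drone)         # time derivative of h
    return h_dot + alpha * h               # standard CBF condition: require >= 0

def update_parameters(d_min, alpha, perceived_safety, target=4.0, gain=0.1):
    """Hypothetical personalization step: widen the safety margin when a
    participant's perceived-safety rating (e.g. on a 1-5 scale) falls below a target."""
    error = target - perceived_safety
    return d_min + gain * max(error, 0.0), alpha
```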
Effects of Human-Swarm Interaction on Subjective Time Perception: Swarm Size and Speed
Many large-scale multi-robot systems require human input during operation in different applications. To minimize the human effort, interaction is kept intermittent or restricted to a subset of robots. Despite this reduced demand for human interaction, the mental load and stress can be challenging for the human operator. One hypothesized effect of human-swarm interaction is a change in the operator's subjective time perception. In a series of simple human-swarm interaction experiments with robot swarms of up to 15 physical robots, we study whether the number of controlled robots or the robot speed alters the operator's time perception. Using data gathered through questionnaires, we found that increased swarm size compresses perceived time, while decreased robot speed expands it. We introduce the concept of subjective time perception to human-swarm interaction. Future research will enable swarm systems to autonomously modulate subjective timing to ease the job of human operators.
SESSION: Robots for Health and Well-being
Transformers and Human-robot Interaction for Delirium Detection
An estimated 20% of patients admitted to hospital wards are affected by delirium. Early detection is recommended to treat the underlying causes of delirium; however, workforce strain in general wards often causes it to remain undetected. This work proposes a robotic implementation of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) to aid early detection of delirium. Interactive features of the assessment are performed through human-robot interaction, while a Transformer-based deep learning model predicts the patient's Richmond Agitation Sedation Scale (RASS) level from image sequences; thermal imaging is used to maintain patient anonymity. A user study involving 18 participants role-playing the alert, agitated, and sedated levels of the RASS was performed to test the HRI components and collect a dataset for deep learning. The HRI system achieved accuracies of 1.0 and 0.833 for the inattention and disorganised thinking features of the CAM-ICU, respectively, while the trained action recognition model achieved a mean accuracy of 0.852 on the classification of RASS levels during cross-validation. The three features represent a complete set of capabilities for automated delirium detection using the CAM-ICU, and the results demonstrate the feasibility of real-world deployment in hospital general wards.
Reimagining Robots for Dementia: From Robots for Care-receivers/giver to Robots for Carepartners
Informal caregivers are the main source of dementia care. Considering the importance of both family caregivers and persons living with dementia (PwDs), this paper explores how these two parties go through their dementia journey and how they envision robots to support them. We adopt a person-centered care approach which views these couples as reciprocal carepartners, rather than as care-givers and care-receivers. We conducted a community-based participatory research study with a dementia advocacy organization to imagine how robots can support these dementia dyads. The contribution of this paper is threefold: First, we introduce a person-centered care approach and show how this new approach reveals the issues of PwDs and carepartners (CPs) as partners and citizens. For example, PwDs’ main challenges were not dementia symptoms but the concomitant stigma such as fears of being considered abnormal. This issue has rarely been discussed in HRI. Second, we suggest slow communication as an important robot design feature. When robots can wait for PwDs to proceed with information without judging PwDs’ relatively slow response, PwDs feel respected and less stigmatized. Third, we address the importance of paying attention to disagreements between PwDs and CPs about robot design preferences. Considering the interdependency of the two parties, robot design processes should allow the two to negotiate.
A Robotic Companion for Psychological Well-being: A Long-term Investigation of Companionship and Therapeutic Alliance
Social support plays a crucial role in managing and enhancing one's mental health and well-being. In order to explore the role of a robot's companion-like behavior in its therapeutic interventions, we conducted an eight-week-long deployment study with seventy participants to compare the impact of (1) a control robot with only assistant-like skills, (2) a coach-like robot with additional instructive positive psychology interventions, and (3) a companion-like robot that delivered the same interventions in a peer-like and supportive manner. The companion-like robot was shown to be the most effective in building a positive therapeutic alliance with people and enhancing participants' well-being and readiness for change. Our work offers valuable insights into how companion AI agents could further enhance the efficacy of mental health interventions by strengthening their therapeutic alliance with people for long-term mental health support.
Robot, Uninterrupted: Telemedical Robots to Mitigate Care Disruption
Emergency department (ED) healthcare workers (HCWs) are interrupted as often as once every six minutes, increasing the risk of errors and preventable patient harm. As more robots enter hospitals, and the ED, they must support HCWs in managing interruptions, and ideally mitigate their harmful effects, without disrupting ED communication. However, interruption-mitigation strategies, particularly for mobile telemanipulator robots (MTRs), are not well understood. In this work, we explore interruption-mitigation and reorientation methods for MTRs in the ED. We conducted a study where ED HCWs teleoperated an MTR in a realistic hospital simulation environment. Our findings revealed insights into how MTRs might support multitasking in environments with frequent task switching, and into the place of autonomy in safety-critical spaces. Conflicting opinions about the appropriateness of different MTR behaviors highlighted challenges and ethical dilemmas that influence the integration of MTRs in the ED. This work will support the implementation of interruption-mitigation strategies on MTRs, enabling them to better support people in fast-paced, interruption-driven environments and thus reduce the risk of errors in these situations.
Co-Designing with Older Adults, for Older Adults: Robots to Promote Physical Activity
Lack of physical activity has severe negative health consequences for older adults and limits their ability to live independently. Robots have been proposed to help engage older adults in physical activity (PA), albeit with limited success. There is a lack of robust understanding of what older adults need and want from robots designed to engage them in PA. In this paper, we report on the findings of a co-design process where older adults, physical therapy experts, and engineers designed robots to promote PA in older adults. We found a variety of motivators for and barriers against PA in older adults; we then conceptualized a broad spectrum of possible robotic support and found that robots can play various roles to help older adults engage in PA. This exploratory study elucidated several overarching themes and emphasized the need for personalization and adaptability. This work highlights key design features that researchers and engineers should consider when developing robots to engage older adults in PA, and underscores the importance of involving various stakeholders in the design and development of assistive robots.
Evaluating and Personalizing User-Perceived Quality of Text-to-Speech Voices for Delivering Mindfulness Meditation with Different Physical Embodiments
Mindfulness-based therapies have been shown to be effective in improving mental health, and technology-based methods have the potential to expand the accessibility of these therapies. To enable real-time personalized content generation for mindfulness practice in these methods, high-quality computer-synthesized text-to-speech (TTS) voices are needed to provide verbal guidance and respond to user performance and preferences. However, the user-perceived quality of state-of-the-art TTS voices has not yet been evaluated for administering mindfulness meditation, which requires emotional expressiveness. In addition, the effect of physical embodiment and personalization on the user-perceived quality of TTS voices for mindfulness has not yet been studied. To that end, we designed a two-phase human subject study. In Phase 1, an online Mechanical Turk between-subject study (N=471) evaluated 3 state-of-the-art TTS voices (feminine, masculine, child-like) and 2 human therapists' voices (feminine, masculine) in 3 physical embodiment settings (no agent, conversational agent, socially assistive robot) with remote participants. Building on the findings of Phase 1, in Phase 2, an in-person within-subject study (N=94) used a novel framework we developed for personalizing TTS voices based on user preferences and compared user-perceived quality against the best-rated non-personalized voices from Phase 1. We found that the best-rated human voice was perceived better than all TTS voices; the emotional expressiveness and naturalness of the TTS voices were rated poorly, while users were satisfied with their clarity. Surprisingly, when users were allowed to fine-tune TTS voice features, the personalized TTS voices performed almost as well as human voices, suggesting that user personalization could be a simple and highly effective way to improve the user-perceived quality of TTS voices.
SESSION: Robot Learning, Robot Programming, Formal Methods
Interactive Policy Shaping for Human-Robot Collaboration with Transparent Matrix Overlays
One important aspect of effective human-robot collaborations is the ability for robots to adapt quickly to the needs of humans. While techniques like deep reinforcement learning have demonstrated success as sophisticated tools for learning robot policies, the fluency of human-robot collaborations is often limited by these policies' inability to integrate changes to a user's preferences for the task. To address these shortcomings, we propose a novel approach that can modify learned policies at execution time via symbolic if-this-then-that rules corresponding to a modular and superimposable set of low-level constraints on the robot's policy. These rules, which we call Transparent Matrix Overlays, function not only as succinct and explainable descriptions of the robot's current strategy but also as an interface by which a human collaborator can easily alter a robot's policy via verbal commands. We demonstrate the efficacy of this approach on a series of proof-of-concept cooking tasks performed in simulation and on a physical robot.
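The abstract describes the overlays only at a high level; a minimal sketch of the general idea, assuming each if-this-then-that rule acts as a superimposable mask over the learned policy's action probabilities (class and function names are hypothetical), might look like this:

```python
import numpy as np

class Overlay:
    """Hypothetical if-this-then-that rule: when `condition(state)` holds,
    the listed actions are suppressed in the robot's policy."""
    def __init__(self, condition, blocked_actions):
        self.condition = condition          # state -> bool
        self.blocked = blocked_actions      # indices of actions to block

    def mask(self, state, n_actions):
        m = np.ones(n_actions)
        if self.condition(state):
            m[self.blocked] = 0.0
        return m

def act(policy_probs, state, overlays):
    """Superimpose all active overlays on the base policy, then renormalize."""
    probs = np.array(policy_probs, dtype=float)
    for overlay in overlays:
        probs *= overlay.mask(state, len(probs))
    total = probs.sum()
    return probs / total if total > 0 else np.array(policy_probs)  # fall back if all blocked
```

Under this reading, a verbal command such as "don't use the stove while I'm next to it" would simply add or remove one overlay at execution time, without retraining the underlying policy.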
Impacts of Robot Learning on User Attitude and Behavior
With an aging population and a growing shortage of caregivers, the need for in-home robots is increasing. However, it is intractable for robots to have all functionalities pre-programmed prior to deployment. Instead, it is more realistic for robots to engage in supplemental, on-site learning about the user's needs and preferences. Such learning may occur in the presence of, or involve, the user. We investigate the impacts on end-users of in situ robot learning through a series of human-subjects experiments. We examine how different learning methods influence both in-person and remote participants' perceptions of the robot. While the degree of user involvement in the robot's learning method impacts perceived anthropomorphism (p = .001), it is the participants' perceived success of the robot, rather than the robot's learning method, that impacts their trust in (p < .001) and perceived usability of the robot (p < .001). Therefore, when presenting robot learning, the performance of the learning method appears more important than the degree of user involvement in the learning. Furthermore, we find that the physical presence of the robot impacts perceived safety (p < .001), trust (p < .001), and usability (p < .014). Thus, for tabletop manipulation tasks, researchers should consider the impact of physical presence on experiment participants.
Multiperspective Teaching of Unknown Objects via Shared-gaze-based Multimodal Human-Robot Interaction
For successful deployment of robots in multifaceted situations, the robot's understanding of its environment is indispensable. With the advancing performance of state-of-the-art object detectors, the capability of robots to detect objects within their interaction domain is also improving. However, this binds the robot to a few trained classes and prevents it from adapting to unfamiliar surroundings beyond predefined scenarios. In such scenarios, humans can assist robots amidst the overwhelming number of interaction entities and impart the requisite expertise by acting as teachers. We propose a novel pipeline that effectively harnesses human gaze and augmented reality in a human-robot collaboration context to teach a robot novel objects in its surrounding environment. By intertwining gaze (to guide the robot's attention to an object of interest) with augmented reality (to convey the respective class information), we enable the robot to quickly acquire a significant amount of automatically labeled training data on its own. Training in a transfer learning fashion, we demonstrate the robot's capability to detect recently learned objects and evaluate the influence of different machine learning models and learning procedures as well as the amount of training data involved. Our multimodal approach proves to be an efficient and natural way to teach the robot novel objects from a few instances and allows it to detect classes for which no training dataset is available. In addition, we make our dataset publicly available to the research community; it consists of RGB and depth data, intrinsic and extrinsic camera parameters, and regions of interest.
People Dynamically Update Trust When Interactively Teaching Robots
Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust updating process across a 15-trial interaction. In a novel paradigm, participants act in the role of teacher to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness grow steadily as people gather more evidence from observing robot performance, especially for faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
SIRL: Similarity-based Implicit Representation Learning
When robots learn reward functions using high capacity models that take raw state directly as input, they need to learn both a representation for what matters in the task — the task “features” — and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so that we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
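The paper's exact architecture is not given in the abstract; the core idea of learning a representation from human similarity queries can be sketched with a standard triplet objective (PyTorch; the encoder shape and the assumption that trajectories are pre-featurized into fixed-length vectors are ours):

```python
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Maps a pre-featurized trajectory vector to a low-dimensional embedding."""
    def __init__(self, in_dim, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, embed_dim))

    def forward(self, x):
        return self.net(x)

def train_step(encoder, optimizer, anchor, similar, dissimilar,
               loss_fn=nn.TripletMarginLoss(margin=1.0)):
    """One update on a single similarity query: pull the behavior the user
    judged 'similar' toward the anchor, push the 'dissimilar' one away."""
    optimizer.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(similar), encoder(dissimilar))
    loss.backward()
    optimizer.step()
    return loss.item()
```

A reward or preference model would then be fit on top of the learned embedding rather than on raw state.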
Transfer Learning of Human Preferences for Proactive Robot Assistance in Assembly Tasks
We focus on enabling robots to proactively assist humans in assembly tasks by adapting to their preferred sequence of actions. Much work on robot adaptation requires human demonstrations of the task. However, human demonstrations of real-world assemblies can be tedious and time-consuming. Thus, we propose learning human preferences from demonstrations in a shorter, canonical task and using them to predict user actions in the actual assembly task. The proposed system uses the preference model learned from the canonical task as a prior and updates the model through interaction when predictions are inaccurate. We evaluate the proposed system in simulated assembly tasks and in a real-world human-robot assembly study, and we show that both transferring the preference model from the canonical task and updating the model online contribute to improved accuracy in human action prediction. This enables the robot to proactively assist users, significantly reduce their idle time, and improve their experience of working with the robot, compared to a reactive robot.
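As a rough sketch of the transfer-and-update idea under a common softmax choice model (the featurization, learning rate, and update rule are illustrative assumptions, not the paper's method):

```python
import numpy as np

def predict(weights, available_actions, features):
    """Softmax distribution over which action the human will take next."""
    scores = np.array([weights @ features[a] for a in available_actions])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def online_update(weights, chosen, available_actions, features, lr=0.1):
    """Gradient step on the log-likelihood of the observed human action."""
    probs = predict(weights, available_actions, features)
    expected = sum(p * features[a] for p, a in zip(probs, available_actions))
    return weights + lr * (features[chosen] - expected)
```

Here `weights` would be initialized from the model learned in the canonical task and refined online as the human's actual assembly actions are observed.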
Sketching Robot Programs On the Fly
Service robots for personal use in the home and the workplace require end-user development solutions for swiftly scripting robot tasks as the need arises. Many existing solutions preserve ease, efficiency, and convenience through simple programming interfaces or by restricting task complexity. Others facilitate meticulous task design but often do so at the expense of simplicity and efficiency. There is a need for robot programming solutions that reconcile the complexity of robotics with the on-the-fly goals of end-user development. In response to this need, we present a novel, multimodal, and on-the-fly development system, Tabula. Inspired by a formative design study with a prototype, Tabula leverages a combination of spoken language for specifying the core of a robot task and sketching for contextualizing the core. The result is that developers can script partial, sloppy versions of robot programs to be completed and refined by a program synthesizer. Lastly, we demonstrate our anticipated use cases of Tabula via a set of application scenarios.
Lively: Enabling Multimodal, Lifelike, and Extensible Real-time Robot Motion
Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with the robot’s task and communication goals. However, combining these goals in a naïve manner can result in mutually exclusive solutions, or infeasible or problematic states and actions. In this paper, we present Lively, a framework which supports configurable, real-time, task-based and communicative or socially-expressive motion for collaborative and social robotics across multiple levels of programmatic accessibility. Lively supports a wide range of control methods (i.e. position, orientation, and joint-space goals), and balances them with complex procedural behaviors for natural, lifelike motion that are effective in collaborative and social contexts. We discuss the design of three levels of programmatic accessibility of Lively, including a graphical user interface for visual design called LivelyStudio, the core library Lively for full access to its capabilities for developers, and an extensible architecture for greater customizability and capability.
Nudging or Waiting?: Automatically Synthesized Robot Strategies for Evacuating Noncompliant Users in an Emergency Situation
Robots have the potential to assist in emergency evacuation tasks, but it is not clear how robots should behave to evacuate people who are not fully compliant, perhaps due to panic or other priorities in an emergency. In this paper, we compare two robot strategies: an actively nudging robot that initiates evacuation and pulls toward the exit, and a passively waiting robot that stays near users and waits for instruction. Both strategies were automatically synthesized from a description of the desired behavior. We conduct a within-participant study (N=20) in a simulated environment to compare the evacuation effectiveness of the two robot strategies. Our results indicate an advantage of the nudging robot for effective evacuation when participants are exposed to the evacuation scenario for the first time. The waiting robot results in lower efficiency, higher mental load, and more physical conflicts. However, participants like the waiting robot equally or slightly more when they repeat the evacuation scenario and are more familiar with the situation. Our qualitative analysis of the participants' feedback suggests several design implications for future emergency evacuation robots.
HRI ’23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
SESSION: alt.HRI
The Eye of the Robot Beholder: Ethical Risks of Representation, Recognition, and Reasoning over Identity Characteristics in Human-Robot Interaction
Significant segments of the HRI literature rely on or promote the ability to reason about human identity characteristics, including age, gender, and cultural background. However, attempting to handle identity characteristics raises a number of critical ethical concerns, especially given the spatiotemporal dynamics of these characteristics. In this paper I question whether human identity characteristics can and should be represented, recognized, or reasoned about by robots, with special attention paid to the construct of race, due to its relative lack of consideration within the HRI community. As I will argue, while there are a number of well-warranted reasons why HRI researchers might want to enable robotic consideration of identity characteristics, these reasons are outweighed by a number of key ontological, perceptual, and deployment-oriented concerns. This argument raises troubling questions as to whether robots should even be able to understand or generate descriptions of people, and how they would do so while avoiding these ethical concerns. Finally, I conclude with a discussion of what this means for the HRI community, in terms of both algorithm and robot design, and speculate as to possible paths forward.
How Did We Miss This?: A Case Study on Unintended Biases in Robot Social Behavior
With societies growing more and more conscious of the human social biases that are implicit in most of our interactions, the development of automated robot social behavior is failing to address these issues as more than an afterthought. In the present work, we describe how we unintentionally implemented robot listener behavior that was biased toward the gender of the participants, while following typical design procedures in the field. In a post-hoc analysis of data collected in a between-subject user study (n=60), we find that both a rule-based and a deep-learning-based listener behavior model produced a higher number of backchannels (listener feedback, through nodding or vocal utterances) if the participant identified as male. We investigate the cause of this bias in both models and discuss the implications of our findings. Further, we provide approaches that may be taken to address the issue of algorithmic fairness, and preventative measures to avoid the development of biased social robot behavior.
Borrowing, Poking and Entangling. In Search of Shared Spaces Between Science and Technology Studies and Human-Robot Interaction
In this paper, we reflect on the disciplinary foundations and dominant practices in the field of Human-Robot Interaction (HRI) from the perspective of our own experience of working interdisciplinarily and drawing on colleagues' ongoing work that transcends disciplinary boundaries. As part of this reflection, we explore possibilities for the field's theoretical and methodological expansion, which we contend is needed given the rapid expansion of robotic technologies into real-world settings. We argue that the field of science and technology studies (STS) can be a valuable collaborator and contributor in the process of negotiating the disciplinary boundaries of HRI and advancing the field beyond common narratives of technological solutionism and determinism. We frame STS as a field with a strong tradition of studying the social and political embeddedness of science and technology, and how these are co-constitutive and co-emergent. STS also investigates the roles and responsibilities different actors share in this process. To further explore how the interfacing between STS and HRI can be enacted, we sketch out three modes of interdisciplinary collaboration we call i) Borrowing, ii) Poking and iii) Entangling. We argue that each of these modes comes with advantages, disadvantages and challenges. In the conclusion, we engage the notions of “thinking with care” and disciplinary reflexivity as an invitation to fellow scholars to consider which disciplinary assumptions are brought to the table when enacting different modes of interfacing between HRI and STS, and how these are entangled with the goals and (desired) outcomes of research practices.
Nature-Robot Interaction
Up until now, Human-Robot Interaction (HRI) has been largely defined by the influences humans and robots exert on each other across various interaction modes. Robots follow human purpose and serve goals determined by humans with varying degrees of agency. Humans act, respond, and adapt to robot behaviors while simultaneously advancing technology to increase robots' affordances. Abstracted by this dyad, HRI has left out the material background making this exchange possible: Nature. The current planetary crisis forces us to reconsider the importance of contextualizing HRI within a larger picture, and invites us to ask how this relationship can be better served by considering Nature as the driving agent in this binary relationship. In response to this reflection, we present a first attempt at a speculative paradigm in HRI: Nature-Robot Interaction. We discuss ethical and design underpinnings of this approach to HRI, and introduce initial guiding principles as well as examples of potential affordances, embodiments and interactions. While we begin in the realm of the speculative and recognize the infancy of our proposal, we invite the HRI community to consider it as a serious design principle moving forward.
Creative AI for HRI Design Explorations
Design fixation, a phenomenon describing designers' adherence to pre-existing ideas or concepts that constrain design outcomes, is particularly prevalent in human-robot interaction (HRI), for example, due to collectively held and stabilised imaginations of what a robot should look like or how it should behave. In this paper, we explore the contribution of creative AI tools to overcoming design fixation and enhancing creative processes in HRI design. In a four-week design exploration, we used generative text-to-image models to ideate and visualise robotic artefacts and robot sociotechnical imaginaries. We exchanged results along with reflections through a digital postcard format. We demonstrate the usefulness of our approach for imagining novel robot concepts, surfacing existing assumptions and robot stereotypes, and situating robotic artefacts in context. We discuss the contribution to designerly HRI practices and conclude with lessons learnt for using creative AI tools as an emerging design practice in HRI research and beyond.
Dancing with the Nonhuman: A Feminist, Embodied, Material Inquiry into the Making of Human-Robot Relationships
We propose that feminist reconceptualisations of agency and difference could dramatically expand our possibilities for both relating to robots in social scenarios and designing them as social agents. A performative approach to human-robot interaction favors the artefact’s relational, participatory capacities over representational attributes to explore the meaning-making potential of human-machine couplings rather than the predefined meaning of an individual robotic agent. We discuss the feminist concepts of intra-action and diffraction and explore how they could expand our understanding of the workings of the interference patterns that characterize human-robot relationships. Our collaborative Machine Movement Lab project serves as a case study to look at the situated enactment of the subjects and objects that shape our human-robot relationships through the embodied lens of performance-making.
SESSION: Late-Breaking Reports
Holobot: Hologram based Extended Reality Telepresence Robot
Telepresence systems based on Extended Reality (XR) have been actively developed and used for remote collaboration since COVID-19. Still, several issues, such as the limited traversable space in Virtual Reality (VR) and the requirement that all participants wear a head-mounted display (HMD), keep these systems from being used in daily life. On the other hand, telepresence robots have been used in various fields since before the pandemic. However, these robots are limited in that their current form cannot deliver the non-verbal expressions essential for social interaction. Therefore, we present Holobot, a telepresence robot based on an XR system. A remote user connects to Holobot through a VR HMD, and Holobot displays a virtual avatar that projects the user's facial expressions and gestures. We developed a prototype and conducted a simple field test at an exhibition to receive feedback. VR participants enjoyed exploring remote spaces and interacting with each other through Holobot. Furthermore, participants in the remote space mentioned that the 1:1-scale avatar helped build co-presence with the VR user. Based on these insights, we believe Holobot can provide design guidelines for future telepresence robots. As a next step, we plan to improve our prototype and conduct a user test for a structured evaluation of our system.
The NarRobot Plugin – Connecting the Social Robot Reeti to the Unity Game Engine
The integration of robots as storytellers, game masters or embodied characters into games is a novel technique for game design, yet it remains largely restricted to human-robot interaction (HRI) research. To facilitate the use of robots, a plugin for a common game engine is needed. NarRobot was developed to provide an easy-to-use interface and seamless integration of the social robot Reeti into the Unity engine without using third-party materials. Further, it includes an intuitive pose editor. Targeting both HRI research and game development, the plugin allows researchers to focus on their actual research instead of fiddling with back-end functionality. It also simplifies the entry into programming robots for games. In this contribution, our plugin is presented alongside a proof of concept and three use cases focusing on interactivity, combination with other services, and multi-platform usage: interactive storytelling, integration into a smart room, and a mobile app for a robotic hotel employee.
“Nice to meet you!”: Expressing Emotions with Movement Gestures and Textual Content in Automatic Handwriting Robots
Text-writing robots have been used in assistive writing and drawing applications. However, robots do not convey emotional tones in the writing process because they lack the behaviors humans typically adopt. To examine how people interpret designed robotic expressions of emotion through both movements and textual output, we used a pen-plotting robot to generate texts by performing human-like behaviors such as stop-and-go motion and variations in speed and pressure. We examined how people convey emotion in the writing process by observing how they wrote in different emotional contexts. We then mapped these human expressions during writing to the handwriting robot and measured how well other participants understood the robot's affective expression. We found that textual output was the strongest determinant of participants' ability to perceive the robot's emotions, whereas parameters of the robot's gestural movements, such as speed, fluency, pressure, size, and acceleration, could be useful for understanding the context of the writing expression.
Towards Designing Companion Robots with the End in Mind
This paper presents an early-stage idea of using ‘robot death’ as an integral component of human-robot interaction design for companion robots. Reviewing previous discussions around the deaths of companion robots in real-life and popular culture contexts, and analyzing the lifelike design of current companion robots in the market, the paper explores the potential advantages of designing companion robots and human-robot interaction with their ‘death’ in mind.
A Multimodal Teach-in Approach to the Pick-and-Place Problem in Human-Robot Collaboration
Teaching robotic systems how to carry out a task in a collaborative environment still presents a challenge, because replicating natural human-to-human interaction requires interaction modalities that allow conveying complex information. Speech, gestures, and gaze-based interactions, as well as directly guiding a robotic system, count among such modalities and have the potential to enable smooth multimodal human-robot interaction. This paper presents a conceptual approach for multimodally teaching a robotic system how to pick and place an object, one of the fundamental tasks not only in robotics but in everyday life. By establishing the task and dialogue models separately, we aim to separate robot/task logic from interaction logic and to achieve modality independence for the teaching interaction. Finally, we elaborate on an experimental implementation of our models for multimodally teaching a UR-10 robot arm how to pick and place an object.
Robotic Coaches Delivering Group Mindfulness Practice at a Public Cafe
Group meditation is known to keep people motivated and committed over longer periods of time compared to individual practice. Robotic coaching is a promising avenue for engaging people in group meditation and mindfulness exercises. However, deployments of robotic coaches delivering group mindfulness sessions in real-world settings remain scarce. We present the first steps in deploying a robotic mindfulness coach at a public cafe, where participants could join robot-led meditation sessions in a group setting. We conducted two studies with two robotic coaches: the toy-like Misty II robot for 4 weeks (n = 4), and the child-like QTrobot for 3 weeks (n = 3). This paper presents an exploratory qualitative analysis of the data collected via group discussions after the sessions and researcher observations during the sessions. Additionally, we discuss the lessons learned and future work related to deploying a robotic coach in a real-world group setting.
TEAM3 Challenge: Tasks for Multi-Human and Multi-Robot Collaboration with Voice and Gestures
Intuitive human-robot collaboration requires adaptive modalities for humans and robots to communicate and learn from each other. For diverse teams of humans and robots to naturally collaborate on novel tasks, robots must be able to model roles for themselves and other team members, anticipate how team members may perceive their actions, and communicate back to team members to continuously promote inclusive team cohesion toward achieving a shared goal. Here, we describe a set of tasks for studying mixed multi-human and multi-robot teams with heterogenous roles to achieve joint goals through both voice and gestural interactions. Based around the cooperative game TEAM3, we specify a series of dyadic and triadic human-robot collaboration tasks that require both verbal and nonverbal communication to effectively accomplish. Task materials are inexpensive and provide methods for studying a diverse set of challenges associated with human-robot communication, learning, and perspective-taking.
Exploring Human-Drone Collaboration Through Contact Improvisation
In this work, we used a dance performance to explore physical human-drone interactions during a collaborative task. We created drone behaviors to allow partnership and increase physicality between the dancer and the drone. We found that extended moments of hovering increase the dancer's perception of the drone as a partner. We also found that the amount of force exerted by the dancer on the drone is a sufficient input for designing drone responses and increasing the amount of physical contact between the partners.
Gesture-Bot: Design and Evaluation of Simple Gestures of a Do-it-yourself Telepresence Robot for Remote Communication
Current video conferencing technology allows participants to communicate virtually over distance, but users lack a sense of presence due to the absence of physical cues for interaction. We propose the design of Gesture-Bot, a DIY telepresence robot that performs pan-and-tilt gestures in the presence of a receiver during video chat and is commanded over the web by a remote sender. We conducted a workshop with 26 participants to design and evaluate pan-and-tilt movements in two separate communication scenarios, examining the flow of communication in remote chat assisted by physical interactions. According to the data collected from the questionnaire and the post-experiment interview, the Gesture-Bot system shows potential for assisting remote communication, while aspects such as the robot's appearance, the method of controlling the robot, and the set of gestures remain to be improved in the future.
A Methodological Approach to Facilitate the Design of Flexible and Efficient Multi-Application Systems for HRC
Human-robot collaboration (HRC) can bring immense benefits in terms of working conditions and flexibility in industrial environments. Efficiency benefits can only be realized if validated risk reduction measures ensure human safety. One idea to increase the overall throughput of a robot is to increase the number of possible tasks and the number of potential safety reactions that a robot system can perform. Activating a validated safety configuration in a safety-rated manner, based on the environmental status, allows the most efficient task to be chosen within an assessed setup. This paper proposes a new methodology exploiting this potential with a dual-layer finite state machine. The concept and its potential benefits are showcased using a simplified simulation example.
Towards Robot Learning from Spoken Language
The paper proposes a robot learning framework that empowers a robot to automatically generate a sequence of actions from unstructured spoken language. The framework was able to distinguish between instructions and unrelated conversation. Data were collected from 25 participants, who were asked to instruct the robot to perform a collaborative cooking task while being interrupted and distracted. The system identified the sequence of instructed actions for a cooking task with an accuracy of 92.85 ± 3.87%.
Using a Social Robot as a Hotel Assessment Tool
This field study investigates the influence of social robots as a hotel assessment tool on hotel ratings. Based on media equation theory, it is assumed that social robots increase the quality and quantity of hotel ratings by triggering politeness rules. We developed a robot application that allowed hotel guests to submit ratings on site together with the robot Pepper. Data from robot interactions are compared with data from TrustYou, an online rating platform. Results show the potential of social robots as an assessment tool, with a trend toward better overall ratings when a hotel is evaluated via the robot rather than via a website.
Reactive Planning for Coordinated Handover of an Autonomous Aerial Manipulator
In this paper, we present a coordinated and reactive human-aware motion planner for performing a handover task with an autonomous aerial manipulator (AAM). We present a method to determine the final state of the AAM for a handover task based on the current state of the human and the surrounding obstacles. We consider the human's visual field, the effort required to turn the head and see the AAM, and the discomfort caused to the human. We apply these social constraints together with the kinematic constraints of the AAM to determine its coordinated motion along the trajectory.
Who to Teach a Robot to Facilitate Multi-party Social Interactions?
One salient function of social robots is to play the role of facilitator, enhancing the harmony state of multi-party social interactions so that every human participant is encouraged and motivated to engage actively. However, it is challenging to handcraft robot behavior that achieves this objective. One promising approach is for the robot to learn from human teachers. This paper reports the findings of an empirical test to determine the optimal experiment condition for a robot to learn verbal and nonverbal strategies for facilitating a multi-party interaction. First, a modified L8 Orthogonal Array (OA) is used to design a fractional factorial experiment with factors such as the type of human facilitator, group size, and stimulus type. The response of the OA is the harmony state, explicitly defined using the speech turn-taking between speakers and represented using metrics extracted from the first-order Markov transition matrix. Analyses of main effects and ANOVA suggest that the type of human facilitator and group size are significant factors affecting the harmony state. Therefore, we propose training a facilitator robot using high school teachers as human teachers and groups of more than four participants.
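For readers unfamiliar with the representation, a first-order Markov transition matrix over speech turn-taking can be estimated directly from the observed speaker sequence; the sketch below is our own illustration of that estimation step, from which harmony metrics could then be derived:

```python
import numpy as np

def transition_matrix(turns, speakers):
    """First-order Markov transition matrix of speech turn-taking.

    `turns` is the observed speaker sequence, e.g. ["A", "B", "A", "C"];
    entry (i, j) estimates P(next speaker = j | current speaker = i).
    """
    idx = {s: i for i, s in enumerate(speakers)}
    counts = np.zeros((len(speakers), len(speakers)))
    for cur, nxt in zip(turns, turns[1:]):
        counts[idx[cur], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
```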
Human Workload Evaluation of Drone Swarm Formation Control using Virtual Reality Interface
This paper presents experimental data evaluating the human workload of interacting with a drone swarm using a virtual reality (VR) interface. A formation control algorithm integrated into the system aids a human operator in interacting with the drone swarm, and a VR head-mounted display (HMD) helps visualize the swarm, enabling teleoperation. The algorithm maintains each drone's position in the formation, making it simple to operate several drones at once. An experiment scenario is proposed to assess the workload of moving the drones through a VR HMD using either a joystick or a VR controller. According to the current findings, humans achieved smoother control with the joystick controller than with the VR controller. Furthermore, based on the NASA-TLX assessment, the VR controller's average workload (62.67±30.29) was twice as high as the joystick controller's (29.67±12.00).
Low-latency Classification of Social Haptic Gestures Using Transformers
Social touch, and its recognition and classification, is increasingly important in human-robot interaction. We present a Transformer-based model trained and evaluated on an open-source dataset. The dataset, the Human-Animal Affective Robot Touch (HAART) dataset, was collected for the 2015 Recognition of Touch Gesture Challenge (RTGC 2015) and contains different haptic actions directed at a robotic animal, recorded using a multi-resolution pressure sensor. We feed the classifier's output, containing the touch type, to the Nao robot so that the robot can sense the touch type. The proposed Transformer-based gesture classification model achieved 72.8% classification accuracy in 2.67 seconds, outperforming the best-submitted algorithm of RTGC 2015, which had a test classification accuracy of 70.9% and needed 8 seconds.
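The abstract does not specify the architecture beyond "Transformer-based"; a minimal sketch of such a sequence classifier over flattened pressure-sensor frames (layer sizes, pooling choice, and class count are our assumptions, not the paper's configuration) could look like this:

```python
import torch
import torch.nn as nn

class TouchGestureTransformer(nn.Module):
    """Classifies a sequence of flattened pressure-sensor frames into a gesture type."""
    def __init__(self, n_channels=64, d_model=128, n_classes=7):
        super().__init__()
        self.proj = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, frames):                # frames: (batch, time, n_channels)
        z = self.encoder(self.proj(frames))   # (batch, time, d_model)
        return self.head(z.mean(dim=1))       # pool over time, predict gesture class
```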
Intelligent Disobedience: A Novel Approach for Preventing Human Induced Interaction Failures in Robot Teleoperation
Failures are natural and unavoidable events in any form of interaction, especially in human-robot interaction (HRI). Throughout the literature, the definition and classification of failures are diverse, depending on the source and application domain. However, tolerance for the aftereffects of these failures is low in teleoperation due to its unstructured application domains. One such failure type is the human-induced interaction failure. It is an interesting and often overlooked failure type because robots are generally designed to obey the instructions given by human operators, regardless of the robot's degree of automation. But what if the instructions provided are faulty, dangerous, or misleading? This paper addresses the above-mentioned research gap. It introduces a framework based on the concept of Intelligent Disobedience (ID), derived from guide dog training methods, to manage human-induced interaction failures in teleoperation scenarios.
HighLight: Towards an Ambient Robotic Table as a Social Enabler
With smartphones becoming more commonplace in our daily lives, they often take up more time and space than we would like them to. Research shows that using smartphones during social interactions does more harm than good. With this in mind, we set out to create the first prototype of an ambient robotic table that supports social interactions and discourages digital distractions. Through a rapid prototyping process, we present HighLight, a prototype of a socially enabling robotic table that has a smartphone compartment in its center and ambient features reacting in real time to conversations taking place around the table. We report on our contributions to the research community by investigating the design of an ambient robotic table as a social enabler that encourages social interactions through ambiance, thus exploring future directions of non-disruptive technologies that support social interactions.
An Expressive Robotic Table to Enhance Social Interactions
We take initial steps into prototyping an expressive robotic table that can serve as a social mediator. The work is constructed through a rapid prototyping process consisting of five workshop-based phases with five interaction design participants. We report on the various prototyping techniques that led to the generated concept of an expressive robotic table. Our design process explores how expressive motion cues such as respiratory movements can be leveraged to mediate social interactions between people in cold outdoor environments. We conclude by discussing the implications of the different prototyping methods applied and the envisioned future directions of the work within the scope of expressive robotics.
Social Robotics meets Sociolinguistics: Investigating Accent Bias and Social Context in HRI
Deploying a social robot in the real world means that it must interact with speakers from diverse backgrounds, who in turn are likely to show substantial accent and dialect variation. Linguistic variation in social context has been well studied in human-human interaction; however, the influence of these factors on human interactions with digital agents, especially embodied agents such as robots, has received less attention. Here we present an ongoing project whose goal is to develop a social robot suitable for deployment in ethnically diverse areas with distinctive regional accents. To inform the development of this robot, we carried out an online survey of Scottish adults to understand their expectations for conversational interaction with a robot. The results confirm that the social factors constraining accent and dialect are likely to be significant issues for human-robot interaction in this context, and so must be taken into account in the design of the system at all levels.
Showing Sympathy via Embodied Affective Robot Touch, GIFs, and Texts: People are Indifferent
Social touch is important for improving social connection, and it is one method that may improve online communication with people at a distance. In this study, we examined the use of social touch from a robot to convey sympathy, in comparison to text alone or GIFs. Fifty-one pairs of friends entered the lab: one (the recipient) talked about a minor inconvenience, and the other (the giver) comforted their friend using one of the three methods described above. Results indicate that the touch method we used was not effective for supporting the recipient. Follow-up interviews with five pairs of participants suggest ways to improve robot touch, including customizing touch to individuals, making the robot more biological (e.g., warm), and providing feedback that the touch was received.
Human- or Machine-like Music Assistive Robots' Effects on Fluency and Memory Recall
Assistive robots are expected to contribute to the solution of major societal problems in healthcare, such as the increasing number of elderly who need informal and professional care over a long period of time. Most of the research focuses on the development of human-like robots to facilitate human-robot interaction and strengthen social, cognitive and affective processes. However, there are some possible downsides of this type of “robot humanizing”, such as raising high expectations and causing incorrect mental models of the robots. Machine-like robots, on the other hand, may help to build more realistic mental models and expectations but might bring about less fluent interactions and less pronounced experiences (i.e., less to remember). To test if a human-like robot indeed brings about better interaction fluency and memory recall, we designed two types of robots for a joint human-robot music listening activity: a human-like and a machine-like robot (Pepper). Thirty students participated in the experiment managed by a Wizard-of-Oz set-up. As expected, the human-like robot proved to perform better in terms of fluency and memory recall. Currently, we are preparing a follow-up experiment, consisting of longer sessions with the elderly, to see whether this effect persists for this age group and to what extent human- or machine-likeness influences the elderly’s understanding and expectations of the robot’s capabilities.
Robotic Gaze Drives Attention, Even with No Visible Eyes
Robots can direct human attention using their eyes. However, it remains unclear whether it is the gaze or the low-level motion of the head rotation that drives attention. We isolated these components in a non-predictive gaze cueing task with a robot to explore how limited robotic signals orient attention. In each trial, the head of a NAO robot turned towards the left or right. To isolate the direction of rotation from the gaze, NAO was presented facing either frontward or backward in separate blocks. Participants responded faster to targets on the gazed-at side, even when the eyes of the robot were not visible and the direction of rotation was opposed to that of the frontal condition. Our results showed that low-level motion did not orient attention, but the gaze direction of the robot did. These findings suggest that the robotic gaze is perceived as a social signal, similar to human gaze.
Spill the Tea: When Robot Conversation Agents Support Well-being for Older Adults
Robots could support older adults’ well-being by engaging them in meaningful conversations, specifically to reflect on, support, and improve different aspects of their well-being. We implemented a system on a QT social robot to conduct short autonomous conversations with older adults, to help understand what brings them feelings of joy and meaning in life. We evaluated the system with written surveys and observations of 12 participants including older adults, caregivers, and dementia care staff. From this, we saw the need to improve user experience through personalized interaction that better supports older adults as they talk about well-being. Improving the interactions will involve improving the conversation flow, detecting emotions and nonverbal cues, and using natural language processing to extract topics around well-being.
Tell Me About It: Adolescent Self-Disclosure with an Online Robot for Mental Health
Self-disclosure to a social robot is a mental health intervention that can decrease stress for adolescents. Online digital robots provide the potential to scale this intervention especially in COVID-19 social distancing situations. However, self-disclosure interactions with digital social robots remain relatively unexplored.
We conducted two online self-disclosure studies with adolescents (13-19 years old): our Active Listening Study compared experiences sharing positive, negative, and neutral feelings with a social robot, while our Journaling Study explored differences in sharing stressors by speaking with and without a social robot and by writing. We found that a positive prompt tone improved mood while a neutral prompt decreased stress, and that less negative attitudes toward robots correlated with more qualitatively positive experiences with robot interactions. We also found that robot disclosure interactions hold promising potential as a preferred method of self-disclosure over solo speaking, moderated by negative attitudes toward robots. This paper outlines limitations and future work from these studies.
A Controllable and Repeatable Method to Study Perceptual and Motor Adaptation in Human-Robot Interaction
Human perception and motion are continuously influenced by prior experience. However, when humans have to share the same space and time, different previous experiences could lead to opposite percepts and actions, consequently causing coordination to fail. This study presents a novel experimental setup that aims to explore the interplay between human perceptual mechanisms and motor strategies during human-robot interaction. To achieve this goal, we developed a system that realizes an interactive perceptual task, in which the participant has to perceive and estimate temporal durations together with iCub, with the goal of coordinating with the robotic partner. Results show that the experimental setup can continuously monitor how participants adapt their perceptual and motor behavior during the interaction with a controllable interacting agent. Therefore, it will be possible to produce quantitative models describing the interplay between perceptual and motor adaptation during an interaction.
Keep your Distance! Assessing Proxemics to Virtual Robots by Caregivers
To maintain safety, ensure a positive user experience, and guarantee long-term use, robots must follow the established conventions of caregivers and residents in healthcare settings. To investigate which interpersonal distance conventions are expected from robots in care facilities, caregivers’ perceptions and preferred robot distances were tested in a virtual environment. In a within-subjects design, the participants’ position (sitting/standing) and the robot’s speed (0.3/0.8/1.4 m/s) were varied. The slower the robot moved, the shorter the preferred distance of the robot was and the more comfortable and safer caregivers felt. In addition, the robot was allowed to move closer when participants were standing, but no subjective difference was found between sitting and standing conditions. Although control variables did not influence the preferred distances, results suggest that participants’ height becomes relevant at higher speed conditions. This study can be used to derive concrete proximity regulations for the use of robots in care facilities.
Decision Support System for Autonomous Underwater Robot Grasping
Underwater environments present numerous challenges for marine robots, such as noisy perception, constrained communication, and uncertainty due to wave motion. Human-in-the-loop systems can improve the efficiency and success rate of underwater grasping; however, collecting information in such unstructured environments and accurately presenting it to the operator remains a challenging task. Decision Support Systems (DSSs) can intelligently process and convey information to the operators to facilitate informed decision making. A DSS for autonomous underwater grasping provides visualization capabilities and tools to interact with the available information. Successful initial DSS-assisted underwater grasping was conducted using a six degrees of freedom robotic arm and a depth camera mounted on a mechanical testbed.
Trust Estimation for Autonomous Vehicles by Measuring Pedestrian Behavior in VR
This study proposes a method to estimate pedestrian trust in an automated vehicle (AV) based on pedestrian behavior. We conducted experiments in a VR environment where an AV approached a crosswalk. Participants rated their trust in the AV at three levels before and while they crossed the road. The trust level was then estimated by a deep learning model using the participants’ skeletal coordinates and position, together with the vehicle’s position and speed, over the preceding four seconds. The estimation accuracy was 61%.
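As a concrete illustration only (not the authors' implementation), the following minimal PyTorch sketch shows one way a four-second window of pose and vehicle features could be classified into three trust levels; the feature dimension, sampling rate, and architecture are assumptions chosen purely for illustration.

import torch
import torch.nn as nn

class TrustEstimator(nn.Module):
    """Hypothetical sequence classifier: 3-level trust from behavior features."""
    def __init__(self, n_features=70, hidden=64, n_levels=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_levels)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        _, (h, _) = self.lstm(x)          # final hidden state summarizes the window
        return self.head(h[-1])           # logits over the three trust levels

# Example: four seconds of data at an assumed 30 Hz gives 120 time steps.
model = TrustEstimator()
window = torch.randn(8, 120, 70)          # batch of 8 hypothetical feature windows
pred_level = model(window).argmax(dim=1)  # estimated trust level (0, 1, or 2)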
What if a Social Robot Excluded You?: Using a Conversational Game to Study Social Exclusion in Teen-robot Mixed Groups
Belonging to a group is a natural need for human beings. Being left out and rejected represents a negative event, which can cause discomfort and stress to the excluded person and other members. Social robots have been shown to have the potential to be optimal tools for studying influence in group interactions, providing valuable insights into how human group dynamics can be modeled, replicated, and leveraged. In this work, we aim to study the effect of being excluded by a social robot in a teenager-robot interaction. We propose a conversational turn-taking game, inspired by the Cyberball paradigm and rooted in social exclusion mechanisms, to explore how the humanoid robot iCub can affect group dynamics by excluding one of the group members. Preliminary results show that the included player tries to re-engage with the one excluded by the robot. We interpret this dynamic as the included player’s attempt to compensate for the exclusion and reestablish a balance, in line with findings in human-human interaction research. Furthermore, the paradigm we developed seems to be a suitable tool for researching social influence in different Human-Robot Interaction contexts.
Can a Robot’s Hand Bias Human Attention?
Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect may also occur towards a human partner’s hand, but only after sharing a physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner’s attention. We replicated a classical psychological paradigm to measure this attentional bias (i.e., the near-hand effect) by adding a robotic condition. Preliminary results showed the near-hand effect when participants performed the task with their own hand near the screen, leading to shorter reaction times on the same side as the hand. In contrast, we found no effect for the robot’s hand in the absence of previous collaborative interaction with the robot, in line with studies involving human partners.
Designing Robotic Movement with Personality
As robots are starting to inhabit more intimate social spheres, their functionality and acceptance in a fundamentally social environment greatly depend on them being tolerated by humans. One factor contributing to successfully accomplishing tasks in a collaborative manner is how robots’ actions and motions are interpreted by the people around them. Our broader research seeks to explore this gap aiming to design movement that is expressive, culturally dependent and contextually sensitive. A country that is at the forefront of this, in terms of social robots and their acceptance in society, is Japan. Therefore, as the first phases of this broader research, we present a new process, including a design toolkit, an open brief and a participatory structure. We discuss the resulting robot morphologies and participant feedback from a workshop in Japan, and conclude by discussing limitations and further research in designing robots with expressive movement, contextually sensitive within an HRI-for-all paradigm.
How to Train Your Guide Dog: Wayfinding and Safe Navigation with Human-Robot Modeling
A robot guide dog has the potential to enhance the independence and quality of life of individuals who are blind or visually impaired by providing accessible, automated, and intelligent guidance. However, developing effective robot guide dogs requires researchers not only to solve robotic perception and planning problems but also to understand complicated two-way interactions of the human-robot team. This work presents the formal definition of the wayfinding task of the robotic guide dog that is grounded by common practices in the real world. Given such a task, we train an effective policy for the robot guide dog while investigating two different human models, a rotating rod model and a rigid harness model. We show that our robot can safely guide a human user to avoid several obstacles in the real world. We also demonstrate that a proper human model is necessary to achieve collision-free navigation for both the human and the robot.
Uncertainty-Resolving Questions for Social Robots
Social robots should deal with uncertainties in unseen environments and situations in an interactive setting. For humans, question-answering is one of the most typical activities for resolving or reducing uncertainty by acquiring additional information, which is also desirable for social robots. In this study, we propose a framework for leveraging the research on learning-by-asking techniques for social robots. This framework is inspired by human inquiries. Information seeking by asking should be considered at the multi-dimensional level, including required knowledge, cognitive processes, and question types. These dimensions offer a framework to embed generated questions into the three-dimensional question space, which is expected to provide a reasonable benchmark for the active learning approach and evaluation methodologies of uncertainty-resolving question generation for social robots.
Clarifying Social Robot Expectation Discrepancy: Developing a Framework for Understanding How Users Form Expectations of Social Robots
When engaging with a social robot, people form expectations about the robot that may not align with its real behaviour and abilities. This gap is known as expectation discrepancy, and can confuse and disappoint users. We are developing a framework that can be used to understand and compare instances of expectation discrepancy between robots by considering the sources of those expectations. In doing so, we aim to provide a structure and unified vocabulary that can be used to support description and comparison of robot designs and the expectations users form of them. We have begun by examining theoretical work on expectations in interactions between people, and are working to synthesize this into an initial foundation. We will then refine this into a final social robot expectation framework by conducting a survey of expectation formation and discrepancy in existing social robots and projects.
Designing and Prototyping Drones for Emotional Support
Recent work in the interaction community has revealed that drones can potentially become social entities with emotional capabilities beyond the traditional ontological status of drones as mechanical objects. We build upon this work to envision drones as an emotional support technology. To explore this notion, we ran a series of exploratory design workshops with lay users (N=18) to create designs for a companion drone. We used their inputs to analyze the drone’s concept for prototyping and future work.
A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction
Humans tend to use various nonverbal signals to communicate their messages to their interaction partners. Previous studies utilised this channel as an essential clue to develop automatic approaches for understanding, modelling and synthesizing individual behaviours in human-human interaction and human-robot interaction settings. On the other hand, in small-group interactions, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI – Learning to Imitate Social Human-Human Interaction, a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities simultaneously captured by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research for modelling intra- and interpersonal nonverbal signals in social interaction contexts and investigating how to transfer such models to social robots.
What You See Is (not) What You Get: A VR Framework for Correcting Robot Errors
Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or planning unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories, and (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.
The Impact of Robot’s Body Language on Customer Experience: An Analysis in a Cafe Setting
Nonverbal communication plays a crucial role in human-robot interaction (HRI) and has been widely used for robots in service environments. While a few studies have addressed customers’ acceptance of robots under many different interaction conditions, the impact of robots’ nonverbal interaction modalities (i.e., a combination of body language, voice, and touch) on customers’ experience has not been thoroughly investigated. To this end, in this paper we introduce an HRI framework that aims to assist customers in their food and beverage choices in a real-world cafe setting. With this framework, the contributions of this paper are twofold. We introduce a time-synchronised multisensory HRI dataset comprising the interactions between a social robot and customers in a real-world environment. We also conduct a user study to evaluate the configuration of the multimodal HRI framework, particularly nonverbal gestures, and its contribution to customers’ interaction experience in this specific marketing setting.
Eye-Movement Dependency of Peripheral Visual Perception of Anthropomorphism Using an 80ms Robot Picture Stimulus
This paper investigates the impact of types of eye-movements (static vs. pursuit) on dimensions of perceived anthropomorphism of robots in the peripheral visual field (covert visual attention) in a strongly controlled video-based study design. It replicates a previous study with a contrasting refinement of using a short-term stimulus of only 80ms in order to avoid the effect of potential short direct saccades towards the stimulus. In a between-subjects design, test participants are told to follow a point target, which is either static at the screen center or moving linearly. The robot head picture is then briefly presented in the peripheral field of view region for 80ms. After stimulation, a questionnaire based on the HRIES scale on anthropomorphism is completed. Significant results show differences in anthropomorphism perception with the sociability sub-scale being affected. Other dimensions are not found to be affected, which may be subject to potential motion dependency or ambiguity of the other HRIES sub-scales. The findings may have an impact on task performance in close HRI, if a robot is only visible in the peripheral visual field and perceived covertly, while overt (foveated) visual attention of an interacting human focuses on tasks requiring hand or arm movements.
TSES-R: An Extended Scale for Measuring Parental Expectations toward Robots for Children in Healthcare
There is a growing interest in implementing robotics applications for children in healthcare to provide companionship, comfort, education, and therapy. Parental expectations regarding robotics for young children play a critical role in influencing its development and acceptance. However, parental expectations are widely overlooked in HRI. Therefore, a better understanding of what parents of young children expect robots to do in health-related interactions is needed. To achieve this, we adopted the Technology-Specific Expectation Scale (TSES) [2] and added three more dimensions (i.e., assistive role, social-emotional, and playful distraction) to gauge users’ expectations of robots in healthcare, resulting in TSES-R. This paper reports the development and reliability analysis of TSES-R. Furthermore, this paper presents the preliminary results collected from using the TSES-R with a sample of 31 families, which showcases how these outcomes could be helpful for future related studies.
A Social Robot for Explaining Medical Tests and Procedures: An Exploratory Study in the Wild
Healthcare professionals often have little time to explain medical tests and procedures. Social robots capable of verbal dialogues may contribute to informing patients and the public in general about such tests and procedures, for example in general practitioner or hospital waiting rooms, nursing homes, as well as in public spaces. As an example of the latter, an exploratory study was conducted at the Lowlands music festival in August 2022. A social robot explained a blood pressure measurement and a grip strength measurement to participants. Participants were asked to rate the expected clarity of the explanation before the explanation, the experienced clarity after the robot explanation but before the actual physical measurements, and again after the physical measurements. 172 participants completed the interaction (99 female, 57 male, 8 non-binary, 8 undisclosed). The mean interaction duration was 2.02 minutes (SD=0.40 minutes). Participants found the explanation after the interaction with Pepper clearer than they expected beforehand. Participants found the clarity of the explanation, after they had actually undergone the physical examination, even higher than before the physical examination. This study indicates that social robots are potentially useful for explaining medical tests and procedures.
The Warehouse Robot Interaction Sim: An Open-Source HRI Research Platform
The use of physical robots in real-world laboratories for the study of human-robot interaction is not without limitations and logistical challenges. In response, a wide range of studies have begun using virtual representations of robots. However, very few of these platforms are openly available to the HRI community. This limits reproducibility and the ability of the community to leverage existing resources for their own research. In response, this paper presents The Warehouse Robot Interaction Sim. The Warehouse Robot Interaction Sim is an open-source immersive virtual platform developed in the Unreal Engine with the goal of conducting research on trust repair in HRI. This paper summarizes the overall structure of the platform, how it can be modified, and briefly discusses how this platform has been leveraged for research. In doing so, we hope to encourage other researchers in HRI to consider leveraging this platform for their own research questions and study designs.
Formative Usability Evaluation of WiGlove – A Home-based Rehabilitation Device for Hand and Wrist Therapy after Stroke
WiGlove is a passive dynamic orthosis aimed at home-based post-stroke rehabilitation of the hand and wrist. This paper highlights results from WiGlove’s formative evaluation as the first step towards its deployment. In this study, twenty healthy participants evaluated the usability and safety of the WiGlove compared to its predecessor, the state-of-the-art SCRIPT Passive Orthosis (SPO). In this within-subject experiment, they performed various tasks such as donning/doffing, adjusting the tension, grasping, etc., with both gloves and rated them using a Likert scale-based questionnaire. The results showed improvements in several aspects of usability and safety. This study provides preliminary evidence of WiGlove’s fitness for the next assessment with its intended users, people recovering from stroke with sustained hand and wrist impairment.
Robotic Interventions for Learning (ROB-I-LEARN): Examining Social Robotics for Learning Disabilities through Business Model Canvas
This ROB-I-LEARN research utilizes a versatile framework (the Business Model Canvas, or BMC) for robot design and curriculum development aimed at students diagnosed with autism spectrum disorder (ASD). Robotic interventions / human-robot interaction (HRI) field experiments with high school students were conducted as a recommendation or an outcome of the BMC framework and customer discovery interviews. These curriculum-related robotic interventions / interactive scenarios were designed to improve cognitive rehabilitation targeting students with ASD in high schools, thus enabling a higher quality learning environment that corresponds with students’ learning requirements to prepare them for future learning and workforce environments.
Enhancing Human-robot Collaboration by Exploring Intuitive Augmented Reality Design Representations
As the use of Augmented Reality (AR) to enhance interactions between human agents and robotic systems in a work environment continues to grow, robots must communicate their intents in informative yet straightforward ways. This improves the human agent’s feeling of trust and safety in the work environment while also reducing task completion time. To this end, we discuss a set of guidelines for the systematic design of AR interfaces for Human-Robot Interaction (HRI) systems. Furthermore, we develop design frameworks that build on these guidelines and serve as a base for researchers seeking to explore this direction further. We develop a series of designs for visually representing the robot’s planned path and reactions, which we evaluate by conducting a user survey involving 14 participants. Subjects were given different design representations to review and rate based on their intuitiveness and informativeness. The collated results showed that our design representations significantly improved the participants’ ease of understanding the robot’s intents over the baselines for the robot’s proposed navigation path, planned arm trajectory, and reactions.
“Who’s that?”: Identity Self-Perception and Projection in the Use of Telepresence Robots in Hybrid Classrooms
Robotic Telepresence (RT) is a promising medium for students who are unable to attend in-person classes. It enables remote students to be present in the classroom and interact with their classmates and instructors. However, it can be limiting to their identity self-perception and projection, which may have repercussions on the social dynamics and inclusion within the classroom. We present preliminary findings of a qualitative analysis of 12 observations and interviews with RT attendees. We examine RT design and use aspects that either supported identity self-perception and projection or limited it. Finally, we present telepresence robots design and use recommendations for the classroom context.
Will It Yield: Expectations on Automated Shuttle Bus Interactions With Pedestrians and Bicyclists
Autonomous vehicles that operate on public roads need to be predictable to others, including vulnerable road users. In this study, we asked participants to take the perspective of videotaped pedestrians and cyclists crossing paths with an automated shuttle bus, and to (1) judge whether the bus would stop safely in front of them and (2) report whether the bus’s actual stopping behavior accorded with their expectations. The results show that participants expected the bus to brake safely in approximately two thirds of the human-vehicle interactions, more so to pedestrians than cyclists, and that they tended to underestimate rather than overestimate the bus’s capability to yield in ways that they considered as safe. These findings have implications for the design and implementation of automated shuttle bus services.
Robots as Social Cues: The Influence of Follow Cargo Robot Use on Perceptions of Leadership Quality and Interpersonal Impressions
Mute machines offer clues to understanding social dynamics simply by their function and proximity to humans. We used followbots, robots that haul equipment without social interaction, to investigate how leader configuration affects perceptions of credibility, attraction, and social presence across three scenarios (leader, leader/followbot, leader/human). The leader/followbot condition was rated higher than the leader-only condition for competence, leadership effectiveness, and social presence. As socially silent machines gain popularity, this research has implications for understanding how robots function as social cues.
When Do Drivers Intervene In Autonomous Driving?
Autonomous vehicles (AVs) are expected to handle traffic scenarios more safely and efficiently than human drivers. However, it needs to be better understood which AV decisions are perceived to be unsafe or risky by drivers. To investigate drivers’ perceived risk, we conducted a driving simulator experiment where participants are driven around by two types of AVs—car and sidewalk mobility—with a driving style that matches the participant’s driving style. We developed a computational model that allows us to examine drivers’ perceived risk of scenarios when interacting with an AV based on the drivers’ interventions. The model allows us to quantify and compare the relative perceived risk of different scenarios for the two mobility types. Our results indicate that 1) drivers perceived higher risk in scenarios where the AV attempts to match the driver’s preferred driving style, and 2) different scenarios were perceived as having higher risk across the two mobility types. The ability to quantify the perceived risk of scenarios and an understanding of how perceived risk differs across mobility types will provide critical insights for the design of human-aware mobility.
The Answer lies in User Experience: Qualitative Comparison of US and South Korean Perceptions of In-home Robotic Pet Interactions
This paper describes a user experience comparison study to explore whether a user’s ‘cultural background’ affects their interaction with in-home pet robots designed for health purposes, e.g. socially-assistive robots (SARs). 11 Koreans and 10 Americans were interviewed after interacting in their own homes with a SAR. Statistical analyses and TF-IDF keyword analyses were conducted to detect significant differences between groups in terms of code co-occurrences. Results showed that American participants were more likely to focus on the interactive experience itself, whereas Korean participants focused more on critiquing technical aspects of the technology. Such differences suggest that Koreans tend to treat robotic pets as “tools”, while Americans view the robotic pet through the lens of their past experience raising real-life pets. We discuss the implications of this for human-robot interaction (HRI), namely that interaction with SARs may depend on users’ cultural characteristics, e.g. necessitating customized content that takes into account culturally-specific modes of use.
Development of a Wearable Robot that Moves on the User’s Arm to Provide Calming Interactions
Wearable robots can maintain constant physical contact with the user and support their daily life. However, since most wearable robots are fixed on the user’s body, the user has to be constantly aware of their presence. Sometimes this can impose a burden on the user and prevent them from wearing the robot daily. One solution to this problem is for the robot to move around the user’s body. When the user does not interact with the robot, it can move to an unobtrusive position and attract less attention from the user. This research aims to actualize such a wearable robot. In addition, by introducing flexible rubber joints, we aim to create calming interactions with the user wearing the robot. This short paper reports the development of our initial prototypes.
Designing a Robot which Touches the User’s Head with Intra-Hug Gestures
Hugging has many positive benefits, and several studies have explored its application in human-robot interaction. However, due to limitations in robot performance, these robots only touched the human’s back. In this study, we developed a hug robot, named “Moffuly-II.” This robot can not only hug with intra-hug gestures but also touch the user’s back or head. This paper describes the robot system and users’ impressions of hugging the robot.
A Persuasive Robot that Alleviates Endogenous Smartphone-related Interruption
Endogenous smartphone interruptions affect many aspects of people’s everyday lives, especially when studying or working at a desk. To mitigate this, we built a robot that persuades users intrinsically by augmenting the desk lamp with specific postures and light. This paper presents our design considerations and the first prototype to show the possibility of alleviating people’s endogenous interruptions through robots.
Buzzo or Eureka — Robot that Makes Remote Participants Feel More Presence in Hybrid Discussions
Teleconferencing technology has been widely used in the context of the COVID-19 pandemic. However, in leaderless group discussions with mixed online and offline members, both local and remote participants often have a poorer experience of hybrid discussion for various reasons. In this paper, this phenomenon is explored through an early pilot study. We found problems with the lack of presence of remote participants in hybrid discussion sessions, as well as unclear information about the status of members. To solve such problems, we designed a social robot called SNOTBOX. The bot indicates the participation status (marginalized or not) of the remote participant using “Buzzo” and the remote participant’s desire to be heard through a “Eureka”. We used both representations to attract the attention of local participants as a way to enhance the presence of remote participants in the conference. SNOTBOX is easy to produce, allows for DIY customization, and also supports multi-participant online discussions.
Designing and Evaluating Interactive Tools for a Robot Hand Collection
Recent robot collections provide various interactive tools for users to explore and analyze their datasets. Yet, the literature lacks data on how users interact with these collections and which tools can best support their goals. This late-breaking report presents preliminary data on the utility of four interactive tools for accessing a collection of robot hands. The tools include a gallery and similarity comparison for browsing and filtering existing hands, a prediction tool for estimating user impression of hands (e.g., humanlikeness), and a recommendation tool suggesting design features (e.g., number of fingers) for achieving a target user impression rating. Data from a user study with 9 novice robotics researchers suggest the users found the tools useful for various tasks and especially appreciated the gallery and recommendation functionalities for understanding the complex relationships of the data. We discuss the results and outline future steps for developing interface design guidelines for robot collections.
Anomaly Detection for Dynamic Human-Robot Assembly: Application of an LSTM-based Autoencoder to interpret uncertain human behavior in HRC
Human-Robot Collaboration (HRC) requires humans and robots to work on the same product in the same work environment at the same time. Therefore, the robotic system needs to understand human behavior so it can assist the human appropriately. Since the human is an uncertain variable in this system, human action recognition is one of the key challenges when it comes to HRC. To address this problem, we developed an anomaly detection framework for the dynamic assembly of complex products. We used a Long Short-Term Memory (LSTM)-based autoencoder to detect anomalies in human behavior and post-processed the output to categorize it as a green or red anomaly. A green anomaly represents a deviation from the intended order but a valid assembly sequence. A red anomaly represents an invalid sequence. In both cases, the worker is guided to complete the assembly process. We demonstrate our proposed framework using an appropriate industrial use case.
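As an illustrative sketch only (not the authors' system), the following PyTorch fragment shows one way an LSTM-based autoencoder can flag anomalous behavior windows by reconstruction error and post-process them into green or red anomalies; the feature sizes, threshold, and validity check are assumptions made for illustration.

import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Hypothetical autoencoder: reconstructs windows of behavior features."""
    def __init__(self, n_features=12, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_features, batch_first=True)

    def forward(self, x):                          # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent code per step
        recon, _ = self.decoder(z)
        return recon

def classify_step(model, window, observed_step, valid_next_steps, threshold=0.1):
    """Label a window as normal, 'green' (valid but out of intended order),
    or 'red' (invalid sequence), based on reconstruction error."""
    with torch.no_grad():
        error = torch.mean((model(window) - window) ** 2).item()
    if error <= threshold:
        return "normal"
    return "green" if observed_step in valid_next_steps else "red"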
How Sequential Suggestions from a Robot and Human Jury Influence Decision Making: A Large Scale Investigation using a Court Sentencing Judgment Task
There have been discussions on using robots to provide suggestions for decision making in mixed jury-systems. Hence, it is vital to investigate the influence of system interventions on decision making. This study focused on how such suggestions from a robot and an expert human influenced the decision making of a sentence in a court judgment task. We hypothesized that the sequential pattern of presentation of suggestions by an AI system installed in a robot and a human expert would influence decision making performance. In a large-scale online experiment, we investigated several factors, such as the (a) adviser type (AI(Robot) or Human), (b) sequential order (AI to Human, Human to AI), and (c) length of the sentence (3 or 7 years) that would influence decision making. The results showed that when presented with a human expert’s suggestion after an AI decision, participants were more biased towards the human’s suggestion. Moreover, participants’ decisions were influenced by the length of the suggested sentence, especially for the longer sentence (seven years). This provides new insight into the factors that may influence decision making using robots as tools in a mixed jury-system and contributes to the notion of using robots in courts.
PLATYPUS: An Environment for End-User Development of Robot-Assisted Physical Training
When robots are used for physical therapy, programming becomes too important to be left to programmers. Developing programs for training robots is time-consuming and requires expertise within multiple engineering domains, combined with physical training, therapy, and human interaction competencies. In this paper, we present Platypus: an end-user development environment that encompasses the design and execution of custom activities for robot-assisted physical training. The current version ships a set of plugins for Eclipse’s IDE and uses a block-based visual language to specify the robot’s behaviors at a high abstraction level, which are translated into the low-level code specifications followed by the robot. As a use case, we present its implementation on RoboTrainer, a modular, rope-based pulling device for training at home. While user tests suggest that the platform has the potential to reduce the technical obstacles for building custom training scenarios, informational and design learning barriers were revealed during the tests.
Towards HRI of Everyday Life: Human Lived Experiences with Social Robots
As the HRI field evolves, the way we understand and study human-robot interaction also inevitably changes. We argue here that there has been a gradual shift in HRI research from investigating a concept of human-robot ‘interaction’ towards that of ‘experience’. This includes User Experience (UX) approaches in the first place, but also those perspectives that begin to go beyond mere usability and optimization toward meaningful social interactions and social experiences. This paper addresses the shift in question from a sociological perspective and proposes to bring it even further to include a systematic study of human ‘lived experiences’ taking place in community contexts. The ultimate goal is to facilitate theoretical and methodological developments needed to systematically address and pursue research on the ‘HRI of Everyday Life’.
The Peg-Turning Dilemma: An Experimental Framework for Measuring Altruistic Behaviour Towards Robots
This paper presents the results of a preregistered pilot study testing an experimental framework to measure altruistic behaviour towards robots. We define altruistic behaviour as behaviour that benefits others at a personal cost to the behaving individual. The pilot study explores feelings of guilt and perceived agency and experience (i.e., mind) in the robot as potential predictors of altruistic behaviour. Using a within-subjects design (n=48), we compared the willingness to perform a dull, repetitive task (i.e., turning pegs in a pegboard) to avoid shaking (harming) either an emotion-simulating robot, or a non-responsive object. The results showed that participants felt significantly more guilty after shaking the robot than after shaking the object, and they perceived more agency and experience in the robot than in the object. Finally, even though participants felt significantly more bored after performing the peg-turning task than after subsequent tasks, they were significantly more willing to repeat this task to avoid shaking the robot again, compared to the object. An exploratory regression analysis showed that feelings of guilt were the only significant predictor of this behaviour.
Social Robots in Secondary Education: Can Robots Assist Young Adult Learners with Math Learning?
Social robots have been extensively studied in educational settings for children, and their positive impacts on children’s learning are reported. The aim of this study was to find out whether embodied educational technologies such as robot tutors can also yield similar results with adult learners. An experiment was conducted in a secondary education mathematics classroom, where 15 students (of ages 17 to 20) worked on math exercises in two conditions. In one condition, a Nao robot was present as a math tutor to read the questions, collect answers and provide feedback. In the other condition, students practiced math exercises on a laptop, a non-social and non-embodied technology. Results indicated that students in secondary education do not seem to favor using a robot tutor over traditional technologies. This implies that there is a difference between children and adults in the way they experience this technology, in its current state of the art, for their education.
Augmented Reality Safety Zone Configurations in Human-Robot Collaboration: A User Study
Close interaction with robots in Human-Robot Collaboration (HRC) can increase worker productivity in production, but cages around the robot often limit this. Our research aims to visualise virtual safety zones around a real robot arm with Augmented Reality (AR), thereby replacing the cages. We tested our system with a collaborative pick-and-place application which mimics a real manufacturing scenario in an industrial robot cell. The shape, size and visualisation of the AR safety zones were tested with 19 participants. The overwhelming preference was for a visualisation that used cylindrical AR safety zones together with a virtual cage bars effect.
I See You! Design Factors for Supporting Pedestrian-AV Interaction at Crosswalks
With the advent of autonomous vehicles (AVs) on public roads, the frequency of interactions between these AVs and pedestrians will increase. One example of such an interaction is at unsignalized crosswalks, where pedestrians and vehicles must negotiate for the right of way. Studies show that these interactions often use social communication channels. This paper addresses how AVs can fill this communication gap, focusing on the impact of pedestrian self-identifiability. Using VR, we designed two novel awareness-conveying behaviors, and a control condition with no awareness behavior. We then conducted a within-subjects VR study with 19 participants in which they traversed a crosswalk in front of a driverless vehicle in each experimental condition and rated their experience across seven probes. Results indicated that an awareness-conveying behavior significantly increased pedestrians’ sense of safety and that increases in self-identifiability further improved pedestrians’ experience without resulting in a heightened sense of surveillance from the vehicle.
Save Baby Whale! A Pet Robot as a Medication Reminder for Children with Asthma
Asthma is one of the most common chronic diseases in children, but adherence to asthma medications is very low, which can lead to poor or even dangerous outcomes. To solve this problem, we came up with a baby whale pet robot that needs to be taken care of by children. In this paper, we present the design of our first prototype to explore whether a pet robot could help improve medication adherence in children with asthma.
The Impact of Speech and Movement on the Interaction with a Mobile Hand Disinfection Robot
Hand disinfection is an important tool in the line of defense against infectious diseases. Hand sanitizer dispensers are usually passive devices sitting at entrances of buildings and other frequented locations. In this study we explore the usefulness of an interactive mobile hand sanitizer robot, more specifically, the research is focused on finding the most significant attention-grabbing modality for the robot to motivate people to disinfect their hands. Through an in-the-wild Wizard-of-Oz experiment and a short questionnaire each of the four usage modalities was tested in the entrance hall of a university. The results show that movement had the most significant impact, compared to sound and nothing at all, yet due to the robot’s design the participants expected it to talk to them.
“Feeling Unseen”: Exploring the Impact of Adaptive Social Robots on User’s Social Agency During Learning
Adaptive robots have the potential to support the overloaded healthcare system by helping new stroke survivors learn about their conditions. However, current adaptive robots often fail to maintain users’ engagement during interactions. This study investigated the impact of an adaptive robot on Social Agency which has been proposed to influence engagement during learning. Twenty-four healthy subjects participated in a study where they learned about stroke symptoms from a robot providing social cues either 1) when their engagement measured by a Brain-Computer Interface (BCI) decreased or 2) at random intervals. While the results confirmed that Social Agency correlated with Engagement, the robot’s adaptive behaviour did not increase Social Agency, Engagement, and Information Recall. Using qualitative methods, we propose that adaptive robots need to explicitly acknowledge users to increase Social Agency.
Introducing Children and Young People with Sight Loss to Social Robots: A Preliminary Workshop
Meaningful first-time interactions between humans and robots are important for learning functionality, shaping impressions, and building trust. Additionally, robots should be an inclusive tool, accessible to all, yet typical introductory human-robot interactions rely heavily on the human’s visual perceptions. For children and young people with sight loss, this can be problematic. Therefore, we present a preliminary workshop with four children and young people with sight loss in order to begin investigating how this population learns about social robots for the first time, their overall impressions of social robots, and whether games could assist in creating positive first-time interactions. Our initial findings reveal the importance of promoting tactile exploration, clarifying safety aspects, and careful consideration of the communication of robot emotion.
Crowdsourcing Task Traces for Service Robotics
Demonstration is an effective end-user development paradigm for teaching robots how to perform new tasks. In this paper, we posit that demonstration is useful not only as a teaching tool, but also as a way to understand and assist end-user developers in thinking about a task at hand. As a first step toward gaining this understanding, we constructed a lightweight web interface to crowdsource step-by-step instructions of common household tasks, leveraging the imaginations and past experiences of potential end-user developers. As evidence of the utility of our interface, we deployed the interface on Amazon Mechanical Turk and collected 207 task traces that span 18 different task categories. We describe our vision for how these task traces can be operationalized as task models within end-user development tools and provide a roadmap for future work.
Coming In! Communicating Lane Change Intent in Autonomous Vehicles
Lane changes of autonomous vehicles (AV) should not only succeed in making the maneuver but also provide a positive interaction experience for other drivers. As lane changes involve complex interactions, a set of behaviors for autonomous vehicle lane change communication can be difficult to define. This study investigates different movements communicating AV lane change intent in order to identify which effectively communicates and positively affects other drivers’ decisions. We utilized a virtual reality environment wherein 14 participants were each placed in the driver’s seat of a car and experienced four different AV lane change signals. Our findings suggest that expressive lane change behaviors such as lateral movement have high levels of legibility at the cost of high perceived aggressiveness. We propose further investigation into how key parameters of lateral movement can be tuned to balance legibility and aggressiveness and provide the best AV interaction experience for human drivers.
Safe to Approach: Insights on Autonomous Vehicle Interaction Protocols with First Responders
As autonomous vehicles (AV) become increasingly common on our roads, it is important for first responders – police officers, firefighters, and emergency medical services – to learn new interaction protocols, as they can no longer rely on those applied to human-driven vehicles. This study identifies critical pain points and concerns of first responders interacting with AVs on the road. We explore 7 different designs that communicate that an AV is in park and is safe to approach and analyze how first responders perceive these designs in terms of clarity and safety. We conducted qualitative interviews with 9 first responders and gained insights on how the needs of first responders can be integrated within the AV design process. As a result, we identify an AV safe park state communication protocol that would be ideal for first responders. Additionally, we derive a guideline for effective communication methods that can be used in the design of these vehicles, establishing research methods that involve emergency responders in the loop.
Multisensory Evaluation of Human-Robot Interaction in Retail Stores – The Effect of Mobile Cobots on Individuals’ Physical and Neurophysiological Responses
As more mobile collaborative robots (cobots) are being deployed in domestic environments, it is necessary to ensure safety while interacting with humans. To this end, a better understanding of individuals’ physical and neurophysiological responses (i.e., short term adaptation) during those interactions becomes crucial to frame the cobot’s behavioral and control algorithms. The primary objective of this study was to assess individuals’ physical and neurophysiological responses to the mobile cobot in a retail environment. Eight participants were recruited to complete typical grocery shopping tasks (i.e., cart pushing, item picking, and item sorting) with and without a mobile robot running in the same space. Results showed that the co-existence of the mobile cobot in the retail environment affected individuals’ physical responses, significantly changing their upper-limb kinematics, i.e., reducing the average flexion angles of L5/S1, T12/L1, and the right shoulder in the sagittal plane. However, no significant differences were observed in the neurophysiological adaptation based on the measures of muscle activity of the latissimus dorsi, anterior deltoid, and biceps brachii, nor the pupil diameter.
Cat-E: A Social Robot Guiding Children’s Activities with AI Art Generator
The increasing number of public AI-art generation tools has allowed people, including children, to be creative and bring their ideas to life. However, these AI systems could be misused by children and expose them to age-inappropriate images. In this paper, we explore how social robots could guide children as an embodied element of the AI art generation system. To investigate this topic, we examined how children conceptualize AI, observed how they use an AI-art generation system called DALL-E 2, and conducted co-design workshops to develop a social robot prototype, Cat-E. Cat-E aims to help children generate AI art in a regulated, safe, but creative way by providing prompts and leading them to generate more ethical and morally sound AI art. This study reveals that children perceive a robot as an embodied element of AI and consider it to be a reliable and unbiased source of information. We propose that social robots can play the role of a friendly guide that enables children to utilize AI systems creatively but safely.
Improving a Robot’s Turn-Taking Behavior in Dynamic Multiparty Interactions
In this paper, we describe ongoing work to develop a robust and natural turn-taking behavior for a social agent to engage a dynamically changing group in a conversation. We specifically focus on discussing likely interaction scenarios for a social robot and how appropriate conversational behavior could unfold in each situation. Preliminary findings from annotations of more than 9,000 dialogue samples from a related domain are used to help judge the importance of different interaction scenarios. We conclude by outlining important general considerations for designing more robust dialogue systems as well as highlight next steps we are taking in developing our character’s turn-taking behavior.
Attention-guiding Takeover Requests for Situation Awareness in Semi-autonomous Driving
In semi-autonomous driving (SAE Level-3), the automated driving system allows drivers to focus on their non-driving-related tasks for the majority of the journey. However, when the system faces situations beyond its operational design domain, drivers need to manually take control of the vehicle in response to the takeover request (TOR). Many efforts have been made in previous studies to find a more effective method to initiate the TOR. In this paper, we propose to improve drivers’ takeover performance by utilizing attention-guiding techniques when delivering the TOR. A preliminary experiment (N=19) indicates that our method reduced drivers’ collision rate and mental workload.
What Does It Mean to Anthropomorphize Robots?: Food For Thought for HRI Research
Anthropomorphism is a well-used but vague concept that demands further understanding and clarification to be effectively used in HRI research. Although most HRI research defines and uses anthropomorphism as a human-like attribution process, there is a lack of distinction between its deployment in design and its manifestation in user response. Furthermore, researchers need to separate mindless from mindful anthropomorphism and find ways to theorize and measure each. Researchers also need to consider the dynamic and contextual nature of anthropomorphism to generate relevant findings for research as well as practice.
Presenting Human-Robot Relative Hand Position using a Multi-Step Vibrotactile Stimulus for Handover Task
For humans to hand over an object to a robot, they must first confirm the robot hand’s position through their sensory organs (i.e., vision); only then can they perform the hand-reaching task. This step is time-consuming, and the visual demand can also degrade performance on other simultaneously conducted tasks. We assume that eliminating this step can lead to a rapid and precise reaching task. We propose a method to directly present the relative position of the human and robot hands using a phantom sensation-based vibrotactile stimulus on the human reaching arm. A multi-step vibrotactile cue, consisting of 1) gross motion, 2) fine motion, and 3) deadband phases, indicates the direction, the distance to the target, and that the target has been reached, respectively. The experimental results show that users could precisely recognize the target position within a shorter time. This indicates that humans can use the proposed method to initiate their ballistic movement for hand-reaching during the handover task.
Exploring the Effects of Self-Disclosed Backstory of Social Robots on Development of Trust in Human-Robot Interaction
This paper investigated the influence of a social robot which discloses a backstory of its experiences on the development of trust in human-robot interaction, with respect to the nature of the backstories. We compared three cases: a happy backstory, a sorrowful backstory, and no backstory told by the robot during interaction with participants. The results indicated that the robot disclosing a happy backstory gave participants a higher impression of general and affective trust compared to the robot telling no backstory. However, the robot with the sorrowful backstory was not evaluated as leading to higher trustworthiness than the robot with no backstory. Furthermore, the happy backstory condition scored higher than the sorrowful backstory condition in general, affective and cognitive trust. Thus, participants rated a happy backstory, tied to positive self-disclosed emotion, as significantly more influential on human-robot trust.
‘Sorry’ Says the Robot: The Tendency to Anthropomorphize and Technology Affinity Affect Trust in Repair Strategies after Error
This research investigates how six different trust repair strategies (apologies, explanations, and denial) of a robot packing lunch bags affect trust after an error, and how user dispositions predict trust in the repair strategies. In an online experiment, perceived trustworthiness was assessed in a within-subjects design (N = 604) in which all strategies were evaluated in direct comparison. Higher trustworthiness was found for an apology (vs. no apology) and for a technical explanation of the error (vs. an empty explanation and an anthropomorphic explanation). In line with theoretical considerations, user personality was found to be associated with trust in specific strategies: a higher tendency to anthropomorphize technology was associated with higher trust in the anthropomorphic explanation, and technology affinity was associated with higher trust in the technical explanation. Taken together, personalization of trust repair strategies is a promising direction for individualized design to foster trustworthy human-robot interaction.
Effects of Predictive Robot Eyes on Trust and Task Performance in an Industrial Cooperation Task
Industrial cobots can perform variable action sequences. For human-robot interaction (HRI) this can have detrimental effects, as the robot’s actions can be difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates subsequent actions. Whether this mechanism can benefit HRI, too, is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. Forty-two participants worked on two successive tasks in an embodied HRI with a Sawyer robot. The study used a between-subjects design and presented either anthropomorphic eyes, arrows, or a black screen as a control condition on the robot’s display. Results showed that neither the directional stimuli nor the anthropomorphic design in particular led to increased trust, but anthropomorphic robot eyes improved prediction speed, an effect not found for the non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem better suited for implementation on an industrial robot.
Mixed Reality-based Exergames for Upper Limb Robotic Rehabilitation
Robotic rehabilitation devices are showing strong potential for intensive, task-oriented, and personalized motor training. Integrating Mixed Reality (MR) technology and tangible objects into these systems allows the creation of attractive, stimulating, and personalized hybrid environments. Using a gamification approach, MR-based robotic training can increase patients’ motivation, engagement, and experience. This paper presents the development of two Mixed Reality-based exergames for performing bimanual exercises assisted by a shoulder rehabilitation exoskeleton and using tangible objects. The system design was completed through a user-centered iterative process. The system evaluates task performance and cost-function metrics from the kinematic analysis of the hands’ movement. A preliminary evaluation is presented, showing that the system operates correctly and stimulates the desired upper limb movements.
Bridging the Gap: Using a Game-based Approach to Raise Lay People’s Awareness About Care Robots
As people’s expectations regarding robots are still mostly shaped by the media and science fiction, there exists a gap between imaginaries of robots and the state of the art of robotic technologies. Care robots are one example of existing robots that the general public has little awareness of. In this report, we introduce a card-based game prototype developed with the goal of bridging this gap and exploring how people conceive of existing care robots as a part of their daily lives. Based on the trial game runs, we conclude that a game-based approach is effective as a device to inform participants, in a playful setting, about existing care robots and to elicit conversations about the role such robots could play in their lives. In the future, we plan to adapt the prototype and create a design game prototype to develop novel use cases for care robots.
Body Gesture Recognition to Control a Social Mobile Robot
In this work, we propose a gesture-based language that allows humans to interact with robots using their body in a natural way. We have created a new gesture detection model using neural networks and a new dataset of humans making a collection of body gestures to train this architecture. Furthermore, we compare body gesture communication with other communication channels to demonstrate the importance of adding this knowledge to robots. The presented approach is validated in diverse simulations and real-life experiments with non-trained volunteers, attaining promising results and establishing it as a valuable framework for social robotic applications such as human-robot collaboration and human-robot interaction.
Co-design of a Social Robot for Distraction in the Paediatric Emergency Department
We are developing a social robot to help children cope with painful and distressing medical procedures in the hospital emergency department. This is a domain where a range of interventions have proven effective at reducing pain and distress, including social robots; however, until now, the robots have been designed with limited stakeholder involvement and have shown limited autonomy. For our system, we have defined and validated the necessary robot behaviour together with children, parents/caregivers, and healthcare professionals, taking into account the ethical and social implications of robotics and AI in the paediatric healthcare context. The result of the co-design process has been captured in a flowchart, which has been converted into a set of concrete design guidelines for the AI-based autonomous robot system.
Robot-Supported Information Search: Which Conversational Interaction Style do Children Prefer?
Searching via speech with a robot can be used to better support children in expressing their information needs. We report on an exploratory study where children (N=35) worked on search tasks with two robots using different interaction styles. One system posed closed, yes/no questions and was more system-driven while the other system used open-ended questions and was more user-driven. We studied children’s preferences and experiences of these interaction styles using questionnaires and semi-structured interviews. We found no overall strong preference between the interaction styles. However, some children reported task-dependent preferences. We further report on children’s interpretation and reasoning around interaction styles for robots supporting information search.
Utilizing Prior Knowledge to Improve Automatic Speech Recognition in Human-Robot Interactive Scenarios
The success of human-robot interaction depends not only on a robot’s ability to understand the intent and content of the human utterance but is also affected by the automatic speech recognition (ASR) system. Modern ASR can provide highly accurate (grammatically and syntactically) translations. Yet general-purpose ASR often misses the semantics of an utterance through incorrect word predictions caused by open-vocabulary modeling. ASR inaccuracy can have significant repercussions, as it can lead to a completely different action by the robot in the real world. Can any prior knowledge be helpful in such a scenario? In this work, we explore how prior knowledge can be utilized in ASR decoding. Through our experiments, we demonstrate how our system can significantly improve ASR translations for robotic task instruction.
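To make the decoding idea above concrete, the sketch below shows one common way prior knowledge can bias ASR output: rescoring the recognizer’s n-best hypotheses with a task lexicon. The vocabulary, bonus weight, and fuzzy-matching threshold here are hypothetical illustrations, not the authors’ implementation.

```python
from difflib import SequenceMatcher

# Hypothetical task lexicon for a household robot; in a real system this would
# come from the robot's own action and object vocabulary.
DOMAIN_VOCAB = {"pick", "place", "cup", "table", "kitchen"}

def domain_score(hypothesis, bonus=0.5):
    """Reward hypotheses that contain (or closely match) domain words."""
    score = 0.0
    for word in hypothesis.lower().split():
        if word in DOMAIN_VOCAB:
            score += bonus
        else:
            # also reward very close near-matches to domain words
            best = max((SequenceMatcher(None, word, v).ratio() for v in DOMAIN_VOCAB),
                       default=0.0)
            if best > 0.8:
                score += bonus * best
    return score

def rescore(nbest):
    """nbest: list of (hypothesis, asr_log_prob); return the best rescored hypothesis."""
    return max(nbest, key=lambda h: h[1] + domain_score(h[0]))[0]

# The acoustically top-ranked "pick up the cop" loses to the in-domain "pick up the cup".
print(rescore([("pick up the cop", -1.0), ("pick up the cup", -1.2)]))
```

In practice such a bias can also be applied inside the beam search itself (shallow fusion) rather than as a post-hoc rescoring step.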
Let’s Roll Together: Children Helping a Robot Play a Dice Game
Play is an important part of children’s lives and playing with social robots could provide powerful interventions, for example in education. However, child-robot play is often restricted by the technical limitations of the robot. Tools like Bluetooth-connected dice could circumvent some of these limitations, but technical limitations can also be resolved in a social way. In this paper, we explore children playing a dice game with a Nao robot. The Nao robot cannot pick up the dice. We compared two modes of helping: rolling for the robot and handing the dice to the robot. The results show that children prefer handing the dice to the robot. They feel the robot is more involved when it physically participates. Children who feel the robot is more involved, enjoy the game more. Finally, we found evidence that helping the robot might even be preferred over the robot not needing any help.
Sawarimōto: A Vision and Touch Sensor-based Method for Automatic or Tele-operated Android-to-human Face Touch
Although robot-to-human touch experiments have been performed, they have all used direct tele-operation with a remote controller, pre-programmed hand motions, or wearable trackers to track the human. This report introduces a project that aims to visually track and touch a person’s face with a humanoid android using a single RGB-D camera for 3D pose estimation. There are three major components: 3D pose estimation, a touch sensor for the android’s hand, and a controller that combines the pose and sensor information to direct the android’s actions. The pose estimation is working and has been released as open source. A touch sensor glove has been built, and we have begun work on creating an under-skin version. Finally, we have tested android face-touch control. These tests revealed many hurdles that will need to be overcome, but also how convincing the experience already is and the potential of this technology to elicit strong emotional responses.
Exploring Mothers’ Perspectives on Socially Assistive Robots in Peripartum Depression Screening
Peripartum Depression (PPD) affects 8-15 percent of new mothers in Sweden every year; a majority of PPD cases go undetected, and only a small percentage receive adequate care. Socially Assistive Robots (SARs) hold great potential for healthcare applications. Using SARs in healthcare tasks such as PPD screening could reduce healthcare professionals’ strain by supporting them, without replacing them, in key roles. However, studies that investigate the possibility of utilizing SARs in PPD screening are scarce. In this paper, we present an interview study with ten mothers with prior experience of PPD in relation to their pregnancy. The contributions of this work are twofold. First, we elicited participants’ opinions and attitudes towards utilizing SARs in PPD screening. Second, we explored participants’ expressed needs in PPD screening. From the participants’ statements, we discovered potential scenarios that could address future patients’ needs. These insights could be used as a foundation for the development of SARs for PPD screening and other mental healthcare applications, thus helping address PPD in women.
Chaos to Control: Human Assisted Scene Inspection
We are working towards a mixed reality-based human-robot collaboration interface that uses gaze and gesture to communicate intent in a search and rescue scenario and optimize the operation. The lack of mature algorithms and control schemes for autonomous systems still makes it difficult for them to operate safely in high-risk environments. We approach the problem through symbiosis, utilizing humans’ intuition about the environment and robots’ capability to travel through unknown environments for optimal performance in a given time.
The Effect of Gender on Perceived Anthropomorphism and Intentional Acceptance of a Storytelling Robot
Gender and anthropomorphism play a substantial role in how social robots are perceived. In child-robot interaction, children’s perception of the robot can be influenced by individual factors, such as the robot’s gender. The purpose of this study is to examine how gender congruity affects the way children perceive social storytelling robots. Furthermore, the relationships among gender congruity, anthropomorphism, and intentional acceptance were investigated. Sixty-four children interacted with a storytelling robot. The results indicated that children did not humanize the robot to a higher degree if the robot’s gender matched the children’s gender. Moreover, children who anthropomorphised the robot to a higher degree found the robot more sociable and had higher intentions of using the robot repeatedly. The findings of this study contrast with previous scientific work and indicate that more research should be conducted to find out which factors play a vital role in the humanization and gendering of robots.
Hey Robot, Can You Help Me Feel Less Lonely?: An Explorative Study to Examine the Potential of Using Social Robots to Alleviate Loneliness in Young Adults
An often-forgotten group heavily affected by loneliness is young adults. Their perceived social isolation often stems from attachment insecurities and social skill deficiencies. Since robots can function as social interaction partners that exert less social pressure and display less social complexity, they may offer a promising approach to alleviating this problematic situation. The goal would not be to replace human interaction partners, but to diminish acute loneliness and its accompanying detrimental effects, and to act as a social skills coach and practice interaction partner. To explore the potential of this approach, a preregistered quantitative online study (N = 150) incorporating a video-based interaction with a social robot and qualitative elements was conducted. First results show that young adults report less state loneliness after interacting with the robot than before. People with high technology affinity evaluate the robot’s sociability and the interaction with it more positively, while people with a generally negative attitude towards robots evaluate them less positively. Furthermore, the more trait loneliness people report experiencing, the less sociable they perceive the robot to be.
Towards Online Adaptation for Autonomous Household Assistants
Many assistive home robotics applications assume open-loop interactions: robots incorporate little feedback from people while autonomously completing tasks. This places an undue burden on people to condition their actions and environment to maximize the likelihood of their desired outcomes. We formalize assistive household rearrangement as collaborative online inverse reinforcement learning (IRL). Since online IRL can lead to sample-inefficient interactions and overfit to specific user objectives, we compare the sample efficiency and generalizability of two initial choices of action representation in a simulated household rearrangement task. We show, under certain assumptions, that representing objects by their material properties can increase sample efficiency and generalizability to out-of-domain objects.
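As a rough illustration of the online IRL setting described above (and not the paper’s actual formulation), the sketch below assumes a linear reward over hypothetical object and location features and nudges the reward weights whenever the robot observes where the human actually places an object.

```python
import numpy as np

def phi(obj_features, location_features):
    """Joint feature map, e.g. material properties of an object x descriptors of a shelf."""
    return np.outer(obj_features, location_features).ravel()

def online_update(w, obj, human_loc, robot_loc, lr=0.1):
    """Perceptron-style step: make the human's chosen placement score higher than the robot's guess."""
    return w + lr * (phi(obj, human_loc) - phi(obj, robot_loc))

# Hypothetical example: a fragile object that the human keeps putting on the padded shelf.
obj = np.array([1.0, 0.0])            # [is_fragile, is_metal]
padded_shelf = np.array([1.0, 0.0])   # [is_padded, is_open]
open_shelf = np.array([0.0, 1.0])
w = np.zeros(4)
w = online_update(w, obj, human_loc=padded_shelf, robot_loc=open_shelf)
print(w)  # weights now favor fragile items on padded shelves
```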
What Skin Is Your Robot In?
Existing research suggests that socially assistive robots (SARs) can alleviate some symptoms of depression. However, due to the comorbidities that often accompany depression and the unique experiences of each individual, a better understanding is needed of how SARs should be personalized. Through ten hour-long workshops with 10 individuals living with depression, we explored the customization of a zoomorphic SAR for adults with depression. Using the SAR Therabot as a base platform, participants designed their own unique covering for the robot and discussed desired robot behaviors and privacy concerns around data collection. Though the physical designs of the robots varied greatly, participants expressed common themes regarding their preference for a soft, touchable exterior, comfort with sharing data with their therapists, and interest in the robot producing more realistic sounds and movements, among other design features.
Development of a University Guidance and Information Robot
We are developing a social robot that will be deployed in a large, recently-built university building designed for learning and teaching. We outline the design process for this robot, which has included consultations with stakeholders including members of university services, students and other visitors to the building, as well as members of the “Reach Out” team who normally provide in-person support in the building. These consultations have resulted in a clear specification of the desired robot functionality, which will combine central helpdesk queries with local information about the building and the surrounding university campus. We outline the technical components that will be used to develop the robot system, and also describe how the success of the deployed robot will be evaluated.
Towards a Computational Approach for Proactive Robot Behaviour in Assistive Tasks
While most current work has focused on developing adaptive techniques to respond to human-initiated inputs (what behaviour to perform), very few studies have explored how to proactively initiate an interaction (when to perform a given behaviour). Selecting the proper action, together with its timing and confidence, is essential for the success of proactive behaviour, especially in collaborative and assistive contexts. In this work, we present the initial phase towards the deployment of a robotic system that will be capable of learning what assistance to provide, when, and with what confidence, for users playing a sequential memory game.
L2 Vocabulary Learning Through Lexical Inferencing Stories With a Social Robot
Vocabulary is a crucial part of second language (L2) learning. Children learn new vocabulary by forming mental lexicon relations with their existing knowledge. This is called lexical inferencing: using the available clues and knowledge to guess the meaning of an unknown word. This study explored the potential of second language vocabulary acquisition through lexical inferencing in child-robot interaction. A storytelling robot read a Dutch book to Dutch kindergartners (N = 36, aged 4-6 years) in which a few key words were translated into French (L2), with the robot either providing additional word explanation cues or not. The results showed that the children learned the key words successfully as a result of the reading session with the storytelling robot, but that there was no significant effect of the robot’s additional word explanation cues. Overall, it seems promising that lexical inferencing can act as a new and different way to teach kindergartners a second language.
Visuo-Textual Explanations of a Robot’s Navigational Choices
With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots’ actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
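A minimal sketch of the knowledge-level spatial reasoning such explanations can rest on is given below: it derives a qualitative relation (side and proximity) between the robot and a labeled object from a semantic map and verbalizes it. The frame convention, thresholds, and wording are assumptions for illustration, not the authors’ system.

```python
import math

def qualitative_relation(robot_xy, robot_heading, obj_xy, near_threshold=1.0):
    """Derive (side, proximity) of an object relative to the robot.

    Assumes a standard planar robot frame: x forward, positive angles to the left.
    """
    dx, dy = obj_xy[0] - robot_xy[0], obj_xy[1] - robot_xy[1]
    distance = math.hypot(dx, dy)
    # angle of the object relative to the robot's heading, wrapped to (-pi, pi]
    rel = (math.atan2(dy, dx) - robot_heading + math.pi) % (2 * math.pi) - math.pi
    side = "ahead of me" if abs(rel) < math.pi / 4 else ("to my left" if rel > 0 else "to my right")
    proximity = "close" if distance < near_threshold else "far"
    return side, proximity

def explain_detour(obj_label, robot_xy, robot_heading, obj_xy):
    side, proximity = qualitative_relation(robot_xy, robot_heading, obj_xy)
    return f"I adjusted my path because the {obj_label} is {proximity}, {side}."

print(explain_detour("table", (0.0, 0.0), 0.0, (0.6, 0.4)))
```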
Study of Telerobot Personalization for Children: Exploring Qualitative Coding of Artwork
Social telepresence robots (i.e., telerobots) are used by children for social and learning experiences. However, most (if not all) commercially available telerobot bodies were designed for adults in corporate or healthcare settings. Due to an adult-focused market, telerobot design has typically not considered important factors such as age and physical appearance in the design of robot bodies. To better understand how peer interactants can facilitate the identities of remote children through personalization of robot bodies, we conducted an exploratory study to evaluate collaborative robot personalization. In this study, child participants (N=28) attended an interactive lesson on robots in our society. After the lesson, participants interacted with two telerobots for personalization activities and a robot fashion show. Finally, participants completed an artwork activity on robot design. Initial findings from this study will inform our continued work on telepresence robots for virtual inclusion and improved educational experiences of remote children and their peers.
Collaboration with Highly Automated Vehicles via Voice Interaction and Augmented Reality: A VR-Based Study
In future confined industrial contexts (hubs), highly automated vehicles and human operators may work in shared spaces and collaborate on joint tasks. This will probably generate a demand for new user interfaces between humans and machines that need to be designed to facilitate high levels of safety and efficiency as well as a positive user experience (UX). The present work investigates the potential of using a combination of voice interaction (VI) and visual augmented reality (AR) to support collaboration between automated vehicles and humans manually operating a machine. A concept using VI and AR for a loading scenario in a logistic center was created and evaluated using a VR headset to provide an immersive experience. A user study with 18 forklift drivers was conducted. Our study shows that the concept generated high scores in terms of usability and UX, which indicates a promising potential to use VI and AR to facilitate interaction between human machine operators and unmanned highly automated vehicles when performing collaborative tasks. Our study also implies a need to explore the design and implementation of more complex and social VI for users in logistic centers.
Robot Theory of Mind with Reverse Psychology
Theory of mind (ToM) corresponds to the human ability to infer other people’s desires, beliefs, and intentions. Acquisition of ToM skills is crucial to obtain a natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner’s trust in the robot’s decision system via reinforcement learning. Robot ToM refers to the ability to implicitly anticipate the human collaborator’s strategy and inject the prediction into its optimal decision model for a better team performance. In our experiments, the robot learns when its human partner does not trust the robot and consequently gives recommendations in its optimal policy to ensure the effectiveness of team performance. The interesting finding is that the optimal robotic policy attempts to use reverse psychology on its human collaborator when trust is low. This finding will provide guidance for the study of a trustworthy robot decision model with a human partner in the loop.
Human Gesture Recognition with a Flow-based Model for Human Robot Interaction
Human skeleton-based gesture classification plays a dominant role in social robotics. Learning the variety of human skeleton-based gestures can help a robot continuously interact in an appropriate manner during natural human-robot interaction (HRI). In this paper, we propose a flow-based model to classify human gesture actions from skeletal data. Instead of inferring new human skeleton actions from noisy data using a retrained model, our end-to-end model can expand the diversity of labels for gesture recognition from noisy data without retraining. Initially, our model focuses on detecting five human gesture actions (i.e., come on, right up, left up, hug, and a noise/random action). The accuracy of our online human gesture recognition system matches that of the offline system, and both attain 100% accuracy on the first four actions. Our proposed method is more efficient for inferring new human gesture actions without retraining, achieving about 90% accuracy on the noise/random action. The gesture recognition system has been applied to the robot’s reaction to human gestures, which promises to facilitate natural human-robot interaction.
Understanding Differences in Human-Robot Teaming Dynamics between Deaf/Hard of Hearing and Hearing Individuals
With the development of industry 4.0, more collaborative robots are being implemented in manufacturing environments. Hence, research in human-robot interaction (HRI) and human-cobot interaction (HCI) is gaining traction. However, the design of how cobots interact with humans has typically focused on the general able-bodied population, and these interactions are sometimes ineffective for specific groups of users. This study’s goal is to identify interactive differences between hearing and deaf and hard of hearing individuals when interacting with cobots. Understanding these differences may promote inclusiveness by detecting ineffective interactions, reasoning why an interaction failed, and adapting the framework’s interaction strategy appropriately.
Transparent Value Alignment
As robots become increasingly prevalent in our communities, aligning the values motivating their behavior with human values is critical. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate values comprehensively, accurately, and in forms that are readily usable for robot planning. Misspecification can lead to undesired, inefficient, or even dangerous behavior. In the value alignment problem, humans and robots work together to optimize human objectives, which are often represented as reward functions and which the robot can infer by observing human actions. In existing alignment approaches, no explicit feedback about this inference process is provided to the human. In this paper, we introduce an exploratory framework to address this problem, which we call Transparent Value Alignment (TVA). TVA suggests that techniques from explainable AI (XAI) be explicitly applied to provide humans with information about the robot’s beliefs throughout learning, enabling efficient and effective human feedback.
Children’s Fundamental Rights in Human-Robot Interaction Research: A Systematic Review
Citizens and policy institutions increasingly express their concerns regarding the emerging challenges in the context of Artificial Intelligence (AI) and have concrete demands for the protection of human rights. In parallel, studies in the field of AI and Human-Robot Interaction (HRI) indicate the impact of social robots on children’s development. We conducted a systematic review based on UNICEF’s AI Policy Guidance to map the landscape of research on social robots and children’s rights. We used the PRISMA method and identified N=37 papers that address one of the rights, which we then annotated to indicate tendencies and areas of alignment and misalignment with the UNICEF guidance. Our findings reveal that although the field of HRI is looking at specific rights, with a focus on inclusion, some of the rights have been under-researched. Furthermore, we observed a misalignment between HRI and UNICEF regarding the terminology. With this paper, we hope to bring awareness to the field of HRI regarding children’s rights and to highlight directions for alignment among research, societal needs, and policy.
Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair unless it is extended to a Perception-Intention-Action (PIA) cycle, which gives the human’s intention a key role at the same level as the robot’s perception rather than as a sub-block of it. Although part of the human’s intention can be perceived or inferred by the other agent, this is prone to misunderstandings, so in some cases the true intention has to be communicated explicitly to fulfill the task. Here, we explore both types of intention and combine them with the robot’s perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its usage can increase trust in the robot.
Practical Development of a Robot to Assist Cognitive Reconstruction in Psychiatric Day Care
One important role of social robots is to support mental health through conversations with people. In this study, we focused on the column method, which is used to support cognitive restructuring and is one of the programs offered in psychiatric day care, to help patients think flexibly and understand their own characteristics. To develop a robot that assists psychiatric day care patients in organizing their thoughts about their worries and goals through conversation, we designed the robot’s conversation content based on the column method and implemented its autonomous conversation function. This paper reports on preliminary experiments conducted to evaluate and improve the effectiveness of this prototype system in an actual psychiatric day care setting, and on comments from experiment participants and day care staff.
Comparison of Attitudes Towards Robots of Different Population Samples in Norway
Acceptance of robots is known to be directly influenced by the perceptions and attitudes potential users have of them. In particular, negative attitudes, if not addressed, can prevent robot deployments from unlocking their full potential and can ultimately cause them to fail. We employed the popular Negative Attitude Towards Robots Scale (NARS) across four different studies to assess how different populations in Norway perceive robots. All four studies included exposure to at least one robot; however, the setup of each individual study differed from the others. We summarized the results across studies and compared the different samples. We also analyzed the effect of gender and age on attitudes towards robots as measured by the NARS. The results indicate that there are significant differences between samples and that females score significantly higher than males, thus having a less favorable opinion of robots and potentially avoiding interaction with them. We touch upon possible explanations and implications of our results and highlight the need for more research on this topic.
Who’s in Charge?: Using Personalization vs. Customization Distinction to Inform HRI Research on Adaptation to Users
This paper presents a conceptual approach regarding robot-to-user adaptation, with a focus on the psychological effects of this adaptation process during human-robot interaction (HRI). This approach emphasizes the pertinent role of users in shaping adaptation processes. First, a literature review revealed perceived personal relevance as the central determinant of successful robot-to-user adaptation. Second, we distinguish two main types of adaptations which depend on the extent to which a user is involved: Personalization and customization. We then illustrate effects of personalization vs. customization on potential end users. In particular, anthropomorphism and psychological ownership should be taken into account in prospective research. Finally, we propose to interpret personalization and customization as two opposites of a continuum to guide future empirical research about robot adaptation, before suggesting some leads for future research about the psychological effects of adaptation processes in HRI.
Evaluating Kinect, OpenPose and BlazePose for Human Body Movement Analysis on a Low Back Pain Physical Rehabilitation Dataset
Analyzing human motion is an active research area with various applications. In this work, we focus on human motion analysis in the context of physical rehabilitation using a robot coach system. Computer-aided assessment of physical rehabilitation entails evaluating patient performance in completing prescribed rehabilitation exercises, based on processing movement data captured with a sensory system, such as RGB or RGB-D cameras. As 2D and 3D human pose estimation from RGB images has made impressive progress, we compare the assessment of physical rehabilitation exercises using movement data obtained from an RGB-D camera (Microsoft Kinect) and from RGB videos (OpenPose and BlazePose algorithms). A Gaussian Mixture Model (GMM) is fitted to position (and orientation) features, with performance metrics defined based on the log-likelihood values from the GMM. The evaluation is performed on a medical database of clinical patients carrying out low back pain rehabilitation exercises, previously coached by the robot Poppy.
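The log-likelihood scoring idea can be illustrated with a short sketch using scikit-learn’s GaussianMixture; the data shapes and feature choice here are hypothetical and do not reproduce the study’s pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical data: 6 pose features per frame (the study uses Kinect/OpenPose/BlazePose features).
reference_frames = rng.normal(size=(2000, 6))          # correctly executed reference exercises
patient_frames = rng.normal(loc=0.3, size=(200, 6))    # one patient performance to assess

# Fit the GMM on the reference movements only.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(reference_frames)

# Score a new performance by its mean per-frame log-likelihood under the reference model:
# higher (less negative) values indicate movement closer to the reference.
print(f"mean log-likelihood: {gmm.score(patient_frames):.2f}")
```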
Towards a Wave Approach for Value Sensitive Design in Social Robotics
Even though a broad range of social robots are currently available on the market, social robots are not yet an integral part of companies, healthcare providers, or public institutions. This might be because the prevalent developer perspective inherently focuses on technological advancements, whereas a human-centered view remains underrepresented. In this paper, we argue that a human-centered perspective that integrates the values and beliefs of relevant technology stakeholders needs to complement existing approaches to social robot design. Therefore, we propose applying value sensitive design (VSD) to improve the process of social robot development and design. Even though VSD has become popular in recent years and represents an established approach to fostering innovative technologies, it has not yet been widely applied in the context of social robotics. Concretely, in this paper we outline the added value of using VSD for social robots and explain how to utilize this methodology to enrich research and practice in social robotics.
Comparing How Soft Robotic Tentacles and an Equivalent Traditional Robot are Described
Soft robotics technology has several technical benefits and enables inherently safer human-robot interaction (HRI). However, only a few studies have addressed how people experience soft robots and how their embodiment and designs can meaningfully support HRI. The present study explores impressions formed in physical encounters with soft robots. Ninety-four participants interacted with one of two soft robots or a similar traditional robot. Following the interaction, they were asked what they thought the robot resembled and to describe the robot’s appearance using five adjectives. The results show that different categories of items were used to describe each of the three robots’ resemblances. Furthermore, a significant difference in the sentiment of the adjectives was found: positive adjectives were predominantly used to describe the two soft robots, whereas negative adjectives predominated for the traditional robot.
Out of Sight, Out of Mind?: Investigating People’s Assumptions About Object Permanence in Self-Driving Cars
Safe and efficient interaction with autonomous road vehicles requires that human road users, including drivers, cyclists, and pedestrians, understand differences between the capabilities and limitations of self-driving vehicles and those of human drivers. In this study, we explore how people judge the ability of self-driving cars versus human drivers to keep track of out-of-sight objects by engaging online study participants in cognitive perspective taking toward a car in an animated traffic scene. The results indicate that people may expect self-driving cars to have similar object permanence capability as human drivers. This finding is important because unmet expectations on autonomous road vehicles can result in undesirable interaction outcomes, such as traffic accidents.
The Views of Hospital Laboratory Workers on Augmenting Laboratory Testing with Robots
One way to address workforce shortages and improve the safety of health workers is through robots. Here, we specifically look at whether and how robots might augment workers in the pre-analytical phase of clinical testing in hospital laboratories. We conducted eight interviews with workers using futuristic autobiographies. Through our analysis, we identified three themes. Workers envisioned robots increasing their well-being and shifting blue-collar workers’ tasks towards those of automation operators. The latter was perceived as a change towards more meaningful tasks (cognitive tasks rather than manual labour). Additionally, workers need to cope better with structural changes and temporary fluctuations in the workflow; more general-purpose robots could address this.
In the Eyes of the Beheld: Do People Think That Self-Driving Cars See What Human Drivers See?
Safe interaction with automated vehicles requires that human road users understand the differences between the capabilities and limitations of human drivers and their artificial counterparts. Here we explore how people judge what self-driving cars versus human drivers can perceive by engaging online study participants in visual perspective taking toward a car pictured in various traffic scenes. The results indicate that people do not expect self-driving cars to differ significantly from human drivers in their capability to perceive objects in the environment. This finding is important because unmet expectations can result in detrimental interaction outcomes, such as traffic accidents. The extent to which people are able to calibrate their expectations remains an open question for future research.
Lessons From a Robot Asking for Directions In-the-wild
Robots operating in human spaces need to be able to communicate with people. Understanding how humans and robots communicate about the shared space around them allows us to build robots that can interact fluidly with others. We performed a field study with a telepresence robot and a perceived autonomous robot to explore how humans give directions to robots and how the interactions differ based on the perceived identity of the robot operator. In this work we present some initial findings from our in-the-wild study including: 1) participants were more considerate to the robot in the telepresence condition, 2) participants considered the sensing and physical limitations of the robot when giving directions, and 3) participants were uncertain about the realness or identity of the robot and the robot operator.
Where is My Phone?: Towards Developing an Episodic Memory Model for Companion Robots to Track Users’ Salient Objects
Persons with dementia face the issue of deteriorating memory. As assistive robots are increasingly adopted as helpers for persons with dementia, this paper presents an additional feature for such robots. Assistive robots that assist with different tasks in users’ households can also be utilized to track salient objects so they can quickly be found if misplaced. This paper presents an episodic memory system that enables a robot to recognize salient objects and track them while moving in and out of the environment. We also demonstrate how to provide access to the robot’s memory in an easy-to-understand way using a graphical user interface (GUI). The proposed system is integrated with a Fetch mobile manipulator robot to track, store, and visualize various household objects in an environment. Results from a system evaluation study are encouraging, and the system will be investigated further in future co-design and user studies.
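As an illustration of what such an episodic memory might store (a hypothetical structure, not the authors’ implementation), the sketch below records each sighting of an object with a location, pose, and timestamp, and answers “where was it last seen?” queries.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Episode:
    label: str        # e.g. "phone"
    location: str     # e.g. "kitchen table"
    pose: tuple       # (x, y, z) in the map frame
    seen_at: datetime

class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def record(self, label, location, pose):
        """Store one sighting of an object as an episode."""
        self.episodes.append(Episode(label, location, pose, datetime.now()))

    def last_seen(self, label):
        """Return the most recent episode for this object, or None if never seen."""
        matches = [e for e in self.episodes if e.label == label]
        return max(matches, key=lambda e: e.seen_at) if matches else None

memory = EpisodicMemory()
memory.record("phone", "kitchen table", (1.2, 0.4, 0.8))
episode = memory.last_seen("phone")
if episode:
    print(f"The phone was last seen on the {episode.location} at {episode.seen_at:%H:%M}.")
```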
Social Robots to Encourage Play for Children with Disabilities: Learning Perceived Requirements and Barriers from Family Units
We are currently conducting a study with children and their family units to learn the requirements for, concerns about, barriers to, and opinions on using social robots to facilitate play in children with physical disabilities. The motivation for this work is that children with disabilities often have fewer opportunities and lower playfulness, impacting their cognitive and social development. Simultaneously, social robots provide opportunities for supporting these children to engage in play. To work toward developing these robots, our goal in this work is to improve our understanding of the fundamental needs of the child and their family unit to allow us to be better positioned to develop such a social robot.
Towards Improved Replicability of Human Studies in Human-Robot Interaction: Recommendations for Formalized Reporting
In this paper, we present a proposed format for reporting human studies in Human-Robot Interaction (HRI). We call for details which are often overlooked or left out of research papers due to space constraints, and propose a standardized format to contain those details in paper appendices. Providing a formalized study reporting method will promote an increase in replicability and reproducibility of HRI studies and encourage meta-analysis and review, ultimately increasing the generalizability and validity of HRI research. Our draft is the first step towards these goals, and we welcome feedback from the HRI community on the included topics.
Rube-Goldberg Machines, Transparent Technology, and the Morally Competent Robot
Social robots of the future will need to perceive, reason about, and respond appropriately to ethically sensitive situations. At the same time, policymakers and researchers alike are advocating for increased transparency and explainability in robotics: design principles that help users build accurate mental models and calibrate trust. In this short paper, we consider how Rube Goldberg machines might offer a strong analogy on which to build transparent user interfaces to the intricate but knowable inner workings of a cognitive architecture’s moral reasoning. We present a discussion of these related concepts, a rationale for the suitability of this analogy, and early designs for an initial prototype visualization.
TIP: A Trust Inference and Propagation Model in Multi-Human Multi-Robot Teams
Trust has been identified as a central factor for effective human-robot teaming. Existing literature on trust modeling predominantly focuses on dyadic human-autonomy teams where one human agent interacts with one robot. There is little, if any, research on trust modeling in teams consisting of multiple human agents and multiple robotic agents. To fill this research gap, we present the trust inference and propagation (TIP) model for trust modeling in multi-human multi-robot teams. We assert that in a multi-human multi-robot team, there exist two types of experiences that any human agent has with any robot: direct and indirect experiences. The TIP model presents a novel mathematical framework that explicitly accounts for both types of experiences. To evaluate the model, we conducted a human-subject experiment with 15 pairs of participants (N=30). Each pair performed a search and detection task with two drones. Results show that our TIP model successfully captured the underlying trust dynamics and significantly outperformed a baseline model. To the best of our knowledge, the TIP model is the first mathematical framework for computational trust modeling in multi-human multi-robot teams.
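The TIP model’s actual equations are defined in the paper; the sketch below only conveys the underlying two-experience idea with a simple Beta-distribution trust estimate per human-robot pair, where a teammate’s reported trust is propagated at a discount.

```python
class PairwiseTrust:
    """Beta-distribution trust estimate of one human agent in one robot (illustrative only)."""

    def __init__(self, alpha=1.0, beta=1.0, indirect_discount=0.5):
        self.alpha, self.beta = alpha, beta   # pseudo-counts of positive / negative evidence
        self.discount = indirect_discount     # indirect reports count for less than direct ones

    def direct(self, success):
        """Update from the agent's own experience with the robot."""
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def indirect(self, reported_trust):
        """Propagate a teammate's reported trust in the same robot, down-weighted."""
        self.alpha += self.discount * reported_trust
        self.beta += self.discount * (1.0 - reported_trust)

    @property
    def estimate(self):
        return self.alpha / (self.alpha + self.beta)

t = PairwiseTrust()
t.direct(True); t.direct(True); t.direct(False)   # two successes, one failure observed directly
t.indirect(0.9)                                   # a teammate reports high trust in the robot
print(round(t.estimate, 2))
```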
Comparing a Graphical User Interface, Hand Gestures and Controller in Virtual Reality for Robot Teleoperation
Robot teleoperation is being explored in a number of application areas, where combining human adaptive intelligence and high precision of robots can provide access to dangerous or inaccessible places, or augment human dexterity. Using virtual reality (VR) is one way to enable robot teleoperation, where additional information can be augmented in the display to make the remote control easier. In this paper, we present a robot teleoperation system, developed and deployed on a JAKA Minicobo robotic arm, to compare the user experience and performance for three different control methods. These include a VR controller, VR gestures, and a traditional graphical user interface (GUI). Each study participant was asked to conduct the experiment twice using all three methods, during which the time, the total spatial distance of the robot end-effector movement, and the frequency of errors in a teleoperation task were recorded for quantitative analysis. All participants were also asked to complete a questionnaire based on the NASA task load index. The results show that overall the VR gestures method enables users to complete the task faster than the VR controller, and using the traditional GUI is generally slower. While the quantitative results do not show statistically significant differences, both of the VR methods place greater perceived physical and mental demands on the users, in comparison to the GUI, although when asked which method they preferred, only three of the sixteen participants preferred the VR gestures method. Given that a number of applications use VR, this study indicates that if the task is not time critical, then a traditional GUI might be more suitable for reducing perceived mental and physical load, and controller-free VR interaction might not always be desirable.
A Persuasive Hand Sanitizer Robot in the Wild: The Effect of Persuasive Speech on the Use of a Hand Sanitizer Robot
In this paper, we report on field tests of a hand sanitizer robot, which tracks people’s movements using gaze and which uses several different persuasive utterances when people are approaching. Our results show that adding speech made people significantly more aware of the opportunity of using hand sanitizer, but that people do not use the hand sanitizer more often than with eye gaze only. Furthermore, the different utterances themselves did not lead to significant differences in attention or use, in spite of their effectiveness in other situations.
More Than a Number: A Multi-dimensional Framework For Automatically Assessing Human Teleoperation Skill
We present a framework for the formal, systematic evaluation of human teleoperator skill, aiming to quantify how skillful a particular operator is at a well-defined task. Our proposed framework has two parts. First, the tasks used to evaluate skill levels are decomposed into a series of domain-specific primitives, each with a formal specification in signal temporal logic. Second, skill levels are automatically evaluated along multiple dimensions rather than as a single number; these dimensions include robustness, efficiency, resilience, and readiness for each primitive task. We provide an initial evaluation for the task of taking off, hovering, and landing in a drone simulator. This preliminary evaluation shows the value of a multi-dimensional evaluation of human operator performance.
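To illustrate the flavor of such multi-dimensional scoring (with a hypothetical specification, not the paper’s), the sketch below evaluates a “hover” primitive along two dimensions: robustness, using the standard signal temporal logic margin for “always stay within an altitude band”, and efficiency relative to a nominal duration.

```python
def hover_robustness(altitudes, target=1.5, tol=0.2):
    """STL robustness of G(|altitude - target| <= tol): the worst-case margin over the trace.

    Positive means the hover band was never violated; the magnitude says by how much.
    """
    return min(tol - abs(a - target) for a in altitudes)

def hover_efficiency(timestamps, nominal_duration=10.0):
    """Efficiency relative to a nominal completion time (>1.0 means faster than nominal)."""
    return nominal_duration / (timestamps[-1] - timestamps[0])

# Hypothetical hover trace from a drone simulator.
altitudes = [1.45, 1.52, 1.61, 1.48, 1.55]
timestamps = [0.0, 2.5, 5.0, 7.5, 9.0]
print(hover_robustness(altitudes), hover_efficiency(timestamps))
```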
Examining the State of Robot Identity
Human-robot interaction has the power to influence human norms and culture. While there is potential benefit in using this power to create positive social change, so too is there risk in merely reinforcing existing social biases which uphold systems of oppression. As the most salient forms of oppression arise along lines of social identity, it stands to reason that we must take utmost care in leveraging human-like identity cues when designing social robots and other agentic embodiments. Yet, the understanding of how to do this is not well-developed. Towards forming an ethics of robot identity, we begin by surveying the state of thought on the topic in human-robot interaction. We do this by conducting a structured review of HRI conference proceedings analyzed from a feminist, intersectional perspective. Our initial findings suggest that existing literature has not fully engaged with intersectionality, embodies an alarming pathologization of neurodivergence, and almost wholly neglects the examination of race.
M-OAT Shared Meta-Model Framework for Effective Collaborative Human-Autonomy Teaming
Integrating humans and autonomous machines in teams for the successful completion of complex, multi-objective tasks in dynamic or unknown environments can help improve the safety and efficiency of team members. For effective cooperation, human-machine teams require understanding team members’ unique potentials, interdependent decision-making, and trust among all team members. To develop a framework that supports the human-machine teaming required of complex, multi-objective tasks, shared mental models, cognitive representations, and multi-directional trust calibration need to be investigated. Providing a shared mental model, cognitive load understanding, and situational awareness to agents allows human-machine teams to adapt to shortcomings and unexpected environmental threats, independently or conjointly make time-sensitive decisions, and improve safety of team members and efficiency of task performance. The goal of our research is to create a multi-objective decision-support framework that encourages the incorporation of trusted autonomous systems for effective cooperation in ad-hoc heterogeneous collaborative teams.
“Can You Guess My Moves?”: Playing Charades with a Humanoid Robot Employing Mutual Learning with Emotional Intelligence
Social play is essential in human interactions, increasing social bonding, mitigating stress, and relieving anxiety. With advancements in robotics, social robots can take on this role to assist in human-robot interaction scenarios for clinical and healthcare purposes. However, robotic intelligence still needs further development to match the wide spectrum of social behaviors and contexts in human interactions. In this paper, we present our robotic intelligence framework with a mutual learning paradigm in which we apply deep learning-based emotion recognition and behavior perception, through which the robot learns human movements and contexts via the interactive game of charades. Furthermore, we designed a gesture-based social game to provide a more empathetic and engaging social robot for the user. We also created a custom behavior database containing contextual behaviors for the proposed social games. A pilot study was conducted with participants aged 12 to 19 for a preliminary evaluation.
The Influence of a Robot Recommender System on Impulse Buying Tendency
The present study examines the influences of a robot recommender system on human impulse buying tendency in online e-commerce contexts. An empirical user study was conducted, where different marketing strategies (limited quantity vs. discount rate) were applied to the products and intimate designs were utilized for the robotic agent. An electroencephalogram (EEG) headset was used to capture users’ brain activities, which allowed us to investigate participants’ real-time cognitive perceptions toward different experimental conditions (i.e., marketing plans and robotic agents). Our preliminary results reveal that marketing strategies and robot recommender applications can trigger impulsive buying behavior and contribute to different cognitive activities.
Robot-Assisted First Language Learning in a New Latin Alphabet: The Reinforcement Learning-based QWriter system
This work addresses the recently initiated Cyrillic-to-Latin alphabet shift in Kazakhstan, which may bring challenges for early literacy development and acquisition; both the public and the scientific community agree on possible resistance to acquiring and using a new alphabet. To support the acquisition of the new Kazakh Latin alphabet and its handwriting, this study proposes a reinforcement learning (RL) based system named QWriter. It comprises a humanoid NAO robot, a tablet with a stylus, and an RL agent that learns from a child’s mistakes and progress to maximize alphabet learning in the shortest amount of time by altering the order of practice words in response to the child’s mistakes. We conducted a five-session experiment using a between-subject design with 69 Kazakh children aged 7 to 10 and compared their learning performance with that of a human tutor to assess the effectiveness of the QWriter system. Overall results show no significant differences in learning gains between the two conditions. Our study foregrounds the promising potential of the RL-based social robot in teaching foundational letter acquisition and writing over time.
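A toy sketch of the adaptation principle (not the QWriter agent itself) is shown below: practice words whose letters have accumulated more handwriting mistakes are sampled more often, so the ordering of practice words follows the child’s errors. The letters, words, and weighting scheme are hypothetical.

```python
import random

def choose_next_word(words, mistake_counts, temperature=1.0):
    """Sample the next practice word, weighting by accumulated mistakes on its letters."""
    weights = []
    for word in words:
        errors = sum(mistake_counts.get(ch, 0) for ch in word)
        weights.append(1.0 + errors / temperature)
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical per-letter mistake counts and candidate practice words.
mistakes = {"ä": 3, "q": 1, "s": 0}
print(choose_next_word(["qala", "än", "at"], mistakes))
```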
Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas
Drones are continually entering our daily lives and are used in a number of different applications. This creates a natural demand for better ways for humans and drones to interact. One possible application that would benefit from improved interaction is inspecting for smoking in prohibited areas. We propose our own drone flight gesture that we believe delivers the message “not to smoke” better than the ready-made built-in gesture. To this end, we conducted a within-subject experiment involving 19 participants in which we evaluated the gestures on a drone operated through a Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture was better at conveying the message than the built-in gesture.
Design of Child-robot Interactions for Comfort and Distraction from Post-operative Pain and Distress
There are numerous strategies for reducing the stress and anxiety associated with the pain that children experience before and after surgery. A potential communication barrier between hospital staff and the child may result in inadequate pain management. Social robots may reduce the gap between the support that personnel can provide and children’s emotional needs. This study qualitatively evaluates the interactions between children and their parents who interact with the social robot MiRo-E. In the envisioned interaction, the robot acts like a pet and shows different behaviours based on the estimated pain level of the child; in the current study, however, only the quality of the robot’s interaction behaviours was tested with healthy children, and no pain was measured. Two usability tests were conducted, each evaluating a different robot interaction; in both, children and their parents evaluated the designed interactions. Results indicate that children initially respond differently to the robot: they are either held back from interacting immediately, or they are not afraid of the robot at all and start touching and interacting with it right away. Although the intended behaviours could be more elaborate and personalized, both children and their parents appeared to like the different emotions shown by the robot and how it responded to their touch. The parents also offered ideas to enhance the interaction between a child and a robot in a medical context, such as including more sounds, making some behaviours more distinct, and allowing kids to customize the robot’s look.
Montessori-based Design of Long-term Child-Robot Interaction for Alphabet Learning
The transition of the Kazakh alphabet from Cyrillic to Latin, set to be fully implemented by 2031, poses challenges to early and continuous literacy development and acquisition of the new script. This creates a need to design innovative learning solutions that boost children’s motivation to acquire the new Kazakh Latin alphabet. The Montessori method has proven effective at engaging young children in self-directed and developmentally appropriate literacy acquisition. Its core ideas have been carefully adopted to establish design principles for a robotic system that adheres to the principles of Montessori pedagogy. This paper proposes a robotic system named Moveable Älipbi and details its interaction design life cycle, from understanding users and establishing requirements to designing and implementing robot behaviours and validating them with a Montessori practitioner. The process was iterative, involving several cycles of piloting the system with children of the targeted age groups and redesigning the learning activities. To evaluate the proposed system and to find the most cognitively rewarding way of learning the alphabet, we conducted a mixed-subject design experiment with 60 Kazakh children aged 8-10 from a local public school, comparing the proposed Moveable Älipbi robotic system with a baseline Montessori human teacher. The results demonstrate the potential of the robot as a Montessori teacher in providing foundational letter acquisition over multiple sessions. Implications for improving the interaction design and activities are discussed based on the findings.
Language Learning using Caption Generation within Reciprocal Multi-Party Child-Tutor-Tutee Interaction
Reciprocal Peer Tutoring (RPT) is a learning paradigm characterized by collaborative interaction between learners who alternate tutor and tutee roles. In recent years, robot-assisted language learning (RALL) has gained traction through its wide application to learning language skills, such as speaking or writing, with social robots. Our work explores the effectiveness of RPT for learning Kazakh as a second language with the help of two robots acting as either a tutee or a tutor. To this end, we piloted a within-subject experiment with 21 children aged 8 and 9 from a primary school with Kazakh and Russian languages of instruction. The results show that the tutor robot was more effective in terms of learning gains, while the tutee robot brought positive emotional experiences.
‘Ikigai’ Robots: Designing for Direct Benefits to Older Adults and Indirect Benefits to Caregivers
As a step towards designing a home robot that supports older adults’ ikigai (meaning in life), we interviewed the family members who provide care for them. After conducting interviews with ten family caregivers in Japan, we found that older adults’ physical health is a major concern for both caregivers and older adults. Concerns over loneliness, however, were not prioritized by caregivers, even though they perceived older adults’ worries around this issue. Caregivers also saw a number of ways a social robot could be designed to address loneliness, as well as to go beyond it in promoting more fulfilling lives among older adults. Finally, we conclude that an ikigai robot may be designed to support both the ikigai of older adults and, indirectly, that of their family caregivers.
Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario
This work presents an empirical study of robot deception and its effects on behavior change and trust in a high-stakes, time-sensitive human-robot interaction scenario. Specifically, we explore the effectiveness of different apologies in repairing trust in an assisted driving task after participants realize they have been lied to by a robotic assistant. Our results show that participants are significantly more likely to change their speeding behavior when driving advice is framed as coming from a robotic assistant. Our results also suggest that an apology that does not acknowledge intentional deception is best at mitigating negative effects on trust. These results add much-needed knowledge to the understudied area of robot deception and could inform designers and policy makers of future practices when considering deploying robots that may learn to deceive.
Robot-Assisted Word-to-Picture Matching Game for Language Learning
Teaching methods are developing rapidly, offering different ways of teaching languages; among the more recent are robot-assisted language learning and the gamification of learning. There is a large body of research on teaching English as a second or foreign language to children with the help of technology, and most studies integrating robots into the learning process show that robots positively affect learning and increase children’s learning gains. This study introduces a robot-assisted word-to-picture matching game for English learning in two different modes. We propose the robot-assisted, game-based English learning activity and compare its two versions: a standard version and a priority-selection version. To this end, a within-subject experiment was conducted with 17 children aged 7 to 8 years, comparing the children’s knowledge after each version of the game. The results showed a significant difference in learning gains between the question-priority algorithm and the standard game.
Improving Health and Safety Promotion with a Robotic Tool: A Case Study for Face Mask Detection
Social robots have been shown to effectively promote healthy behaviour in humans, and in the context of the pandemic they have been used to encourage the use of face masks and other bio-safety measures. However, human perception of robots in these scenarios has yet to be assessed. This study evaluates the effectiveness of a social robot, specifically the NAO robot, in promoting face-mask usage in public spaces through a hybrid experiment combining an in-person study with an online survey. The results show that the robot detected correct face-mask usage with 95% accuracy, and 87.5% of participants reported a positive experience interacting with it. Statistical results also suggest that users who experience the human-robot interaction scenario through a pre-recorded video may perceive the robot’s trustworthiness, safety, and intelligence, among other attributes, differently. These findings suggest that social robots can be a valuable tool for promoting health and safety measures, not only during the pandemic but in other collaborative environments as well.
Technical Transparency for Robot Navigation Through AR Visualizations
Because robots can facilitate everyday life by assisting us in basic tasks, they are increasingly integrated into our lives. However, for a robot to establish itself, users must accept and trust its actions; as the saying goes, you don’t trust things you don’t understand. The base hypothesis of this paper is therefore that providing technical transparency can increase users’ understanding of the robot architecture and its behaviors, as well as trust in and acceptance of the robot. In this work, we aim to improve understanding, trust, and acceptance of a robot by displaying transparent visualizations of its intention and perception in augmented reality. We conducted a user study in which robot navigation with certain interruptions was demonstrated to two groups. The first group saw no AR visualizations during the first demonstration and saw them in the second demonstration; the second group saw the visualizations throughout a single demonstration. Results showed that understanding increased with AR visualizations when prior knowledge had been gained in previous demonstrations.
CHIBO: A Pneumatic Robot that Provides Real-time Intervention for Parent-child Feeding Behavior
Parents may inadvertently promote excess weight gain in childhood by using inappropriate child-feeding behaviors. This study presents the design of a home robot for a parent-child feeding scenario. Designed for parents who have difficulty quantifying the amount of food their children eat, it helps them quantify calorie intake through pneumatic feedback and provides real-time feedback and intervention by recognizing feeding behaviors. It aims to create a more intuitive link between scientific quantification standards and actual eating scenarios via a pneumatic device used as a form of peripheral interaction. Based on our current prototype, we have sought to explore the effectiveness of peripheral interaction with physical robotic interventions on user behavior.
SESSION: HRI Pioneers
Enabling Human-like Language-Capable Robots Through Working Memory Modeling
Working Memory (WM) is a central component of cognition. It has a direct impact not only on core cognitive processes, such as learning, comprehension, and reasoning, but also on language-related processes, such as natural language understanding and referring expression generation. Thus, for robots to achieve human-like natural language capabilities, we argue that their cognitive models should include an accurate WM representation that plays a similarly central role. Our research investigates how different WM models from cognitive psychology affect robots’ natural language capabilities. Specifically, we explore the limited-capacity nature of WM and how different information-forgetting strategies, namely decay and interference, impact the human-likeness of utterances formulated by robots.
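To make the decay and interference strategies mentioned above concrete, the following minimal Python sketch shows how a capacity-limited WM buffer might drop items under each strategy. The capacity, decay rate, and item representation are illustrative assumptions, not the author’s implementation.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class WMItem:
    content: str
    encoded_at: float = field(default_factory=time.time)

    def activation(self, decay_rate: float = 0.5) -> float:
        # Exponential decay of activation since encoding (assumed form).
        elapsed = time.time() - self.encoded_at
        return math.exp(-decay_rate * elapsed)


class WorkingMemory:
    def __init__(self, capacity: int = 4, strategy: str = "interference"):
        self.capacity = capacity      # limited-capacity assumption
        self.strategy = strategy      # "decay" or "interference"
        self.items: list[WMItem] = []

    def encode(self, content: str) -> None:
        self.items.append(WMItem(content))
        if len(self.items) > self.capacity:
            self._forget()

    def _forget(self) -> None:
        if self.strategy == "interference":
            # Interference: new items displace the oldest entry.
            self.items.pop(0)
        else:
            # Decay: drop the item with the lowest current activation.
            weakest = min(self.items, key=lambda i: i.activation())
            self.items.remove(weakest)

    def recall(self, threshold: float = 0.1) -> list[str]:
        # Only items whose activation is still above threshold are retrievable.
        return [i.content for i in self.items if i.activation() >= threshold]


if __name__ == "__main__":
    wm = WorkingMemory(capacity=3)
    for referent in ["red mug", "blue book", "green pen", "yellow cup"]:
        wm.encode(referent)
    print(wm.recall())  # the oldest referent has been displaced
```

In a referring-expression setting, such a buffer would constrain which earlier referents the robot can still mention without re-describing them.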
Adaptive Robotic Mental Well-being Coaches
Mental well-being issues such as anxiety and depression are increasing, and as provisions by healthcare systems are insufficient to meet people’s needs, new technology is being used to improve mental well-being. In this doctoral thesis, we examine the iterative and user-centred design, implementation, and evaluation of a robotic mental well-being coach, i.e., a robot that could help people maintain and focus on their well-being. In this article, we discuss the studies we have already conducted, which have examined coach and user preferences, the design of a robotic well-being coach, how to computationally implement such a coach, and how such a robot is experienced in the short term (laboratory setting) and long term (workplace setting). We then discuss future work, which includes data analysis of a longitudinal study in which a robotic coach interacted with a group, the implementation and testing of a longitudinal adaptation model for the robotic coach, and a survey of the state of the art in affective robotics for well-being.
Supporting End-Users in Programming Collaborative Robots
End-user robot programming methods have come a long way in enabling users who do not have conventional robotics training to customize robot behaviors. However, such methods can still be difficult to adopt for untrained users. In this abstract, I describe my work towards designing tailored support to reduce barriers preventing end-users from programming collaborative robots.
Socially Assistive Robotics for Anxiety Reduction
Prior work in HRI for domains such as exercise, rehabilitation, and autism has shown how socially assistive robots (SARs) can successfully support behavioral practices. Applying these insights to mental health is an opportunity to support a large and growing population that is actively struggling. My research investigates utilizing SARs for supporting the therapeutic behavior of deep breathing for anxiety reduction. My prior work to date has focused on the design affordances required for an anxious population through the development of a new, haptically-based robot, Ommie. Future work explores how SARs for anxiety-reducing behaviors can maximize long-term, in-the-wild use through haptic interactions, perception technologies, and personalized motivational mechanisms.
Aligning Robot Behaviors with Human Intents by Exposing Learned Behaviors and Resolving Misspecifications
Human-robot interaction is limited in large part by the challenge of writing correct specifications for robots. The research community wants alignment between humans’ goals and robot behaviors, but this alignment is very hard to achieve. My research tackles this problem. I view alignment as the consequence of iterative design and ample testing, and I design methods in service of these processes. I first study how humans currently write reward functions, and I profile some of the typical errors they make when doing so. I then study how humans can inspect the behaviors robots learn from any given specification. A typical approach to this mandates unstructured or hand-designed test cases; I instead introduce a Bayesian inference method for finding behavior examples which cover information-rich test cases. Alongside finding these behavior examples, I study how these examples should be presented to the human through applying cognitive theories of human concept learning. For the remainder of my thesis, I am pursuing two open questions. My first question concerns how these components can be combined such that humans are able to iteratively design better behavioral specifications. My second question concerns how robots can better interpret humans’ erroneous specifications and attempt to infer their true intent, in spite of the errors.
Investigating the Potential of Life-like Haptic Cues for Socially Assistive Care Robots
Physical touch (e.g., hugging, holding hands, petting an animal) plays a fundamental role in the provision of socio-emotional support. Touch-based interactions should thus be considered for socially assistive robots designed to provide similar support. However, the haptic properties of currently existing robots are limited. While research has shown the possibility of integrating life-like haptic cues (e.g., thermal, vibrotactile, pressure cues) into robotic interfaces, current understanding of user experiences with life-like haptic cues delivered by a robot, and of their potential to regulate user affect, is insufficient. The current and proposed works investigate whether integrating life-like haptic cues into Human-Robot Interaction (HRI) can enhance the socio-emotional support provided by socially assistive robots and improve relationships with, and perceptions of, such robots. The findings of this work will provide insights into user experiences of touching a socially assistive robot and their perceptions of life-like haptic cues. The contributions will provide concrete design suggestions on how haptic cues can be integrated into the interfaces of socially assistive robots to enhance users’ well-being during stress-inducing situations.
Measuring Trust in Children’s Speech: Towards Responsible Robot-Supported Information Search
Children use conversational agents, such as Alexa or Siri, to search for information, but they also tend to trust these agents, which might influence their information assessment. It is challenging for children to assess the veracity of information retrieved from the internet and social media, possibly more so when they trust a voice agent excessively. In this project, I propose to design child-robot interactions that empower children to maintain a critical attitude by implementing real-time trust monitoring and robot behavioural interventions in cases of high trust. First, we need to be able to measure children’s level of trust in the robot in real time during the interaction, to reason about when excessive trust may be occurring. Second, we need to study which behavioural interventions by the robot foster critical attitudes toward the provided information. By adapting the robot’s behavior when excessive trust occurs, I aim to contribute to more responsible interactions between children and robots.
Designing Robotic Camera Systems to Enable Synchronous Remote Collaboration
Collaborative robots have the potential to be intelligent, embodied agents that can contribute to remote human collaboration. We explore this paradigm through the design of robot-mounted camera systems for remote assistance. In this extended abstract, we discuss our iterative design process to develop interaction techniques that leverage shared control-based methods to distribute camera control between the agentic robot and human collaborators.
Robot Sound-In-Interaction
Sound is an important modality in human interaction that robot design is only starting to tap into. Drawing on insights about how human sounds support the coordination of bodily activities, this work focuses on how robots can communicate through sound in concrete interactions in the wild. My work contributes a focus on how users make sense of sound in everyday interaction, promotes reconsideration of what HRI is designing for, and stimulates the development of new HRI design methods.
Social Network Engagement during Human-robot Interaction: An Extended Abstract on a Fundamental Novel Field within Human-robot Interaction
The improvement of natural and intuitive interaction is currently one of the major challenges for studies of human-robot interaction. Although the sector has advanced technologically to astonishing levels, the integration of these robots into our social environment still frequently feels clumsy. People are often reluctant to communicate with robots on a regular basis because they feel uneasy or distrustful of them. As a result, conversing with an agent of a novel embodiment raises significant questions about human social interaction in general: how do people connect, and how does interaction give rise to the social relationship that is viewed as the ultimate aim? My goal is to systematically investigate the neural mechanisms of social behaviour during human-robot interaction. Specifically, I aim to investigate the social domain-general networks in the brain that form the underpinnings of communication. While cutting-edge research techniques, such as simultaneous scanning and real-time measurements, have been used in neuroscience to study human-human interaction, the extension of neuroscience towards human-robot interaction is surprisingly novel. I therefore want to contribute to the human-robot interaction field with methods from neuroscience, using an interdisciplinary, naturalistic, and replicable approach.
Balancing Flexibility and Precision in Robot-assisted Feeding
Assistive robots can empower those with mobility impairments, but they must manage the trade-off between safety, efficacy, and comfort. For some task dimensions, there is flexibility: humans can shake robot hands anywhere within reach. For others, precision is key: too hard of a handshake can lead to injury. This distinction is critical for particularly intimate tasks like feeding. A robot feeding system needs to explore when there is flexibility, optimizing for success and user preferences, while maintaining the precision necessary to avoid destroying food or harming the user. Here, we propose a hierarchical approach. We design strategies and heuristics based on user feedback to abstract away the precise dimensions of bite acquisition and transfer. We can deploy learning algorithms relatively safely in the resulting curated action subspace. Within the next year, we expect this work to culminate in a week-long in-home deployment with a user and co-designer.
Collaborative Planning and Negotiation in Human-Robot Teams
Our work aims to apply iterative communication techniques to improve functionality of human-robot teams working in space and other high-risk environments. Forms of iterative communication include progressive incorporation of human preference and otherwise latent task specifications. Our prior work found that humans would choose not to comply with robot-provided instructions and then proceed to self-justify their choices despite the risks of physical harm and blatant disregard for rules. Results clearly showed that humans working near robots are willing to sacrifice safety for efficiency. Current work aims to improve communication by iteratively incorporating human preference into optimized path planning for human-robot teams operating over large areas. Future work will explore the extent to which negotiation can be used as a mechanism for improving task planning and joint task execution for humans and robots.
Using Justifications to Mitigate Loss in Human Trust when Robots Perform Norm-Violating and Deceptive Behaviors
Robots are increasingly being introduced into environments that require intimate and sensitive interactions with humans, ranging from robot caretakers in senior living facilities to medical assistants in hospitals and team members embedded in military operations. For robots to be trusted and accepted in interactions with humans, they must be aware of, follow, and prioritize the norms of the communities in which they will operate. This line of research examines human perceptions of justifications presented by robots that exhibit norm-violating and deceptive behaviors. Across two studies, we examined human trust and moral blame ratings of robots violating social norms and the effects of justifications in mitigating initial perceptions. We aim to expand our research to emphasize deceptive robotic acts, providing quantifiable evidence for the “deception objection” debate in the social robotics literature.
Investigating Learning from Demonstration in Imperfect and Real World Scenarios
As the world’s population is aging and there are growing shortages of caregivers, research into assistive robots is increasingly important. Due to differing needs and preferences, which may change over time, end-users will need to be able to communicate their preferences to a robot. Learning from Demonstration (LfD) is one method that enables non-expert users to program robots. While a powerful tool, prior research in LfD has made assumptions that break down in real-world scenarios. In this work, we investigate how to learn from suboptimal and heterogeneous demonstrators, how users react to failure with LfD, and the feasibility of LfD with a target population of older adults.
Resolving References in Natural Language Explanation Requests about Robot Behavior in HRI
In HRI, users have been shown to request explanations when their interpretation of autonomous robot behavior fails. These requests can refer to the behavior either through open questions or through attributes of the behavior. The presented work aims to resolve these references by developing an episodic memory, backed by a graph database, that stores and queries representations of the robot’s internal execution. Reference resolution proceeds by detecting temporal adverb and verb constraints in the syntactic dependency tree of the utterance, executing a query on the episodic memory, and scoring the resulting entries to find the referred behavior. The explanation generation process of the original model is adapted to the new approach and can include additional information such as detected constraints, a failed execution state, and the distinction between running and completed executions.
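The general idea of extracting verb and temporal constraints from a parsed utterance and scoring stored episodes against them can be sketched as follows. This is not the author’s system: an off-the-shelf dependency parser (spaCy) and a toy in-memory episode list stand in for the real parser and graph database, and the hint word list and scoring function are invented for illustration.

```python
import spacy  # requires: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

# Hypothetical set of temporal markers that narrow down which past episode is meant.
TEMPORAL_HINTS = {"before", "after", "earlier", "previously", "then", "recently", "just"}

def extract_constraints(utterance: str) -> dict:
    """Collect verb lemmas and temporal markers from the dependency parse."""
    doc = nlp(utterance)
    constraints = {"verbs": [], "temporal": []}
    for token in doc:
        if token.pos_ == "VERB":
            constraints["verbs"].append(token.lemma_)
        if token.lemma_.lower() in TEMPORAL_HINTS:
            constraints["temporal"].append(token.lemma_.lower())
    return constraints

def score_episode(episode: dict, constraints: dict) -> int:
    # Toy scoring: count matching action verbs; a real system would also use
    # the temporal constraints to filter or order candidate episodes.
    return sum(v in episode["actions"] for v in constraints["verbs"])

if __name__ == "__main__":
    episodes = [
        {"id": 1, "actions": ["grasp", "place"], "state": "completed"},
        {"id": 2, "actions": ["navigate", "stop"], "state": "failed"},
    ]
    c = extract_constraints("Why did you stop before reaching the table?")
    best = max(episodes, key=lambda e: score_episode(e, c))
    print(c, "->", best["id"])  # resolves to the failed navigation episode
```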
Longitudinal Proactive Robot Assistance
Studies show that users value proactivity in robotic assistance, but mobile manipulators today require exact task descriptions and can proactively assist only with the user’s ongoing activity. Through this research project, I formulate and advance the problem of longitudinal proactive assistance, which seeks to carry out assistive actions in advance and without being asked, by understanding the human user’s needs and preferences in the context of the environment over long time horizons. My research aims to enable autonomous robots to use multimodal data from passive observations to understand the routine lives of human users, and to interact with the user to communicate the robot’s reasoning and actively seek feedback.
Development of a Wearable Robot that Moves on the Arm to Support the Daily Life of the User
Wearable robots can maintain physical contact with the user and interact with them to assist in daily life. However, since most wearable robots operate at a single point on the user’s body, the user must be constantly aware of their presence. This imposes a physical and mental burden on the user and prevents them from wearing the robot daily. One solution is for the robot to move around the user’s body: when the user is not interacting with the robot, it can move to an unobtrusive position and attract less attention. This research aims to develop a wearable robot that reduces this burden, through an arm-movement mechanism for wearable robots and a self-localization method for autonomous movement, and that supports the user’s daily life through supportive interactions.
Perceived Appropriateness: A Novel View for Remediating Perceived Inappropriate Robot Navigation Behaviors
Robots navigating in social environments inevitably exhibit behaviors that people perceive as inappropriate, and they will repeat these behaviors unless they become aware of them, hindering their social acceptance. This highlights the importance of robots detecting and adapting to the perceived appropriateness of their behavior, in line with what we found in a systematic literature review. We have therefore conducted experiments (both outdoor and indoor) to understand the perceived appropriateness of robot social navigation behavior, based on which we collected a dataset and developed a machine learning model for detecting such perceived appropriateness. To investigate the usefulness of this information and to inspire the design of adaptive robot navigation behavior, we will further conduct a WoZ study to understand how trained human operators adapt robot behavior to people’s feedback. In all, this work will enable robots to better remediate their inappropriate behavior, thus improving their social acceptance.
Social Robots to Encourage Play for Children with Physical Disabilities: Learning from Family Units
Children with disabilities have fewer opportunities and lower motivation for play, impacting their cognitive and social development. Leveraging co-design and participatory design, we plan to conduct a study with children with physical disabilities and their families to learn the requirements, concerns, barriers, and opinions about using social robots to facilitate play in children with physical disabilities. Combining the insights gathered from the families with knowledge from the literature, we hope to outline the requirements needed to direct future research with a grounded understanding of the practical and social landscape these social robots would need to be designed within.
SESSION: Student-Design Competition
Toaster Bot: Designing for Utility and Enjoyability in the Kitchen Space
Toasting bread is a seemingly mundane task that people perform on a daily basis, whether in a private kitchen area or in a communal dining space. This paper presents a robotic toaster, or “toaster bot”, that is designed with animated movements to enhance the toast-making experience by not only assisting in completing the task itself but also by acting as a playful entity with whom users may interact. Furthermore, we aim to explore different roles and behaviors for the robotic toaster and how they are understood by the users.
Internet of Robotic Cat Toys to Deepen Bond and Elevate Mood
Pets provide important mental support for human beings. Recent advancements in robotics and HRI have led to research and commercial products providing smart solutions to enrich indoor pets’ lives. However, most of these products focus on satisfying pets’ basic needs, such as feeding and litter cleaning, rather than their mental well-being. In this paper, we present the internet of robotic cat toys, where a group of robotic agents connects to play with our furry friends. Through three iterations, we demonstrate an affordable and flexible design of clip-on robotic agents to transform a static household into an interactive wonderland for pets.
Making Music More Inclusive with Hospiano
Music brings people together; it is a universal language that can help us be more expressive and better understand our feelings and emotions. The “Hospiano” robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to the gestures and facial expressions of patients (i.e., head movement, eye and mouth movement, and proximity). It has three main modes of operation: “Robot Pianist mode”, in which it plays pre-existing songs; “Play Along mode”, which allows anyone to interact with the music; and “Composer mode”, which allows patients to create their own music. The software that controls the prototype runs on the Robot Operating System (ROS). The prototype shows that humans and robots can interact fluently through the robot’s vision, which opens up a wide range of possibilities for further interaction between such machines and more emotive beings like humans, with the potential to improve users’ quality of life and increase inclusivity.
Aimoji, an Affordable Interaction Kit that Upcycles Used Toy as Companion Robot
When a child talks with a toy, it is usually a one-way interaction in which the child imagines the toy’s responses. Our low-cost interaction kit enables any toy to support two-way interaction: a motion sensor triggers the toy to respond to the child through a screen attached to it. In this way, every child can experience human-robot interaction in an affordable way, and there can be as many robots as there are toys.
GratiBot: Enhancing Relationships of Caregivers and Older Adult Care Recipients through Gratitude
Enhancing the relationship between caregivers and older adult care recipients is a common goal for caregivers all over the world. One way to improve a relationship is through gratitude. We therefore designed and built a high-fidelity prototype of GratiBot, a robot designed to facilitate mutual gratitude practices. GratiBot is implemented as a mobile application that is accessible and affordable for caregivers around the world. Caregivers and their care recipients can use GratiBot to foster positive relationship interactions. We designed the storyboard based on the results of interviews with seven caregivers from the U.S. and Taiwan.
Melodica: An Affordable Music Companion
Melodica is a situated robot that we envision helping users practise mindfulness and overcome social disconnection through music. At a time when mental health resources are insufficient, several AI tools have been designed to support people; whereas these employ therapeutic practice, Melodica uses music as an approach to self-care. Music is a universal language, accessible to everyone, and helps to connect people across cultures, age groups, and socio-economic backgrounds. Melodica is designed to be an inclusive and affordable robot that accompanies users and enriches their musical experiences, bringing joy and solace to their everyday lives.
Labo is Watching You: A Robot that Persuades You from Smartphone Interruption
Endogenous smartphone interruptions have changed many aspects of people’s daily lives, particularly in study and work environments. We created a robot that gently persuades users away from such interruptions by augmenting a desk lamp with posture changes and gaze-like facial expressions, keeping both the cost and the interaction burden low. This paper presents our design considerations and a first prototype showing the possibility of alleviating endogenous interruptions through persuasive robots.
Pet Whale Robot Reminds Asthmatic Children of Medication
Asthma is one of the most common diseases in children, but children’s adherence to medication is very low, which poses a great threat to their health. Baby Whale, a pet robot for asthmatic children, is a drug-inhalation device that reminds children to take their medication at set times. The baby whale acts as the child’s pet companion: when medication time approaches, the robot displays a state of hypoxia through interactions such as on-screen prompts and vibration, and the child must use the medication correctly to help the whale out of danger.
Toubot: A Pair of Wearable Haptic Robots Linking Left-behind Children and Their Parents Emotionally
Children who are left behind have more mental problems than their urban peers because they have fewer instant emotional interactions with their parents. In order to solve this, we propose a pair of wearable soft robots that strengthen their emotional bond by enhancing instant nonverbal interactions. This paper details the design and creation of our initial pair of prototypes.
Mosu Buddy: Mourning Support Robot
The pandemic increased the number of people in mourning and safety protocols interfered with rituals and customs people were used to when a loved one passed away. This can lead to a complicated grief process. Mosu Buddy was designed to accompany a person going through a period of grief and help them overcome this process. The user can interact with Mosu with its different activities and functionalities to help them cope with the loss of a loved one.
CHIBO: A Robot Design for Parent-child Feeding Scenario
Inappropriate child-feeding behaviors are one of the causes of childhood obesity and stomach problems. Children from 1 to 5 years old do not know how much they should eat, and parents may not know whether their children are full. We therefore designed a robot for the parent-child feeding scenario that visualizes children’s daily diet through changes in its form. It aims to establish a more intuitive connection between scientific quantitative standards and actual feeding scenarios through peripheral interaction, thus providing real-time feedback during feeding.
LEIDUSS: An Interactive Social Robot Table for ADHD Children for Reading
Children with ADHD often have difficulty reading books, a problem that has been magnified by the pandemic, affecting their development, especially in the early school years. LEIDUSS is an interactive board that facilitates children’s concentration while reading an activity or story without the need for a teacher or parent by their side, whether in the classroom or at home.
ALH-E: A Deformable and Flexible Robot that provides Tangible Interaction for Pain Communication
It is difficult for people to communicate their pain confidently. Communication between patients and caregivers is important, but existing methods make this interaction difficult. We propose an assistive robot, ALH-E, consisting of a squeezable input device and a wriggling output device for pain communication. Patients can log their pain by squeezing, and the output device expresses the patient’s pain intensity through dynamic wriggling movements. Human-robot interaction with ALH-E can support pain communication between patients and caregivers in settings where such communication is otherwise insufficient. This paper presents the development of the device and its new interaction.
Smart Transformer Health Monitoring System
Dependency on electricity is currently at an all-time high, with power distribution being a key element. Current maintenance and monitoring methodologies, especially in Pakistan, revolve around taking on-field samples for dissolved gas analysis (DGA), which is primitive and prone to human error. This paper presents a smart device that monitors a transformer’s health using AI and relays that health score to a remote dashboard through the Internet of Things (IoT). The device uses a combination of a single-board computer, embedded systems, and communication protocols, and is designed to detect and diagnose issues with the transformer and alert the user in real time. The results show that the device is an effective and reliable tool for monitoring transformer health.
BeeBot: A Robot that Helps Children Manage Their Blood Glucose in a Friendly Way
Obesity is a major problem affecting children around the world, and in many cases it leads to an even more serious disease, diabetes, a condition for which the child and their family must create new habits and purchase medical devices. BeeBot is an affordable robot that helps children with diabetes and obesity who are unfamiliar with, or afraid of, using a glucometer. It incorporates a glucometer, includes a counter to record the number of glasses of water to be consumed, and has a special button that, when pressed, advises the child on exercise they can do. Finally, it has two buttons that the child can press depending on whether or not they have met the day’s goal, whether that is the number of glasses of water or the recommended exercise. All the information can be monitored by a parent through an app.
RoPi: Robotic Assistant for the Emotional Support of Hospitalized Children for Burns
Burn injuries are traumatic events, especially for children. The combined use of pharmacological and non-pharmacological strategies in hospitals supports positive emotional experiences. However, in Latin American hospitals with low budgets, it is often not possible to hire therapeutic support personnel, such as clowns, due to the lack of available human resources. RoPi is a social robot created to support hospitalized children emotionally through its multicolor interchangeable pieces and its interactive functions.
An Affordable MathBot: Let’s play!
We present the design of an affordable and easily scalable system as a support tool for education systems in developing countries. The aim is a system that works regardless of the participants’ gender, ethnicity, or socio-economic conditions; we therefore propose a system whose only requirement is a battery or electricity.
By having students compete to be the first to correctly answer mathematical operations spoken by a robot, the system aims to provide experiential learning through practice in an environment that stimulates social and cognitive skills while increasing students’ engagement in learning.
Social Bots that Bring a Strong Presence to Remote Participants in Hybrid Meetings
We designed a social robot called SNOTBOX. The bot indicates the participation status (marginalized or not) of a remote participant using “Buzzo” and the remote participant’s desire to be heard through a “Eureka”. We use both representations to attract the attention of local participants as a way to enhance the presence of remote participants in the meeting. SNOTBOX is low cost, easy to manufacture, supports DIY adaptation to participants’ personalities, and can support multiple participants in online discussions.
Arpi, a Social Robot for Children with Epilepsy
Arpi is a social robot designed to keep children with tonic-clonic epilepsy company and to help families take care of them. Arpi monitors the child’s mood and empathically interacts with them through movements and sounds. When a seizure begins, it alerts the parents and monitors the time to assess whether it is necessary to seek professional help. Additionally, it is low cost and can be used in any environment, making it accessible to children in low-income countries who, due to a lack of good diagnosis and treatment, experience physical and emotional difficulties in their daily lives.
Cogui: Interactive Social Robot for Autism Spectrum Disorder Children: A Wonderful Partner for ASD Children
Autistic children often have difficulty communicating with others and learning new things in an academic environment. Cogui is a robot designed for children with ASD. It converses with children in a reciprocal way in order to empathize with the child and support their learning process while having fun.
Carla – Making Transport Hubs Accessible
Accessibility in transport hubs is indispensable for improving the social and economic participation of persons with disabilities. Carla, a smart, autonomous personal assistant, is proposed as an affordable solution for making transport hubs more inclusive. Carla can serve as a guide to the destination, an information kiosk, and a luggage carrier. It is equipped with a rope guide and digital braille to serve people with multi-sensory impairments, and it responds to a modest push from the user, a novel feature introduced to enhance interaction. The system architecture and the algorithm for autonomous navigation are also discussed. Finally, Carla’s interaction in a number of use cases is presented.
SESSION: Video Submissions
Utility Belt for an Agricultural Robot: Reflections on Performing Design Research in the Field
By performing design research in the field, designers can better understand the target context, needs, values, and concerns of their users, and iterate on potential solutions. This, in turn, helps designers apply their work to unexplored territories. We illustrate the opportunities and requirements of this method through a case study of the development of a multi-purpose utility belt for an agriculture robot. We benefited from being able to observe current practices, collaborating to test prototypes with on-site roboticists and farmers, and sharing documentation in the moment. On the other hand, it could be challenging to improvise space for the design work or to find the right times to interrupt locals, and to negotiate the documentation activity with people who have concerns about being recorded.
Trash Barrel Robots in the City
We deployed two trash barrel robots in New York City to study people’s interactions with autonomous everyday objects in public spaces. We used a Wizard-of-Oz technique for in-the-wild deployment to simulate robots’ autonomy and elicit natural interaction behaviors. This work extends previous research on trash barrel robots toward multi-robot interactions in an urban environment. Our video shows that people in public generally welcome the robots, that the robots encourage social interaction among strangers, that people feel pressure to generate garbage for the robots, and that people’s interactions assume the robots’ awareness of each other.
Demonstrating TRAinAR: An Augmented Reality Tool that Helps Humans Teach Robots
We demonstrate TRAinAR, an augmented reality (AR)-based tool designed to improve sim2real reinforcement learning (RL) for robots. With TRAinAR, users can tailor a virtual training environment with constraints to match the real world, visualize training data to gain insights into an agent’s learning process, and animate a robot’s future actions before execution. The system described here enabled a robotic arm manipulator to learn how to navigate its end-effector toward a target object. We hope that our AR application will enable users to better train robots by quickly prototyping complex environments that are difficult to model.
Anomaly Detection for Dynamic Human-Robot Assembly: Application of an LSTM-based autoencoder to interpret uncertain human behavior in HRC
Human action recognition is one of the key challenges in human-robot collaboration (HRC), especially when a process has multiple valid ways to assemble a product. To address this problem, we developed an anomaly detection framework for the assembly of complex products. We used a Long Short-Term Memory (LSTM)-based autoencoder to detect anomalies in human behavior and post-process the output to categorize each anomaly as green or red: a green anomaly represents a deviation from the intended order that is still a valid assembly sequence, while a red anomaly represents an invalid sequence. In both cases, the worker is guided to complete the assembly process.
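One plausible shape of such a pipeline is sketched below in Keras: an LSTM autoencoder is trained on nominal action sequences, and the reconstruction error of a new sequence drives a traffic-light style post-processing step. Layer sizes, feature dimensions, and thresholds are invented for illustration, and the real green/red distinction would additionally need to check the observed order against the set of valid assembly sequences rather than rely on error magnitude alone.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES = 30, 12   # e.g., 30 frames of 12 hand/skeleton features (assumed)

def build_autoencoder() -> models.Model:
    inputs = layers.Input(shape=(TIMESTEPS, FEATURES))
    encoded = layers.LSTM(32)(inputs)                        # compress the sequence
    repeated = layers.RepeatVector(TIMESTEPS)(encoded)       # expand back to sequence length
    decoded = layers.LSTM(32, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(FEATURES))(decoded)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def classify(error: float, warn: float = 0.05, fail: float = 0.15) -> str:
    # Toy post-processing: moderate reconstruction error stands in for a
    # "green" (valid but unexpected) anomaly, large error for a "red" one.
    if error < warn:
        return "nominal"
    return "green anomaly" if error < fail else "red anomaly"

if __name__ == "__main__":
    model = build_autoencoder()
    nominal = np.random.rand(200, TIMESTEPS, FEATURES).astype("float32")
    model.fit(nominal, nominal, epochs=2, batch_size=32, verbose=0)  # train on nominal data
    sample = np.random.rand(1, TIMESTEPS, FEATURES).astype("float32")
    err = float(np.mean((model.predict(sample, verbose=0) - sample) ** 2))
    print(classify(err))
```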
Unintended Failures of Robot-Assisted Feeding in Social Contexts
Over 1.8 million Americans require assistance eating. Robot-assisted feeding is a promising way to empower people with motor impairments to eat independently. Yet, most robot-assisted feeding research has focused on individual dining (e.g., eating at home with a caregiver), but not social dining (e.g., family meals, friends’ brunch, romantic dates). What happens when a robot developed for individual contexts gets used in social contexts? In this humorous video, we present unintended consequences that can arise from robot-assisted feeding in social settings. This video aims to raise awareness about the importance of accounting for social context when designing assistive robots.
Autonomous Underwater Robot Grasping Decision Support System
Underwater environments present numerous challenges for marine robots, such as noisy perception, constrained communication, and uncertainty due to wave motion. Collecting and accurately presenting information to the operator in such unstructured environments is a challenging task. Our Decision Support System for autonomous underwater grasping provides visualization capabilities and tools to interact with the available information. Successful operator-selected, autonomous underwater grasping trials were conducted using a six-degrees-of-freedom robotic arm and a depth camera.
ReRun: Enabling Multi-Perspective Analysis of Driving Interaction in VR
ReRun is a software system that supports post-facto analysis in simulation research. In this submission, we show it working inside a multiplayer driving simulator. ReRun is built in Unity 3D and captures the virtual behavior of participants and their interactions with virtual objects. These recorded behaviors can then be played back from any perspective in the virtual space. This is useful in multi-agent interaction studies because researchers can sift through scenarios carefully from each participant’s perspective, or even from an outside observer’s perspective, enabling a fine-grained understanding of implicit and explicit signaling between participants and other human- or AI-controlled agents.
Stretch to the Client; Re-imagining Interfaces
This paper presents our efforts toward a client interface for the Hello Robot Stretch. The goal is an accessible interface that provides the best possible user experience. The interface enables users to control Stretch with basic commands through several modalities. To make it accessible, we crafted a simple and clear web interface so that users of differing abilities can successfully interact with Stretch; a voice-activated option was also added to further increase the range of possible interactions.
Mosu Buddy: Mourning Support Robot
The pandemic increased the number of people in mourning and safety protocols interfered with rituals and customs people were used to when a loved one passed away. This can lead to a complicated grief process. Mosu Buddy was designed to accompany a person going through a period of grief and help them overcome this process. The user can interact with Mosu with its different activities and functionalities to help them cope with the loss of a loved one.
Cogui: Interactive Social Robot for Autism Spectrum Disorder Children: A Wonderful Partner for ASD Children
Autistic children often have difficulty communicating with others and learning new things in an academic environment. Cogui is a robot designed for children with ASD. It converses with children in a reciprocal way in order to empathize with the child and support their learning process while having fun.
SESSION: Demonstrations
Demonstrating the Potential of Interactive Product Packaging for Enriching Human-Robot Interaction
While social robots are increasingly introduced into domestic settings, few have explored the utility of the robots’ packaging. Here we highlight the potential of product packaging in human-robot interaction to facilitate, expand, and enrich the user’s experience with the robot. We present a social robot’s box as interactive product packaging, designed to be reused as a “home” for the robot. Through co-design sessions with children, a narrative-driven and socially engaging box was developed to support initial interactions between the child and the robot. Our findings emphasize the importance of packaging design in producing positive outcomes for successful human-robot interaction.
SEAN-VR: An Immersive Virtual Reality Experience for Evaluating Social Robot Navigation
We propose a demonstration of the Social Environment for Autonomous Navigation with Virtual Reality (VR) for advancing research in Human-Robot Interaction. In our demonstration, a user controls a virtual avatar in simulation and performs directed navigation tasks with a mobile robot in a warehouse environment. Our demonstration shows how researchers can leverage the immersive nature of VR to study robot navigation from a user-centered perspective in densely populated environments while avoiding physical safety concerns common with operating robots in the real world. This is important for studying interactions with robots driven by algorithms that are early in their development lifecycle.
Language Models for Human-Robot Interaction
Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential for changing the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model for the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI 2023 conference, and the source code of this integration is shared in the hope that it will serve the community in designing and evaluating new dialogue systems for robots.
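As an illustration of what such an integration might look like (not the released source code), the sketch below forwards a recognized user utterance to the GPT-3 completion endpoint and speaks the reply through NAOqi’s ALTextToSpeech service. The robot IP, prompt format, and model name are assumptions; note also that the legacy naoqi SDK targets Python 2.7, whereas the openai client assumes Python 3, so a real deployment would need to bridge the two (e.g., via a small network service).

```python
import openai                      # pip install openai (pre-1.0 completion interface assumed)
from naoqi import ALProxy          # NAOqi SDK; legacy versions require Python 2.7

ROBOT_IP = "192.168.1.10"          # placeholder robot address
openai.api_key = "YOUR_API_KEY"    # placeholder key

tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

def respond(user_utterance, history):
    """Append the user turn, query GPT-3 for the next robot turn, and speak it."""
    history.append("Human: " + user_utterance)
    completion = openai.Completion.create(
        model="text-davinci-003",          # GPT-3 model available at the time (assumed)
        prompt="\n".join(history) + "\nRobot:",
        max_tokens=80,
        stop=["Human:"],
    )
    reply = completion.choices[0].text.strip()
    history.append("Robot: " + reply)
    tts.say(reply)                         # speak through the robot's speakers
    return reply

if __name__ == "__main__":
    dialogue = ["Robot: Hello, I am Pepper. Ask me anything."]
    respond("What can robots do for people?", dialogue)
```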
Open-source Natural Language Processing on the PAL Robotics ARI Social Robot
We demonstrate how state-of-the-art open-source tools for automatic speech recognition (vosk) and dialogue management (rasa) can be integrated on a social robotic platform (PAL Robotics’ ARI robot) to provide rich verbal interactions.
Our open-source, ROS-based pipeline implements the ROS4HRI standard, and the demonstration specifically presents the details of the integration, in a way that will enable attendees to replicate it on their robots.
The demonstration takes place in the context of assistive robotics and robots for elderly care, two application domains with unique interaction challenges for which the ARI robot has been designed and extensively tested in real-world settings.
I’m a Robot, Hear Me Speak!
How should a robot speak in warm, cold, loud, or bright environments? We propose a demonstration that allows participants to experience how varied ambient contexts (i.e., lighting, sound, background) can affect the acceptability of robot voice styles, including pitch range, speed, and pauses. We demonstrate the technical result of a three-step voice design process: (a) collecting human voice data under the varied ambient conditions, (b) clustering human vocal utterances to identify primary voice styles, and (c) modifying a robot voice to match the clustered voice styles. The demo allows participants to experience a food service scenario in six different ambient conditions and to feel how changing the robot’s voice style can affect both intelligibility and social appropriateness. Ultimately, we hope to highlight the importance of adapting a robot’s vocal characteristics to ambient contexts, toward deploying robots in the wild.
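Step (b) of this pipeline could, for instance, be approximated by clustering prosodic features of the collected utterances. The sketch below uses scikit-learn k-means on made-up feature vectors; the feature set, units, and cluster count are assumptions rather than the demonstration’s actual method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per utterance: [mean pitch (Hz), pitch range (Hz), speech rate (syl/s),
# mean pause length (s)], e.g., extracted with a prosody analysis tool.
features = np.array([
    [180.0, 60.0, 4.2, 0.30],
    [220.0, 95.0, 5.1, 0.15],
    [150.0, 40.0, 3.5, 0.45],
    [210.0, 90.0, 5.0, 0.20],
    [160.0, 45.0, 3.8, 0.40],
    [185.0, 70.0, 4.5, 0.25],
])

scaler = StandardScaler().fit(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(features))

# Cluster centers (back in original units) become candidate "voice styles"
# toward which a robot TTS voice can be modified.
centers = scaler.inverse_transform(kmeans.cluster_centers_)
for i, c in enumerate(centers):
    print(f"style {i}: pitch={c[0]:.0f} Hz, range={c[1]:.0f} Hz, "
          f"rate={c[2]:.1f} syl/s, pause={c[3]:.2f} s")
```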
Haru He’s Here to Help!: A Demonstration of Implementing Comedic Rapid Turn-taking for a Social Robot
Current robot dialog systems are predominantly implemented using a sequential, utterance-based, two-party, speak-wait/speak-wait approach. This is inadequate for any interaction where rapidly timed turn-taking is required. In this demo, we show how a conversational listener can support a comedy dialogue with the social robot ‘Haru’. The system depends on fast local word-spotting and can be extended to support scripted and semi-scripted interactions where conversational timing is important, such as fictive dialogues for educational purposes, or to support acting within a storytelling context. The system could also be integrated with a full dialogue manager for non-scripted, open dialogue applications.
SESSION: Workshops
Workshop YOUR Study Design 2023!: Participatory Critique and Refinement of Participants’ Studies
A well-designed and well-evaluated study plays an essential role in highlighting the impact and contribution of a research idea. However, novice Human-Robot Interaction (HRI) researchers often lack the experience and know-how to devise an effective study. This workshop aims to provide a platform for those doing research in HRI and related fields to obtain expert feedback on their study design before running a user study. The workshop invites 2-4 page contributions from participants outlining an upcoming user study, focusing on the methods section and planned analyses. Participants will take part in two separate mentoring sessions led by different mentors. The workshop is interactive in nature and will also include mentor-led discussion sessions on topics relevant to study design, such as hypothesis design and analysis, and human-centered study design.
The Imperfectly Relatable Robot: An Interdisciplinary Workshop on the Role of Failure in HRI
Focusing on failure to improve human-robot interactions represents a novel approach that calls into question human expectations of robots, as well as posing ethical and methodological challenges to researchers. Fictional representations of robots (still for many non-expert users the primary source of expectations and assumptions about robots) often emphasize the ways in which robots surpass/perfect humans, rather than portraying them as fallible. Thus, to encounter robots that come too close, drop items or stop suddenly starts to close the gap between fiction and reality. These kinds of failures – if mitigated by explanation or recovery procedures – have the potential to make the robot a little more relatable and human-like. However, studying failures in human-robot interaction requires producing potentially difficult or uncomfortable interactions in which robots failing to behave as expected may seem counterintuitive and unethical. In this space, interdisciplinary conversations are the key to untangling the multiple challenges and bringing themes of power and context into view. In this workshop, we invite researchers from across the disciplines to an interactive, interdisciplinary discussion around failure in social robotics. Topics for discussion include (but are not limited to) methodological and ethical challenges around studying failure in HRI, epistemological gaps in defining and understanding failure in HRI, sociocultural expectations around failure and users’ responses.
Social Robots Personalisation: At the Crossroads between Engineering and Humanities (CONCATENATE)
Nowadays, robots are expected to interact more physically, cognitively, and socially with people. They should adapt to unpredictable contexts alongside individuals with various behaviours. For this reason, personalisation is a valuable attribute for social robots, as it allows them to act according to a specific user’s needs and preferences and to achieve natural and transparent robot behaviours for humans. If correctly implemented, personalisation could also be the key to the large-scale adoption of social robotics. However, achieving personalisation is arduous, as it requires us to expand the boundaries of robotics by taking advantage of the expertise of various domains. Indeed, personalised robots need to analyse and model user interactions while considering the user’s involvement in the adaptive process. It also requires us to address the ethical and socio-cultural aspects of personalised HRI to achieve inclusive and diverse interaction and to avoid deception and misplaced trust when interacting with users. At the same time, policymakers need to ensure appropriate regulation in view of possible short-term and long-term adaptive HRI. This workshop aims to raise an interdisciplinary discussion on personalisation in robotics, bringing researchers from different fields together to propose guidelines for personalisation while addressing the following questions: how to define it, how to achieve it, and how it should be guided to fit legal and ethical requirements.
Human-Robot Conversational Interaction (HRCI)
Conversation is one of the primary methods of interaction between humans and robots. It provides a natural way of communication with the robot, thereby reducing the obstacles that can be faced through other interfaces (e.g., text or touch) that may cause difficulties to certain populations, such as the elderly or those with disabilities, promoting inclusivity in Human-Robot Interaction (HRI). Work in HRI has contributed significantly to the design, understanding and evaluation of human-robot conversational interactions. Concurrently, the Conversational User Interfaces (CUI) community has developed with similar aims, though with a wider focus on conversational interactions across a range of devices and platforms. This workshop aims to bring together the CUI and HRI communities through a one-day workshop to outline key shared opportunities and challenges in developing conversational interactions with robots, resulting in collaborative publications targeted at the CUI 2023 provocations track.
CRITTER: Child-Robot Interaction and Interdisciplinary Research
Several recent works in human-robot interaction (HRI) have begun to highlight the importance of the replication crisis and open science practices for our field. Yet suggestions and recommendations tailored to child-robot interaction (CRI) research, which poses its own additional set of challenges, remain limited. There is also an increased need within both HRI and CRI for inter- and cross-disciplinary collaborations, where input from multiple domains can contribute to better research outcomes. Consequently, this workshop aims to facilitate discussions between researchers from diverse disciplines within CRI. The workshop will open with a panel discussion between CRI researchers from different disciplines, followed by 3-minute flash talks of the accepted submissions. The second half of the workshop will consist of breakout group discussions, where both senior and junior academics from different disciplines can share their experiences of conducting CRI research. Through this workshop we hope to create a common ground for addressing shared challenges in CRI, as well as to identify a set of possible solutions going forward.
Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI): Adaptivity for All
Adaptation and personalization are critical elements when modeling robot behaviors toward users in real-world settings. Multiple aspects of the user need to be taken into consideration in order to personalize the interaction, such as their personality, emotional state, intentions, and actions. While this information can be obtained a priori through self-assessment questionnaires or in real-time during the interaction through user profiling, behaviors and preferences can evolve in long-term interactions. Thus, gradually learning new concepts or skills (i.e., “lifelong learning”) both for the users and the environment is crucial to adapt to new situations and personalize interactions with the aim of maintaining their interest and engagement. In addition, adapting to individual differences autonomously through lifelong learning allows for inclusive interactions with all users with varying capabilities and backgrounds. The third edition of the “Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)” workshop aims to gather and present interdisciplinary insights from a variety of fields, such as education, rehabilitation, elderly care, service and companion robots, for lifelong robot learning and adaptation to users, context, environment, and activities in long-term interactions. The workshop aims to promote a common ground among the relevant scientific communities through invited talks and in-depth discussions via paper presentations, break-out groups, and a scientific debate. In line with the HRI 2023 conference theme, “HRI for all”, our workshop theme is “adaptivity for all” to encourage HRI theories, methods, designs, and studies for lifelong learning, personalization, and adaptation that aims to promote inclusion and diversity in HRI.
Variable Autonomy for Human-Robot Teaming (VAT)
As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in-, on-, or out-of-the-loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems with the ability to dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome various challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy.
This workshop is driven by the timely need to bring together VA-related research and practices that are often disconnected across different communities as the field is relatively young. The workshop’s goal is to consolidate research in VA. To this end, and given the complexity and span of Human-Robot systems, this workshop will adopt a holistic trans-disciplinary approach aiming to a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle the challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; d) define short- and long-term research goals for the community.
To achieve these objectives, this workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from other highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, toward a shared vision for VA.
Robots for Learning 7 (R4L): A Look from Stakeholders’ Perspective
This year’s conference theme, “HRI for all”, not only raises the importance of reflecting on how to promote inclusion for every type of user but also calls for careful consideration of the different layers of people potentially impacted by such systems. In educational setups, for instance, the users to be considered first and foremost are the learners. However, teachers, school directors, therapists, and parents also form a secondary layer of users in this ecosystem. The 7th edition of R4L focuses on the issues that HRI experiments in educational environments may cause for stakeholders and how we could improve on bringing the stakeholders’ point of view into the loop. This goal is expected to be achieved in a very practical and dynamic way by means of: (i) lightning talks from the participants; (ii) two discussion panels with special guests: one with active researchers from academia and industry about their experience and point of view regarding the inclusion of stakeholders; another with teachers, school directors, and parents who are or were involved in HRI experiments and will share their viewpoints; (iii) semi-structured group discussions and hands-on activities with participants and panellists to evaluate and propose guidelines for good practices regarding how to promote the inclusion of stakeholders, especially teachers, in educational HRI activities. By gathering the viewpoints of experimenters and stakeholders and analysing them in the same workshop, we expect to identify current gaps, propose practical solutions to bridge these gaps, and capitalise on existing synergies with the collective intelligence of the two communities.
Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI)
The 6th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include the development of robots that can interact with humans in mixed reality, the use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, the investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. VAM-HRI 2023 will follow the success of VAM-HRI 2018-22 and advance the cause of this nascent research community.
Semantic Scene Understanding for Human-Robot Interaction
Service robots will be co-located with human users in an unstructured human-centered environment and will benefit from understanding the user’s daily activities, preferences, and needs towards fully assisting them. This workshop aims to explore how abstract semantic knowledge of the user’s environment can be used as a context in understanding and grounding information regarding the user’s instructions, preferences, habits, and needs. While object semantics have primarily been investigated for robotics in the perception and manipulation domain, recent works have shown the benefits of semantic modeling in a Human-Robot Interaction (HRI) context toward understanding and assisting human users. This workshop focuses on semantic information that can be useful in generalizing and interpreting user instructions, modeling user activities, anticipating user needs, and making the internal reasoning processes of a robot more interpretable to a user. Therefore, the workshop builds on topics from prior workshops such as Learning in HRI, behavior adaptation for assistance, and learning from humans and aims at facilitating cross-pollination across these domains through a common thread of utilizing abstract semantics of the physical world towards robot autonomy in assistive applications. We envision the workshop to touch on research areas such as unobtrusive learning from observations, preference learning, continual learning, enhancing the transparency of autonomous robot behavior, and user adaptation. The workshop aims to gather researchers working on these areas and provide fruitful discussions towards autonomous assistive robots that can learn and ground scene semantics for enhancing HRI.
Workshop on Test Methods and Metrics for Accessible HRI
As robots become more ubiquitous in our modern world, the expectation that humans and robots will interact physically, conceptually, and emotionally in everyday life necessitates the development of validated technologies that are safe, secure, and effective. Nevertheless, robotics is still considered by many to be inaccessible, even as the availability of robots increases. In alignment with the 2023 HRI Conference’s theme of “HRI for all,” this fifth installment of the Workshop on Test Methods and Metrics for Effective HRI is focused on addressing this accessibility disparity. Specifically, this year’s workshop presents and addresses issues regarding 1) human factors for diverse populations, 2) accessibility of standards and specifications, and 3) enabling equal and equitable access to research results and data. The goal of this workshop is to enable increased accessibility to HRI research and resources by addressing the metrology that is used to verify and validate HRI performance.
2nd Workshop on Human-Interactive Robot Learning (HIRL)
With robots poised to enter our daily environments, they will not only need to work for people, but also learn from them. An active area of investigation in the robotics, machine learning, and human-robot interaction communities is the design of teachable robots that can learn interactively from human input. To refer to these research efforts, we use the umbrella term Human-Interactive Robot Learning (HIRL). While algorithmic solutions for robots learning from people have been investigated in a variety of ways, HIRL, as a fairly new research area, still lacks: 1) a formal set of definitions to classify related but distinct research problems or solutions, 2) benchmark tasks, interactions, and metrics to evaluate the performance of HIRL algorithms and interactions, and 3) clear long-term research challenges to be addressed by different communities. Last year we began consolidating the definitions and vocabulary needed to enable fruitful discussions between researchers from these interdisciplinary fields, and identified a preliminary list of long-, medium-, and short-term research problems for the community to tackle, as well as existing tools and frameworks that can be leveraged to this end. This workshop will build upon these discussions, focusing on promoting the specification and design of HIRL benchmarks.
Advancing Human-Robot Interaction Research and Benchmarking Through Open-Source Ecosystems
Recent rapid progress in HRI research makes it more crucial than ever to have systematic development and benchmarking methodologies to assess and compare different algorithms and strategies. Indeed, the lack of such methodologies results in inefficiencies and sometimes stagnation, since new methods cannot be effectively compared to prior work and the research gaps become challenging to identify. Moreover, lacking an active and effective mechanism to disseminate and utilize the available datasets and benchmarking protocols significantly reduces their impact and utility. A unified effort in the development, utilization, and dissemination of open-source assets amongst a governed community of users can advance these domains substantially; for HRI, this is particularly needed in the curation and generation of datasets for benchmarking. This workshop will take a step towards removing the roadblocks to the development and assessment of HRI by reviewing, discussing, and laying the groundwork for an open-source ecosystem at the intersection of HRI and robot manipulation. The workshop will play a crucial role in identifying the preconditions and requirements for developing an open-source ecosystem that provides open-source assets for HRI benchmarking and comparison, aiming to determine the needs and wants of HRI researchers. Invited speakers include those who have contributed to the development of open-source assets in HRI and robot manipulation, and discussion topics will include issues related to the usage of open-source assets and the benefits of forming an open-source ecosystem.
Symbiotic Society with Avatars (SSA): Beyond Space and Time
Avatar robots can help people extend their physical, cognitive, and perceptual capabilities, allowing them to transcend the constraints of time and space. In that sense, avatar robots can greatly influence people’s lives. However, many challenges remain to be addressed in various scenarios, including avatar-human interaction, operator-avatar interaction, avatar-avatar interaction, ethical and legal issues, and technical challenges. It is essential to discuss the research and technologies needed to realize avatars that are well accepted in society, while envisioning a future symbiotic society in which people communicate with other people and with their avatars. In our previous workshop, “Symbiotic Society with Avatars: Social Acceptance, Ethics, and Technologies (SSA)”, we focused on the ethical aspects of avatars. In this workshop, our aim is to provide an opportunity for researchers from different backgrounds, including social robotics, teleoperation, and mixed reality, to come together and discuss the advances and values in a symbiotic society with avatars.
Inclusive HRI II: Equity and Diversity in Design, Application, Methods, and Community
Diversity, equality, and inclusion (DEI) are critical factors that need to be considered when developing AI and robotic technologies for people. The lack of such considerations can exacerbate and perpetuate existing forms of discrimination and bias in society for years to come. Although concerns have already been voiced around the globe, there is an urgent need to take action within the human-robot interaction (HRI) community. This workshop contributes to filling the gap by providing a platform on which to share experiences and research insights on identifying, addressing, and integrating DEI considerations in HRI. Building on last year’s edition, this year’s workshop will further engage participants with the problem of sampling biases through hands-on co-design activities for mitigating inequity and exclusion within the field of HRI.
Perspectives on Moral Agency in Human-Robot Interaction
Establishing when, how, and why robots should be considered moral agents is key for advancing human-robot interaction. For instance, whether a robot is considered a moral agent has significant implications for how researchers, designers, and users can, should, and do make sense of robots and whether their agency in turn triggers social and moral cognitive and behavioral processes in humans. Robotic moral agency also has significant implications for how people should and do hold robots morally accountable, ascribe blame to them, develop trust in their actions, and determine when these robots wield moral influence. In this workshop on Perspectives on Moral Agency in Human-Robot Interaction, we plan to bring together participants who are interested in or have studied the topics concerning a robot’s moral agency and its impact on human behavior. We intend to provide a platform for holding interdisciplinary discussions about (1) which elements should be considered to determine the moral agency of a robot, (2) how these elements can be measured, (3) how they can be realized computationally and applied to the robotic system, and (4) what societal impact is anticipated when moral agency is assigned to a robot. We encourage participants from diverse research fields, such as computer science, psychology, cognitive science, and philosophy, as well as participants from social groups marginalized in terms of gender, ethnicity, and culture.