HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction


SESSION: alt.HRI

Sex Robots in Care: Setting the Stage for a Discussion on the Potential Use of Sexual Robot Technologies for Persons with Disabilities

Although every human should enjoy physical touch, intimacy, and sexual pleasure, persons with disabilities are often not in a position to fully experience the joys of life in the same manner as abled people. The United Nations stated in 1993 that persons with disabilities should enjoy family life and personal integrity and should not be denied the opportunity to experience their sexuality, have sexual relationships, and experience parenthood. However, after nearly 30 years of discussion, universal access to sexual and reproductive health remains an unfinished agenda for persons with disabilities, as if society had failed to recognize them as sexual beings. In this respect, a growing body of scholars has started to explore the idea of using technology to help disabled people satisfy some of these needs, although not without controversy. In particular, ideas surrounding the use of robots for sex care purposes have been put forward: service robots performing actions that contribute directly to the satisfaction of a user's sexual needs. This paper continues to explore the potential use of these robots in disability care for sex care purposes, including for those with physical and mental health disabilities, an area that is currently underexplored. Our contribution seeks to understand whether sex robots could serve as a step forward in realizing the sexual rights of persons with disabilities. By building on a conceptual analysis of how sex robots could empower persons with disabilities to exercise their sexual rights, we hope to inform the policy debate around robot regulation and governance and set the scene for further research.

Robots as Moral Advisors: The Effects of Deontological, Virtue, and Confucian Role Ethics on Encouraging Honest Behavior

We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot's advice was overall not effective at discouraging dishonest choices, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differentially-framed moral advice. We found that individuals with a strong cultural orientation of establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty.

Fake It to Make It: Exploratory Prototyping in HRI

Exploratory prototyping techniques are critical to devising new robot forms, actions, and behaviors, and to eliciting human responses to designed interactive features, early in the design process. In this opinion piece, we establish the contribution of exploratory prototyping to the field of human-robot interaction, arguing that research engaged in design exploration, rather than controlled experimentation, should focus on flexibility rather than specificity, possibility rather than replicability, and design insights incubated subjectively through the designer rather than dispassionately proven by statistical analysis. We draw on the HCI literature for examples of published design explorations in academic venues, and to suggest how analogous contributions can be valued and evaluated by the HRI community. Lastly, we present and examine case studies of three design methods we have used in our own design work: physical prototyping with human-in-the-loop control, video prototyping, and virtual simulations.

Boosting Robot Credibility and Challenging Gender Norms in Responding to Abusive Behaviour: A Case for Feminist Robots

Inspired by the recent UNESCO report I'd Blush if I Could, we tackle some of the issues regarding gendered AI by exploring the impact of feminist social robot behaviour on human-robot interaction. Specifically, we consider (i) the use of a social robot to encourage girls to consider studying robotics (and the expression of feminist sentiment in this context), (ii) if and how robots should respond to abusive and antifeminist sentiment, and (iii) how ('female') robots can be designed to challenge current gender-based norms of expected behaviour. We demonstrate that, whilst there are complex interactions between robot, user, and observer gender, we were able to increase girls' perceptions of robot credibility and reduce gender bias in boys. We suggest our work provides positive evidence for going against current digital assistant/traditional human gender-based norms, and for the future role robots might have in reducing our gender biases.

Who Wants to Grant Robots Rights?

The robot rights debate has thus far proceeded without any reliable data concerning public opinion about robots and the rights they should have. We administered an online survey (n = 200) that investigates laypeople's attitudes towards granting particular rights to robots, and asked respondents for what reasons they would be willing to grant those rights. Finally, we surveyed general perceptions of robots regarding appearance, capacities, and traits. Results show that rights can be divided into sociopolitical and computing dimensions, and reasons into cognition and compassion dimensions. People generally have a positive view of robot interaction capacities. Attitudes towards robot rights depend on age and experience, as well as on the cognitive and affective capacities people believe robots will ever possess. Our results suggest that the robot rights debate stands to benefit greatly from a common understanding of the capacity potentials of future robots.

SESSION: Late-Breaking Reports

Infants Respond to Robot's Need for Assistance in Pursuing Action-based Goals

Instrumental helping has been reported in infants toward other humans but not toward robots. Providing infants with opportunities for action-based assistance to robots might lead to more efficient infant-robot interactions. This paper presents preliminary findings on infants' spontaneous instrumental helping of robots exhibiting motion challenges, and proposes a novel decision-making model for infant-robot interaction that encompasses instrumental helping in its parameters, both in the context of pediatric rehabilitation. Six infants were engaged in a chasing game with a wheeled robot, with the goal of following the robot and ascending an inclined platform (8 sessions, 4 weeks). After infants' instrumental helping toward the robot was identified, a decision tree model was created to evaluate a set of annotated variables as potential predictors of the observed behavior. Next, a Markovian model for robot control was developed in which these predictors were used as parameters to promote, in turn, action-based goals for the infants.
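
The abstract does not give the model's specification; as a loose illustration of the general idea of a Markov controller whose transition probabilities are modulated by predictors of infant helping, here is a minimal Python sketch. The states, the single aggregated predictor, and all probabilities are hypothetical, not the authors' model.

```python
import random

# Hypothetical robot action states in the chasing game.
STATES = ["approach", "retreat", "stall"]

def transition_probs(state, helping_likelihood):
    """Transition distribution over next states.

    helping_likelihood in [0, 1] stands in for the decision-tree
    predictors of infant helping (all values are illustrative).
    """
    if state == "stall":
        # The more likely the infant is to help, the longer the robot
        # stalls, creating opportunities for instrumental helping.
        p_stall = 0.3 + 0.5 * helping_likelihood
        rest = 1.0 - p_stall
        return {"stall": p_stall, "approach": rest / 2, "retreat": rest / 2}
    return {
        "approach": 0.5,
        "stall": 0.25 + 0.25 * helping_likelihood,
        "retreat": 0.25 - 0.25 * helping_likelihood,
    }

def step(state, helping_likelihood):
    probs = transition_probs(state, helping_likelihood)
    return random.choices(list(probs), weights=list(probs.values()))[0]

state = "approach"
for _ in range(5):
    state = step(state, helping_likelihood=0.8)
    print(state)
```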

Assisted Human-Robot-Interaction for Industrial Assembly: Application of Spatial Augmented Reality (SAR) for Collaborative Assembly Tasks

Human-robot collaboration is increasingly applied to industrial assembly sequences due to the growing need for flexibility in manufacturing. Assistant systems can help support shared assembly sequences to facilitate collaboration. This contribution shows a workplace installation of a collaborative robot (Cobot) and a spatial augmented reality (SAR) assistant system applied to an assembly use case. We demonstrate a methodology for distributing the assembly sequence between the worker, the Cobot, and the SAR.

Looks Can Permit Deceiving: How Reward or Punishment Decisions are Influenced by Robot Embodiment

As robots and artificially intelligent systems are given more cognitive capabilities and become more prevalent in our societies, the relationships they share with humans have become more nuanced. This paper aims to investigate the influences that embodiment has on a person's decision to reward or punish an honest or deceptive intelligent agent. We cast this exploration within a financial advisement scenario. Our results suggest that people are more likely to choose to reward a physically embodied intelligent agent over a virtual one irrespective of whether the agent has been deceptive or honest and even if this deception or honesty resulted in the individual gaining or losing money. Additionally, our results show that people are more averse to punishing intelligent agents, irrespective of the embodiment, which matches prior research in relation to human-human interaction. These results suggest that embodiment choices can have meaningful effects on the permissibility of deception conducted by intelligent agents.

Perception of Emotion in Torso and Arm Movements on Humanoid Robot Quori

Displaying emotional states is an important part of nonverbal communication that can facilitate successful interactions. Facial expressions have been studied for their emotional expressiveness, but this work looks at the capacity of body movements to convey different emotions. This work first generates a large set of nonverbal behaviors with a variety of torso and arm properties on a humanoid robot, Quori. Participants in a user study evaluated how much each movement displayed each of eight different emotions. Results indicate that specific movement properties are associated with particular emotions, such as leaning backward with arms held high conveying surprise, and leaning forward conveying sadness. Understanding the emotions associated with certain movements can allow for the design of more appropriate behaviors during interactions with humans and could improve people's perception of the robot.

Perceptual Effects of Ambient Sound on an Artificial Agent's Rate of Speech

Interactive robots are increasingly being deployed in public spaces whose context may differ from moment to moment. One important aspect of this context is the soundscape of the robot and human's shared environment, such as an airport that is noisy during a weekend rush hour yet quiet on a weekday evening. Just as humans are adept at adapting their speech to their environment, robots should adjust their speech characteristics (e.g., speech rate, volume) to their context. We studied the effect of a shared auditory soundscape on the perceived ideal speech rate of an artificial agent. We asked raters to listen to combinations of text-to-speech (TTS) samples with different speech rates and soundscape samples from freesound.org, and to evaluate the appropriateness of each combination and their social perception of the artificial speech. Contrary to our expectations, raters did not prefer faster artificial speech in louder environments and slower speech in quieter environments. This suggests that further research is needed into how exactly to adapt artificial speech to background noise.

The Haunted Desk: Exploring Non-Volitional Behavior Change with Everyday Robotics

We introduce and explore the concept of non-volitional behavior change, a novel category of behavior change interventions, and apply it in the context of promoting healthy behaviors through an automated sit-stand desk. While routine use of sit-stand desks can improve health outcomes, compliance decreases quickly and behavioral nudges tend to be dismissed. To address this issue, we introduce robotic furniture that moves on its own to promote healthy movement. In an in-person preliminary study, we explored users' impressions of an autonomous sit-stand desk prototype that changed position at regular pre-set time intervals while participants completed multiple tasks. While in-the-moment self-reported ratings were similar between the autonomous and manual desks, we observed several bi-modal distributions in users' retrospective comparisons and qualitative responses. Findings suggest about half of the participants were receptive to using an autonomous sit-stand desk, while the remainder preferred to retain some level of control.

A User-Centered Agile Approach to the Development of a Real-World Social Robot Application for Reception Areas

As social robots are increasingly entering the real world, developing a viable robot application has become highly important. While a growing body of research has acknowledged that the integration of an agile development methodology with user-centered design (UCD) provides advantages for both organizations and end users, integrating UCD in an agile methodology has been a challenging endeavor. The present paper illustrates a user-centered agile approach that integrates user perspectives through formative usability testing during an agile development process of a robot application and thus differentiates from most robot application evaluations, which conduct summative usability testing (i.e., they quantitatively test goal achievement after technological developments). Through an active involvement of organization and end users, the intermediate results of our ongoing project show that the developed social robot application is both useful and usable.

Why Autonomous Driving Is So Hard: The Social Dimension of Traffic

Smooth traffic presupposes fine coordination between different actors, such as pedestrians, cyclists, and car drivers. When autonomous vehicles join regular traffic, they need to coordinate with humans on the road. Prior work has often studied and designed for interaction with autonomous vehicles in structured environments such as traffic intersections. This paper describes aspects of coordination in less structured situations as well, during mundane maneuvers such as overtaking. Taking an ethnomethodological and conversation analytic approach, the paper analyzes video recordings of self-driving shuttle buses in Sweden. Initial findings suggest that the shuttle buses currently do not comply with cyclists' expectations of social coordination in traffic. The paper highlights that communication and coordination with human road users are crucial for a smooth flow of traffic and the successful deployment of autonomous vehicles, including in less structured traffic environments.

Development of an Inflatable Haptic Device for Pain Reduction by Social Touch

Previous human-robot interaction studies have shown that social touch can be established under human-robot touch conditions. To provide the sensation of a robot grasping a person's hand when they grasp the robot, we developed a soft robot consisting of airbags and a force sensor. We expect that this mutual touch can play a positive role in alleviating human pain. This paper presents the development of our two prototypes of inflatable haptic devices consisting of airbags.

Validation of a Novel Theory of Mind Measurement Tool: The Social Robot Video Task

Social communication difficulties in autism spectrum disorder (ASD) have been associated with poor Theory of Mind (ToM), the ability to attribute mental states to others. Interventions using humanoid robots could improve ToM in ways that may generalize to human-human interactions. Traditionally, ToM has been measured using the Frith-Happé Animations (FHA) task, which depicts interactions between two animated triangles. Recently, we developed a Social Robot Video (SRV) task that depicts interactions between two NAO robots. In this study, we administered both tasks to 8 children with ASD and 9 typically-developing children to examine the validity, reliability, and sensitivity of the SRV task. Results suggest that the SRV task has face validity and partial inter-rater reliability, and could differentiate between the two groups. In sum, the SRV task could be used to assess the effectiveness of ToM interventions using humanoid robots.

Communicative Function of Eye Blinks of Virtual Avatars May Not Translate onto Physical Platforms

Eye behaviour is one of the main modalities used to regulate face-to-face conversation. Gaze aversion and mutual gaze, for example, serve to signal cognitive load, interest, or turns during a conversation. While eye blinking is mainly thought to have a physiological function, the rate of blinking is known to increase during conversation, suggesting a communicative function as well. Recently, it has been shown that a virtual avatar, acting as the receiver in a conversation, could use blinking as a kind of conversational marker influencing the speaker's communicative behaviour. In particular, it has been demonstrated that long eye blinks resulted in shorter answers by the speaker compared with short ones. Here, we set out to investigate this effect using a humanoid robot as the interaction partner, given that robots have both a physical and a social presence. Interestingly, however, we could not replicate the result: short or long blinks did not modulate the length of the responses by the human interactant.

The Valley of non-Distraction: Effect of Robot's Human-likeness on Perception Load

Previous research in psychology has found that human faces can be more distracting under high perceptual load conditions than non-face objects. This project aims to assess the distracting potential of robot faces based on their human-likeness. As a first step, this paper reports our initial findings from an online study. We used a letter search task in which participants had to search for a target letter within a circle of 6 letters while an irrelevant distractor image was also present. The results of our experiment replicated previous results with human faces and non-face objects. Additionally, in the tasks where the irrelevant distractors were images of robot faces, the human-likeness of the robot influenced the response time (RT). Interestingly, the robot Alter produced results significantly different from all other distractor robots. The outcome is a distraction model related to the human-likeness of robots. Our results show the impact of anthropomorphism on distracting potential, which should therefore be taken into account when designing robots.

Pilot Study on Robot's Open Diary to Deepen Friendships with a Child and Promote Communication between a Child and People

We propose a concept of open diary drawing behavior for a robot living with a child. In our concept, a robot acts together with a child and draws a diary about the memories of the day. It also shares the diary with other people such as parents, grandparents, and community members. The purposes of the open diary are to deepen friendship with the child and to promote communication between the child and other people. Today, people lose touch with each other due to social change; we believe the robot's open diary is one solution to this problem. In this study, we conducted two preliminary experiments to show the potential and the design principles of the open diary. A diary about what the robot had learned from the child pleased the child. Also, when the robot drew a humorous diary, people could enjoy the diary together.

Exploring Web-Based VR for Participatory Robot Design

With the pandemic preventing access to universities and consequently limiting in-person user studies, it is imperative to explore other mediums for conducting human-robot interaction user studies. Virtual reality (VR) presents a novel and promising research platform that can offer a creative and accessible environment for HRI studies. While access to VR is limited by its hardware requirements (e.g., the need for headsets), web-based VR offers universal access to VR utilities through web browsers. In this paper, we present a participatory design pilot study exploring the co-design of a robot using web-based VR. Results suggest that web-based VR environments are engaging and accessible research platforms for gathering environment and interaction data in HRI.

Does Emotional State Affect How People Perceive Robots?

Emotions serve important regulatory roles in social interaction. Although recognition, modeling, and expression of emotion have been extensively researched in human-robot interaction and related fields, the role of human emotion in perceptions of and interactions with robots has so far received considerably less attention. We here report inconclusive results from a pilot study employing an affect induction procedure to investigate the effect of people's emotional state on their perceptions of human-likeness and mind in robots, as well as attitudes toward robots. We propose a new study design based on the findings from this study.

Investigating the Validity of Online Robot Evaluations: Comparison of Findings from a One-Sample Online and Laboratory Study

As the number of online studies in the field of human-robot interaction (HRI) increases, the comparability of online study results with those of laboratory experiments needs further investigation. In this one-sample study, 29 participants experienced three different commercially available service robots, first in an online session and then in a lab experiment, and evaluated the robots regarding trust, fear, and intention to use. Furthermore, several robot characteristics were evaluated (e.g., humanness, uncanniness). Overall, the results indicate high comparability of findings from online and lab experiments for trust, fear, and robot characteristics like humanness and uncanniness. The same relative differences between the robots were found for both presentation methods, except for intention to use and robot reliability. This preliminary study provides insights into online study validity and makes recommendations for future research.

Testing the Elaboration Likelihood Model of Persuasion on the Acceptance of Health Regulations in a Video Human-Robot Interaction Study

Social robots in public places could be a useful tool to guide and remind people to adhere to general regulations (e.g., wearing a mask and keeping social distance during a pandemic). Additionally, robots could be a useful assistive tool for public order offices, for instance by reducing infection risks for employees. However, it is uncertain whether and how robots could enhance regulation adherence. To this end, we present the results of a 2 (distraction: yes/no) between-subjects by 2 (argument: strong/weak) within-subjects mixed-design HRI video study (n=83) investigating the persuasiveness of arguments based on the Elaboration Likelihood Model of persuasion (ELM). Participants watched a video of a robot persuading people to wear a mask using either a strong or a weak argument. As a distraction, participants either had to count the word "mask" in the video or not. Our results show that the distraction had no influence, while the argument's strength significantly influenced the robot's perceived persuasiveness.

Introducing SMRTT: A Structural Equation Model of Multimodal Real-Time Trust

Advances in autonomous technology have led to an increased interest in human-autonomy interactions. Generally, the success of these interactions is measured by the joint performance of the AI and the human operator. This performance depends, in part, on the operator having appropriate, or calibrated, trust in the autonomy. Optimizing the performance of human-autonomy teams therefore partly relies on modeling and measuring human trust. Theories and models of the factors influencing human trust have been developed in order to measure it properly. However, these models often rely on self-report rather than more objective, real-time behavioral and physiological data. This paper seeks to build on theoretical frameworks of trust by adding objective data to create a model capable of finer-grained temporal measures of trust. Presented herein is SMRTT: SEM of Multimodal Real-Time Trust. SMRTT leverages Structural Equation Modeling (SEM) techniques to arrive at a real-time model of trust. Variables and factors from previous studies and existing theories are used to create the components of SMRTT. The value of adding physiological data to the models to create real-time monitoring is discussed, along with future plans to validate the model.
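
For readers unfamiliar with SEM, here is a minimal sketch of how a latent trust factor with multimodal indicators might be specified and fitted, using the semopy Python package; the variable names, indicators, and synthetic data are hypothetical and are not the SMRTT specification.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Synthetic data: hypothetical multimodal indicators of a latent trust factor.
rng = np.random.default_rng(0)
n = 200
trust = rng.normal(size=n)  # unobserved latent factor
data = pd.DataFrame({
    "self_report": trust + rng.normal(scale=0.5, size=n),
    "reliance_rate": trust + rng.normal(scale=0.7, size=n),
    "gaze_on_task": trust + rng.normal(scale=0.9, size=n),
    "heart_rate_var": trust + rng.normal(scale=1.0, size=n),
    "autonomy_reliability": rng.normal(size=n),
})

# lavaan-style description: measurement model plus one structural path.
desc = """
trust =~ self_report + reliance_rate + gaze_on_task + heart_rate_var
trust ~ autonomy_reliability
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # factor loadings and path estimates
```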

'Food' for Human Robot Interaction

'Food', when mentioned in Human-Robot Interaction (HRI) research, most often appears in the context of functional applications of automation, delivery, and assistance. Food has, however, not been explored as a medium for social expression or for building relationships with social robots. Using web-based examples of robot food and our pilot collection of LOVOT and AIBO robot users' Tweets about their practices of feeding their robots, we show how food has the potential to sustain interactions; increase enjoyment, sociability, and companionship in HRI; enhance life-likeness, autonomy, and agency for robots; and open up opportunities for community building among robot users. We present design implications of food for HRI, and urge HRI researchers to envision food as a facet of human-robot relationships and Human-Food-Robot interaction as a celebratory, provocative, and promising domain for HRI and social robot design.

How Does the General Population Understand Robot State?

With an increasing number of home and social robot products, it is essential for the general population to feel comfortable using and understanding these robots in their homes. The goal of this research is to understand the general population's definition of "robot state." We conducted 11 participatory design groups (PDGs) (n=30), in which participants completed two exercises: (1) memory-based: they recalled robots they had used to arrive at a working "robot state" definition through a series of exercises; and (2) example-based: they watched short videos of 3 different home robots. Each PDG session yielded a set of "robot states" participants felt were important to communicate to a user and a "robot state" definition, which were then tested with the same participants via an online survey. We found that on/off/booting was rated as significantly more important than all other robot states. Interestingly, task-related stimuli did not result in task-related states being rated as more important to communicate to the user. We believe establishing fundamental knowledge of "robot states" will increase the acceptability of home robots and aid robot designers by identifying the states that are essential to communicate.

Luka Luka - Investigating the Interaction of Children and Their Home Reading Companion Robot: A Longitudinal Remote Study

Social robots increasingly find their way into homes, especially those targeting families with small children. These commercially available robots are becoming widely accessible, but research on them has largely been confined to lab or classroom environments, and their long-term use is rarely studied. Moreover, while child-robot interactions in domains such as education and health are widely explored, little is known about how family context influences children's perception of a social robot in their home environment and their interaction in day-to-day activities. In this paper, we propose a longitudinal study that looks at the interaction between children and their home reading companion robot, Luka. Children's interaction with and perception of the robot, along with the influence of their family context, will be measured and evaluated over time.

Social Robot Encouraging Two Strangers to Talk with Each Other for Their Relationships

We investigated the hypothesis that conversation between two strangers would be encouraged, and their mutual likability improved, if a robot induced them to make small talk in advance. In our experiment, a participant remained in a waiting space with an experimental cooperator for five minutes, and then they did a collaborative task together. The experiment had two conditions: in the with-intervention condition, a robot asked them to make small talk while they waited; in the without-intervention condition, there was no robot in the waiting space. The results showed no significant difference in the number of the participant's utterances during the collaborative task (t=-0.676, p=0.511, d=-0.350), and no significant difference in the participant's likability rating of the cooperator (t=-0.781, p=0.449, d=0.404). These results did not support the hypothesis, suggesting that simply prompting two strangers to speak is not enough for a robot to affect their relationship.

DroRun: Drone Visual Interactions to Mediate a Running Group

Running with others is motivating, but finding running partners is not always possible due to constraints such as being in a remote location. We present a novel concept for augmenting remote runners' experience by mediating a running group using drone-projected visualisations. As an initial step, a team of five interaction designers applied a user-centred design approach to scope out possible visual prototypes that can mediate a running group, focusing on visual prototypes through a series of workshops. We report on the impressions of the visual prototypes from six potential users and present future directions for this project.

TSUNDERE Interaction: Behavior Modification by the Integrated Interaction of Cold and Kind Actions

To reinforce behavior modification, we first applied a user's favorite avatar of the TSUNDERE type to operant conditioning, an approach that reinforces desired behaviors by providing a combination of punishment and reward. We then designed TSUNDERE Interaction, an interaction in which the avatar uses TSUN (cold actions) as punishment and DERE (kind actions) as reward. Furthermore, we use a 3D virtual avatar superimposed on the robot via augmented reality technology, and use the user's state of concentration to provide appropriate feedback. In this paper, we discuss the core concepts we have implemented to develop the system based on this idea as a means of behavior modification.

Back to the Future: Opinions of Autonomous Cars Over Time

The aim of this research was to investigate whether the preferences of U.S. adults regarding autonomous vehicles have changed in the past decade. We believe this to be indicative of the effect of cultural shifts over time on preferences regarding robots, similar to the effect of cultural and national differences on such preferences (e.g., [9], [14]). By replicating a 2009 survey regarding autonomous vehicle parking, we found that participants now ranked four out of six parking and transportation options significantly differently, particularly an autonomous vehicle with no override, a taxi, driving a standard vehicle, and being next to a vehicle driven by another person. Additionally, we found partial support for the hypothesis that participants who were more informed about autonomous vehicle technology showed increased preferences for autonomous vehicles.

Teleoperation Interface Usage in Robot-Assisted Childhood ASD Therapy

Therapist-operated robots can play a uniquely impactful role in helping children with Autism Spectrum Disorder (ASD) practice and acquire social skills. While extensive research within Human-Robot Interaction has focused on teleoperation interfaces for robots in general, little work has been done on teleoperation interface design for robots in the context of ASD therapy. Moreover, while clinical research has shown the positive impact robots can have on children with autism, much of that research has been performed in controlled environments, with little understanding of the way these robots are used "in the wild". We analyze archival data of therapists teleoperating robots as part of their regular therapy sessions, to (1) determine common themes and difficulties in therapists' use of teleoperation interfaces, and (2) provide design recommendations to improve therapists' overall experience. We believe that following these recommendations will help maximize the effectiveness of ASD therapy with Socially Assistive Robots and the scale at which it can be deployed.

Contact Web Status Presentation for Freehand Grasping in MR-based Robot-teaching

Presenting realistic grasping interaction with virtual objects in mixed reality (MR) is an important issue in promoting the use of MR, for example in the robot-teaching domain. However, intended interactions are difficult to achieve in MR due to the difficulty of depth perception and the lack of haptic feedback. To make intended grasping interactions in MR easier to achieve, we propose visual cues (a contact web) that represent the contact state between the user's hand and an MR object. To evaluate the proposed method, we performed two experiments: the first determined the grasp type and object to be used in the second, and the second measured the time taken to complete grasping tasks with the object. Both objective and subjective evaluations show that the proposed visual cues yielded a significant reduction in the time required to complete the task.

Priming Effects on Reaction Time toward Touch via a Web-survey: Human Reaction or Android Reaction?

This paper addresses the effects of priming information that shapes people's beliefs about the presented partner: a human or an android. We investigate two reaction measures: the time from touch to the start of a reaction behavior, and the reaction's length. We conducted a web-survey-based experiment to investigate whether reaction times for androids were affected by priming information. Our results show that people exhibited a similar tendency toward androids whether they believed they were interacting with a human or with an android, i.e., we found no significant effects of priming information.

Active Explicable Planning for Human-Robot Teaming

Intelligent robots are redefining autonomous tasks but are still far from fully capable of assisting humans in day-to-day tasks. An important requirement for collaboration is a clear understanding of each other's expectations and capabilities; lacking it may lead to serious issues such as loose coordination between teammates, ineffective team performance, and ultimately mission failure. Hence, it is important for robots to behave explicably, making themselves understandable to the human. One challenge here is that the human's expectations are often hidden and change dynamically as the human interacts with the robot, while existing approaches to plan explicability often assume these expectations are known and static. In this paper, we propose active explicable planning to address this issue. We apply a Bayesian approach to model and predict dynamic human beliefs, making the planner more anticipatory so that it can generate more efficient plans without sacrificing explicability. We hypothesize that active explicable plans can be more efficient and more explicable at the same time, compared to plans generated by existing methods. Preliminary results from an MTurk study show that our approach effectively captures the dynamic beliefs of the human, which can be used to generate efficient and explicable behavior that benefits from dynamically changing expectations.
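
The paper's model is not detailed in the abstract; the following minimal Python sketch only illustrates the underlying Bayesian idea: maintain a posterior over hypothetical human-expectation models and score candidate plans by efficiency minus expected inexplicability under the predicted belief. All hypotheses, likelihoods, and plan values are illustrative.

```python
# Posterior over hypothetical models of what the human expects.
belief = {"expects_safe_route": 0.5, "expects_short_route": 0.5}

# Hypothetical observation model: P(human reaction | expectation model).
LIKELIHOOD = {
    ("surprised", "expects_safe_route"): 0.7,
    ("not_surprised", "expects_safe_route"): 0.3,
    ("surprised", "expects_short_route"): 0.2,
    ("not_surprised", "expects_short_route"): 0.8,
}

def update_belief(belief, observation):
    """Bayes rule over expectation hypotheses after observing a reaction."""
    posterior = {h: LIKELIHOOD[(observation, h)] * p for h, p in belief.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def plan_score(plan, belief, weight=1.0):
    """Efficiency minus expected inexplicability under the current belief."""
    inexplicability = sum(p * plan["surprise"][h] for h, p in belief.items())
    return plan["efficiency"] - weight * inexplicability

plans = [
    {"name": "short", "efficiency": 1.0,
     "surprise": {"expects_safe_route": 0.8, "expects_short_route": 0.1}},
    {"name": "safe", "efficiency": 0.6,
     "surprise": {"expects_safe_route": 0.1, "expects_short_route": 0.5}},
]

belief = update_belief(belief, "not_surprised")
best = max(plans, key=lambda plan: plan_score(plan, belief))
print(belief, best["name"])
```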

Formal Verification for Human-Robot Interaction in Medical Environments

We present a formal verification method that provides a model-based approach to human-robot interaction (HRI) in medical settings by utilizing linear temporal logic (LTL). We define high-level HRI procedures within an LTL-based framework to create algorithmically sound robots that can function independently in dynamic HRI environments. Our approach's theoretical infallibility confers particular advantages for medical robots, where safety and informative communication are crucial. To establish the viability of the proposed method, and with the ongoing COVID-19 pandemic in mind, we developed an LTL knowledge base for a medical robot tasked with the HRI-intensive roles of patient reception and triage. We designed robotic simulations based on our LTL architecture to test our approach, employing randomized inputs to generate unpredictable HRI environments. We then conducted formal verification via an automata-theoretic approach by evaluating our simulated robot against generalized Büchi automata. We hope our LTL-based approach can enable future achievements in HRI.
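
The abstract does not list the specifications in its knowledge base; for readers unfamiliar with LTL, these are hedged examples of the kind of safety and liveness properties one might verify for a reception-and-triage robot (all predicates are hypothetical):

```latex
\begin{align*}
  &\mathbf{G}\,\bigl(\mathit{patient\_arrives} \rightarrow \mathbf{F}\,\mathit{greeted}\bigr)
    && \text{liveness: every arriving patient is eventually greeted} \\
  &\mathbf{G}\,\bigl(\mathit{high\_risk} \rightarrow \mathbf{X}\,\mathit{staff\_alerted}\bigr)
    && \text{response: high-risk triage results are escalated immediately} \\
  &\mathbf{G}\,\neg\bigl(\mathit{moving} \wedge \mathit{patient\_contact}\bigr)
    && \text{safety: the robot never moves while in contact with a patient}
\end{align*}
```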

Can Robots Impact Human Comfortability During a Live Interview?

Interaction among humans does not always proceed without errors; situations may arise in which a wrong word or attitude causes the partner to feel uneasy. However, humans are often very sensitive to these interaction failures and may be able to fix them. Our research aims to endow robots with the same skill. Thus the first step, presented in this short paper, investigates to what extent a humanoid robot can impact someone's Comfortability in a realistic setting. To capture natural reactions, we organized a set of real interviews performed by the humanoid robot iCub (acting as the interviewer). The interviews were designed in collaboration with a journalist from the press office of our institution and are meant to appear in the official institutional online magazine. The dialogue, along with fluent human-like robotic actions, was chosen not only to gather information about the participants' personal interests and professional careers, necessary for the magazine column, but also to influence their Comfortability. Once the experiment is completed, the participants' self-reports and spontaneous reactions (physical and physiological cues) will be explored to examine how people's Comfortability may be manifested through non-verbal cues and how it may be impacted by the humanoid robot.

Improving Users Engagement Detection using End-to-End Spatio-Temporal Convolutional Neural Networks

The ability to infer latent behaviours, such as the degree of engagement of humans interacting with social robots, is still considered a challenging task in the human-robot interaction (HRI) field. Data-driven techniques based on machine learning have recently been shown to be a promising approach to the users' engagement detection problem; however, they often involve multiple consecutive processing stages. This in turn makes such techniques either incapable of capturing users' engagement in a dynamic environment or undeployable because they cannot track engagement in real time. In this study, we propose an end-to-end, data-driven technique based on a unique 3D convolutional neural network architecture. Our proposed framework was trained and evaluated on a real-life dataset of users interacting spontaneously with a social robot in a dynamic environment. The framework showed promising results over three different evaluation metrics when compared against three baseline approaches from the literature, with an F1-score of 76.72. Additionally, our framework achieved a resilient real-time performance of 25 Hz.
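
To make "end-to-end spatio-temporal 3D CNN" concrete, here is a minimal PyTorch sketch of a clip-level engagement classifier; the layer sizes, clip shape, and class count are illustrative and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class Engagement3DCNN(nn.Module):
    """Toy end-to-end spatio-temporal classifier over short video clips."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),  # joint space-time conv
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # pool spatially first
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                             # then space and time
            nn.AdaptiveAvgPool3d(1),                     # global pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = Engagement3DCNN()
clip = torch.randn(1, 3, 16, 112, 112)  # hypothetical 16-frame RGB clip
print(model(clip).shape)                # torch.Size([1, 2])
```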

A Prototype of a Robot Memory Game: Exploring the Technical Limitations of Human-Robot Interaction in a Playful Context

We explored the feasibility and limitations of designing and developing a robot-human interactive board game known as Memory, a turn-based game of matching card pairs. Our analysis of this case study suggests significant limitations and further interactive improvements are needed before exposing the prototype to users. In terms of technical limitations, the variability of light and the lack of sharp camera imaging make it challenging to identify cards uniquely, and the dexterity of the OpenMANIPULATOR-X (the robotic arm used) is limited and could not mimic the interaction of card flipping. In terms of interaction design, we analysed the robot morphology, expressiveness, and modifications to the cards. We suggest running comparative studies with well-known humanoid robots and humans. This project is the initial step toward developing more engaging and interactive games between humans and robots. Future experiments aim to explore the emotional, physical, and mental benefits users could obtain from playing games with robots.

Are You Not Entertained? Computational Storytelling with Non-Verbal Interaction

We describe the design and implementation of a multi-modal storytelling system. Multiple robots narrate and act out an AI-generated story whose plot can be dynamically altered via non-verbal audience feedback. The enactment and interaction focus on gestures and facial expressions, which are embedded in a computational framework that draws on cognitive-linguistic insights to enrich the storytelling experience. In the absence of in-person user studies for this late-breaking report, we present the validity of the project's separate modules and introduce it to the HRI field.

"What's this?" Comparing Active learning Strategies for Concept Acquisition in HRI

Social robotics aims to equip robots with the ability to exhibit socially intelligent behaviour while interacting face-to-face with human partners. An important aspect of face-to-face social interaction is the efficient recognition of the surroundings, the environment, and the objects within it, so as to be able to discuss, describe, and provide instructions that support continuous collaboration between speaker and listener. Although humans can efficiently learn from their interlocutors to perceptually ground the word meanings of visual objects from just a single example, teaching robots to ground word meanings remains a very challenging, expensive, and resource-intensive task. In this paper, we present a novel framework for on-the-fly robot concept acquisition that combines few-shot learning with active learning. In this framework, a robot learns new concepts by collaboratively performing tasks with humans. We compared different learning strategies in a task-based evaluation with human participants and found that active learning significantly outperforms a non-active alternative, is preferred by participants, and increases their trust in the social robot's capabilities.
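
As an illustration of how few-shot prototypes and an uncertainty-triggered "What's this?" query can be combined, here is a minimal Python sketch; the embedding source, similarity threshold, and concept names are hypothetical, not the paper's framework.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class ConceptLearner:
    """Few-shot prototype classifier with an active-learning query rule."""

    def __init__(self, query_threshold=0.6):
        self.prototypes = {}  # concept name -> mean embedding
        self.counts = {}
        self.query_threshold = query_threshold

    def add_example(self, name, emb):
        # Few-shot update: running mean of the few embeddings seen so far.
        k = self.counts.get(name, 0)
        old = self.prototypes.get(name, np.zeros_like(emb))
        self.prototypes[name] = (old * k + emb) / (k + 1)
        self.counts[name] = k + 1

    def classify_or_ask(self, emb):
        if not self.prototypes:
            return None  # nothing known yet: the robot must ask
        name, sim = max(
            ((n, cosine(p, emb)) for n, p in self.prototypes.items()),
            key=lambda pair: pair[1],
        )
        # Active learning: low similarity triggers a query to the human.
        return name if sim >= self.query_threshold else None

learner = ConceptLearner()
rng = np.random.default_rng(0)
mug = rng.normal(size=64)             # stand-in for a visual embedding
learner.add_example("mug", mug)
obj = mug + rng.normal(scale=0.1, size=64)
label = learner.classify_or_ask(obj)
print(label if label else 'Robot asks: "What\'s this?"')
```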

Mind Perception and Social Robots: The Role of Agent Appearance and Action Types

Mind perception is considered to be the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, an important question for HRI is to what extent we attribute mental states to these agents and under which conditions we do so. In the present study, we investigated the effect of appearance and the type of action a robot performs on mind perception. Participants rated videos of two robots of different appearance (one metallic, the other human-like), each performing four different actions (manipulating an object, verbal communication, non-verbal communication, and an action that depicts a biological need), on the Agency and Experience dimensions. Our results show that the type of action the robot performs affects Agency scores: when the robot performs human-specific actions, such as communicative actions or an action that depicts a biological need, it is rated as having more agency than when it performs a manipulative action. On the other hand, the appearance of the robot did not have any effect on Agency or Experience scores. Overall, our study suggests that the behavioral skills we build into social robots could be quite important for the extent to which we attribute mental states to them.

User Validation Study of a Social Robot for Use in Hospital Wards

For robots to play an increased role in healthcare settings, an understanding of nurse-patient human-centric interactions in their unique scenario-based setting is crucial for Human-Robot Interaction (HRI) design. We aim to investigate HRI design for a robot performing nursing tasks in hospitals. The robot's personality is designed with animated eyes, a voice with a local accent, and polite, contextual phrases that mimic nurses' behavior as they engage with patients. The study was designed around scenario-based use cases focusing on user-centric and task-centric HRI components.

This paper reports findings from two online surveys completed by 60 participating nurses from a local hospital. In the first survey, the tasks performed by the robot included greeting, patient identification, pain level checks, and vital signs measurement. This survey was designed to identify the correlation between the politeness, friendliness, and strictness exhibited by the robot and the effects on users' perception of trustworthiness and their willingness to follow the robot's instructions for a given task. In the second survey, an item delivery task performed by a nurse was compared with the same task performed by a robot, to assess user receptivity and the usability of the robot in assisting with the task, given its ability to communicate its service intent.

The results imply that, in the current design, a robot that communicates with politeness and friendliness increases its perceived trustworthiness. When the robot exhibited strictness to get the patient to comply, the findings revealed a decrease in user-perceived friendliness, politeness, and trustworthiness. Finally, participants indicated high receptivity to interacting with the robot and confirmed the feasibility of a robot-assisted nurse workflow. These results support progression toward user validation studies in an actual hospital ward setting.

How can I help you? An Intelligent Virtual Assistant for Industrial Robots

In light of recent trends toward introducing Artificial Intelligence (AI) to enhance Human-Robot Interaction (HRI), intelligent virtual assistants (VAs) driven by Natural Language Processing (NLP) have received ample attention in the manufacturing domain. However, most VAs are either tightly bound to a specific robotic system or lack efficient human-robot communication. In this work, we implement a layer of interaction between the robotic system and the human operator, achieved through a novel VA, called Max, that serves as an intelligent and robust interface. We expand this line of research in three directions. Firstly, we introduce a RESTful-style client-server architecture for Max. Secondly, inspired by studies of human-human conversation, we embed conversation strategies into human-robot dialog policy generation to create a more natural and humanized conversation environment. Finally, we evaluate Max over multiple real-world scenarios, from the exploration of an unknown environment to package delivery, using an industrial robot.
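
The paper's actual API is not given in the abstract; as a sketch of what a RESTful client-server layer between an NLP assistant and a robot controller could look like, here is a minimal Flask example with hypothetical routes and fields.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/dialog", methods=["POST"])
def dialog():
    """Accept an operator utterance, return a reply plus a robot command."""
    utterance = request.get_json(force=True).get("utterance", "")
    # A real assistant would run intent recognition here; we stub it.
    if "deliver" in utterance.lower():
        reply = "Dispatching the robot for package delivery."
        command = {"action": "deliver", "target": "station_1"}  # hypothetical
    else:
        reply = "Sorry, could you rephrase that?"
        command = None
    return jsonify({"reply": reply, "robot_command": command})

if __name__ == "__main__":
    app.run(port=5000)
```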

Contingency Detection in Multi-Agent Interactions

When a robot is deployed to learn a new task in a real-world environment, there may be multiple teachers and therefore multiple sources of feedback. Furthermore, there may be multiple optimal solutions for a given task, and teachers may have preferences among those solutions. We present an Interactive Reinforcement Learning (I-RL) algorithm, Multi-Teacher Activated Policy Shaping (M-TAPS), which addresses the problem of learning from multiple teachers and leverages differences between them as a means to explore the environment. We show that this algorithm can significantly increase an agent's robustness to the environment and quickly adapt to a teacher's preferences. Finally, we present a formal model for comparing human teachers and constructed oracle teachers and the way they provide feedback to a robot.
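
M-TAPS itself is not detailed in the abstract; the following minimal Python sketch only shows the general policy-shaping idea it builds on: aggregating possibly conflicting teacher feedback into an action distribution, here with an assumed reliability weight per teacher. The aggregation rule and numbers are illustrative, not the authors' algorithm.

```python
from collections import defaultdict

# (state, action) -> accumulated positive/negative feedback mass.
feedback = defaultdict(lambda: {"good": 0.0, "bad": 0.0})

def record_feedback(state, action, signal, reliability):
    """Weight each teacher's binary signal by an assumed reliability in (0, 1]."""
    feedback[(state, action)][signal] += reliability

def shaped_policy(state, actions):
    """Policy shaping: actions with more net positive feedback get more mass."""
    weights = [
        2.0 ** (feedback[(state, a)]["good"] - feedback[(state, a)]["bad"])
        for a in actions
    ]
    total = sum(weights)
    return {a: w / total for a, w in zip(actions, weights)}

# Two teachers with different reliabilities disagree about action "left".
record_feedback("s0", "left", "good", reliability=0.9)
record_feedback("s0", "left", "bad", reliability=0.4)
record_feedback("s0", "right", "good", reliability=0.4)

print(shaped_policy("s0", ["left", "right"]))
```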

Helper's High with a Robot Pet

Helper's high is the phenomenon whereby helping someone or something else can lead to psychological benefits such as mood improvement. This study investigates whether a robot pet can, like a real pet, induce helper's high in people interacting with it. A Vector robot was programmed to express the need for daily exercise and attention, and participants were instructed how to help the robot meet those needs. Our within-subjects design had two conditions: with and without emotional modifiers to the robot's behaviour. Our primary research question is whether behaviours that convey emotion as well as needs lead to empathy in participants, creating a stronger helper's high effect than purely functional need-expression behaviours. We present a long-term (4-day) remote study design that not only facilitates the kinds of interactions needed for helper's high, but also abides by government guidelines on Covid-19 safety (under which a laboratory study is not possible). Preliminary results suggest that Vector was able to improve the mood of some participants, and mood changes tended to be greater when Vector's behaviours had emotional components. Our post-study interview data suggest that individual differences in living environment, along with external mood-impacting factors, affected Vector's efficacy in influencing mood.

Human-Robot Collaboration with Force Feedback Utilizing Bimanual Coordination

Humans perform various complex tasks with their upper limbs based on bimanual coordination. A fundamental feature of bimanual coordination is the natural tendency to synchronize the upper limbs, resulting in preferred symmetrical patterns of interlimb coordination. In this early-stage study, based on a coarse-to-fine human-robot collaboration framework, we investigate the possibility of human-robot collaboration for accurate manipulation under force feedback utilizing the bimanual synchronization mechanism. Preliminary results suggest the effectiveness of the proposed method.

The Learning Experience of Becoming a FPV Drone Pilot

An immersive drone-flying modality known as First-Person View (FPV) flying is increasing in popularity. Nonetheless, due to its recency, there is a lack of research focusing on FPV drones; in particular, the journey FPV pilots go through when learning to fly has not yet been studied. Understanding their learning experience enables the development of FPV drones with a gentler learning curve for pilots. We conducted an online survey with 515 FPV pilots to evaluate how they learned to fly and how they recommend beginners learn. We found that most pilots learned using a stabilized flight mode, but the majority later switched to an acrobatic (non-stabilized) flight mode. Although only half of the surveyed pilots used flight simulators to learn, 90% of all pilots recommend that beginners use simulators. Our results allow further understanding of the emerging FPV pilot culture and inform research questions for future studies.

Competitive Physical Human-Robot Game Play

While competitive games have been studied extensively in the AI community for benchmarking purposes, there has been only limited discussion of human interaction with embodied agents in competitive settings. In this work, we aim to motivate research in competitive human-robot interaction (competitive-HRI) by discussing how human users can benefit from robot competitors. We then examine concepts from game AI that can be adopted for competitive-HRI. Based on these discussions, we propose a robotic system designed to support future competitive-HRI research. A human-robot fencing game is also proposed to evaluate a robot's capability in competitive-HRI scenarios. Finally, we present initial experimental results and discuss possible future research directions.

Behavioral Design of Guiding Agents to Encourage their Use by Visitors in Public Spaces

To encourage visitors to use guiding agents in public spaces, this study adopted a design approach and focused on identifying behavioral factors that would encourage interaction. Six factors of agent behavior were hypothesized, and an experiment was performed in a public space with real people. One or two communication robots were installed near the entrance of a university library. The reactions of library users passing by the robot were observed and recorded under different robot behavior conditions. The results showed that the robots were able to attract attention by uttering guidance information and looking in various directions while waiting for people. When the robots spoke directly to nearby people, the people tended to interact with them. The results of a questionnaire survey suggested that the voice, speech content, and appearance of the robots are also important factors.

Design Considerations for Child-Robot Interaction in Pediatric Contexts

Social robots can improve quality of life for children undergoing prolonged hospital stays, both by offering a fun and interactive distraction and by providing practical assistance during procedure support and pain management. In this paper, we present important considerations for robots involved in pediatric contexts. These considerations are based on a need-finding interview conducted with a gaming technology specialist at a children's hospital. By summarizing their experiences, we identify considerations affecting the design of robot morphology and behavior for this unique use case, and explore the role of parents, healthcare staff, and child life specialists.

Building a Collaborative Relationship between Human and Robot through Verbal and Non-Verbal Interaction

Interpersonal communication and relationship building promote successful collaborations. This study investigated the effect of conversational nonverbal and verbal interactions of a robot on bonding and relationship building with a human partner.

Participants interacted with two robots that differed in their nonverbal and verbal expressiveness. The interactive robot actively engaged the participant in a conversation before, during and after a collaborative task whereas the non-interactive robot remained passive. The robots' nonverbal and verbal interactions increased participants' perception of the robot as a social actor and strengthened bonding and relationship building between human and robot. The results of our study indicate that the evaluation of the collaboration improves when the robot maintains eye contact, the robot is attributed a certain personality, and the robot is perceived as being alive.

Our study could not show that an interactive robot receives more help from its collaboration partner. Future research should investigate additional factors that facilitate helpful behavior among humans, such as similarity, attributional judgement, and empathy.

On the Common and Different Expectations on Robot Service in Restaurant between Customers and Employees

Recently, the restaurant industry has attempted to introduce service robots for efficient management. We conducted survey research to investigate the expectations customers and employees hold for restaurant robot service. We found that customers and employees share many common expectations across the service elements the restaurant industry usually adopts. Our results suggest that personalized service is important for customer-oriented robot service in restaurants.

RODECA: A Canvas for Designing Robots

Although there are existing frameworks for designing robots within the field of HRI, there is not yet a viable, all-encompassing framework that bridges the gap between academic research, industry development, and users in the design process. Through two online workshops and an individual company assignment, we identified industry needs, concerns, and challenges relevant to the development of the Robot Design Canvas (RODECA). We present our preliminary work with seven industry partners and scientists from three research institutions. This research will inform the development of a versatile robot design framework that accounts for user experience early in the design process and can be validated through systematic investigation across research and industry applications. Such a tool would help bridge the gap between HRI research and commercial robot development.

Effects of Referring to Robot vs. User Needs in Self-Explanations of Undesirable Robot Behavior

Autonomous or lively social robots will often exhibit behavior that is surprising to users and calls for explanation. However, it is not clear how such robot behavior should best be explained. Our previous work showed that different types of a robot's self-explanations, citing its actions, intentions, or needs, alone or in causal relations, have different effects on users (Stange & Kopp, 2020). Further analysis of the data from the cited study implies that explanations in terms of robot needs (e.g. for energy or social contact) did not adequately justify the robot's behavior. In this paper we study the effects of a robot citing the user's needs to explain its behavior. Our study is based on the assumption that users may feel more connected to a robot that aims to recognize and incorporate the users' needs in its decision-making, even when the resulting behavior turns out to be undesirable. Results show that explaining robot behavior with user needs generally neither led to higher gains in understanding or desirability of the behaviors, nor helped to justify them better than explaining them with robot needs. Further, a robot referring to user needs was not perceived as more likable, trustworthy or mindful, nor were users' contact intentions increased. However, an in-depth analysis showed different effects of explanations for different behaviors. We discuss these differences in order to clarify which factors should inform the content and form of a robot's behavioral self-explanations.

Dialogue Breakdown and Confusion between Elements and Category

Avoiding dialogue breakdown is important in HRI and HAI. In this paper, we investigated why dialogue breakdown occurs. We hypothesize that confusion between elements and category is one important reason: "elements" refers to individual strange utterances, while "category" refers to the overall ability and performance of a robot. We hypothesized that a user who confuses elements and category will tend to lose the motivation to continue interacting with a robot or agent when the robot or agent makes a strange utterance. To verify this hypothesis, we conducted an experiment. We asked participants to perform a memory task cited from previous work. We separated them into two groups, a no-confusion group and a confusion group, and showed them a movie in which a robot made a mistake on a math problem. Afterwards, a questionnaire asked about their impression of the robot. We conducted a t-test between the two groups for each question. As a result, the participants who confused elements and category tended to brand the robot as having low ability and performance when it made a mistake, whereas those who were not confused showed neither reduced trust in the robot nor reduced motivation to continue interacting when the robot made a mistake. These results support our hypothesis.

Psychosocial Impact of Collaborating with an Autonomous Mobile Robot: Results of an Exploratory Case Study

The study was part of a confidential pilot project concerning the introduction of Autonomous Mobile Robots in an order picking process. The study assesses the impact of working with an AMR on logistics workers (hereinafter referred to as 'operators'). Two research questions were investigated. First, does working with an AMR lead to increased psychosocial workload? And second, what is the perception of the operators working with an AMR? Very little research, outside a lab context, has been done so far on the impact of working with an AMR. This study contributes to the understanding of the impact of working with robots, outside the lab in an industrial setting. The results of this study can be summarized as follows: 1) working with an AMR does not lead to extra psychosocial workload and 2) there is a positive perception of and attitude towards working with an AMR.

Remote You, Haru and Me: Exploring Social Interaction in Telepresence Gaming With a Robotic Agent

Novel forms of two-player telegame interaction might extend and enhance social connection between physically separated persons. We examine the potential of a Rock-Paper-Scissors game conveyed via an embodied telepresence agent. We compare a game interaction with an autonomous robot and a game interaction with a teleoperated version of the same robot. Both systems are equipped with a perception module that processes and recognizes the hand movement of the human players colocated with the robot. In the classic interaction, the robot acts as the opponent player. In the telegame setting, the robot represents and mirrors the actions of the remote human player as the opponent. We integrate the systems on the tabletop robot platform Haru and evaluate user impressions with respect to game experience and robot sociality. Results show that the telegame is perceived more positively, indicating its potential for physically distant, but socially enhanced interaction in the future.

Focusing on the Vulnerabilities of Robots through Expert Interviews for Trust in Human-Robot Interaction

The practical value of trust in human-robot interaction (HRI) has been to strengthen human long-term collaboration and interaction with robots. While much work focuses on determining the various factors influencing trust in HRI, vulnerability as a precondition of trust has not yet been explored from a robot-centered perspective. Based on eight semi-structured interviews with experts, I set out to identify robot vulnerabilities and then present a systematic overview that resulted in a total of 13 categories grouped into four different themes. In the discussion, I specifically focus on how the experts interpreted the notion of vulnerability as it would relate to robots and how malicious human behavior can be problematic when aiming to ensure mutual trust in HRI.

A Wizard of Oz Approach to Robotic Therapy for Older Adults With Depressive Symptoms

Older adults with late-life depression often suffer from cognitive symptoms, such as dementia. This patient group is not prioritised for psychotherapy and is therefore often medicated with antidepressants. However, over the last 20 years, the evidence base for psychotherapy has increased, and one promising area is technology-based psychotherapy. Investigations of the possibilities in this area are also motivated by the Covid-19 pandemic, during which many older adults are isolated, making it impossible for them to meet with a therapist. Therefore, we have developed a Wizard of Oz system allowing a human therapist to control a humanoid robot through a graphical user interface, including natural speech for natural conversations, which enables the robot to be stationed in, for example, a care home. For future research, we will conduct user-centered studies with both therapists and older adults to further develop the system.

Co-creation as a Facilitator for Co-regulation in Child-Robot Interaction

While interacting with a social robot, children have a need to express themselves and to have their expressions acknowledged by the robot, a need that often goes unaddressed due to the robot's limitations in understanding the expressions of children. To keep the child-robot interaction manageable, the robot takes control, undermining children's ability to co-regulate the interaction. Co-regulation is important for having a fulfilling social interaction. We developed a co-creation activity that aims to facilitate more co-regulation. Children are enabled to create sound effects, gestures, and light shows for the robot to use during their conversation. Results from a user study (N = 59 school children, 7-11 y.o.) showed that the co-creation activity successfully facilitated co-regulation by improving children's agency. Co-creation furthermore increased children's acceptance of the robot.

Robot Vitals and Robot Health: An Intuitive Approach to Quantifying and Communicating Predicted Robot Performance Degradation in Human-Robot Teams

In this work we introduce the concept of Robot Vitals and propose a framework for systematically quantifying the performance degradation experienced by a robot. A performance indicator or parameter can be called a Robot Vital if it can be consistently correlated with a robot's failure, faulty behaviour or malfunction. Robot Health can then be quantified as the entropy of observing a set of vitals. Robot Vitals and Robot Health are intuitive ways to quantify a robot's ability to function autonomously. Robots programmed with multiple levels of autonomy (LOA) do not scale well when a human is in charge of regulating the LOAs. Artificial agents can use robot vitals to assist operators with LOA switches that fix field-repairable, non-terminal performance degradation in mobile robots. Robot health can also be used to aid a tele-operator's judgement and promote explainability (e.g. via visual cues), thereby reducing operator workload while promoting trust and engagement with the system. In multi-robot systems, agents can use robot health to prioritise the robots most in need of tele-operator attention. The vitals proposed in this paper are: rate of change of signal strength; sliding-window average of the difference between expected and actual robot velocity; robot acceleration; rate of increase in area coverage; and localisation error.
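
As a rough illustration of this idea, the following Python sketch treats each normalized vital as the probability of observing degradation on that indicator and scores Robot Health as the Shannon entropy over the set. The vital names, values, and the exact entropy formulation are illustrative assumptions, not taken from the paper.

    import math

    def robot_health(vitals):
        # Illustrative Robot Health score: total Shannon entropy over a set
        # of vitals, each normalized to [0, 1] and treated as the probability
        # of observing degraded performance on that indicator (assumption).
        entropy = 0.0
        for p in vitals.values():
            if 0.0 < p < 1.0:  # vitals at exactly 0 or 1 carry no uncertainty
                entropy -= p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)
        return entropy

    # Hypothetical readings, loosely following the vitals listed above.
    vitals = {
        "signal_strength_rate": 0.10,
        "velocity_mismatch": 0.25,
        "acceleration": 0.05,
        "coverage_rate": 0.30,
        "localisation_error": 0.15,
    }
    print(robot_health(vitals))  # higher entropy = less certain health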

Towards Web-based Environments for Prototyping Social Robot Applications

Interest in web-based human-robot interaction (HRI) has grown since it was initially introduced. Similarly, interest in social signals such as voice and facial expressions continues to expand. More recently, researchers have also gained interest in the feasibility of using neurophysiological information to enhance HRI. While both social signals and web-based HRI have seen growing interest, there is limited work exploring potential advances at the intersection of these two areas. This paper describes our efforts to investigate this intersection by integrating: 1) web-based social signal interpretation, 2) hybrid block/text scripting interfaces, and 3) ROS integration via rosbridge. We further discuss potential advantages and current challenges concerning web-based platforms for prototyping social robotic applications.

Transparency in HRI: Trust and Decision Making in the Face of Robot Errors

Robots are rapidly gaining acceptance, with the general public, industry and researchers starting to understand their utility, for example for delivery to homes or in hospitals. However, it is key to understand how to instil the appropriate amount of trust in the user. One aspect of a trustworthy system is its ability to explain actions and be transparent, especially in the face of potentially serious errors. Here, we study the various aspects of transparency of interaction and their effect in a scenario where a robot is performing triage when a suspected Covid-19 patient arrives at a hospital. Our findings consolidate prior work showing a main effect of robot errors on trust, but also show that this effect depends on the level of transparency. Furthermore, our findings indicate that high interaction transparency leads to participants making better-informed decisions on their health based on their interaction. Such findings on transparency could inform interaction design and thus lead to greater adoption of robots in key areas, such as health and well-being.

The Effect of Robot-Guided Meditation on Intra-Brain EEG Phase Synchronization

In this study, we examined the effect of mindfulness meditation facilitated by human-robot interaction (HRI) on brain activity. EEG signals were collected from two groups of participants: a Meditation group, who practiced mindfulness meditation with a social robot, and a Control group, who only listened to a lecture by the robot. We compared brain functional connectivity between the two groups by computing EEG phase synchrony during the HRI session. The results revealed significantly lower global phase synchrony in the beta frequency band for the Meditation group, which has previously been reported as an indication of reduced cognitive processing and of achieving the mindful state in experienced meditators. Our findings demonstrate the potential of Socially-Assistive Robots (SAR) for integration into mental healthcare and optimization of intervention effects. Additionally, our study puts forward new measures for objective monitoring of the HRI effect on users' neurophysiological responses.

Comparing Strategies for Robot Communication of Role-Grounded Moral Norms

Because robots are perceived as moral agents, they hold significant persuasive power over humans. It is thus crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics, in order to test the effectiveness of these two strategies in encouraging compliance with norms grounded in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language and (2) opportunities for moral practice following robots' use of moral language may facilitate role-centered moral cultivation.

Comparing a Robotic Storyteller versus Audio Book with Integration of Sound Effects and Background Music

Despite newly developed forms of narration, listening to stories remains a popular leisure activity. However, storytelling is also evolving. With social robots taking on activities initially conducted by humans, they are emerging as a new storytelling medium. Robots are able to extend human storytelling by adding sounds to the narrative to enhance recipients' emotional reaction to and transportation into the story. To this end, we conducted an online study comparing a traditional audio book to a robotic storyteller, focusing on the influence of adding sound effects and music in both storytelling approaches. Results show that neither emotion induction nor transportation was significantly affected by storytelling medium or additional sounds, but descriptive values indicate a trend towards higher emotion induction and transportation when adding sound to robotic storytelling relative to the audio book condition without additional sounds. Live replications based on this preliminary study might reveal less ambiguous findings.

Your New Friend NAO vs. Robot No. 783 - Effects of Personal or Impersonal Framing in a Robotic Storytelling Use Case

The users' positive or negative attitude towards robots is a crucial factor in human-robot interaction (HRI). We conducted a preliminary online study comparing a storytelling scenario with either personal or impersonal framing to investigate the effects of the robot's self-introduction on users' attitude towards and likeability of the robot, and on their transportation into and memory of the story told. No significant group differences were found, but transportation correlated significantly with attitude, likeability, and memory. Thus, transportation might be a promising factor which, when increased, can positively affect HRI.

Dynamic Path Visualization for Human-Robot Collaboration

Augmented reality technology can enable robots to visualize their future actions, giving users crucial information to avoid collisions and other conflicting actions. Although a robot's entire action plan could be visualized (such as the output of a navigational planner), how far into the future it is appropriate to display the robot's plan is unknown. We developed a dynamic path visualizer that projects the robot's motion intent at varying lengths depending on the complexity of the upcoming path. We tested our approach in a virtual game where participants were tasked to collect and deliver gems to a robot that moves randomly towards a grid of markers in a confined area. Preliminary results on a small sample size indicate no significant effect on task performance; however, open-ended responses reveal participants' preference for visuals that show longer path projections.

When Oracles Go Wrong: Using Preferences as a Means to Explore

When a robot is deployed to learn a new task in a "real-word" environment, there may be multiple teachers and therefore multiple sources of feedback. Furthermore, there may be multiple optimal solutions for a given task and teachers may have preferences among those various solutions. We present an Interactive Reinforcement Learning (I-RL) algorithm, Multi-Teacher Activated Policy Shaping (M-TAPS), which addresses the problem of learning from multiple teachers and leverages differences between them as a means to explore the environment. We show that this algorithm can significantly increase an agent's robustness to the environment and quickly adopt to a teacher's preferences. Finally, we present a formal model for comparing human teachers and constructed oracle teachers and the way that they provide feedback to a robot.
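
M-TAPS itself is not specified in this abstract, but it builds on policy shaping. The sketch below illustrates the underlying idea under two stated assumptions: the Griffith et al. (2013) formulation of per-action optimality from feedback counts, and a simple multiplicative fusion of per-teacher distributions; neither detail is claimed to match the paper.

    def feedback_policy(delta, consistency):
        # Probability that each action is optimal given net feedback counts
        # delta (positive minus negative) from one teacher assumed to give
        # consistent feedback with probability C (policy-shaping style).
        c = consistency
        raw = [c ** d / (c ** d + (1 - c) ** d) for d in delta]
        total = sum(raw)
        return [r / total for r in raw]

    def combine_teachers(per_teacher):
        # One simple way a multi-teacher variant might fuse advice:
        # multiply the per-teacher distributions and renormalize (assumption).
        combined = [1.0] * len(per_teacher[0])
        for probs in per_teacher:
            combined = [c * p for c, p in zip(combined, probs)]
        total = sum(combined)
        return [c / total for c in combined]

    # Two hypothetical teachers disagree about actions 0 and 1.
    t1 = feedback_policy([3, -1, 0], consistency=0.9)
    t2 = feedback_policy([-2, 4, 0], consistency=0.7)
    print(combine_teachers([t1, t2]))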

Effects of Gaze and Speech in Human-Robot Medical Interactions

Especially in medical interactions with robots, estimating people's level of trust is critical. The first human clinical trials with a blood sampling robot have shown promising benefits for both patients and healthcare workers, as a robot provides higher accuracy and quicker results. An automated solution for blood drawing is therefore preferable, but it is unclear under what circumstances people would be willing to use a blood sampling robot. This study therefore investigates people's perception of such a robot, and whether speech and gaze have a positive effect on their willingness to interact with it. A survey was conducted which shows that the perception of the blood sampling robot was more positive when the robot provided transparency through speech, and more negative when the robot only displayed eye gaze (without speech). The results also suggest that there is generally a positive attitude towards, and willingness to use, a blood sampling robot, at least in the population investigated.

Improving Human-Robot Collaboration Efficiency and Robustness through Non-Verbal Emotional Communication

Emotional expression plays a very important role in human-robot interaction and can greatly improve the quality of interaction between humans and robots. In the case of non-verbal communication, emotional expression can greatly shorten the social distance between humans and robots. However, in practical HRI applications, efficiency and robustness are also important considerations. In this paper, we demonstrate the impact of implicit non-verbal emotional communication on the efficiency and robustness of human-robot teamwork. We designed an interactive experiment using a collaborative Sawyer robot in which the robot played a tic-tac-toe game with a person, and we judged whether there was a positive or negative effect by comparing the effects of various emotional factors. Our experiments demonstrated that emotional expression through non-verbal communication has a positive impact on the efficiency and robustness of a collaborative human-robot task.

Exploring Non-verbal Gaze Behavior in Groups Mediated by an Adaptive Robot

In this study, non-verbal behavior in diversely-skilled groups was observed during a collaborative educational game with a humanoid robot. Research has indicated that a mediating robot gaze can equalize the verbal contributions from each differently-skilled participant, promoting inclusion and learning. The experiment results were further analyzed, extending to non-verbal effects. The initial results from two experiments under different robot gaze behaviors indicate that modifications in the robot's gaze can lead to different gaze behavior in participants. It was observed that a gaze-mediating behavior in the robot led to increased gaze-change frequency among participants as well as more time spent mirroring the robot's gaze. These initial results show promise for how a robot can balance attention in a collaborative learning environment.

Estimating Levels of Engagement for Social Human-Robot Interaction using Legendre Memory Units

In this study, we examine whether the data requirements associated with training a system to recognize multiple "levels" of an internal state can be reduced by training systems on the "extremes" in a way that allows them to estimate "intermediate" classes as falling in between the trained extremes. Specifically, this study explores whether a novel recurrent neural network, the Legendre Delay Network, added as a pre-processing step to a Multi-Layer Perceptron, produces an output which can be used to separate an untrained intermediate class of task engagement from the trained extreme classes. The results showed that identifying untrained classes after training on the extremes is feasible, particularly when using the Legendre Delay Network.
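
For context, here is a minimal sketch of the Legendre Delay Network's standard state-space form (after Voelker et al., 2019), used as a feature extractor whose states could feed the downstream Multi-Layer Perceptron; the state dimension, window length, and Euler discretization are illustrative choices, not the study's settings.

    import numpy as np

    def ldn_matrices(q):
        # Continuous-time LDN/LMU state-space matrices:
        # theta * x' = A x + B u, where x approximates a sliding window of u.
        A = np.zeros((q, q))
        B = np.zeros((q, 1))
        for i in range(q):
            B[i] = (2 * i + 1) * (-1) ** i
            for j in range(q):
                A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))
        return A, B

    def ldn_features(signal, q=6, theta=1.0, dt=0.01):
        # Run a 1-D signal through the delay network with simple Euler steps;
        # the returned state trajectory could feed an MLP classifier.
        A, B = ldn_matrices(q)
        x = np.zeros((q, 1))
        states = []
        for u in signal:
            x = x + (dt / theta) * (A @ x + B * u)
            states.append(x.ravel().copy())
        return np.array(states)

    feats = ldn_features(np.sin(np.linspace(0, 10, 1000)))
    print(feats.shape)  # (1000, 6) memory features per time step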

Social Robots to Support Child and Family Care: A Dutch Use Case

Child and family care professionals in the Netherlands are facing challenges including high workloads. Technological support could be beneficial in this context, e.g. for the education, motivation and guidance of children. For example, the Dutch Child and Family Center explores the possibilities of social robot assistance in its regular care pathways. To study the use of social robots in this broad context, we started by drafting three example scenarios based on the expertise of child care professionals. During an exploration phase, we are identifying key design and application requirements through focus groups with child care professionals and parents. In a later testing phase, we will evaluate these requirements via scenario-based design and child-robot interaction experiments in real-world contexts, to further shape the application of social robots in various child and family care settings.

Perceptions of Conversational Group Membership based on Robots' Spatial Positioning: Effects of Embodiment

Robots' spatial positioning is a useful communication modality in social interactions. For example, in the context of group conversations, certain types of positioning signal membership to the group interaction. How does robot embodiment influence these perceptions? To investigate this question, we conducted an online study in which participants observed renderings of several robots in a social environment, and judged whether the robots were positioned to take part in a group conversation with other humans in the scene. Our results suggest that robot embodiment can influence perceptions of conversational group membership. An important factor to consider in this regard is whether robot embodiment leads to a discernible orientation for the agent.

On the Influence of Autonomy and Transparency on Blame and Credit in Flawed Human-Robot Collaboration

The collaboration between humans and autonomous AI-driven robots in industrial contexts is a promising vision that will have an impact on the sociotechnical system. Taking research from the field of human teamwork as guiding principles, as well as results from human-robot collaboration studies, this study addresses open questions regarding the design and impact of communicative transparency and behavioral autonomy in a human-robot collaboration. In an experimental approach, we tested whether an AI narrative and communication panels of a robot arm trigger the attribution of more human-like traits and expectations, going along with a changed attribution of blame and failure in a flawed collaboration.

Speed and Speech Impact on the Usage of a Hand Sanitizer Robot

Hand hygiene has become an important part of our lives. In order to encourage people to sanitize their hands, a hand sanitizer robot was developed and tested. It drove around in the main entrance hall of a university and reminded people to use hand sanitizer using speech. An initial pilot study (N=196) using ethnographic observation and interviews revealed that the robot's speed and its ability to speak may potentially influence people's willingness to use the robot. In the main study (N=351), the robot was therefore tested with two variables, speech and speed, to find out how the robot is most efficient at engaging participants. An efficient robot is a robot that people use. In particular, we studied to what extent the speed of the hand sanitizer robot impacts whether people use it and to what extent the way the robot addresses people verbally affects whether they use it. These research questions were addressed in four conditions. The results show that a robot that moves at a slower speed is regarded as more trustworthy, and that friendly speech can be useful to remind people to use hand sanitizer.

Improving Remote Environment Visualization through 360 6DoF Multi-sensor Fusion for VR Telerobotics

Teleoperation requires both a robust set of controls and the right balance of sensory data to allow task completion without overwhelming the user. Previous work has mostly focused on using depth cameras, which fail to provide full situational awareness. We have developed a teleoperation system that integrates 360° camera data in addition to the more standard depth data. We infer depth from the 360° camera data and use it for rendering in VR, which allows six-degree-of-freedom viewing. We use a virtual gantry control mechanism and also provide a menu with which the user can choose the rendering schemes used to render the robot's environment. We hypothesize that this approach will increase the speed and accuracy with which the user can teleoperate the robot.

Is it Pointless? Modeling and Evaluation of Category Transitions of Spatial Gestures

To enable robots to select between different types of nonverbal behavior when accompanying spatial language, we must first understand the factors that guide human selection between such behaviors. In this work, we argue that to enable appropriate spatial gesture selection, HRI researchers must answer four questions: (1) What are the factors that determine the form of gesture used to accompany spatial language? (2) What parameters of these factors cause speakers to switch between these categories? (3) How do the parameterizations of these factors inform the performance of gestures within these categories? and (4) How does human generation of gestures differ from human expectations of how robots should generate such gestures? In this work, we consider the first three questions and make two key contributions: (1) a human-human interaction experiment investigating how human gestures transition between deictic and non-deictic forms under changes in contextual factors, and (2) a model of gesture category transition informed by the results of this experiment.

Exoskeletons in the Supermarket: Influences of Comfort, Strain Relief and Task-Technology Fit on Retail Workers' Post-Trial Intention to Use

Many supermarket employees, such as shelf and warehouse workers, suffer from musculoskeletal disorders. Exoskeletons, that is, physical assistance systems that are worn on the body, could help. The conditions under which workers would be willing to use this new wearable technology, however, remain largely unclear. In this exploratory field study, 58 supermarket employees tested one or more out of five passive exoskeletons during their regular work. Perceived wearing comfort, extent of strain relief, and task-technology fit (i.e., how well the exoskeleton fit their current task requirements) were found to correlate significantly with post-trial intention to use. Soft exoskeletons were rated as preferable to rigid ones. Trying one of the latter also resulted in lower intention to use, which was revealed to be fully mediated by a better task-technology fit being ascribed to soft exoskeletons. Practical relevance of the results, study limitations and future research directions are discussed.

Educational Robotics and Mediated Transfer: Transitioning from Tangible Tile-based Programming, to Visual Block-based Programming

In this paper we present the results from a study in which participants (n=26, aged 6-9) were exposed to two different educational robotics (ER) systems, one based on tangible tile-based programming and one on visual block-based programming. During the transition from the first to the second system, mediated transfer of knowledge regarding computational concepts was observed. Furthermore, the participants' computational thinking (CT) skills were observed to improve throughout the study, across both ER systems.

Playing the Blame Game with Robots

Recent research shows, somewhat astonishingly, that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]-[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.

Get This!? Mixed Reality Improves Robot Communication Regardless of Mental Workload

We present the first experiment analyzing the effectiveness of robot-generated mixed reality gestures using real robotic and mixed reality hardware. Our findings demonstrate how these gestures increase user effectiveness by decreasing user response time during visual search tasks, and show that robots can safely pair longer, more natural referring expressions with mixed reality gestures without worrying about cognitively overloading their interlocutors.

Not in my house!: Children Playing an Online Game with Robots Show Low Trust and Closeness with Ingroup Robots

This paper presents preliminary research on whether children will accept a robot as part of their ingroup, and on how a robot's group membership affects trust, closeness, and social support. Trust is important in human-robot interactions because it affects whether people will follow robots' advice. In this study, we randomly assigned 11- and 12-year-old participants to a condition such that participants were either on a team with the robot (ingroup) or were opponents of the robot (outgroup) for an online game. Thus far, we have eight participants in the ingroup condition. Our preliminary results showed that children had a low level of trust, closeness, and social support with the robot. Participants had a much more negative response than we anticipated. We speculate that there will be a more positive response in an in-person setting rather than a remote one.

Initiating Human-Robot Interactions Using Incremental Speech Adaptation

In this paper, we present a study in which a robot initiates interactions with people passing by in an in-the-wild scenario. The robot dynamically adapts the loudness of its voice to the distance of the respective person approached, thus indicating who it is talking to. It furthermore tracks people based on body orientation and eye gaze, and autonomously adapts the text produced based on people's distance. Our study shows that the adaptation of the loudness of its voice is perceived as personalization by the participants and that the likelihood that they stop by and interact with the robot increases when the robot incrementally adjusts its behavior.
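
As a toy illustration of distance-adaptive loudness, the sketch below linearly maps the addressee's distance to a speech volume; the constants and the linear form are assumptions for illustration, since the abstract does not specify the study's actual mapping.

    def adapted_volume(distance_m, v_min=0.3, v_max=1.0,
                       d_near=0.5, d_far=4.0):
        # Louder speech for people farther away, clamped to a usable range.
        # All constants here are illustrative, not the study's values.
        if distance_m <= d_near:
            return v_min
        if distance_m >= d_far:
            return v_max
        frac = (distance_m - d_near) / (d_far - d_near)
        return v_min + frac * (v_max - v_min)

    for d in (0.4, 1.0, 2.5, 5.0):
        print(d, round(adapted_volume(d), 2))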

BRILLO: A Robotic Architecture for Personalised Long-lasting Interactions in a Bartending Domain

The use of robots for the automation of the supply of food and beverages is a commercially attractive and modern application of robotic technologies. Such innovative technologies are seen as helping to renew the image of a service and, in this way, stimulate people's curiosity. The novelty effect linked to the experience of a new technology, however, has a very limited duration, and it is not suitable for guaranteeing user loyalty. Consequently, the need for continuous renewal to keep the commercial proposal attractive becomes very expensive. In this paper, we present the architecture of a new project, called Bartending Robot for Interactive Long Lasting Operations (BRILLO), which aims to create a long-lasting operational robotic system that is able to have personalised multi-user interactions and to work as a bartender by performing different tasks according to the users' requests and preferences.

Active Feedback Learning with Rich Feedback

In this paper, we proposed rich feedback, which combines multiple types of feedback to allow human teachers to provide a variety of useful information to the learning agent, and modified Policy Shaping to accumulate the effects of rich feedback. We then designed the ALPHA framework to actively request rich feedback and further developed it to use deep learning. The experimental results showed that ALPHA with rich feedback can greatly improve learning and quickly lead the learning agent to the optimal solution.

The Role of a Social Robot in Behavior Change Coaching

This experimental study evaluates the effects of coaching people towards behavior change with a simulation of the social robot Haru. In order to support participants in their attempts to change their behavior and to create a new habit, a coaching session was created based on the 'Tiny Habits' method developed by BJ Fogg [1]. This coaching session was presented to a total of 41 participants in three conditions. In Condition 1, the dialogue between the participant and the simulated robot was interspersed with emotional expressions and behaviors such as dancing, bowing and vocalizing. Condition 2 used the same set-up with the robot simulator and provided participants with the same guidance, using the same synthesized voice as Condition 1, but without any emotional elements. The third condition was created to evaluate the effect of using a robot as a session coach by comparing the two conditions with Haru to a condition in which the same content was presented without a robot: the same script as in the two robot conditions was presented as text on a website, divided into sections reflecting the human-robot dialogue. Data from a post-session questionnaire were supplemented by another questionnaire which was administered 10 days later and focused on habit retention. Participants from the session with the robot that used emotional behaviors felt significantly more confident that they would incorporate their behavior change into their lives and thought differently about behavior change. Participants in the sessions with a robot simulation also had a significantly higher retention rate of their behavior change, thus revealing a positive effect of the social robot.

Collaboration Education Suite for Children with ASD

Our project explores whether multiple robots can be deployed as a therapeutic tool to help children with Autism Spectrum Disorder (ASD) practice social interaction and collaboration. We implemented a human-robot interaction (HRI) solution using two different assistive social robots; the NAO humanoid robot and the Cozmo robot. We have several modules available through an iOS app that we developed for children with ASD to engage with these robots in scenarios focused on social interaction and collaboration. We are also in the process of refining an intelligent web interface so that we, as well as therapists and caregivers, can access relevant data collected by our HRI software. The web interface is designed to help caregivers and therapists, along with our team, monitor their child's or patient's progress, and understand which methods work best. The overall goal is to conduct an HRI user study with children with ASD to test how our software solution performs.

A VR Teleoperation Suite with Manipulation Assist

Advances in the capabilities of technologies like virtual reality (VR), and their rapid proliferation at consumer price points, have made it much easier to integrate them into existing robotic frameworks. VR interfaces are promising for robotics for several reasons, including that they may be suitable for resolving many of the human performance issues associated with traditional robot teleoperation interfaces used for robot manipulation. In this systems-focused paper, we introduce and document the development of a VR-based robot control paradigm with a manipulation assist control algorithm, which allows human operators to specify larger manipulation goals while leaving the low-level details of positioning, manipulation and grasping to the robot itself. For the community, we also describe system design challenges encountered in our progress thus far.

Feeling Safe: A Study on Trust with an Interactive Robotic Art Installation

In designing collaborative robots, it is of utmost importance to do so with safety in mind. Most current commercial collaborative robots have numerous built-in safety features to minimize danger to humans. When such robots are placed in public settings, not only the actual safety mechanisms but also the perception of safety plays a crucial role in the success of their deployment. An interactive robotic art installation is a useful site to explore the perceived safety of a robot. This article presents the initial results of a study on the impact robot faces have on perceived safety in an interactive setting with untrained participants.

A Robot-based Gait Training System for Post-Stroke Rehabilitation

As the prevalence of stroke survivors increases, the demand for rehabilitative services will rise. While there has been considerable development in robotics to address this need, few systems consider individual differences in ability, interests, and learning. Robots need to provide personalized interactions and feedback to increase engagement, enhance human motor learning, and ultimately, improve treatment outcomes. In this paper, we present 1) our design process of an embodied, interactive robotic system for post-stroke rehabilitation, 2) design considerations for stroke rehabilitation technology and 3) a prototype to explore how feedback mechanisms and modalities affect human motor learning. The objective of our work is to improve motor rehabilitation outcomes and supplement healthcare providers by reducing the physical and cognitive demands of administering rehabilitation. We hope our work inspires development of human-centered robots to enhance recovery and improve quality of life for stroke survivors.

We Can Do Better! An Initial Survey Highlighting an Opportunity for More HRI Work on Loneliness

Although research has demonstrated the potential for social robots to positively impact a person's mood and provide comfort, very little research has yet focused on social robots supporting people living with loneliness. Much of the relevant human-robot interaction work focuses on more serious situations such as living with dementia, or on related areas such as stress, anxiety, or depression, and these works generally target the older adult demographic. Loneliness, however, can affect anyone of any health and age. In this paper we present a summary review of the current research on loneliness and social robots, highlighting the gaps in research and the potential opportunity for more work in the area.

Reducing Cognitive Workload in Telepresence Lunar-Martian Environments Through Audiovisual Feedback in Augmented Reality

Navigating through an unknown and perhaps resource-constrained environment, such as lunar terrain or a disaster region, can be both physically and cognitively exhausting. Difficulties during navigation can cost time, operational resources, or even a life. To this end, interaction with a robotic exploration system in lunar or Martian environments could be key to successful exploration extravehicular activities (X-EVA). Through the use of augmented reality (AR), we can afford an astronaut various capabilities. In particular, we focus on two: (1) the ability to obtain and display information on their current position, on important locations, and on essential objects in an augmented space; and (2) the ability to control an exploratory robot system or smart robotic tools using AR interfaces. We present our ongoing development of such AR robot control interfaces and the feedback system being implemented. This work extends the augmented reality robot navigation and audio spatial feedback components presented at the 2020 National Aeronautics and Space Administration (NASA) SUITS Challenge.

Towards a Learning Architecture to Support Social Scaffolding for an Artificially Intelligent Disability Assistant

Humanoid robots that can improve the joy and achievement of workers with intellectual and developmental disabilities (IDD) hold promise in light manufacturing settings. In this paper, we provide details of an architecture to support social scaffolding for workers with IDD and of our efforts to adapt it to learn. This architecture is developed for human-robot interaction using the Pepper robot and will support future improvements using machine learning. Additional recommendations based on past experimentation are given for future work.

Mental Synchronization in Human Task Demonstration: Implications for Robot Teaching and Learning

Communication is integral to knowledge transfer in human-human interaction. To inform effective knowledge transfer in human-robot interaction, we conducted an observational study to better understand how people use gaze and other backchannel signals to ground their mutual understanding of task-oriented instruction during learning interactions. Our results highlight qualitative and quantitative differences in how people exhibit and respond to gaze, depending on motivation and instructional context. The findings of this study inform future research that seeks to improve the efficacy and naturalness of robots as they communicate with people as both learners and instructors.

Can Robots Be Used to Encourage Social Distancing?

In this work, we explore whether robots can exert their persuasive influence to encourage others to follow new proxemic norms (i.e., COVID-19 social distancing guidelines). Our results suggest that social robots are not effective for this purpose, and, in fact, when some persuasive strategies are used, this approach might backfire due to novelty effects that encourage pedestrians to approach and cluster around such robots.

Developing a Data-Driven Categorical Taxonomy of Emotional Expressions in Real World Human Robot Interactions

Emotions are reactions that can be expressed through a variety of social signals. For example, anger can be expressed through a scowl, narrowed eyes, a long stare, or many other expressions. This complexity is problematic when attempting to recognize a human's expression in a human-robot interaction: categorical emotion models used in HRI typically use only a few prototypical classes, and do not cover the wide array of expressions in the wild. We propose a data-driven method towards increasing the number of known emotion classes present in human-robot interactions, to 28 classes or more. The method includes the use of automatic segmentation of video streams into short (<10s) videos, and annotation using the large set of widely-understood emojis as categories. In this work, we showcase our initial results using a large in-the-wild HRI dataset (UE-HRI), with 61 clips randomly sampled from the dataset, labeled with 28 different emojis. In particular, our results showed that the "skeptical" emoji was a common expression in our dataset, which is not often considered in typical emotion taxonomies. This is the first step in developing a rich taxonomy of emotional expressions that can be used in the future as labels for training machine learning models, towards more accurate perception of humans by robots.

First Interaction Assessment between a Social Robot and Children Diagnosed with Cerebral Palsy in a Rehabilitation Context

In healthcare settings, social robots have demonstrated positive effects on adherence to procedures and cognitive skills development. This paper explores the effects of a social robot during an introductory phase in Cerebral Palsy rehabilitation. A human-robot interface was deployed to promote interaction with the children through 10 activities, including a presentation stage and imitation games. The interface also aims to make the social robot easy for therapists to use, allowing them to control the robot's non-verbal and verbal gestures. The interaction was measured through joint attention, attitudes, and instruction following. A total of 10 children participated in this study. The results suggest that 80% of the participants have a joint attention rate of 70%, and that they largely accomplished the requests given by the robot. These preliminary findings show a positive effect of the robot on the children.

A Theoretical Framework for Large-Scale Human-Robot Interaction with Groups of Learning Agents

Recent advances in robot capabilities have led to a growing consensus that robots will eventually be deployed at scale across numerous application domains. An important open question is how humans and robots will adapt to one another over time. In this paper, we introduce the model-based Theoretical Human-Robot Scenarios (THuS) framework, capable of elucidating the interactions between large groups of humans and learning robots. We formally establish THuS, and consider its application to a human-robot variant of the n-player coordination game, demonstrating the power of the theoretical framework as a tool to qualitatively understand and quantitatively compare HRI scenarios that involve different agent types. We also discuss the framework's limitations and potential. Our work provides the HRI community with a versatile tool that permits first-cut insights into large-scale HRI scenarios that are too costly or challenging to carry out in simulations or in the real-world.

Virtual Shadow Rendering for Maintaining Situation Awareness in Proximal Human-Robot Teaming

One focus of augmented reality (AR) in robotics has been on enriching the interface for human-robot interaction. While such an interface is often made intuitive to interact with, it invariably imposes novel objects into the environment. In situations where the human already has a focus, such as in a human-robot teaming task, these objects can potentially overload our senses and lead to degraded teaming performance. In this paper, we propose using AR objects to solely augment natural objects to avoid disrupting our natural senses while adding critical information about the current situation. In particular, our case study focuses on addressing the limited field of view of humans by incorporating persistent virtual shadows of robots for maintaining situation awareness in proximal human-robot teaming tasks.

Implicit Communication Through Social Distancing: Can Social Navigation Communicate Social Norms?

Socially-aware navigation seeks to codify the rules of human-human and human-robot proxemics using formal planning algorithms. However, the rules that define these proxemic systems are highly sensitive to a variety of contextual factors. Recently, human proxemic norms have been heavily influenced by the COVID-19 pandemic, and the guidelines put forth by the CDC and WHO encouraging people to maintain six feet of social distance. In this paper, we present a study of observer perceptions of a robot that not only follows this social distancing norm, but also leverages it to implicitly communicate disapproval of norm-violating behavior. Our results show that people can relate a robot's social navigation behavior to COVID safety protocols, and view robots that navigate in this way as more socially intelligent and safe.

Toward a One-interaction Data-driven Guide: Putting Co-speech Gesture Evidence to Work for Ambiguous Route Instructions

While recent work on gesture synthesis in the agent and robot literature has treated gesture as co-speech and thus dependent on verbal utterances, we present evidence that gesture may leverage context (i.e. the navigational task) and is not solely dependent on the verbal utterance. This effect is particularly evident within ambiguous verbal utterances. Decoupling this dependency may allow future systems to synthesize gestures that clarify ambiguous verbal utterances, while enabling research into better understanding the semantics of gesture. We bring together evidence from our own experiences in this domain that allows us to see for the first time what kinds of end-to-end models need to be developed to synthesize gesture for one-shot interactions while still preserving user outcomes and allowing for ambiguous utterances by the robot. We discuss these issues within the context of "cardinal direction gesture plans", which represent instructions that refer to the actions the human must follow in the future.

Adaptive Humanoid Robots for Pain Management in Children

Accurate pain assessment and management is particularly important in children exposed to prolonged or repeated acute pain, including procedural pain, because of the elevated risk for adverse outcomes such as traumatic medical stress, intensified pain response to subsequent pain, and the development of chronic pain. Our work in progress aims to support pain management in children by developing intelligent, adaptive humanoid robots as a multi-modal non-pharmacological intervention. We extend the interactive capabilities of Nao humanoid robots by using the camera and microphone to assess pain and emotion in children undergoing procedural treatment, combining detection models for facial expression and voice quality, and adapting the robot's verbal and non-verbal interactive responses accordingly for optimal distraction through adaptive behavioral models. By combining two different methods of obtaining emotion predictive probabilities, from facial expression data and stressed speech data, we predict an emotion label. This label is then used as the environment input to a reinforcement learning model, with the robot as the agent, which chooses the best action from a set of entertaining and distracting verbal and non-verbal actions to cheer up the child and distract them from the pain and fear of the medical procedure.
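
The pipeline described above, late fusion of facial and vocal emotion probabilities into a label that serves as the state of a reinforcement learner selecting distraction behaviors, might look roughly like the following sketch; the fusion weights, emotion and action names, and the bandit-style update are illustrative assumptions, not the authors' implementation.

    import random

    FACE_W, VOICE_W = 0.6, 0.4  # illustrative fusion weights (assumption)
    EMOTIONS = ["happy", "neutral", "distressed"]
    ACTIONS = ["dance", "joke", "song", "breathing_game"]

    def fuse_emotion(face_probs, voice_probs):
        # Late fusion of per-modality emotion probabilities into one label.
        fused = {e: FACE_W * face_probs[e] + VOICE_W * voice_probs[e]
                 for e in EMOTIONS}
        return max(fused, key=fused.get)

    # Tabular action values with the fused emotion label as the state.
    Q = {(e, a): 0.0 for e in EMOTIONS for a in ACTIONS}

    def choose_action(emotion, eps=0.2):
        # Epsilon-greedy choice of a distracting behavior for this state.
        if random.random() < eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(emotion, a)])

    def update(emotion, action, reward, alpha=0.1):
        # Bandit-style update from an observed reward, e.g. a measured
        # improvement in the child's emotional state (assumption).
        Q[(emotion, action)] += alpha * (reward - Q[(emotion, action)])

    face = {"happy": 0.1, "neutral": 0.2, "distressed": 0.7}
    voice = {"happy": 0.2, "neutral": 0.3, "distressed": 0.5}
    state = fuse_emotion(face, voice)
    action = choose_action(state)
    update(state, action, reward=1.0)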

Towards Deep Reasoning on Social Rules for Socially Aware Navigation

This work presents the ideation and preliminary results of using contextual information and information about the objects present in the scene to query the social navigation rules applicable to the sensed context. Prior work in Socially-Aware Navigation (SAN) shows its importance in human-robot interaction, as it improves the interaction quality, safety and comfort of the interacting partner. In this work, we are interested in the automatic detection of social rules in SAN, and we present the three major components of our method: a Convolutional Neural Network-based context classifier that can autonomously perceive contextual information using camera input; YOLO-based object detection to localize objects within a scene; and a knowledge base relating social rules to these concepts, queried using both the context and the objects detected in the scene. Our preliminary results suggest that our approach can observe an on-going interaction, given an image input, and use that information to query the social navigation rules required in that particular context.
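
A toy sketch of the querying step: given the context label from the classifier and the objects from the detector, look up the applicable rules. The rule entries and keys below are invented for illustration and are far simpler than the paper's actual knowledge base.

    # Invented (context, object) -> rules entries for illustration only.
    SOCIAL_RULES = {
        ("hospital", "bed"): ["lower speed", "keep extra distance"],
        ("hospital", "person"): ["avoid sudden motions", "yield right of way"],
        ("gallery", "person"): ["do not cross between viewer and artwork"],
    }

    def query_rules(context, detected_objects):
        # Collect rules triggered by the classified context and the objects
        # an object detector (e.g., YOLO) found in the scene.
        rules = []
        for obj in detected_objects:
            rules.extend(SOCIAL_RULES.get((context, obj), []))
        return rules

    print(query_rules("hospital", ["bed", "person", "cart"]))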

Perceived Social Intelligence as Evaluation of Socially-Aware Navigation

As human-robot interaction becomes more sophisticated, measuring the performance of a social robot is crucial to gauging the effectiveness of its behavior. However, social behavior does not necessarily have the strict performance metrics that other autonomous behavior can have. Indeed, when considering robot navigation, a socially-appropriate action may be one that is sub-optimal, resulting in longer paths and longer times to reach a goal. Instead, we can rely on subjective assessments of the robot's social performance by a participant in a robot interaction or by a bystander. In this paper, we use the newly-validated Perceived Social Intelligence (PSI) scale to examine the perception of non-humanoid robots in non-verbal social scenarios. We show that there are significant differences between the perceived social intelligence of robots exhibiting SAN behavior and those using a traditional navigation planner in scenarios such as waiting in a queue and group behavior.

It's Food Fight! Designing the Chef's Hat Card Game for Affective-Aware HRI

This paper describes the design of an interactive game between humans and a robot that makes it possible to observe, analyze, and model competitive strategies and affective interactions, with the aim of dynamically generating appropriate responses or initiations by a robot. We applied an iterative design process that used several pilot evaluations to define the requirements for the game, with a theme, mechanics, and rules that motivate a choice between competition and cooperation and provoke emotional reactions even across subsequent games. The game is also designed to be easily understood by humans and unambiguously interpreted by machines. Overall, we aim to make the Chef's Hat card game a standard platform for the development of cooperative/competitive and emotionally aware agents, and to enable embodied interaction between multiple humans and robots.

Anthropomorphize me!: Effects of Robot Gender on Listeners' Perception of the Social Robot NAO in a Storytelling Use Case

Social robots have started taking on storytelling, an age-old human tradition. However, the narrator's voice is central in storytelling, and a robot cannot match the capabilities of human voice modulation, which might affect the listener's perception of the robot. Using a robot and gendered voice as a medium for storytelling, we take a first step towards identifying effects of the narrator's voice on anthropomorphism. We examine the robot's perceived anthropomorphism and the influence of its voice (female, male, or neutral) on recipients' attitude towards the robotic storyteller, concerning gender and cross-gender effects. In addition, transportation, indicating the quality of storytelling, is investigated. We found no significant effects, either for attitudes toward the robot or for transportation. Our gender-based voice manipulation did not affect the storytelling process. A lack of anthropomorphism of the robot may explain these findings and should be investigated in further studies.

Identification and Engagement of Passive Subjects in Multiparty Conversations by a Humanoid Robot

In this work, we present a novel human-robot interaction (HRI) method to detect and engage passive subjects in multiparty conversations using a humanoid robot. Voice activity detection and speaker localization are combined with facial recognition to detect and identify non-participating subjects. Once a non-participating individual is identified, the robot addresses the subject with a fact related to the topic of the conversation, with the goal of promoting the subject to join the conversation. To prompt sentences related to the topic of the conversation, automatic speech recognition and natural language processing techniques are employed. Preliminary experiments demonstrate that the method successfully identifies and engages passive subjects in a conversation.

Predicting Human Interactivity State from Surrounding Social Signals

This article presents the use of a multi-layer perceptron neural network to predict whether one person in a group is being interactive or not, based on the social signals of the other group members. Interactivity state (as manually annotated post-hoc) was correctly predicted with 60% accuracy when using the person's own social signals (self state), but with a higher accuracy of 65% when instead using social signals from the surrounding group members, excluding the target person (group members state). These results are preliminary due to the limits of our dataset (a micro-dataset of 6 participants, 3 of whom are in frame, playing the social game Mafia, with 734 frames). A post-hoc factor analysis reveals that facial action units and the distance between the target person and the group members are the key features to consider when estimating interactivity state from surrounding social peers.
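
A minimal sketch of this classification setup with scikit-learn, assuming per-frame feature vectors for the surrounding group members and binary interactivity labels; the data below are synthetic placeholders, since the micro-dataset itself is not reproduced here.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder features standing in for per-frame social signals of the
    # surrounding group members (e.g., facial action units, distances).
    X = rng.normal(size=(734, 20))    # 734 frames, 20 assumed features
    y = rng.integers(0, 2, size=734)  # interactive vs. not (annotated)

    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                        random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    # Random placeholders score near chance; the study reports ~65%
    # accuracy with real group-member signals.
    print(scores.mean())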

A Human-Aware Task Planner Explicitly Reasoning About Human and Robot Decision, Action and Reaction

The complexity of the tasks autonomous robots can tackle is constantly increasing, yet we seldom see robots interacting with humans to perform tasks. Indeed, humans are either asked for occasional help or given the lead on the whole task. We propose a human-aware task planning approach that allows the robot to plan for a task while also considering and emulating human decision, action, and reaction processes. Our approach is based on the exploration of multiple hierarchical task networks, handled differently depending on whether the agent is considered controllable (the robot) or uncontrollable (the human(s)). We present the rationale of our approach along with a formalization, and show its potential on an illustrative example involving the assembly of a table by a robot and a human.
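
To make the controllable/uncontrollable distinction concrete, here is a toy sketch (not the authors' planner): the planner chooses the robot's actions but must account for every possible human reaction. The domain, actions, and costs are invented for illustration.

def plan(state, depth=0):
    """Best worst-case cost to finish a toy assembly task."""
    if state["parts_left"] == 0:
        return 0.0
    if depth > 10:          # bound exploration of this toy search space
        return float("inf")
    best = float("inf")
    for robot_action, cost in [("attach_leg", 1.0), ("wait", 0.2)]:
        done = 1 if robot_action == "attach_leg" else 0
        # Uncontrollable human: plan against the worst over their reactions.
        worst = 0.0
        for human_action in ["helps", "is_busy"]:
            h_done = 1 if human_action == "helps" else 0
            nxt = {"parts_left": max(0, state["parts_left"] - done - h_done)}
            worst = max(worst, plan(nxt, depth + 1))
        best = min(best, cost + worst)  # controllable robot: pick the best action
    return best

print(plan({"parts_left": 4}))  # worst-case cost of assembling four parts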

Human-Robot Collision Avoidance Scheme for Industrial Settings Based on Injury Classification

The objective of this paper is to develop a real-time, depth-sensing surveillance method for factories where human operators complete tasks alongside collaborative robots. Traditionally, collision detection and analysis have been achieved with extra sensors attached to the robot to detect torque or current. In this study, a novel method using 3D object detection and raw 3D point cloud data is proposed to ensure safety by deriving the change in distance between humans and robots from depth maps. Because the approach avoids the delays associated with extra sensor-based data, both the likelihood and severity of collaborative robot-induced injuries are expected to decrease.
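
A minimal sketch of the underlying distance computation, assuming the depth maps have already been segmented into human and robot point clouds; the synthetic clouds and thresholds below are illustrative, not from the paper.

import numpy as np
from scipy.spatial import cKDTree

def min_separation(human_pts: np.ndarray, robot_pts: np.ndarray) -> float:
    """Minimum Euclidean distance between two (N, 3) point clouds, in metres."""
    tree = cKDTree(robot_pts)
    dists, _ = tree.query(human_pts, k=1)
    return float(dists.min())

# Synthetic clouds standing in for segmented depth-camera output.
rng = np.random.default_rng(1)
human = rng.normal(loc=[1.5, 0.0, 1.0], scale=0.2, size=(500, 3))
robot = rng.normal(loc=[0.0, 0.0, 1.0], scale=0.2, size=(800, 3))

d_prev, d_now = 1.20, min_separation(human, robot)   # metres
closing_speed = (d_prev - d_now) / 0.033             # per 30 Hz frame
if d_now < 0.5 or closing_speed > 1.0:               # thresholds are illustrative
    print("slow or stop the robot")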

Framing the Challenge of Social Interaction Modelling: One Case Study

The level of social cognition in an artificial agent depends on its ability to identify and interpret the world surrounding it [11]. Therefore, one design objective when creating an artificial system, such as a social robot, is to give it the ability to identify and interpret the transaction of social signals during social interactions. This is, however, an open research problem, often seen as a wicked problem [4]. One of the key difficulties is to properly frame the problem, as social interactions are complex, highly dynamic, and not easily formalised. In this short article, we present one situation (extracted from the small dataset of social interactions that we recorded) that illustrates in one snapshot the complexity of social situation assessment and might help frame the problem appropriately.

Designing Robots with Relationships in Mind: Suggesting Two Models of Human-socially Assistive Robot (SAR) Relationship

Relationships are crucial for human existence. People form relationships with other humans, pets, objects, and places. We argue that the nature of the human-SAR (socially assistive robot) relationship changes with the context of use and the level of interaction. Therefore, context and interaction must be incorporated into the design requirements. Earlier studies identified design-related preference differences among users, depending on their personal characteristics and on their role in specific contexts. To align a robot's visual qualities (VQs) with users' expectations, we propose two human-SAR relationship models: a context-based Situational model and an interaction-based Dynamic model. Together with an evaluation of VQs, these models aim to guide industrial designers in the design process of new SARs. An evaluation method and preliminary findings are presented.

SESSION: HRI Pioneers

Machine Learning Driven Musical Improvisation for Mechanomorphic Human-Robot Interaction

As industrial robots and social robots become prevalent in commercial and home settings, it is crucial to improve forms of communication with human collaborators and companions. In this work, I describe the use of musical improvisation to generate emotional musical prosody for improved human-robot interaction. The aim is a canny approach, in which robots perform in a mechanomorphic manner, improving collaboration opportunities with humans. I have collected a new 12-hour dataset and developed a Conditional Variational Autoencoder to generate new phrases. The generated phrases have been used to compare the impact of prosody on anthropomorphism, animacy, likeability, perceived intelligence, and trust. Future work will incorporate prosody into groups of robots and humans, using personality to drive emotional decisions and emotion contagion.
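
As a rough illustration of the generative component, a minimal Conditional Variational Autoencoder over fixed-length prosody contours might be structured as below. The architecture sizes, the eight-emotion label set, and the 64-frame pitch representation are assumptions, not the author's actual model.

import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ, Z, N_EMO = 64, 16, 8  # contour length, latent size, emotion classes (assumed)

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(SEQ + N_EMO, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, Z), nn.Linear(128, Z)
        self.dec = nn.Sequential(nn.Linear(Z + N_EMO, 128), nn.ReLU(),
                                 nn.Linear(128, SEQ))

    def forward(self, x, emo_onehot):
        # Encode the contour together with its emotion label.
        h = self.enc(torch.cat([x, emo_onehot], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(torch.cat([z, emo_onehot], dim=-1))
        return recon, mu, logvar

def loss_fn(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Generation: sample a latent vector and decode, conditioned on an emotion.
model = CVAE()
emo = F.one_hot(torch.tensor([3]), N_EMO).float()   # e.g., "happy" (assumed index)
with torch.no_grad():
    phrase = model.dec(torch.cat([torch.randn(1, Z), emo], dim=-1))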

Ritual Drones: Designing and Studying Critical Flying Companions

Through a critical design approach, I suggest new perspectives on social drones, particularly companion drones. Supported by philosophies such as slow technology, I propose the design of anti-solutionist ritual drones and the study of their impact on the lives of users, particularly in domestic contexts. I intend to fill some of the methodological gaps identified in the field, such as the lack of longitudinal studies of drone user experience, through ethnography and auto-ethnography. I propose a "Research through Design" process built around custom domestic probes for children and their families.

Behavior Adaptation for Robot-assisted Neurorehabilitation

11% of adults report experiencing cognitive decline which can impact memory, behavior, and physical abilities. Robots have great potential to support people with cognitive impairments, their caregivers, and clinicians by facilitating treatments such as cognitive neurorehabilitation. Personalizing these treatments to individual preferences and goals is critical to improving engagement and adherence, which helps improve treatment efficacy. In our work, we explore the efficacy of robot-assisted neurorehabilitation and aim to enable robots to adapt their behavior to people with cognitive impairments, a unique population whose preferences and abilities may change dramatically during treatment. Our work aims to enable more engaging and personalized interactions between people and robots, which can profoundly impact robot-assisted treatment, how people receive care, and improve their everyday lives.

Toward Hybrid Relational-Normative Models of Robot Cognition

Most previous work on enabling robots' moral competence has used norm-based systems of moral reasoning. However, a number of limitations to norm-based ethical theories have been widely acknowledged. These limitations may be addressed by role-based ethical theories, which have been extensively discussed in the philosophy of technology literature but have received little attention within robotics. My work proposes a hybrid role/norm-based model of robot cognitive processes including moral cognition.

Fostering Inclusive Activities in Mixed-visual Abilities Classrooms using Social Robots

Visually impaired children are increasingly educated in mainstream schools following an inclusive educational approach. However, even though visually impaired (VI) and sighted peers sit side by side in the classroom, previous research showed a lack of participation of VI children in classroom dynamics and group activities. That leads to reduced engagement between VI children and their sighted peers and a missed opportunity to value and explore class members' differences. Robots, owing to their physicality and their ability to perceive the world, behave socially, and act through a wide range of interactive modalities, can improve mixed-visual-ability children's access to group activities while fostering mutual understanding and social engagement. With this work, we aim to use social robots as facilitators to boost inclusive activities in mixed-visual-ability classrooms.

Human-Robot Co-Learning for Fluent Collaborations

A team develops competency through progressive mutual adaptation and learning, a process we call co-learning. In human teams, partners naturally adapt to each other and learn while collaborating. This is not self-evident in human-robot teams. There is a need for methods and models for describing and enabling co-learning in human-robot partnerships. The presented project aims to study human-robot co-learning as a process that stimulates fluent collaboration. First, we study how interactions develop in a context where a human and a robot both have to implicitly adapt to each other and learn a task to improve collaboration and performance. The observed interaction patterns and learning outcomes will be used (1) to investigate how to design learning interactions that support human-robot teams in sustaining implicitly learned behavior over time and context, and (2) to develop a mental model of the learning human partner, to investigate whether this supports the robot in its own learning as well as in adapting effectively to the human partner.

Interactive Reinforcement Learning from Imperfect Teachers

Robots can use information from people to improve learning speed or quality. However, people can have short attention spans and misunderstand tasks. Our work addresses these issues with algorithms for learning from inattentive teachers that take advantage of feedback when people are present, and an algorithm for learning from inaccurate teachers that estimates which state-action pairs receive incorrect feedback. These advances will enhance robots' ability to take advantage of imperfect feedback from human teachers.
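
A minimal tabular sketch of the two ideas, under strong simplifying assumptions: human feedback is folded in only when the teacher is detected as attentive, and a per state-action reliability estimate down-weights feedback suspected to be incorrect. The gating signal, reliability update, and toy environment are invented, not the authors' algorithms.

import random
from collections import defaultdict

Q = defaultdict(float)                   # tabular action values
reliability = defaultdict(lambda: 0.5)   # estimated P(feedback correct | s, a)
ALPHA, GAMMA, BETA = 0.1, 0.95, 0.05

def update(s, a, r_env, s_next, actions, feedback=None, attentive=False):
    td_target = r_env + GAMMA * max(Q[(s_next, b)] for b in actions)
    td_error = td_target - Q[(s, a)]
    Q[(s, a)] += ALPHA * td_error        # standard environment-driven update
    if attentive and feedback is not None:
        # Crude consistency check: does the feedback sign agree with the TD error?
        agrees = (feedback > 0) == (td_error >= 0)
        reliability[(s, a)] += BETA * (float(agrees) - reliability[(s, a)])
        Q[(s, a)] += ALPHA * reliability[(s, a)] * feedback  # weighted feedback

# Toy usage on a two-state chain environment.
actions, s = [0, 1], 0
for step in range(100):
    a = random.choice(actions)
    s_next = 1 if a == 1 else 0
    r = 1.0 if s_next == 1 else 0.0
    fb = random.choice([1.0, -1.0]) if random.random() < 0.3 else None
    update(s, a, r, s_next, actions, feedback=fb, attentive=(fb is not None))
    s = s_next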

Valuable Robotic Teammates: Algorithms That Reason About the Multiple Dimensions of Human-Robot Teamwork

As robots enter our homes and workplaces, one of the roles they will have to fulfill is that of a teammate. Prior approaches in human-robot teamwork enabled robots to reason about intent, decide when and how to help, and allocate tasks to achieve efficiency. However, these existing algorithms mostly focused on understanding intent and providing help, and assumed that teamwork is always present. Effective robotic teammates must be able to reason about the multi-dimensional aspects of teamwork. Working towards this challenge, we present empirical findings and an algorithm that enables robots to understand a human's intent, communicate their own intent, display effortful behavior, and provide help to optimize the team's task performance. Beyond task performance, people also care about being treated fairly. As future work, we propose an algorithm that reasons about both task performance and fairness to achieve lasting human-robot partnerships.

A Haptic Empathetic Robot Animal for Children with Autism

Children with autism and their families could greatly benefit from increased support resources. While robots are already being introduced into autism therapy and care, we propose that these robots could better understand the child's needs and provide enriched interaction if they utilize touch. We present our plans, both completed and ongoing, for a touch-perceiving robot companion for children with autism. We established and validated touch-perception requirements for an ideal robot companion through interviews with 11 autism specialists. Currently, we are evaluating custom fabric-based tactile sensors that enable the robot to detect and identify various touch communication gestures. Finally, our robot companion will react to the child's touches through an emotion response system that will be customizable by a therapist or caretaker.

Let's Play Together: Designing Robots to Engage Children with Special Needs and Their Peers in Robot-Assisted Play

Play is a vital part of childhood; however, children with physical special needs face many obstacles in traditional play scenarios. We have developed MyJay, a robotic system that enables such children to play with their peers via a robot proxy in a basketball-like game. This semi-autonomous robot will feature adaptable controllers that allow children of any physical ability to play the proposed game, with a focus on fostering better child-child interaction.

Handling Trust Between Drivers and Automated Vehicles for Improved Collaboration

Advances in perception and artificial intelligence technology are expected to lead to seamless interaction between humans and robots. Trust in robots has been evolving from the theory on trust in automation, with a fundamental difference: unlike traditional automation, robots could adjust their behaviors depending on how their human counterparts appear to be trusting them or how humans appear to be trustworthy. In this extended abstract I present my research on methods for processing trust in the particular context of interactions between a driver and an automated vehicle, which has the goal of achieving higher safety and performance standards for the team formed by those human and robotic agents.

Improving the Robustness of Social Robot Navigation Systems

Our aim is to advance the reliability of autonomous social navigation. We have researched how simulation may advance this goal via crowdsourcing. We recently proposed the Simulation Environment for Autonomous Navigation (SEAN) and deployed it at scale on the web to quickly collect data via the SEAN Experimental Platform (SEAN-EP). Using this platform, we studied participants' perceptions of a robot when seen in a video versus interacting with it in simulation. Our current research builds on this prior work to make autonomous social navigation more reliable by classifying and automatically detecting navigation errors.

Robots that Help Humans Build Better Mental Models of Robots

Interactive Task Learning (ITL) is an approach to teaching robots new tasks through language and demonstration. It relies on the fact that people have experience teaching each other. However, this can be challenging if the human instructor does not have an accurate mental model of a robot. This mental model consists of the robot's knowledge, capabilities, shortcomings, goals, and intentions. The research question that we investigate is "How can the robot help the human build a better mental model of the robot?" We study human-robot interaction failures to understand the role of mental models in resolving them. We also discuss a human-centred interaction model design that is informed by human subject studies and plan-based theories of dialogue, specifically Collaborative Discourse Theory.

Fairness Considerations for Enhanced Team Collaboration

Fairness plays an important role in decision-making within teams, and its perception has been shown to drive performance and individual behavior among team members. Robots deployed within human teams are consistently faced with decisions on how to optimally allocate resources (e.g., tools, attention, gaze), but current solutions often ignore key aspects of fairness. In this work, we leverage laboratory experiments to identify key performance and behavioral metrics in order to develop algorithmic solutions that include fairness considerations. We look to the well-established multi-armed bandit algorithms to frame our problem and establish constraints on how resources are distributed amongst team members.
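
One simple way to frame this, sketched below, is a UCB-style bandit allocating a robot's resource (e.g., gaze) across team members, with a hard fairness floor guaranteeing each member a minimum share of allocations. The floor rule and toy benefit signal are illustrative possibilities, not the authors' method.

import math, random

n_members = 3
counts = [0] * n_members          # times each member received the resource
rewards = [0.0] * n_members       # cumulative observed team benefit
MIN_SHARE = 0.15                  # fairness floor per member (assumed)

def choose(t):
    # Fairness first: serve anyone falling below the guaranteed share.
    for i in range(n_members):
        if counts[i] < MIN_SHARE * t:
            return i
    # Otherwise, UCB1 on observed benefit.
    return max(range(n_members),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

for t in range(1, 501):
    i = choose(t)
    counts[i] += 1
    rewards[i] += random.random() * (0.4 + 0.2 * i)  # toy benefit signal
print(counts)  # every member keeps at least roughly a 15% share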

Autonomous Robot Behaviors for Shaping Group Dynamics

Posture Estimation and Optimization in Ergonomically Intelligent Teleoperation Systems

Ergonomics and human comfort are essential concerns in physical human-robot interaction (p-HRI) applications such as teleoperation. We introduce a novel framework for posture estimation and optimization in ergonomically intelligent teleoperation systems that estimates the human operator's posture solely from the leader robot's trajectory and provides online postural correction, as well as offline initial posture correction, according to the type of teleoperation task. Although our framework is developed for teleoperation, it can be extended to other p-HRI applications with minimal modifications.
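
To illustrate the flavor of posture estimation as optimization, here is a heavily simplified planar two-link sketch: joint angles are chosen to reach the hand position implied by the leader robot while penalizing departure from a neutral posture. The link lengths, cost weights, and the 2-DoF model are assumptions, not the paper's framework.

import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.30, 0.35               # upper arm / forearm lengths (m), assumed
NEUTRAL = np.array([0.3, 1.2])    # "comfortable" joint angles (rad), assumed

def hand_pos(q):
    # Forward kinematics of a planar two-link arm.
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def cost(q, target):
    reach_err = np.sum((hand_pos(q) - target) ** 2)
    ergo_penalty = 0.05 * np.sum((q - NEUTRAL) ** 2)  # stay near neutral posture
    return reach_err + ergo_penalty

target = np.array([0.45, 0.25])   # hand position implied by the leader trajectory
res = minimize(cost, x0=NEUTRAL, args=(target,))
print("estimated joint angles (rad):", res.x)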

What Makes a Good Demonstration for Robot Learning Generalization?

Robot learning from demonstration (LfD) is a common approach that allows robots to perform tasks after observing a teacher's demonstrations, so users without a robotics background can use LfD to teach robots. However, such users may provide low-quality demonstrations, and demonstration quality plays a crucial role in robot learning and generalization. Hence, it is important to ensure quality demonstrations before using them for robot learning. This abstract proposes an approach for quantifying demonstration quality, which in turn enhances robot learning and generalization.

Continual Learning of Visual Concepts for Robots through Limited Supervision

For many real-world robotics applications, robots need to continually adapt and learn new concepts. Further, robots need to learn from limited data because of the scarcity of labeled data in real-world environments. To this end, my research focuses on developing robots that continually learn in dynamic, unseen environments and scenarios, learn from limited human supervision, remember previously learned knowledge, and use that knowledge to learn new concepts. I develop machine learning models that not only produce state-of-the-art results on benchmark datasets but also allow robots to learn new objects and scenes in unconstrained environments, enabling a variety of novel robotics applications.

SESSION: Student-Design Competition

Audrey- Flower-like Social Assistive Robot: Taking Care of Older Adults in Times of Social Isolation during the Covid-19 Pandemic

Audrey is a flower-like socially assistive robot (SAR) that aims to support older adults living alone in times of social isolation. Audrey aims to: first, reduce boredom and overcome loneliness through intellectual stimulation and games tailored to the user's hobbies, preferences, and interests; second, facilitate staying in touch with loved ones, using video chats and active reminders for family members and friends to interact; third, promote healthy living by encouraging a healthy lifestyle and supporting early detection of strokes or falls for emergency aid.

Can Service Robots Help Best Practices for COVID?

We present the design of a mobile robot that delivers hand sanitizer on the Oregon State University campus. The goal is to encourage people to follow the best health practices under COVID-19. The current hardware involves a hands-free hand sanitizer dispenser mounted atop a TurtleBot base. A wizard teleoperates the robot to approach bystanders, communicating via its approach that it would like them to participate. Future work will evaluate what communication modes best serve this goal of distributing hand sanitizer in particular contexts, and consider distributing services to where there is the most human demand.

A 'Pop'ular 'Corn'panion to Making Your Movie Experience 'Butter' Together

What do we want the most of in this COVID-19 social isolation period? - Companionship, entertainment, and popcorn. We present 'Crunchy': a Human-Popcorn Interaction (HPI) based social robotic movie-companion. Crunchy is designed with the intent of minimizing feelings of isolation and loneliness. Crunchy's personality and reactions are imagined to provide popping interactions, making the worst-rated comedies enjoyable or the scariest movies even more intense. We discuss interaction scenarios, initial concepts for Crunchy's design, and future plans to either pursue developing Crunchy or using it to inform the development of prospective desktop social robots.

Musically Assistive Robot for the Elderly in Isolation

The COVID pandemic has impacted our lives in ways we could not have imagined, introducing us to a new normalcy. Older adults, especially those in care facilities, are often discussed as one of the populations most vulnerable to the challenges the pandemic poses, including social isolation and loneliness. We hence present our robot, M.A.R (Music Assistive Robot) i/o, intended to bring some joy and entertainment through a universal language: music. Our prototype attempts to facilitate music participation among older adults by playing the music of their choice and inviting them to enjoy listening through its expressive movements and adaptive drumming skills.

Sous-Chef: The Recipe Assistant

During lockdown, it's tempting to order takeout or heat up a frozen pizza. We naturally turn to food for comfort, but as lockdown continues it is easy to lose track of healthy habits. Our robot will encourage users to try new recipes with a mix of healthy ingredients while removing the hassle of searching for a recipe. The robot will gather user-input from motion sensors to learn the user's preferences for core components of the meal (protein, carbs, etc.). After selecting a recipe, the user will use gesture commands to scroll through the steps, avoiding sticky fingers on the robot.

COVID-19 Symptom Checker: The Plushie Solution

Due to a spike in COVID-19 cases, many individuals have needed to quarantine and monitor themselves for symptoms. Through our design process, we developed the idea for a COVID-19 symptom checker. The interactive robot is designed as a way for people in lockdown to report their COVID-19 symptoms easily, and is simple to use for everybody, from children to the elderly at home. It will incorporate an Adafruit Bluefruit LE module to return collected data, an LCD display to ask the user questions about their symptoms, a thermistor to take a temperature, and a joystick to answer questions.

Robot Imitating Human for Assistance and Companionship

For the purpose of assistance and companionship, we have designed a robot that is capable of imitating human movements. As seen with pets such as dogs and cats, attachment grows between a human and a pet because of the pet's ability to reciprocate and react. Consequently, we designed a robot that reciprocates a person's movements. The design uses a Raspberry Pi 3 board to interface the microcontroller with a webcam and DC motors, along with recycled household products such as a food tray and an empty container, to build an autonomous robot that imitates humans using image processing.

Human Pain Relief by Simultaneously Grasping and Being Grasped by an Inflatable Haptic Device

Previous studies in HRI show that social touch can be established in human-robot touch conditions. To elicit the numerous benefits of touch interaction between human and robot, we propose a doll-like inflatable haptic device that can be worn around the hand. Touch interaction with the device is delivered by a pneumatic airbag system, which provides a grasping feedback sensation when the device is grasped by a human. We expect this mutual touch to have a beneficial effect on alleviating human pain.

Ivy Curtain: An Alternative-plant Robot

This project began with the question, "What if the feeling of being in or close to nature could be sensed indoors, but without the presence of other living beings?" Ivy Curtain is a robotic design proposal that promotes better mental health for indoor residents by simulating the natural behaviors of vegetation. The plant-robot's organic movement generates the rustling sounds of leaves and changes its colors, just as real plants do. Thus, Ivy Curtain functions as an "alternative-plant" robot that moves in unexpected directions, changing its shape according to both a user's physical interaction and the surrounding temperature.

MaskUp!

The COVID-19 pandemic has had a drastic impact on our day-to-day lives, prompting us to make changes to our daily routines for the greater good. One such change is wearing a face mask in public. Yet, it is easy to forget to take a mask with us as we venture out of our houses because of how acclimated we still are to pre-pandemic life. MaskUp! addresses this common problem and supports us through lockdown by reminding us to do the right thing and wear a mask.

The Development of a Social Robot Accessible to the Deaf

Bada is a social robot that can interact with deaf individuals. It resembles a robotic vacuum cleaner, and its signaling of abnormal circumstances at home was modeled after the behavior of hearing dogs. Bada effectively reduces the loss of information during delivery by relaying messages in various ways, including a web service, text messages, visual representation, and a haptic interface. We have developed Bada's interaction process through several tests. Its behavior, interface, and interaction model can contribute to robotic accessibility technology.

Cara: A Smart Plush Toy for Support During COVID-19

COVID-19 has disrupted the way we spend time with our friends and family. Cara is a robot designed to connect isolated persons to their loved ones during the pandemic through three different modes: daily podcasts of audio messages, real-time music sharing, and LED lights that show someone they are being remembered. Cara is a rechargeable, soft stuffed animal that lights up and talks. Individuals can interact with it through a web application.

Alice in Bookland: Robotic Furnishings Delivering Public Library Services Outside during the Pandemic

This project suggests how human-robot interaction might be of service to public institutions during the pandemic crisis. Alice in Bookland aims to ensure a safe environment for children to continue their reading experiences during COVID-19. Due to lockdowns throughout the country, many public organizations such as libraries are having a hard time providing their resources, with limited user entrance and opening hours. Alice in Bookland addresses this situation by providing a safe space outside the library, encouraging readers' healthy reading habits without undue stress during the pandemic, and raising community awareness of local library resources.

Moody Study Buddy: Robotic Lamp that Keeps Students' Company During the Pandemic

COVID-19 has affected many people's mental health, especially that of students whose usual campus life has been replaced by virtual learning platforms. Moody Study Buddy, a robotic lamp that keeps students company during the pandemic, aims to support their learning experience by reminding them that they are not alone. As they study with Moody Study Buddy, it smiles over at them brightly, and an astronaut figure climbs up and plants a flag on top, helping students feel content and de-stressed. Its target users extend to students, work-from-home workers, and lonely older adults during the pandemic.

Mews - A Music Playing Timer for Guiding Handwashing

According to a USDA study [Cates 2018], only 4% of people wash their hands correctly, with most consumers not washing long enough. Mews is a helpful cat that times hand washing for the user to ensure hands are sufficiently clean. Once a person signals to Mews, the cat plays a 20-second song of their choice, ensuring their hands are washed long enough. As an option for children, we offer a guided sound file of the steps to handwashing so they may practice proper handwashing as a habit.

TUTUS: Track Utilizing Transport for User Safety

In this paper, we discuss our solution to social issues such as theft, sexual assault, and work-related stress in the context of delivery in densely populated urban areas, especially given rising concerns around COVID-19 (Muschert & Budd, 2020). We address our robot TUTUS's design, social relevance, and practicality, and how it can be deployed in real situations. We aim to achieve this by making our robot capable of efficiently navigating through apartment complexes and carrying parcels of varying sizes and shapes, while keeping cost low and practicality high.

An Affordable, Accessible Human Motion Controlled Interactive Robot and Simulation through ROS and Azure Kinect

We integrate a low-cost, open-source system to create a human-machine interface that controls a robot by tracking human motion. Controlling a robot with human motion is intuitive and can readily be used to perform tasks that require precise control. We envision that this system will provide an interactive learning experience to students during the stay-at-home period. Existing solutions require an expensive, precisely calibrated setup that is not portable. We use open-source components such as the Robot Operating System (ROS) and OpenMANIPULATOR-X, along with the low-cost, commercially available Azure Kinect DK, to create an accessible and portable motion-controlled human-machine interface.
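
A minimal sketch of the mapping such a system might perform, written as a ROS node; the topic names /kinect/wrist and /manipulator/target_pose are placeholders, not the real interfaces of the Azure Kinect driver or the OpenMANIPULATOR-X packages.

import rospy
from geometry_msgs.msg import PointStamped, PoseStamped

SCALE = 0.5  # shrink the human workspace onto the robot's reachable workspace

def on_wrist(msg: PointStamped):
    # Map the tracked wrist position to a scaled target pose for the arm.
    target = PoseStamped()
    target.header = msg.header
    target.pose.position.x = SCALE * msg.point.x
    target.pose.position.y = SCALE * msg.point.y
    target.pose.position.z = SCALE * msg.point.z
    target.pose.orientation.w = 1.0  # fixed orientation for simplicity
    pub.publish(target)

rospy.init_node("motion_mirror")
pub = rospy.Publisher("/manipulator/target_pose", PoseStamped, queue_size=1)
rospy.Subscriber("/kinect/wrist", PointStamped, on_wrist)
rospy.spin()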

Distance Learning Companion Robot: Introducing Nonverbal Communication in the Digital Classroom

Distance learning lacks many crucial communication tools that support learning in physical classrooms. The Distance Learning Companion Robot expresses the climate of digital classrooms using nonverbal modalities. The system allows students to express their confidence in curricular material, enables instructors to better understand student needs, and supports the virtual classroom community.

A Socially Distanced Robotic Menorah Set

Every Hanukkah, the Jewish Festival of Lights, our team gathers with our grandparents, Fred and Marlene, to celebrate by lighting candles together. This is an interactive experience, where participants pass the Shamash candle to each other to light the next of 8 candles. During the COVID-19 pandemic, social distancing prohibits the close contact required for this interaction; therefore, we designed a pair of robotic menorahs (candelabras with 9 candle-holders) connected via Bluetooth to enable candle lighting across a distance during Hanukkah 2020. Our design incorporates existing menorahs and candles with electronic components to enable a traditional, yet socially distant experience.

Around Us: A Friend for Visually Impaired Individuals

We all experienced quarantine in 2020, and this experience is especially challenging for visually impaired individuals. I aim to develop a robot to enhance their lives during their isolation from society. Around Us is a human-interactive robot that uses sensor input to inform users of their surroundings at home through touch and sound, so they can live in a safe, entertaining, and comfortable environment.

Library GO: Robotic Swarm Cart Delivering Hope and Warmth During Lockdown

Library GO is an interactive swarm of robotic carts that carry books around in the hope of spreading humane care and delivering warmth. Library GO carts arrive at users' front doors on request. The carts can express delight and excitement through designed motion patterns when they meet users. When users approach and greet the carts, the greeting actions trigger the carts to open their cabins and raise their bookshelves. Users can then scan the shelves and select their favorite books, just as before COVID-19.

Using Negative Affect to Reinforce Moral Norms in Casual Speech

The words and phrases we utilize in our daily lives reflect our social context, namely its hierarchies and inequalities (e.g., racial, gender). Furthermore, the usage of specific forms of expression can be harmful to vulnerable populations. Here, we propose the development of a robotic agent that will help users seeking to change their speaking habits (i.e., using words that could be harmful to vulnerable populations) fulfill their goal. We provide an overview of our project as well as its tentative design and potential features to be included, such as voice recognition to identify when the user uses a specific word, and the incorporation of a dictionary to inform users of the historical background of certain terms.

SESSION: Video Submissions

SanitizerBot: A Hand Sanitizer Service Robot

This video explores the use of prosocial robot behaviors to encourage passersby to use hand sanitizer. Teleoperated by a wizard, the robot can express itself via LED lights, gestures, and speech as communication modes for human-robot interaction. Future work will explore which communication modes and behaviors are most effective in encouraging people to use hand sanitizer.

Self-Explainable Robots in Remote Environments

As robots and autonomous systems become more adept at handling complex scenarios, their underlying mechanisms also become increasingly complex and opaque. This lack of transparency can give rise to unverifiable behaviours, limiting the use of robots in a number of applications, including high-stakes scenarios such as self-driving cars or first response. In this paper and accompanying video, we present a system that learns from demonstrations to inspect areas in a remote environment and to explain robot behaviour. Using semi-supervised learning, the robot is able to inspect an offshore platform autonomously, whilst explaining its decision process through both image-based and natural-language-based interfaces.

Assisted Human-Robot-Interaction for Industrial Assembly

Human-robot collaboration is increasingly applied to industrial assembly sequences due to the growing need for flexibility in manufacturing. Assistant systems are able to help support shared assembly sequences and facilitate collaboration. This contribution shows a workplace installation of a collaborative robot (Cobot) and a spatial augmented reality (SAR) assistant system applied to an assembly use case. We demonstrate a methodology for distributing the assembly sequence between the worker, the Cobot, and the SAR system.

Designing Interaction for Multi-agent Cooperative System in an Office Environment

Future intelligent systems will involve various artificial agents, including mobile robots, smart home infrastructure, and personal devices, which share data and collaborate with each other to serve users. Designing efficient interactions that let users express needs to such intelligent environments, supervise the collaboration of different entities, and evaluate the outcomes will be challenging. This paper presents the design and implementation of the human-machine interface of the Intelligent Cyber-Physical System (ICPS), a multi-entity coordination system of robots and other smart agents in a workplace (Honda Research Institute). ICPS gathers sensory data from entities and receives users' inputs, then optimizes plans to utilize the capabilities of different entities to serve people.

Towards Visual Dialogue for Human-Robot Interaction

The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, being able to perceive its environment and hold coherent, relevant conversations about the surrounding world is critical. In this paper, we describe current progress towards developing the necessary integrated visual and conversational capabilities for a robot to operate in such environments. Concretely, we introduce an architecture for conversing about objects and other entities present in the environment. The work described in this paper has applications that extend well beyond healthcare and can be used on any robot that needs to interact with its visual and spatial environment in order to perform its duties.

Sign Language and Emotion Understanding

Sign language is the primary form of communication of the deaf community and one of the only channels of communication between the hearing-impaired community and the rest of society. While there are many existing sign language recognition (SLR) systems that focus on the recognition of manual gestures, few consider the non-manual components of sign language, such as facial expressions. The role of these non-manual features is comparable to the pitch, intonation, and nuances observed in spoken language. Our prototype combines manual SLR with emotion recognition to form a single system that can recognise both facial expressions and sign language gestures.

Best and Worst External Viewpoints for Teleoperation Visual Assistance

An HRI study with 31 expert robot operators established that an external viewpoint from an assisting robot could increase teleoperation performance by 14% to 58% while reducing human error by 87% to 100%. This video illustrates those findings with a side-by-side comparison of the best and worst viewpoints for the passability and traversability affordances. The passability scenario uses a small unmanned aerial system as a visual assistant that can reach any viewpoint on the idealized hemisphere surrounding the task action. The traversability scenario uses a small ground robot that is restricted to a subset of reachable viewpoints.

SESSION: Demonstrations

What's a robot doing in the Citizen Service Centre?

This demo is the result of a two-day evaluation of a humanoid robot in a Danish citizens' service centre. Due to COVID-19, the goal was to reduce the number of personal contacts between staff and visitors in the centre by using verbal interaction with the robot. The robot was pre-programmed with a number of typical questions and answers related to the centre. A total of 263 citizens attended the centre during the two days. Visitors had to pass the robot to enter the centre, and it was estimated that 5 percent of them interacted with the robot. The most common interaction patterns were greetings and casual chatting, although questions about the facilities at the centre were also observed. However, most visitors ignored the robot and focused on their scheduled appointment.

A Web-Based User Interface for HRI Studies on Multi-Robot Furniture Arrangement

This video presents a remote user interface (UI) for controlling a multi-robot furniture system, intended to enable human-robot interaction studies to proceed safely during the COVID-19 pandemic. The three primary features of the system are detailed. First, a web-based architecture allows the operation of our chair robots (ChairBots) over the internet. Second, multiple ChairBots are simultaneously operable. Third, variable levels of autonomy allow an operator to send either high-level commands, with robots autonomously moving to goals, or low-level motion commands. This work presents advances in the technical capabilities of our ChairBot system, representing progress towards a viable multi-robot furniture system.

SESSION: Workshops

Building Bridges and Not Walls: Expanding the Human-machine Communication Connections within HRI

This virtual half-day workshop will explore human-machine communication (HMC) and communication studies/science theories as used in HRI studies. Submitted papers can include: 1) discussion of theories from the disciplines of media and communication that can guide HRI studies and vice versa, 2) analysis of related constructs (variables) that intersect both fields but under different nomenclatures (e.g., credibility vs. trust, presence vs. immediacy), or 3) exploration of quantitative and qualitative study designs from media and communication and their application to HRI. For more information and submission details, go to https://www.combotlabs.org/hmchri2021.html

Research through Design Approaches in Human-Robot Interaction

This workshop sets out to bring together researchers across the Human-Robot Interaction (HRI), Human-Computer Interaction, and Design fields who use, or are interested in using, Research through Design (RtD) in their work. RtD is a research approach that uses design practices, such as ideation and prototyping, to generate knowledge. RtD focuses on understanding what is the right thing to design, and has the potential to bring new perspectives that break through fixation within a field. In our workshop, we will attempt to classify current HRI practices of RtD, identify under-explored topics and methods, and discuss the challenges of conducting this type of work in the HRI field. The workshop will result in defining next steps for better integration of RtD approaches in the HRI community and guidelines to include more researchers in RtD practice.

Workshop YOUR Study Design! Participatory Critique and Refinement of Participants' Studies

The purpose of this workshop is to help researchers develop methodological skills, especially in areas that are relatively new to them. With HRI researchers coming from diverse backgrounds in computer science, engineering, informatics, philosophy, psychology, and other disciplines, we cannot be experts in everything. In this workshop, participants will be grouped with a mentor to enhance their study design and interdisciplinary work.

Participants will submit 4-page papers with a short introduction and a detailed method section for a project currently in the design process. In small groups led by a mentor in the area, they will discuss their methods and obtain feedback. The workshop will include time to edit and improve the studies. Workshop mentors include Drs. Cindy Bethel, Hung Hsuan Huang, Selma Sabanović, Brian Scassellati, Megan Strait, Komatsu Takanori, Leila Takayama, and Ewart de Visser, with expertise spanning real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics.

Robo Ludens: Game Design Techniques Applied in HRI Experiments

Games have been used extensively to study human behavior. Researchers in the field of human-robot interaction (HRI) are becoming more aware of the importance of designing compelling and playful games to study interrelationships among players. Despite the growing interest, the use of game design techniques in the creation of playful experiences for HRI experiments is still in its infancy, and more multidisciplinary activities should be promoted to foster the convergence between game research and HRI. This workshop aims to discuss the value of using iterative game design techniques to integrate playful experiences using social robots in HRI experiments. More concretely, we want to explore tools, approaches, and methods used in previous work for the appropriate design of interactive games in HRI. Furthermore, based on previous research, a taxonomy for game design using social robots will be presented, and attendees will have access to hands-on material created to facilitate the design of interactive games that considers important aspects of the robotic systems to maximize the fun experience. We hope this workshop will offer HRI researchers, game designers, roboticists, and technology enthusiasts enlightening thoughts and ideas for confronting the often complicated and time-demanding process of designing compelling games for HRI experiments.

2nd Edition of Solutions for Socially Intelligent HRI in Real-World Scenarios (SSIR-HRI)

Today it seems ever more evident that social robots will have an integral role to play in real-world scenarios and will need to participate in the full richness of human society. Central to the success of robots as socially intelligent agents is ensuring effective interactions between humans and robots. To achieve that goal, researchers and engineers from both industry and academia need to come together to share ideas, trials, failures, and successes. This workshop aims to build a bridge between industry and academia, creating a community to tackle the current and future challenges of socially intelligent human-robot interaction in real-world scenarios and to find solutions for them.

Acoustically Aware Robots: Detecting and Evaluating Sounds Robots Make and Hear

The sounds a robot or automated system makes, and the sounds it listens for in our shared acoustic environment, can greatly expand its contextual understanding and shape its behaviors to suit the interactions it is trying to perform.

People convey significant information with sound in interpersonal communication in social contexts. Para-linguistic information about where we are, how loud we are speaking, or whether we sound happy, sad, or upset is relevant for a robot that seeks to adapt its interactions to be socially appropriate.

Similarly, the qualities of the sound an object makes can change how people perceive that object and can alter whether or not it attracts attention, interrupts other interactions, reinforces or contradicts an emotional expression, and as such should be aligned with the designer's intention for the object. In this tutorial, we will introduce the participants to software and design methods to help robots recognize and generate sound for human-robot interaction (HRI). Using open-source tools and methods designers can apply to their own robots, we seek to increase the application of sound to robot design and stimulate HRI research in robot sound.

Interdisciplinary Research Methods for Child-Robot Relationship Formation

As the field of child-robot interaction (CRI) research matures, and in light of the recent replication crisis in psychology, it is timely to tackle several important methodological challenges. Notably, studies on child-robot relationship formation face issues regarding the conceptualization and operationalization of this complex, comprehensive construct. In addressing these challenges, increased interdisciplinary collaboration is of vital importance. As such, this workshop aims to facilitate ongoing discussion between interdisciplinary experts on the topic of child-robot relationship formation to identify common issues and corresponding solutions (e.g., consistent definitions and rigorous measurement techniques). The workshop will begin with a keynote talk from Dr. Iolanda Leite, followed by discussion surrounding identified challenges. These discussions will be accompanied by intensive break-out groups moderated by senior researchers in the field (i.e., Mark Neerincx and Vanessa Evers). We hope this workshop will set the baseline for standardised methodologies that can later be expanded to other CRI constructs.

Designing Functional Clothing for Human-robot Interaction

We believe that we can design robot clothes to help robots become better robots: to help them be useful in a wider array of contexts, or to better adapt or function in the contexts they are already in. We propose that robot clothing should avoid mere mimicry of human apparel and instead be motivated by what robots need. While we have seen robots dressed in clothes over the last few decades, we believe that robot clothes can be designed with thoughtful intention and should be studied as a field of its own. In this workshop, we explore this new area within human-robot interaction by bringing together HRI researchers, designers, fashion and costume designers, and artists. We will focus on potential functions of robot clothes, discuss potential trends, and design clothes for robots together in an interactive prototyping session. Through this workshop, we hope to build a community of people who will push forward the field of robot clothing design.

Sound in Human-Robot Interaction

Robot sound spans a wide continuum, from subtle motor hums, through music, bleeps and bloops, to human-inspired vocalizations, and can be an important means of communication for robotic agents. This first workshop on sound in HRI aims to bring together interdisciplinary perspectives on sound, including design, conversation analysis, (computational) linguistics, music, engineering and psychology. The goal of the workshop is to stimulate interdisciplinary exchange and to form a more coherent overview of perspectives on how sound can facilitate human-robot interaction. During the half-day workshop, we will explore (1) the diverse application opportunities of sound in human-robot interaction, (2) strategies for designing sonic human-robot interactions, and (3) methodologies for the evaluation of robot sound. Workshop outcomes will be documented on a dedicated website and are planned to be collected in a special issue.

The Road to a Successful HRI: AI, Trust and ethicS - TRAITS

The aim of this workshop is to give researchers from academia and industry the opportunity to discuss the inter- and multi-disciplinary nature of the relationships between people and robots towards effective and long-lasting collaborations. This workshop will provide a forum for the HRI and robotics communities to explore successful human-robot interaction (HRI) and to analyse the different aspects of HRI that impact its success. Particular focus is placed on the AI algorithms required to implement autonomous interactions, and on the factors that enhance, undermine, or recover humans' trust in robots. Finally, potential ethical and legal concerns, and how they can be addressed, will be considered. Website: https://sites.google.com/view/traits-hri

Designing and Developing Better Robots for Children: A Fundamental Human Rights Perspective

Robots for Learning - Learner-Centred Design

The Robots for Learning workshop series aims at advancing research topics related to the use of social robots in educational contexts. This year's half-day workshop follows on from previous events at Human-Robot Interaction conferences, focusing on efforts to discuss potential benchmarks in the design, methodology, and evaluation of new robotic systems that help learners. In this 6th edition of the workshop, we will investigate in particular methods from technologies for education and online learning. Over the past few months, online and remote learning has been put in place in several countries to cope with the health and safety measures due to the Covid-19 pandemic. In this workshop, we aim to discuss strategies for designing robotic systems able to provide embodied assistance to remote learners and to demonstrate long-term learning effects.

Conversational Interaction with Social Robots

Robo-Identity: Exploring Artificial Identity and Multi-Embodiment

Interactive robots are becoming more commonplace and complex, but their identity has not yet been a key point of investigation. Identity is an overarching concept that combines traits like personality or a backstory (among other aspects) that people readily attribute to a robot to individuate it as a unique entity. Given people's tendency to anthropomorphize social robots, "who is a robot?" should be a guiding question above and beyond "what is a robot?" Hence, we open up a discussion on artificial identity through this workshop in a multi-disciplinary manner; we welcome perspectives on challenges and opportunities from fields of ethics, design, and engineering. For instance, dynamic embodiment, e.g., an agent that dynamically moves across one's smartwatch, smart speaker, and laptop, is a technical and theoretical problem, with ethical ramifications. Another consideration is whether multiple bodies may warrant multiple identities instead of an "all-in-one" identity. Who "lives" in which devices or bodies? Should their identity travel across different forms, and how can that be achieved in an ethically mindful manner? We bring together philosophical, ethical, technical, and designerly perspectives on exploring artificial identity.

Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI)

The 4th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include development of robots that can interact with humans in mixed reality, use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, the investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. Special topics of interest this year include VAM-HRI research during the COVID-19 pandemic as well as the ethical implications of VAM-HRI research. VAM-HRI 2021 will follow on the success of VAM-HRI 2018-20 and advance the cause of this nascent research community.

Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)

While most of the research in Human-Robot Interaction (HRI) focuses on short-term interactions, long-term interactions require bolder developments and a substantial amount of resources, especially if the robots are deployed in the wild. Robots need to incrementally learn new concepts or abilities in a lifelong fashion to adapt their behaviors within new situations and personalize their interactions with users to maintain their interest and engagement. The "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" Workshop aims to take a leap from the traditional HRI approaches towards addressing the developments and challenges in these areas and create a medium for researchers to share their work in progress, present preliminary results, learn from the experience of invited researchers and discuss relevant topics. The workshop extends the topics covered in the "Personalization in Long-Term Human-Robot Interaction (PLOT-HRI)" Workshop at the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) and "Lifelong Learning for Long-term Human-Robot Interaction (LL4LHRI)" Workshop at the 29th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), and focuses on studies on lifelong learning and adaptivity to users, context, environment, and tasks in long-term interactions in a variety of fields (e.g., education, rehabilitation, elderly care, collaborative tasks, customer-oriented service and companion robots).

Exploring Applications for Autonomous Nonverbal Human-Robot Interaction

Non-verbal Human-Robot Interaction (nHRI) encompasses the study of the exchange of human-robot gaze, gesture, touch, body language, paralinguistic cues, and facial and affective expression. nHRI has advanced beyond theoretical and computational contributions. Progress has been made through a variety of user studies and laboratory experiments as well as practical efforts such as the integration of nonverbal inputs with other HRI modalities, including domain-specific implementations. This workshop seeks to promote collaboration between two threads of research: experimental nHRI, and application domains that can benefit from its use.

The workshop will link researchers working on new approaches to nHRI in the laboratory to applied roboticists who present challenges in specific domains, such as: service robots, field robotics, socially-assistive robotics, and human-robot collaborative work that could benefit from richer nHRI. This workshop will draw participation from diverse areas to evaluate best practices and integration efforts across different research domains. We will target a broad, cross-disciplinary audience, and provide a venue for recent efforts related to multimodal interaction, system integration, data collection, and user studies.

Novel and Emerging Test Methods and Metrics for Effective HRI

This is the third annual, full-day workshop that aims to explore the state of practice in the metrology necessary for repeatably and independently assessing the performance of robotic systems in real-world human-robot interaction (HRI) scenarios. This workshop continues the aims of shortening the lead time between the theory and applications of HRI, enabling reproducible studies, and accelerating the adoption of cutting-edge technologies as the industry state of practice. This third installment of the annual workshop, "Test Methods and Metrics for Effective HRI," seeks to identify novel and emerging test methods and metrics for the holistic assessment and assurance of HRI performance. The focus is on identifying innovative metrics and test methods for the evaluation of HRI performance, and on advancing the growth of the HRI community based on the principles of collaboration, data sharing, and repeatability. The goal of this workshop is to aid the advancement of HRI technologies through the development of experimental designs, test methods, and metrics for assessing interaction and interface designs.