HRI '22: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction

Full Citation in the ACM Digital Library

SESSION: Keynote Talks

Putting Human-Robot Interaction Research into Design Practice

As the Human-Robot Interaction (HRI) research field is relatively young and interdisciplinary, we spend a good deal of time and effort on communicating with each other, sharing ideas and empirical findings across disciplinary boundaries. While this practice is important and needs to continue, there is an increasingly pressing need for us to also communicate outside of this research community. I have met way too many designers, engineers, product managers, and roboticists who tell me that they were either unaware that there was an HRI research community or that they have found our research community and have been disappointed by our work. That is unfortunate, and it is fixable. We need to tackle this research-practice gap. In this talk, I will make the case for why and how we might get started. There is so much to be gained by taking a Translational Science approach to HRI research. There are opportunities to do work that is grounded in real world problems, real people, and real robots. Furthermore, there are opportunities to make a difference in the design and deployment of robotic products and services right now. As just one example, in spending time studying robot deployments in service industries, at sea, and in the sky, we kept meeting professional Robot Wranglers. In many of our research projects, graduate students take on the Robot Wrangler role, which we claim will eventually disappear when the robots become "fully" autonomous. Looking across many industries, I believe that Robot Wranglers are here to stay. We could and should study them as real stakeholders and people. As HRI researchers, we have an opportunity to use our work to more directly inform the design of technologies. We can also find inspiration from the study of empirical technologies (as Polanyi would say). I think it is time to roll up our sleeves and get better at communicating and working with the people who are building the future of robotics now. Publishing and presenting peer-reviewed papers is not where our work ends. We have much more work to do in translating our work in ways that will help others outside of our research community actually make use of our insights, engaging in collaborations with them, and coming back here to share and learn from stories about those experiences.

What's Social about Social Robots?: A Psychological Perspective

What makes robots social, and which psychological factors contribute to user perceptions of robots as social agents? We have tried to address these issues by means of experimental psychological research conducted in the Applied Social Psychology Lab at CITEC, Bielefeld University. In my presentation, I will combine a personal account of my journey into the field of HRI with a psychological perspective on developments in social robotics and HRI. To do so, I will shed light on the psychological underpinnings of the perception of robots as social agents, focusing mainly on the impact of design choices (e.g., indicators of robot group membership) on robot-related perceptions. Moreover, I will emphasize the relevance of user attitudes - how to measure and how to change them - for promoting acceptance of novel technologies. I will conclude by pointing out key lessons learned along the way, considering the social and ethical implications of doing HRI research.

The Psychology of "Kawaii" and Its Implications for Human-Robot Interaction

"Kawaii" is a popular word in Japan today. Although it is often translated as "cute" in English, its meaning and scope are broader than those of "cute," because this word is used not only as an adjective that describes the perceivable features of an object, but also as an adjective that expresses a person's feelings toward the object. In this talk, I will explain how "kawaii" can be conceptualized in the cognitive and behavioral sciences and how it is relevant to the field of human-robot interaction. First, I will introduce Lorenz's (1943) seminal concept of "Kindchenschema" (baby schema), which proposes that specific physical features, such as a round head shape and large eyes set lower on the face, serve as key stimuli that instinctively trigger nurturance behavior and the affective feelings associated with it in humans. I will then describe some recent empirical findings suggesting that the feeling of "kawaii" goes beyond a response to infantile stimuli and is better seen as a more general, positive emotion related to sociality and approach motivation. Finally, I will discuss the importance of this emotion in designing a desirable environment in which both humans and robots are included.

SESSION: Robots for Children, ASD, Elderly People (1)

A Social Robot for Improving Interruptions Tolerance and Employability in Adults with ASD

A growing population of adults with Autism Spectrum Disorders (ASD) chronically struggles to find and maintain employment. Previous work reveals that one barrier to employment for adults with ASD is dealing with workplace interruptions. In this paper, we present our design and evaluations of an in-home autonomous robot system that aims to improve users' tolerance to interruptions. The Interruptions Skills Training and Assessment Robot (ISTAR) allows adults with ASD to practice handling interruptions to improve their employability. ISTAR is evaluated by surveys of employers and adults with ASD, and a week-long study in the homes of adults with ASD. Results show that users enjoy training with ISTAR, improve their ability to handle various work-relevant interruptions, and view the system as a valuable tool for improving their employment prospects.

Robot Co-design Can Help Us Engage Child Stakeholders in Ethical Reflection

Children are stakeholders of robotic technologies who deserve to have their voices heard in the design process just as much as adult stakeholders. This is especially true for robotic technologies explicitly designed for child-robot interaction, in areas like education, healthcare, and therapy. Researchers face the challenge of cultivating children's critical awareness of the design of robots and the accompanying ethical concerns, as the types of exercises typically used to engage with adult stakeholders can be ineffective with children. This requires developmentally appropriate methods for understanding children's perspectives that also address the imbalanced power dynamics between children and adults, such that children feel comfortable sharing their ideas. In this work, we demonstrate that participatory design research techniques already accepted in the Human-Robot Interaction (HRI) community can fulfill this purpose. Specifically, through the design and analysis of two co-design workshops with children of different ages at a school in Denver, Colorado, we demonstrate that co-design workshops can be used to effectively understand how children make sense of robotic technologies and to facilitate children's critical reflection on the ethical dilemmas surrounding their own relationships with robots.

"Let's read a book together": A Long-term Study on the Usage of Pre-school Children with Their Home Companion Robot

In several countries, social robots are increasingly accessible within homes, particularly in those with pre-school-aged children. However, research on social robots has mostly been conducted in laboratory or classroom settings, and their long-term use has received little attention. Additionally, while there is a growing body of literature on child-robot interaction (CRI) in a variety of domains such as education and health, less is known about the interactions between children and social robots in home settings during daily activities. Conducted during the Covid-19 pandemic, this article describes a longitudinal mixed-method study that examines children's interactions with their home reading companion robot - Luka. Focusing on parental perspectives, we examined how children interact with robots over time and revealed that a social robot with reading as its primary function has the potential to both attract parental buyers and engage children in long-term use of the robot's diverse features. We offer recommendations for social robot designers and product developers targeting younger users.

Mixed-Method Long-Term Robot Usage: Older Adults' Lived Experience of Social Robots

In the past two decades, human-robot interaction (HRI) researchers have increasingly deployed autonomous and reliable robots long-term in various social contexts, including the home. Our work provides a mixed-method approach for analyzing older adults' long-term robot usage patterns, combining quantitative data from robot usage logs with qualitative descriptions of participants' own experience. Overall, this provides a fuller picture of how older adults use and experience social robots in their homes. Our work involves a robot hosting period of at least a month (up to 12 months) in older adults' homes, with an experience debrief session held a month into the hosting period. We propose reflections on the novelty effect with respect to older adults' usage data and highlight feelings of guilt, the robot's proactivity and movement, meeting (or not meeting) user expectations, and the robot's persona as key aspects of the hosting experience that promoted usage or non-usage. Finally, we provide design guidelines for structuring future mixed-method long-term robot usage studies while being mindful of ethical considerations in this space.

Individual Differences of Children with Autism in Robot-assisted Autism Therapy

Research has recognized that the individual differences of children with Autism Spectrum Disorder (ASD) require interventions tailored to their heterogeneous needs. This relatively large-scale study investigates robot-assisted autism therapy (RAAT) with 34 children with diverse forms of ASD and Attention Deficit Hyperactivity Disorder (ADHD). We conducted a multi-session study with multi-purpose activities targeting the socio-emotional abilities of children in a rehabilitation setting. We found a number of quantitative results suggesting various autism-related and demographic differences across diverse forms of ASD, co-occurrence of ADHD, verbal skills, and age groups. The main findings are: 1) the severity of ASD forms may not predict intervention outcomes, but the co-occurrence of ADHD with a low-functioning autism (LFA) diagnosis may negatively impact social smiling; 2) verbal children were generally more engaged and less aggressive with the robot than non-verbal children, whose curiosity rose over sessions; and 3) younger children (3.4 y.o.) showed more affection, while older children (7-12 y.o.) were better engaged, speaking more words and showing longer engagement and eye contact with the robot.

Cognitively Assistive Robots at Home: HRI Design Patterns for Translational Science

Much research in healthcare robotics explores extending rehabilitative interventions to the home. However, for adults, little guidance exists on how to translate human-delivered, clinic-based interventions into robot-delivered, home-based ones to support longitudinal interaction. This is particularly problematic for neurorehabilitation, where adults with cognitive impairments require unique styles of interaction to avoid frustration or overstimulation. In this paper, we address this gap by exploring the design of robot-delivered neurorehabilitation interventions for people with mild cognitive impairment (PwMCI). Through a multi-year collaboration with clinical neuropsychologists and PwMCI, we developed robot prototypes which deliver cognitive training at home. We used these prototypes as design probes to understand how participants envision long-term deployment of the intervention, and how it can be contextualized to the lives of PwMCI. We report our findings and specify design patterns and considerations for translating neurorehabilitation interventions to robots. This work will serve as a basis for future endeavors to translate cognitive training and other clinical interventions onto a robot, support longitudinal engagement with home-deployed robots, and ultimately extend the accessibility of longitudinal health interventions for people with cognitive impairments.

Robot-Mediated Interaction Between Children and Older Adults: A Pilot Study for Greeting Tasks in Nursery Schools

Robot-mediated interaction may be one of the best approaches to overcome the implementation-related limitations of face-to-face interaction between children and older adults. However, there has been little research on the development or demonstration of teleoperated social robot systems that could be implemented in nurseries. We report a preliminary experiment in which older adults greet nursery pupils on their way to and from school using a teleoperated social robot. The results suggest that using teleoperated robots in dialogue-based tasks motivates the children to interact with the robot and increases the probability of learning from the tasks. Further, the older adult teleoperators enjoyed the task and wanted to continue it. In addition, a new teleoperator could inherit the close relationship built up by the previous teleoperator when teleoperators were replaced. This study provides a starting point for further research on technology-mediated interactions between children and older adults.

SESSION: Robots for Children, ASD, Elderly People (2)

"And then what happens?": Promoting Children's Verbal Creativity Using a Robot

While creativity has been previously studied in child-robot interaction, the effect of regulatory focus on creativity skills has not been investigated. This paper presents an exploratory study that, for the first time, uses Regulatory Focus Theory to assess children's creativity skills in an educational context with a social robot. We investigated whether two key emotional regulation techniques, promotion (approach) and prevention (avoidance), stimulate creativity during a storytelling activity between a child and a robot. We conducted a between-subjects field study with 69 children between the ages of 7 and 9, divided between two study conditions: (1) promotion, where a social robot primes children for action by eliciting positive emotional states, and (2) prevention, where a social robot primes children for avoidance by evoking states related to security and safety associated with blockage-oriented behaviors. To assess changes in creativity as a response to the priming interaction, children were asked to tell stories to the robot before (pre-test) and after (post-test) the priming interaction. We measured creativity levels by analyzing the verbal content of the stories. We coded verbal expressions related to creativity variables, including fluency, flexibility, elaboration, and originality. Our results show that children in the promotion condition generated significantly more ideas, and their ideas were on average more original, in the stories they created in the post-test than in the pre-test. We also modeled the process of creativity that emerges during storytelling in response to the robot's verbal behavior. This paper enriches the scientific understanding of creativity emergence in child-robot collaborative interactions.

Memory-Based Personalization for Fostering a Long-Term Child-Robot Relationship

After the novelty effect wears off, children need a new motivator to keep interacting with a social robot. Enabling children to build a relationship with the robot is key to facilitating a sustainable long-term interaction. We designed a memory-based personalization strategy that safeguards continuity between sessions and tailors the interaction to the child's needs and interests in order to foster the child-robot relationship. A longitudinal (five sessions in two months) user study (N = 46, 8-10 y.o.) showed that the strategy kept children interested in the robot longer, fostered more closeness, elicited more positive social cues, and added continuity between sessions.

Inclusive'R'Stories: An Inclusive Storytelling Activity with an Emotional Robot

Storytelling has the potential to be an inclusive and collaborative activity. However, it is unclear how interactive storytelling systems can support such activities, particularly when considering mixed-visual ability children. In this paper, we present an interactive multisensory storytelling system and explore the extent to which an emotional robot can be used to support inclusive experiences. We investigate the effect of the robot's emotional behavior on the joint storytelling process, resulting narratives, and collaboration dynamics. Results show that when children co-create stories with a robot that exhibits emotional behaviors, they include more emotive elements in their stories and explicitly accept more ideas from their peers. We contribute with a multisensory environment that enables children with visual impairments to engage in joint storytelling activities with their peers and analyze the effect of a robot's emotional behaviors on an inclusive storytelling experience.

Refilling Water Bottles in Elderly Care Homes With the Help of a Safe Service Robot

This study presents key technologies of a mobile service robot developed to manipulate objects safely around people. We demonstrate this ability in a scenario aimed at supporting staff in elderly care homes in the future. Handing routine logistical tasks over to a service robot allows staff to spend less time on them and to focus better on interaction with residents. In the selected application scenario, the robot helps staff by (1) retrieving empty bottles from the residents' rooms, (2) bringing them to the kitchen, and (3) taking the refilled bottles back to a table inside the residents' rooms. This task seems trivial for a person, but the robot needs to orchestrate numerous algorithms and components, such as bottle pose detection, manipulation, and navigation, to work smoothly. A technical evaluation indicates high performance of the individual components, but due to isolated failures, the overall scenario does not always succeed. Beyond the technical aspects, it is fundamental to determine the acceptance of the robot, which we assessed by analyzing questionnaires given to care workers. Finally, this paper presents lessons learned to help other researchers in similar use cases.

SESSION: Norms and Biases

The Shape of Our Bias: Perceived Age and Gender in the Humanoid Robots of the ABOT Database

The present study aimed to determine the age and gender distribution of the humanoid robots in the ABOT dataset and to provide a systematic, data-driven formalization of the process of age and gender categorization of humanoid robots. We involved 153 participants in an online study and asked them to rate the humanoid robots in the ABOT dataset in terms of perceived age, femininity, masculinity, and gender neutrality. Our analyses revealed that most of the robots in the ABOT dataset were perceived as young adults, and the vast majority of them were attributed a neutral or masculine gender. By merging our data with the data in the ABOT dataset, we discovered that humanlikeness is crucial to elicit social categorization. Moreover, we found that body manipulators (e.g., legs, torso) guide the attribution of masculinity, surface look features (e.g., eyelashes, apparel) the attribution of femininity, and that robots without facial features (e.g., head, eyes) are perceived as older. Finally, and importantly, we found that men tend to attribute lower age scores and higher femininity ratings to humanoid robots than women do. Our work provides evidence of an existing underlying bias in the design of humanoid robots that needs to be addressed: the under-representation of feminine robots and the lack of representation of androgynous ones. We make the results of this study publicly available to the HRI community by attaching the dataset we collected to the present paper and creating a dedicated website.

Norm-Breaking Responses to Sexist Abuse: A Cross-Cultural Human Robot Interaction Study

This article presents a cross-cultural replication of recent work on productively violating gender norms, specifically demonstrating that breaking norms can boost robot credibility while avoiding harmful stereotypes. In this work we demonstrate, via a 3 (country) x 3 (robot behaviour) between-subjects experiment, that these findings replicate across the US, Sweden, and Japan, finding evidence that breaking gender norms boosts robot credibility regardless of gender or cultural context, and regardless of pretest gender biases. Our findings further motivate a call for feminist robots that subvert the existing gender norms of robot design.

Exploring Machine-like Behaviors for Socially Acceptable Robot Navigation in Elevators

In this paper, we present our ongoing research on socially acceptable robot navigation for an indoor elevator-sharing scenario. Informed by naturalistic observations of human elevator use, we discuss the social nuances involved in a seemingly simple activity like taking an elevator, and the challenges and limitations of modeling robot behaviors on a fully human-like approach. We propose a machine-like design principle for robot behavior policies that effectively accomplish tasks without disrupting the routines of people sharing the elevator with the robots. We explored this approach in a bodystorming session and conducted a preliminary evaluation of the resulting considerations through an online user study. Participants differentiated robots from humans on issues of proxemics and priority, and machine-like behaviors were preferred over human-like behaviors. We present our findings and discuss the advantages and limitations identified for both approaches to designing socially acceptable navigation behaviors.

Why do We Follow Robots?: An Experimental Investigation of Conformity with Robot, Human, and Hybrid Majorities

Individuals tend to conform to a majority for reasons of peer pressure (normative conformity) and insecurity (informational conformity). Investigating the reasons for social phenomena such as conformity is important for better understanding processes in hybrid teams (i.e., teams consisting of humans and robots). Research has yielded conflicting results on conformity with robot and hybrid majorities, and the reasons for conformity remain unclear. We conducted a within-subject online experiment (n = 103) to compare the reasons for conformity under three conditions: human, robot, and hybrid majorities. Results indicate that subjects conformed most often with hybrid majorities and least often with robot majorities. Normative conformity influenced conformity with human majorities, but informational conformity did not. Informational conformity influenced conformity with robot majorities, but normative conformity did not. Both types of conformity affected conformity with hybrid majorities. Our results provide a possible explanation for the heterogeneous findings on conformity in HRI.

SESSION: Robot Learning and Programming

Revisiting Human-Robot Teaching and Learning Through the Lens of Human Concept Learning

When interacting with a robot, humans form conceptual models (of varying quality) which capture how the robot behaves. These conceptual models form just from watching or interacting with the robot, with or without conscious thought. Some methods select and present robot behaviors to improve human conceptual model formation; nonetheless, these methods and HRI more broadly have not yet consulted cognitive theories of human concept learning. These validated theories offer concrete design guidance to support humans in developing conceptual models more quickly, accurately, and flexibly. Specifically, Analogical Transfer Theory and the Variation Theory of Learning have been successfully deployed in other fields, and offer new insights for the HRI community about the selection and presentation of robot behaviors. Using these theories, we review and contextualize 35 prior works in human-robot teaching and learning, and we assess how these works incorporate or omit the design implications of these theories. From this review, we identify new opportunities for algorithms and interfaces to help humans more easily learn conceptual models of robot behaviors, which in turn can help humans become more effective robot teachers and collaborators.

MIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning

Learning from demonstration (LfD) techniques seek to enable users without computer programming experience to teach robots novel tasks. There are generally two types of LfD: human-centric and robot-centric. While human-centric learning is intuitive, it suffers from performance degradation due to covariate shift. Robot-centric approaches, such as Dataset Aggregation (DAgger), address covariate shift but can struggle to learn from suboptimal human teachers. To create a more human-aware version of robot-centric LfD, we present Mutual Information-driven Meta-learning from Demonstration (MIND MELD). MIND MELD meta-learns a mapping from suboptimal and heterogeneous human feedback to optimal labels, thereby improving the learning signal for robot-centric LfD. The key to our approach is learning an informative personalized embedding using mutual information maximization via variational inference. The embedding then informs a mapping from human-provided labels to optimal labels. We evaluate our framework in a human-subjects experiment, demonstrating that our approach improves corrective labels provided by human demonstrators. Our framework outperforms baselines in terms of ability to reach the goal (p < .001), average distance from the goal (p = .006), and various subjective ratings (p = .008).
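
To make the core mechanism concrete, here is a minimal, illustrative sketch of the personalized label-mapping idea (our own simplified reconstruction on synthetic data, not the authors' implementation): each demonstrator gets a learned embedding, a shared network maps their suboptimal feedback plus the embedding to a corrected label, and a simple variational term encourages the embedding to remain predictable from the observed data, in the spirit of mutual information maximization.

```python
# Illustrative sketch of a MIND-MELD-style personalized label mapping
# (hypothetical architecture and synthetic data, not the authors' code).
import torch
import torch.nn as nn

n_people, emb_dim = 10, 4

embeddings = nn.Embedding(n_people, emb_dim)          # personalized w_i
corrector = nn.Sequential(nn.Linear(1 + emb_dim, 32), nn.ReLU(),
                          nn.Linear(32, 1))           # (feedback, w_i) -> label
posterior = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                          nn.Linear(32, emb_dim))     # q(w | feedback, label)

opt = torch.optim.Adam([*embeddings.parameters(),
                        *corrector.parameters(),
                        *posterior.parameters()], lr=1e-3)

# Synthetic training data: person ids, their biased corrective feedback,
# and the optimal labels assumed available at training time.
person = torch.randint(0, n_people, (256,))
bias = (person.float() - n_people / 2) / n_people     # per-person bias
optimal = torch.randn(256, 1)
feedback = optimal + bias.unsqueeze(1)                # suboptimal human labels

for step in range(500):
    w = embeddings(person)
    pred = corrector(torch.cat([feedback, w], dim=1))
    sup_loss = ((pred - optimal) ** 2).mean()         # supervised mapping loss
    # Variational MI surrogate: embedding should be recoverable from data.
    w_hat = posterior(torch.cat([feedback, pred.detach()], dim=1))
    mi_loss = ((w_hat - w) ** 2).mean()
    loss = sup_loss + 0.1 * mi_loss
    opt.zero_grad(); loss.backward(); opt.step()
```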

Conditional Imitation Learning for Multi-Agent Games

While advances in multi-agent learning have enabled the training of increasingly complex agents, most existing techniques produce a final policy that is not designed to adapt to a new partner's strategy. However, we would like our AI agents to adjust their strategy based on the strategies of those around them. In this work, we study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time, and we must interact with and adapt to new partners at test time. This setting is challenging because we must infer a new partner's strategy and adapt our policy to that strategy, all without knowledge of the environment reward or dynamics. We formalize this problem of conditional multi-agent imitation learning, and propose a novel approach to address the difficulties of scalability and data scarcity. Our key insight is that variations across partners in multi-agent games are often highly structured, and can be represented via a low-rank subspace. Leveraging tools from tensor decomposition, our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace. We experiment with a mix of collaborative tasks, including bandits, particle, and Hanabi environments. Additionally, we test our conditional policies against real human partners in a user study on the Overcooked game. Our model adapts better to new partners than baselines do, and robustly handles diverse settings ranging from discrete and continuous actions to static and online evaluation with AI and human partners.
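
The low-rank-subspace insight can be illustrated with a small sketch (ours, on synthetic data; the paper itself uses tensor decomposition over richer strategy representations): factor the training partners' strategy parameters into a low-rank basis, then infer a new partner's coordinates in that basis from partial observations and reconstruct by interpolating in the subspace.

```python
# Illustrative low-rank partner-subspace sketch (assumed setup, not the
# paper's implementation): partner policies lie near a low-rank subspace.
import numpy as np

rng = np.random.default_rng(0)
n_partners, n_params, rank = 20, 50, 3

# Synthetic training partners near a rank-3 subspace plus noise.
basis_true = rng.normal(size=(rank, n_params))
coords = rng.normal(size=(n_partners, rank))
policies = coords @ basis_true + 0.01 * rng.normal(size=(n_partners, n_params))

# Learn the subspace with a truncated SVD over training partners.
U, S, Vt = np.linalg.svd(policies, full_matrices=False)
basis = Vt[:rank]                        # (rank, n_params) learned basis

# A new partner: only some entries of their policy vector are observed.
new_policy = rng.normal(size=rank) @ basis_true
observed = rng.choice(n_params, size=15, replace=False)

# Infer subspace coordinates by least squares on the observed entries,
# then reconstruct the full strategy ("interpolating in the subspace").
z, *_ = np.linalg.lstsq(basis[:, observed].T, new_policy[observed], rcond=None)
reconstruction = z @ basis
print("reconstruction error:", np.linalg.norm(reconstruction - new_policy))
```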

"We Make a Great Team!": Adults with Low Prior Domain Knowledge Learn more from a Peer Robot than a Tutor Robot

In peer tutoring, the learner is taught by a colleague rather than by a traditional tutor. This strategy has been shown to be effective in human tutoring, where students have higher learning gains when taught by a peer instead of a traditional tutor. Similar results have been shown in child-robot interaction studies, where a peer robot was more effective than a tutor robot at teaching children. In this work, we compare skill increase and perception of a peer robot versus a tutor robot when teaching adults. We designed a system in which a robot provides personalized help to adults in electronic circuit construction, and we compared the number of skills learned and participants' preferences between the peer and tutor robot conditions. Participants in both conditions improved their circuit skills after interacting with the robot. There were no significant differences in the number of skills learned between conditions. However, participants with low prior domain knowledge learned significantly more with a peer robot than with a tutor robot. Furthermore, the peer robot was perceived as friendlier, more social, smarter, and more respectful than the tutor robot, regardless of initial skill level.

CoFrame: A System for Training Novice Cobot Programmers

The introduction of collaborative robots (cobots) into the workplace has presented both opportunities and challenges for those seeking to utilize their functionality. Prior research has shown that despite the capabilities afforded by cobots, there is a disconnect between those capabilities and the applications in which they are currently deployed, partially due to a lack of effective cobot-focused instruction in the field. Experts who work successfully within this collaborative domain can offer insight into the considerations and processes they use to more effectively harness cobot capabilities. Through an analysis of expert insights in the collaborative interaction design space, we developed a set of Expert Frames and integrated them into a new training and programming system that can be used to teach novice operators to think, program, and troubleshoot in the ways experts do. We present our system and case studies that demonstrate how Expert Frames provide novice users with the ability to analyze and learn from complex cobot application scenarios.

Advancing the Design of Visual Debugging Tools for Roboticists

Programming robots is a challenging task exacerbated by software bugs, faulty hardware, and environmental factors. When coding issues arise, traditional debugging techniques such as output logs or print statements, which may help in typical computer applications, are not always useful for roboticists. As a result, roboticists often leverage visualizations that depict various aspects of robot, sensor, and environment states. In this paper, we explore various design approaches for such visualizations for robotics debugging support, including 3D visualizations presented on 2D displays, as in the popular RViz tool within the ROS ecosystem; visualizations in two-dimensional graphical user interfaces (2D GUIs); and emerging immersive three-dimensional (3D) augmented reality (AR). We present a qualitative evaluation of feedback gathered from 24 roboticists across two universities who used one of these debugging tools, and synthesize design guidelines for advancing robotics debugging interfaces.

CONFIDANT: A Privacy Controller for Social Robots

As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and will need to appropriately manage the information shared with them. However, little is known about how robots might appropriately discern the sensitivity of information, which has major implications for human-robot trust. As a first step to address part of this issue, we designed a privacy controller, CONFIDANT, for conversational social robots, capable of using contextual metadata (e.g., sentiment, relationships, topic) from conversations to model privacy boundaries. We then conducted two crowdsourced user studies. The first study (n = 174) focused on whether a variety of human-human interaction scenarios were perceived as either private/sensitive or non-private/non-sensitive. The findings from our first study were used to generate association rules. Our second study (n = 95) evaluated the effectiveness and accuracy of the privacy controller in human-robot interaction scenarios by comparing a robot that used our privacy controller against a baseline robot with no privacy controls. Our results demonstrate that the robot with the privacy controller outperforms the robot without it in privacy-awareness, trustworthiness, and social-awareness. We conclude that the integration of privacy controllers in authentic human-robot conversations can allow for more trustworthy robots. This initial privacy controller will serve as a foundation for more complex solutions.
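
A toy sketch of the general approach (with hypothetical fields and rules of our own devising, not CONFIDANT's actual rule set) shows how contextual metadata from a conversation can feed association-rule-style privacy checks:

```python
# Hypothetical rule-based privacy check in the spirit of CONFIDANT.
# The context fields and rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ConversationContext:
    topic: str          # e.g. "health", "finance", "weather"
    sentiment: float    # -1.0 (negative) .. 1.0 (positive)
    relationship: str   # e.g. "family", "friend", "stranger"

# Each rule: a condition over context under which content is private.
RULES = [
    lambda c: c.topic in {"health", "finance"},
    lambda c: c.sentiment < -0.5 and c.relationship != "family",
    lambda c: c.relationship == "stranger" and c.topic != "weather",
]

def is_private(context: ConversationContext) -> bool:
    """Return True if any privacy rule fires for this context."""
    return any(rule(context) for rule in RULES)

print(is_private(ConversationContext("health", 0.2, "friend")))    # True
print(is_private(ConversationContext("weather", 0.8, "stranger"))) # False
```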

SESSION: Perceptions of Robots and Humans (1)

Better than Us: The Role of Implicit Self-Theories in Determining Perceived Threat Responses in HRI

Robots that are capable of outperforming human beings on mental and physical tasks provoke perceptions of threat. In this article we propose that implicit self-theory (core beliefs about the malleability of self-attributes, such as intelligence) is a determinant of whether one person experiences threat perception to a greater degree than another. We test this possibility in a novel experiment in which participants watched a video of an apparently autonomous intelligent robot defeating human quiz players in a general knowledge game. Following the video, participants received either social comparison feedback, improvement-oriented feedback, or no feedback, and were then given the opportunity to play against the robot. We show that those who adopt a malleable self-theory (incremental theorists) are more likely to play against a robot after imagining losing to it, and exhibit more favorable responses and lower identity threat than entity theorists (those adopting a fixed self-theory). Moreover, entity theorists (vs. incremental theorists) perceive autonomous intelligent robots to be significantly more threatening (in terms of both realistic and identity threats). These findings offer novel theoretical and practical implications, in addition to enriching the HRI literature by demonstrating that implicit self-theory is, in fact, an influential variable underpinning perceived threat.

It Will Not Take Long! Longitudinal Effects of Robot Conflict Resolution Strategies on Compliance, Acceptance and Trust

Domestic service robots are becoming increasingly prevalent and autonomous, which will make task priority conflicts more likely. A robot must be able to negotiate effectively and appropriately to gain priority if necessary. In previous human-robot interaction (HRI) studies, imitating human negotiation behavior was effective, but long-term effects have not been studied. Filling this research gap, an interactive online study (N = 103) with two sessions and six trials was conducted. In a conflict scenario, participants repeatedly interacted with a domestic service robot that applied three different conflict resolution strategies: appeal, command, and diminution of request. The second manipulation was reinforcement (thanking) of compliance behavior (yes/no). This yielded a 3x2x6 mixed design. User acceptance, trust, user compliance with the robot, and self-reported compliance with a household member were assessed. The diminution of a request combined with positive reinforcement was the most effective strategy, and perceived trustworthiness increased significantly over time. For this strategy only, self-reported compliance rates with the human and the robot were similar. Applying this strategy therefore potentially makes a robot as effective a requester as a human. This paper contributes to the design of acceptable and effective robot conflict resolution strategies for long-term use.

Mind the Machines: Applying Implicit Measures of Mind Perception in Social Robotics

Beyond conscious beliefs and goals, automatic cognitive processes shape our social encounters, and interactions with complex machines like social robots are no exception. With this in mind, it is surprising that research in human-robot interaction (HRI) almost exclusively uses explicit measures, such as subjective ratings and questionnaires, to assess human attitudes towards robots - seemingly ignoring the importance of implicit measures. This is particularly true for research focusing on the question of whether humans are willing to attribute complex mental states (i.e., mind perception), such as agency (i.e., the capacity to plan and act) and experience (i.e., the capacity to sense and feel), to robotic agents. In the current study, we (i) created the mind perception implicit association test (MP-IAT) to examine subconscious attributions of mental capacities to agents of different degrees of human-likeness (here: human vs. humanoid robot), and (ii) compared the outcomes of the MP-IAT to explicit mind perception ratings of the same agents. Results indicate that (i) already at the subconscious level, robots are associated with lower levels of agency and experience compared to humans, and that (ii) implicit and explicit measures of mind perception are not significantly correlated. This suggests that mind perception (i) has an implicit component that can be measured using implicit tests like the IAT and (ii) might be difficult to modulate via design or experimental procedures due to its fast-acting, automatic nature.

Patients' Trust in Hospital Transport Robots: Evaluation of the Role of User Dispositions, Anxiety, and Robot Characteristics

For designing the interaction with robots in healthcare scenarios, it is important to understand how trust develops in situations characterized by vulnerability and uncertainty. The goal of this study was to investigate how technology-related user dispositions, anxiety, and robot characteristics influence trust. A second goal was to substantiate the association between hospital patients' trust and their intention to use a transport robot. In an online study, patients who were currently being treated in hospitals were introduced to the concept of a transport robot with both written and video-based material. Participants evaluated the robot several times. Technology-related user dispositions were found to be substantially associated with trust and the intention to use the robot. Furthermore, hospital patients' anxiety was negatively associated with the intention to use, and this relationship was mediated by trust. Moreover, no effects of the manipulated robot characteristics were found. In conclusion, for a successful implementation of robots in hospital settings, patients' individual prior learning history - e.g., in terms of existing robot attitudes - and anxiety levels should be considered during the introduction and implementation phase.

SESSION: Visual Communication

Robot, Pass Me the Tool: Handle Visibility Facilitates Task-oriented Handovers

A human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer. State-of-the-art robot behavior lacks this level of user understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation. This paper investigates how visual occlusion of the object being passed affects the subjective perception and quantitative performance of the human receiver. We performed an experiment in virtual reality where seventeen participants were tasked with repeatedly reaching to take a tool from the hand of a robot; each of the three tested objects (hammer, screwdriver, scissors) was presented in a wide variety of poses. We carefully analyzed the user's hand and head motions, the time to grasp the object, and the chosen grasp location, as well as participants' ratings of the grasp they just performed. Results show that initial visibility of the handle significantly increases the reported holdability and immediate usability of a tool. Furthermore, a robot that offers objects so that their handles are more occluded forces the receiver to spend more time in planning and executing the grasp and also lowers the probability that the tool will be grasped by the handle. Together these findings indicate that robots can more effectively support their human work partners by increasing the visibility of the intended grasp location of objects being passed.

Learning Gaze Behaviors for Balancing Participation in Group Human-Robot Interactions

Robots can affect group dynamics. In particular, prior work has shown that robots that use hand-crafted gaze heuristics can influence human participation in group interactions. However, hand-crafting robot behaviors can be difficult and might have unexpected results in groups. Thus, this work explores learning robot gaze behaviors that balance human participation in conversational interactions. More specifically, we examine two techniques for learning a gaze policy from data: imitation learning (IL) and batch reinforcement learning (RL). First, we formulate the problem of learning a gaze policy as a sequential decision-making task focused on human turn-taking. Second, we experimentally show that IL can be used to combine strategies from hand-crafted gaze behaviors, and we formulate a novel reward function to achieve a similar result using batch RL. Finally, we conduct an offline evaluation of IL and RL policies and compare them via a user study (N=50). The results from the study show that the learned behavior policies did not compromise the interaction. Interestingly, the proposed reward for the RL formulation enabled the robot to encourage participants to take more turns during group human-robot interactions than one of the gaze heuristic behaviors from prior work. Also, the imitation learning policy led to more active participation from human participants than another prior heuristic behavior.
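
One simple way to formulate such a participation-balancing reward (our illustration; the paper's actual reward formulation may differ) is to penalize imbalance in the distribution of speaking turns across group members:

```python
# Minimal sketch of a participation-balancing reward for gaze-policy RL
# (illustrative, not the authors' exact reward function).
import numpy as np

def balance_reward(turn_counts):
    """Higher (max 0) when speaking turns are evenly spread across people."""
    p = np.asarray(turn_counts, dtype=float)
    p = p / p.sum() if p.sum() > 0 else np.full_like(p, 1.0 / len(p))
    return -float(np.var(p))  # maximal (zero) for a perfectly balanced group

print(balance_reward([5, 5, 5]))   # ~0   (balanced group)
print(balance_reward([12, 2, 1]))  # < 0  (one participant dominates)
```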

Task-Consistent Signaling Motions for Improved Understanding in Human-Robot Interaction and Workspace Sharing

In this paper, the concept of signaling motions for a robot interacting with a human is presented. These motions use the redundant degrees of freedom of a robot performing a task as a new means of meaningful robot-to-human communication. They are generated through quasi-static torque control, consistent with the main robot task. A double within-subject (N=16) study is conducted to evaluate the effects of two signaling motions on participants' task performance and on their behavior towards the robot. Our results show a positive effect on both task execution and participant behavior. Additionally, both signaling motions seem to improve participants' situation awareness by feeding their mental model throughout the interaction.

Exploiting Augmented Reality for Extrinsic Robot Calibration and Eye-based Human-Robot Collaboration

For sensible human-robot interaction, it is crucial for the robot to have an awareness of its physical surroundings. In practical applications, however, environments are manifold and possible objects for interaction are innumerable. As a result, using robots in variable situations surrounded by unknown interaction entities is challenging, and the inclusion of pre-trained object-detection neural networks is not always feasible. In this work, we propose deploying augmented reality and eye tracking to make robots flexible in non-predefined scenarios. To this end, we present and evaluate a method for the extrinsic calibration of robot sensors, specifically a camera in our case, that is both fast and user-friendly, achieving accuracy competitive with classical approaches. By incorporating human gaze into the robot's segmentation process, we enable the 3D detection and localization of unknown objects without any training. Such an approach can facilitate interaction with objects for which training data is not available. At the same time, a visualization of the resulting 3D bounding boxes in the human's augmented reality provides exceedingly direct feedback, giving insight into the robot's state of knowledge. Our approach thus opens the door to additional interaction possibilities, such as the subsequent initialization of actions like grasping.
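
The geometric core of such an extrinsic calibration can be sketched as follows (our illustration, under the assumption that matched 3D point pairs are available from the AR device and the robot camera, e.g., via a tracked marker): recover the rigid transform between the two frames with the Kabsch algorithm.

```python
# Illustrative Kabsch-based rigid-transform recovery between two frames
# (assumed point correspondences; not the paper's full calibration pipeline).
import numpy as np

def rigid_transform(A, B):
    """Find R, t minimizing ||R @ A_i + t - B_i|| over matched point pairs."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

# Synthetic check: the same points seen in the camera frame and the AR frame.
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(A, B)
print(np.allclose(R, R_true))  # True: the transform is recovered
```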

A Taxonomy of Functional Augmented Reality for Human-Robot Interaction

Augmented reality (AR) technologies are today more frequently being introduced to Human-Robot Interaction (HRI) to mediate the interaction between human and robot. Indeed, better technical support and improved framework integration allow the design and study of novel scenarios augmenting interaction with AR. While some literature reviews have been published, so far no classifications have been devised for the role of AR in HRI. AR constitutes a vast field of research in HCI, and as it is picking up in HRI, it is timely to articulate the current knowledge and information about the functionalities of AR in HRI. Here we propose a multidimensional taxonomy for AR in HRI that distinguishes the type of perception augmentation, the functional role of AR, and the augmentation artifact type. We place sample publications within the taxonomy to demonstrate its utility. Lastly, we derive from the taxonomy some research gaps in current AR-for-HRI research and provide suggestions for exploration beyond the current state-of-the-art.

SESSION: Perceptions of Robots and Humans (2)

You're delaying my task?! Impact of Task Order and Motive on Perceptions of a Robot

Recent work has suggested that a robot that interrupts assigned tasks for the sake of curiosity is perceived as less competent, but that communicating acknowledgment of the curious behavior can mitigate some of those feelings. In real-world situations, there are many reasons why a robot's task could be interrupted in favor of another. For example, a robot handling requests for tasks from people in different locations could navigate more efficiently if it interleaves those tasks, but it ideally would not do so at the expense of the users' perceptions of the robot. In order to understand the impact of different task interleaving patterns on human perceptions of a robot's behavior, we performed a study in which a robot performed a delivery task and an investigative task, interleaving them in various ways. The participants were told either that the investigative task was motivated by a request from another person, motivated by curiosity, or they received no information about why the robot performed the action. While participants acknowledged that interleaving tasks should be allowed, they rated the robot as more competent when its tasks were not interleaved. They were most receptive to interleaving when they knew the investigative task was for another person and less receptive to long task detours away from the delivery route, especially when the inspection task was motivated by curiosity.

Who's Laughing NAO?: Examining Perceptions of Failure in a Humorous Robot Partner

Social robots are being deployed to interact with people in various scenarios, where they are expected to incorporate human-like conversational strategies to achieve fluency in interactions. For example, current robots are designed to perform advanced communication strategies (i.e., personal anecdotes, explanations, and apologies) to recover from task failure. However, these tactics are not always sufficient for failure recovery, as they can be lengthy and insufficient for encouraging future interactions. In human-human interactions, people often use humor as a low-risk and engaging method for managing failures. Thus, the successful execution of advanced, human-like humor could enable robots to recover from task failures more efficiently. In this paper, we present a human-robot interaction study exploring how a robot's use of various human-like humor types (i.e., affiliative, aggressive, self-enhancing, and self-defeating) is perceived by a human teammate (n=32) and by external observers of the interaction (n=256). Additionally, we explored the effects of performance, humor type, perspective, and previous experience with robots on participants' perceptions of warmth, competence, and the robot as a teammate. Our results indicate that dyadic participants rated the successful robot as more competent and a better teammate than the bystander participants did. Additionally, the results indicate that participants with less experience with robots found the successful robot to be more competent than participants with high levels of experience did. These findings will enable the human-robot interaction community to develop more engaging robots for fluent interactive experiences in the future.

Perceptions of Cognitive and Affective Empathetic Statements by Socially Assistive Robots

Communicating empathy is important for building relationships in numerous contexts. Consequently, the failure of robots to be perceived as empathetic by human users could be detrimental to developing effective human-robot interaction. Work on computational models of empathy has been growing rapidly, reflecting the importance of this ability for machines. Despite growing recent work, there remain unanswered questions about how users perceive different forms of empathetic expression by robots and how attitudes towards robots may mediate perceptions of robot empathy. Do people really believe that robots can feel or understand emotions? This work studied the difference in viewers' perceptions of cognitive and affective empathetic statements made by a robot in response to human disclosure. In a within-subjects study, participants (n=111) watched videos in which a human disclosed negative emotions around COVID-19, and a robot responded with either affective or cognitive empathetic responses. Using an adapted version of the Robot's Perceived Empathy (RoPE) scale, participants rated their perceptions of the robot's empathy in both cases. We found that participants perceived the robot that made affective empathetic statements as being more empathetic than the robot that made cognitive empathetic statements; we also found that participants with more negative attitudes toward robots were more likely to rate the cognitive condition as more empathetic than the affective condition. These results inform HRI in general and future work into developing robots that will be perceived as empathetic and could personalize empathetic responses to each user.

Having the Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction

Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.

A Carryover Effect in HRI: Beyond Direct Social Effects in Human-Robot Interaction

We evaluate whether an interaction with robots can influence a subsequent Human-Human Interaction without the robots' presence. Social psychology studies indicate that some social experiences have a carryover effect, leading to implicit influences on later interactions. We tested whether a social experience formed in a Human-Robot Interaction can have a carryover effect that impacts a subsequent Human-Human Interaction. We focused on ostracism, a phenomenon known to involve carryover effects that lead to prosocial behavior. Using the Robotic Ostracism Paradigm, we compared two HRI experiences: Exclusion and Inclusion, testing their impact on a Human-Human Interaction that did not involve robots. Robotic ostracism had a carryover effect that led to prosocial behavior in the Human-Human Interaction, whereby participants preferred intimate interpersonal space and displayed increased compliance. We conclude that HRI experiences may involve carryover effects that extend beyond the interaction with robots, impacting separate and different subsequent strictly human interactions.

SESSION: Explicit and Implicit Communication

Teacher, Teammate, Subordinate, Friend: Generating Norm Violation Responses Grounded in Role-based Relational Norms

Language-capable robots require moral competence, including representations and algorithms for moral reasoning and moral communication. We argue for an ethical pluralist approach to moral competence that leverages and combines disparate ethical frameworks, and specifically argue for an approach to moral competence that is grounded not only in Deontological norms (as is typical in the HRI literature) but also in Confucian relational roles. To this end, we introduce the first computational approach that centers relational roles in moral reasoning and communication, and demonstrate the ability of this approach to generate both context-oriented and role-oriented explanations for robots' rejections of norm-violating commands, which we justify through our pluralist lens. Moreover, we provide the first investigation of how computationally generated role-based explanations are perceived by humans, and empirically demonstrate (N=120) that the effectiveness (in terms of trust, understanding confidence, and perceived intelligence) of explanations grounded in different moral frameworks is dependent on nuanced mental modeling of human interlocutors.

You Had Me at Hello: The Impact of Robot Group Presentation Strategies on Mental Model Formation

Research has shown how the connections between robots' minds, bodies, and identities can be configured and performed in a variety of ways. In this work, we consider group identity observables: the set of design cues that robot groups use to perform different identity configurations. We explore how group identity observables lead observers to develop different mental models of robot groups. Specifically, we make four key contributions: (1) we define, conceptualize, and taxonomize group identity observables; (2) we use Grounded Theory-informed analysis of qualitative data to produce a taxonomy of users' mental models invoked by variation in those observables; (3) we empirically demonstrate (n=166) how variations in observables lead to different mental models; and (4) we further demonstrate how variations in those observables, and the mental models they evoke, influence key group dynamics constructs like entitativity.

Robot Touch to Send Sympathy: Divergent Perspectives of Senders and Recipients

There is increasingly heavy reliance on online social communication methods for reducing the sense of social isolation. However, this type of communication lacks one of the most critical elements of expressing emotion to comfort people: physical contact. In the current work, we examined people's perceptions of robots with an affective touch function for conveying sympathy from another person. We conducted two online studies to investigate how individuals evaluate imagined robot touch gestures sent to a friend from them (Study 1) and received by them from a friend (Study 2) in the United States and Japan. We found that sympathy senders preferred robot patting or rubbing the shoulder more than other types of touch, but would rather express sympathy through text or GIFs than by robot-mediated touch. In contrast, recipients perceived more sympathy and social support when they received robot touch gestures showing sympathy, compared to some other means to convey sympathy, short text in particular. The current findings highlight the different perspectives of senders and recipients on robot affective touch and potential cultural and individual differences in evaluations of robot affective touch.

A Novel Architectural Method for Producing Dynamic Gaze Behavior in Human-Robot Interactions

We present a novel integration between a computational framework for modeling attention-driven perception and cognition (ARCADIA) with a cognitive robotic architecture (DIARC), demonstrating how this integration can be used to drive the gaze behavior of a robotic platform. Although some previous approaches to controlling gaze behavior in robots during human-robot interactions have relied either on models of human visual attention or human cognition, ARCADIA provides a novel framework with an attentional mechanism that bridges both lower-level visual and higher-level cognitive processes. We demonstrate how this approach can produce more natural and human-like robot gaze behavior. In particular, we focus on how our approach can control gaze during an interactive object learning task. We present results from a pilot crowdsourced evaluation that investigates whether the gaze behavior produced during this task increases confidence that the robot has correctly learned each object.

More Than Words: A Framework for Describing Human-Robot Dialog Designs

This paper presents a novel framework for describing human-robot interaction dialog, developed from a survey and analysis of existing systems and research. We collected data from 75 published systems and conducted an iterative thematic analysis to distill the broad range of work into key underlying factors defining them. Our framework provides a language to describe human-robot dialog systems and a new way of classifying and understanding human-robot dialog, in terms of both high-level design aspects and relevant implementation details. Our quantitative survey summary further provides a detailed, contemporary snapshot of predominant approaches in the field, highlighting opportunities for further exploration.

SESSION: Session: Sensing and Control

Predicting Positions of People in Human-Robot Conversational Groups

Robots that operate in social settings must be able to recognize, understand, and reason about human conversational groups (i.e., F-formations). While several algorithms have been developed for identifying such groups, there has been little research on how robots might reason about inaccuracies following group classification (e.g., recognizing only 4 of 5 group members). We address this gap through a data-driven approach that builds knowledge of human group positioning. By analyzing multiple conversational group data sets, we have developed a system for identifying high probability regions that indicate areas where people are likely to stand in a group relative to a single anchor participant. We use knowledge of these regions to train two models, which we implement on a social robot. The first model can estimate the true size of a partially-observed conversational group (i.e., a group where only some of the participants were detected). Our second model can predict the locations where any undetected participants are likely to reside. Together, these models may improve F-formation detection algorithms by increasing robustness to noisy input data.
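
As an illustration of the kind of model this abstract describes, a relative-position probability map can be accumulated from recorded conversational groups. The sketch below is a minimal, hypothetical version in Python; the pose format and the anchor convention are assumptions, not the authors' implementation.

```python
import numpy as np

def build_position_model(groups, bins=32, extent=3.0):
    """Accumulate a 2D histogram of member positions relative to an
    anchor participant (anchor at origin, facing +x). Hypothetical
    data format: each group is a list of (x, y, theta) poses."""
    hist = np.zeros((bins, bins))
    for group in groups:
        ax, ay, atheta = group[0]               # first member as anchor
        c, s = np.cos(-atheta), np.sin(-atheta)
        for (x, y, _) in group[1:]:
            # transform the member's position into the anchor's local frame
            dx, dy = x - ax, y - ay
            lx, ly = c * dx - s * dy, s * dx + c * dy
            i = int((lx + extent) / (2 * extent) * bins)
            j = int((ly + extent) / (2 * extent) * bins)
            if 0 <= i < bins and 0 <= j < bins:
                hist[i, j] += 1
    return hist / max(hist.sum(), 1)            # normalized probability map
```

High-probability cells of such a map could then serve as candidate locations for undetected group members, in the spirit of the second model described above.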

REGROUP: A Robot-Centric Group Detection and Tracking System

To facilitate HRI's transition from dyadic to group interaction, new methods are needed for robots to sense and understand team behavior. We introduce the Robot-Centric Group Detection and Tracking System (REGROUP), a new method that enables robots to detect and track groups of people from an ego-centric perspective using a crowd-aware, tracking-by-detection approach. Our system employs a novel technique that leverages person re-identification deep learning features to address the group data association problem. REGROUP is robust to real-world vision challenges such as occlusion, camera egomotion, shadow, and varying illumination, and it runs in real time on real-world data. We show that REGROUP outperformed three group detection methods by up to 40% in terms of precision and up to 18% in terms of recall. We also show that REGROUP's group tracking method outperformed three state-of-the-art methods by up to 66% in terms of tracking accuracy and 20% in terms of tracking precision. We plan to publicly release our system to support HRI teaming research and development. We hope this work will enable the development of robots that can more effectively locate and perceive their teammates, particularly in uncertain, unstructured environments.
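
The data association step mentioned above can be pictured with a standard tracking-by-detection matching routine. The following sketch pairs tracks and detections by re-identification feature similarity; the embedding inputs are hypothetical, and REGROUP's actual pipeline is more involved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(track_feats, det_feats, max_cost=0.5):
    """Match existing tracks to new detections using cosine distance
    between (hypothetical) person re-identification embeddings."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                      # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```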

Not All Who Wander Are Lost: A Localization-Free System for In-the-Wild Mobile Robot Deployments

It is difficult to run long-term in-the-wild studies with mobile robots. This is partly because the robots we, as human-robot interaction (HRI) researchers, are interested in deploying prioritize expressivity over navigational capabilities, and making those robots autonomous is often not the focus of our research. One way to address these difficulties is with the Wizard of Oz (WoZ) methodology, where a researcher teleoperates the robot during its deployment. However, the constant attention required for teleoperation limits the duration of WoZ deployments, which in turn reduces the amount of in-the-wild data we are able to collect. Our key insight is that several types of in-the-wild mobile robot studies can be run without autonomous navigation, using wandering instead. In this paper we present and share code for our wandering robot system, which enabled Kuri, an expressive robot with limited sensor and computational capabilities, to traverse the hallways of a 28,000 sq ft floor for four days. Our system relies on informed direction selection to avoid obstacles and traverse the space, and on periodic human help to charge. After presenting the outcomes from the four-day deployment, we discuss the benefits of deploying a wandering robot, explore the types of in-the-wild studies that can be run with wandering robots, and share pointers for enabling other robots to wander. Our goal is to add wandering to the toolbox of navigation approaches HRI researchers use, particularly to run in-the-wild deployments with mobile robots.
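
A minimal, hypothetical version of informed direction selection from a laser scan might look like the following; the parameter values and scan format are assumptions, and the released system is more sophisticated.

```python
import numpy as np

def pick_wander_direction(ranges, angle_min, angle_increment,
                          clearance=1.0, window=15):
    """Choose the scan direction whose neighborhood has the largest
    mean free range, subject to a minimum clearance (a guess at the
    kind of informed direction selection described above)."""
    r = np.nan_to_num(np.asarray(ranges), nan=0.0)
    best_i, best_score = None, -1.0
    for i in range(window, len(r) - window):
        neighborhood = r[i - window:i + window + 1]
        if neighborhood.min() < clearance:
            continue                       # too close to an obstacle
        score = neighborhood.mean()
        if score > best_score:
            best_i, best_score = i, score
    if best_i is None:
        return None                        # fully blocked: wait for help
    return angle_min + best_i * angle_increment
```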

Understanding Control Frames in Multi-Camera Robot Telemanipulation

In telemanipulation, showing the user multiple views of the remote environment can offer many benefits, although such different views can also create a problem for control. Systems must either choose a single fixed control frame, aligned with at most one of the views, or switch between view-aligned control frames, enabling view-aligned control at the expense of switching costs. In this paper, we explore the trade-off between these options. We study the feasibility, benefits, and drawbacks of switching the user's control frame to align with the actively used view during telemanipulation. We additionally explore the effectiveness of explicit and implicit methods for switching control frames. Our results show that switching between multiple view-specific control frames offers significant performance gains compared to a fixed control frame. We also find personal preferences for explicit or implicit switching based on how participants planned their movements. Our findings offer concrete design guidelines for future multi-camera interfaces.
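
In planar terms, view-aligned control amounts to rotating the user's input velocity by the active view's offset before sending it to the robot. The sketch below illustrates this; it is a simplified 2D version for intuition, not the authors' implementation.

```python
import numpy as np

def to_base_frame(v_input, view_yaw):
    """Map a user's (x, y) velocity command, expressed in the active
    camera view's frame, into the robot base frame by rotating through
    the view's yaw offset (a planar sketch of view-aligned control)."""
    c, s = np.cos(view_yaw), np.sin(view_yaw)
    R = np.array([[c, -s], [s, c]])
    return R @ np.asarray(v_input)

# Switching views simply swaps the yaw offset used for the mapping:
v_base = to_base_frame([0.2, 0.0], view_yaw=np.pi / 2)
```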

Detecting 3D Hand Pointing Direction from RGB-D Data in Wide-Ranging HRI Scenarios

This paper addresses the detection of 3D hand pointing direction from RGB-D data by a mobile robot. Considering ubiquitous forms of pointing gestures, the 3D pointing direction is assumed to be inferable from hand data only. First, a novel sequential network-based learning model is developed for the simultaneous detection of hands and humans in RGB data. Differing from previous work, its performance is shown to be both accurate and fast. Next, a new geometric method for estimating the 3D pointing direction from depth data of the detected hand is presented, along with a mathematical analysis of sensitivity to sensor noise. Two new data sets, for pointing gesture classification and for continuous 3D pointing direction, with varying proximity, arm pose, and background, are presented. As there are, to the best of our knowledge, no such data sets, both will be made publicly available. Differing from previous work, the robot is able to estimate the 3D hand direction both accurately and quickly regardless of hand proximity, background variability, or the detectability of specific human parts, as demonstrated by end-to-end experimental results.

SESSION: Session: Understanding and Leveraging Humans

Automatic Frustration Detection Using Thermal Imaging

To achieve seamless interactions, robots have to be capable of reliably detecting affective states in real time. One of the possible states that humans go through while interacting with robots is frustration. Detecting frustration from RGB images can be challenging in some real-world situations; thus, we investigate in this work whether thermal imaging can be used to create a model that is capable of detecting frustration induced by cognitive load and failure. To train our model, we collected a data set from 18 participants experiencing both types of frustration induced by a robot. The model was tested using features from several modalities: thermal, RGB, Electrodermal Activity (EDA), and all three combined. When data from both frustration cases were combined and used as training input, the model reached an accuracy of 89% with just RGB features, 87% using only thermal features, 84% using EDA, and 86% when using all modalities. Furthermore, the highest accuracy for the thermal data was reached using three facial regions of interest: nose, forehead and lower lip.

Asking Follow-Up Clarifications to Resolve Ambiguities in Human-Robot Conversation

When a robot aims to comprehend its human partner's request by identifying the referenced objects in Human-Robot Conversation, ambiguities can occur because the environment might contain many similar objects or the objects described in the request might be unknown to the robot. In the case of ambiguities, most of the systems ask users to repeat their request, which assumes that the robot is familiar with all of the objects in the environment. This assumption might lead to task failure, especially in complex real-world environments. In this paper, we address this challenge by presenting an interactive system that asks for follow-up clarifications to disambiguate the described objects using the pieces of information that the robot could understand from the request and the objects in the environment that are known to the robot. To evaluate our system while disambiguating the referenced objects, we conducted a user study with 63 participants. We analyzed the interactions when the robot asked for clarifications and when it asked users to re-describe the same object. Our results show that generating follow-up clarification questions helped the robot correctly identify the described objects with fewer attempts (i.e., conversational turns). Also, when people were asked clarification questions, they perceived the task as easier, and they evaluated the task understanding and competence of the robot as higher. Our code and anonymized dataset are publicly available at https://github.com/IrmakDogan/Resolving-Ambiguities.
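
One simple way to realize such follow-up clarifications is to ask about the attribute that best splits the remaining candidate objects. The sketch below is a toy illustration of this idea; the attribute representation is hypothetical, and the released system is considerably richer.

```python
from collections import Counter

def next_clarification(candidates, attributes):
    """Choose the attribute whose values split the remaining candidate
    objects most evenly, and ask about it (a simplified stand-in for a
    clarification strategy). Candidates are dicts such as
    {'color': 'red', 'size': 'small'}."""
    best_attr, best_spread = None, 1.0
    for attr in attributes:
        counts = Counter(c.get(attr) for c in candidates)
        top_share = max(counts.values()) / len(candidates)
        if top_share < best_spread:     # more even split = more informative
            best_attr, best_spread = attr, top_share
    if best_attr is None:
        return "Could you describe the object again?"
    return f"Which {best_attr} is the object you mean?"

print(next_clarification(
    [{'color': 'red', 'size': 'small'}, {'color': 'red', 'size': 'large'}],
    ['color', 'size']))                 # asks about size, not color
```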

Cultural Differences in Indirect Speech Act Use and Politeness in Human-Robot Interaction

How do native English speakers and native Korean speakers politely make a request to a robot? Previous human-robot interaction studies on English have demonstrated that humans frequently use indirect speech acts (ISAs) to make their requests to robots polite. However, it is unknown whether humans use ISAs with robots as extensively in other languages. In addition to ISAs, Korean has other politeness expressions called honorifics, which indicate a different kind of politeness from that of ISAs. This study aimed to investigate the cultural differences in humans' politeness expressions and politeness when they make requests to robots, and to re-examine the effect of the conventionality of context on the use of politeness expressions. We conducted a replication experiment of Williams et al. (2018) with native Korean speakers and analyzed their use of ISAs and honorifics. Our results showed that ISAs are rarely used in task-based human-robot interaction in Korean. Instead, honorifics are more frequently used than ISAs and are more common in conventionalized contexts than in unconventionalized contexts. These results suggest that the differences in politeness expressions and politeness between English and Korean exist in both human-robot interaction and human-human interaction. Furthermore, the conventionality of context strongly constrains humans to follow social norms in both languages.

Configuring Humans: What Roles Humans Play in HRI Research

Humans are an essential part of human-robot interaction (HRI), but what roles do they play in HRI research? Analysis of the role of human subjects in research can serve as an indicator of how the HRI community engages with society. In this paper, we examine humans' roles in the HRI studies published at the ACM HRI conference over the course of 16 years (between 2006-2021). We categorize the studies into three groups. The studies in the first group investigated human nature and studied humans as interchangeable subjects; the studies in the second group addressed humans as users of robots in certain contexts; the third group of studies approached humans as social actors who are closely connected to other actors and thereby generate social dynamics. The contributions of this paper are twofold: First, we reveal the patterns of how humans have been included in HRI studies. Specifically, we find that more than half of the studies limited the role of humans to interchangeable and generalizable actors. Second, we outline three opportunities for the HRI community that arise if human subjects are given more diversified roles in HRI research — opportunities for diversity, social justice, and reflexivity. On this basis, we call for more socially engaged research in HRI.

Correct Me If I'm Wrong: Using Non-Experts to Repair Reinforcement Learning Policies

Reinforcement learning has shown great potential for learning sequential decision-making tasks. Yet, it is difficult to anticipate all possible real-world scenarios during training, causing robots to inevitably fail in the long run. Many of these failures are due to variations in the robot's environment. Usually, experts are called in to correct the robot's behavior; however, some of these failures do not necessarily require an expert to solve them. In this work, we query non-experts online for help and explore 1) if and how non-experts can provide feedback to the robot after a failure and 2) how the robot can use this feedback to avoid such failures in the future by generating shields that restrict or correct its high-level actions. We demonstrate our approach on common daily scenarios of a simulated kitchen robot. The results indicate that non-experts can indeed understand and repair robot failures. Our generated shields accelerate learning and improve data efficiency during retraining.

SESSION: Session: Social and Telepresence Robots

Face on a Globe: A Spherical Robot that Appears Lifelike Through Smooth Deformations and Autonomous Movement

In this paper, we provide insights into a design approach for non-anthropomorphic robots. The diversity of robot design methods is becoming increasingly important: as the applications of robots become more diverse, how these robots interact with humans is also changing. Focusing specifically on non-anthropomorphic robot designs, we created a spherical robot called "Face on a Globe." This robot uses a novel mechanism to smoothly deform and reveal a flat surface that represents an abstract face. We used this mechanism to switch between a pure sphere and a lifelike robot. In the first two studies, we investigated the factors related to the impression of the robot as being "lifelike". The third study added an autonomous motion that turned the robot's face plane toward the direction of a sound. We compared the lifelike impression of this robot with previous iterations. Results demonstrated that the motion gave our robot a stronger lifelike impression. Moreover, we found that even though the lifelike impression of our robot increased with autonomous movements, the contrasting impression of the robot as being "artificial" remained high. Finally, we discuss the possibility of a robot that switches between two forms using a smooth deformation mechanism.

Unwinding Rotations Improves User Comfort with Immersive Telepresence Robots

We propose unwinding the rotations experienced by the user of an immersive telepresence robot to improve comfort and reduce the user's VR sickness. By immersive telepresence we refer to a situation where a 360° camera on top of a mobile robot streams video and audio into a head-mounted display worn by a remote user, possibly far away. It thus enables the user to be present at the robot's location, look around by turning the head, and communicate with people near the robot. By unwinding the rotations of the camera frame, the user's viewpoint is not changed when the robot rotates. The user can change her viewpoint only by physically rotating in her local setting; as visual rotation without the corresponding vestibular stimulation is a major source of VR sickness, physical rotation by the user is expected to reduce VR sickness. We implemented unwinding of the rotations for a simulated robot traversing a virtual environment and ran a user study (N=34) comparing unwound rotations against turning the user's viewpoint when the robot turns. Our results show that users found unwound rotations preferable and more comfortable, and that unwinding reduced their level of VR sickness. We also present further results about the users' path integration capabilities, viewing directions, and subjective observations of the robot's speed and distances to simulated people and objects.
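
At its core, unwinding is a frame subtraction: the robot's rotation is removed from the rendered viewing direction, so only the user's physical head rotation moves the viewpoint. A minimal planar sketch, assuming yaw-only rotation (the actual system works with full 3D orientations):

```python
def unwound_view_yaw(hmd_yaw, robot_yaw):
    """Yaw at which to sample the 360° video: the robot's rotation is
    subtracted ("unwound"), so only the user's physical head rotation
    changes the rendered viewpoint."""
    return hmd_yaw - robot_yaw

# If the robot turns ~90° while the user's head is still, the sampling
# angle shifts by -90° within the camera frame, so the world-fixed view
# the user sees does not rotate:
assert unwound_view_yaw(0.0, 1.57) == -1.57
```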

Costume vs. Wizard of Oz vs. Telepresence: How Social Presence Forms of Tele-operated Robots Influence Customer Behavior

In this study, we explore the effective form of social presence for a tele-operated robot providing customer service. In particular, we address the question of whether and how a tele-operated robot should display the presence of its operator, and what effects this has on people's perception of and behavior toward the robot. We launched a tele-operated robot in a supermarket and had it deliver recipe flyers. We adjusted the robot's social presence by showing or not showing the photo of the operator's face (F) and using or not using voice conversion (V), leading to three forms of presence: Wizard of Oz (F: no, V: yes), costume (F: yes, V: yes), and telepresence (F: yes, V: no), which indicated the operator's presence from a low to a high level. We found that customers behaved significantly differently when they faced the tele-operated robot in its different forms. The robot that exhibited a moderate presence of the operator (costume form) achieved the overall best performance. Based on these findings, we discuss both the strengths and weaknesses of the three forms of presence for a tele-operated robot and recommend the appropriate form for various applications.

You Got the Job! Understanding Hiring Decisions for Robots as Organizational Members

As social robots will likely be central to future human-robot interactions at work, we assess hiring decisions for social robots as a natural first step prior to their integration into organizations. With a basis in the technology acceptance model and social identity theory, this study focuses on differences between humanoid robotic, android robotic and human candidates. We first examine performance-based evaluations of the applicants by focusing on expectation disconfirmation. While for the human candidate, the interplay between expectations and experiences is decisive for the judgement, for social robots, the actual experience of the hiring situation dominates the decision. Besides the rational decision criteria, we further look into social-cue-based evaluations as social biases in hiring situations. Categorization as social ingroup leads to an absolute preference for the human candidate (i.e., ingroup favoritism) with no differences in preference for the robotic social outgroup (i.e., outgroup homogeneity effect).

Learning Socially Appropriate Robo-waiter Behaviours through Real-time User Feedback

Current Humanoid Service Robot (HSR) behaviours mainly rely on static models that cannot adapt dynamically to individual customer attitudes and preferences. In this work, we focus on empowering HSRs with adaptive feedback mechanisms driven by either implicit reward, estimated from facial affect, or explicit reward, incorporated from the verbal responses of the human 'customer'. To achieve this, we first create a custom dataset, annotated using crowd-sourced labels, to learn appropriate approach (positioning and movement) behaviours for a robo-waiter. This dataset is used to pre-train a Reinforcement Learning (RL) agent to learn behaviours deemed socially appropriate for the robo-waiter. This model is later extended to include separate implicit and explicit reward mechanisms that allow for interactive learning and adaptation from user social feedback. We present a within-subjects Human-Robot Interaction (HRI) study with 21 participants, in which human customers interacted with the robo-waiter under the above-mentioned model variations. Our results show that both explicit and implicit adaptation mechanisms enabled the adaptive robo-waiter to be rated as more enjoyable and sociable, and its positioning relative to the participants as more appropriate, compared to the pre-trained model or a randomised control implementation.

Sharing the Spotlight: Co-presenting with a Humanoid Robot

Public speaking is important in the sciences, but poor quality presentations are common, as are high rates of public speaking anxiety. In this work we explore the use of a mobile humanoid robot as a co-presenter that can share the stage with a scientist giving an oral presentation. We conducted a within-subjects experiment comparing presentations given with and without the robot and impacts on public speaking anxiety and speaker confidence. We found that participants reported significantly greater confidence and lower public speaking anxiety when co-presenting with the robot, compared to when they presented on their own, without the robot. Audiences accepted scientific presentations given with the robot, rating these presentations significantly greater than neutral on presentation quality.

SESSION: Alt.HRI

Robotic Improvisers: Rule-Based Improvisation and Emergent Behaviour in HRI

A key challenge in human-robot interaction (HRI) design is to create and sustain engaging social interactions. This paper argues that improvisational techniques from the performing arts can address this challenge. Contrary to the ways in which improvisation is generally used in social robotics, we propose an understanding of improvisational techniques as based on rules that shape motion choices. We claim that such an approach, represented in what we name the "external" and "emergent" perspectives on improvisation, could benefit the way in which robot movement and behaviour is designed and deployed, increasing playful engagement and responsiveness. As an example of this type of improvisation, we discuss how American dancer and choreographer William Forsythe's Improvisation Technologies could be used in an HRI context. We also report on a preliminary experiment using a Wizard-of-Oz exploratory prototyping system and a participatory design method with professional dancers, geared towards the exploration of interactive movement possibilities with a Pepper robot. Finally, we report on how this workshop offered valuable information about the applicability of these tools, as well as reflections on how it could help increase the level of engagement in the interaction.

Children's Perspectives of Advertising with Social Robots: A Policy Investigation

Children are beginning to interact and develop rapport with social robots in their homes. These devices pose new concerns around marketing to children, including how advertisements can and should be embedded in a robot and its persona, and which methods of conveying advertisements to the user are deceptive. In this paper, we engage 62 children ages 9-12 in an activity to design future robot advertising policies. Results demonstrate that children prefer robots to advertise to them through casual conversations, citing a more positive user experience and the benefit of personalized and conversationally relevant advertising. These findings illuminate a tension between child preferences and more deceptive advertising policies. Overall, the work presented in this paper prompts new design and legal policy questions about how, and whether, robots should advertise to children.

Practical, Ethical, and Overlooked: Teleoperated Socially Assistive Robots in the Quest for Autonomy

Socially Assistive Robots (SARs) show significant promise in a number of domains: providing support for the elderly, assisting in education, and aiding in therapy. Perhaps unsurprisingly, SAR research has traditionally focused on providing evidence for this potential. In this paper, we argue that this focus has led to a lack of critical reflection on the appropriate level of autonomy (LoA) for SARs, which has in turn led to blind spots in the research literature. Through an analysis of the past five years of HRI literature, we demonstrate that SAR researchers are overwhelmingly developing and envisioning autonomous robots. Critically, researchers do not include a rationale for their choice of LoA, making it difficult to determine their motivation for fully autonomous robots. We argue that defaulting to researching fully autonomous robots is potentially short-sighted, as applying LoA selection guidelines to many SAR domains would seem to warrant levels of autonomy closer to teleoperation. We moreover argue that this is an especially critical oversight because teleoperated robots warrant different evaluation metrics than autonomous robots do, since teleoperation introduces an additional user: the teleoperator. Taken together, this suggests a mismatch between LoA selection guidelines and the vision of SAR autonomy found in the literature. Based on this mismatch, we argue that the next five years of SAR research should be characterized by a shift in focus towards teleoperation and teleoperators.

Promoting Children's Critical Thinking Towards Robotics through Robot Deception

The need for critically reflecting on the deceptive nature of advanced technologies, such as social robots, is urging academia and civil society to rethink education and the skills needed by future generations. The promotion of critical thinking, however, remains largely unaddressed within the field of educational robotics. To address this gap and question if and how robots can be used to promote critical thinking in young children's education, we conducted an explorative design study named Bringing Shybo Home. Through this study, in which a robot was used as a springboard for debate with twenty 8- to 9-year-old children at school, we exemplify how the deceptive nature of robots, if embraced and magnified in order for it to become explicitly controversial, can be used to nurture children's critical mindset.

Gender Fairness in Social Robotics: Exploring a Future Care of Peripartum Depression

In this paper we investigate the possibility of socially assistive robots (SARs) supporting diagnostic screening for peripartum depression (PPD) within the next five years. Through an HRI/socio-legal collaboration, we explore the gender norms surrounding PPD in Sweden to inform a gender-sensitive approach to designing SARs in such a setting, as well as the governance implications. This is achieved by conducting expert interviews and qualitatively analysing the data. Based on the results, we conclude that a gender-sensitive approach is a necessity in relation to both the design and the governance of SARs for PPD screening.

SESSION: Short Contribution

PointIt: A ROS Toolkit for Interacting with Co-located Robots using Pointing Gestures

We introduce PointIt, a toolkit for the Robot Operating System (ROS 2) to build human-robot interfaces based on pointing gestures sensed by a wrist-worn Inertial Measurement Unit, such as a smartwatch. We release the software as open source under the MIT license; Docker images and exhaustive instructions simplify its usage in simulated and real-world deployments.
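
A consumer of the kind of wrist-IMU stream PointIt builds on might look like the minimal ROS 2 node below; the topic name is an assumption for illustration, not PointIt's documented interface.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Imu

class PointingListener(Node):
    """Minimal ROS 2 node that consumes smartwatch IMU orientation,
    the kind of input a pointing-gesture interface builds on."""
    def __init__(self):
        super().__init__('pointing_listener')
        # '/smartwatch/imu' is a hypothetical topic name
        self.create_subscription(Imu, '/smartwatch/imu', self.on_imu, 10)

    def on_imu(self, msg: Imu):
        q = msg.orientation        # wrist orientation as a quaternion
        self.get_logger().info(
            f'quat: {q.x:.2f} {q.y:.2f} {q.z:.2f} {q.w:.2f}')

def main():
    rclpy.init()
    rclpy.spin(PointingListener())

if __name__ == '__main__':
    main()
```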

APReL: A Library for Active Preference-based Reward Learning Algorithms

Reward learning is a fundamental problem in human-robot interaction: robots should operate in alignment with what their human users want. Many preference-based learning algorithms and active querying techniques have been proposed as solutions to this problem. In this paper, we present APReL, a library for active preference-based reward learning algorithms, which enables researchers and practitioners to experiment with existing techniques and easily develop their own algorithms for various modules of the problem. APReL is available at https://github.com/Stanford-ILIAD/APReL.
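
To give a flavor of the underlying problem, the sketch below fits a linear reward from pairwise preference queries under a Bradley-Terry model. It is a generic illustration of preference-based reward learning, not APReL's API.

```python
import numpy as np

def learn_reward(features_a, features_b, prefers_a, lr=0.1, steps=200):
    """Fit a linear reward w from pairwise preferences under a
    Bradley-Terry model: P(a preferred over b) = sigmoid(w.(f_a - f_b)).
    features_a, features_b: (n, d) trajectory feature arrays;
    prefers_a: length-n array of 0/1 preference labels."""
    w = np.zeros(features_a.shape[1])
    for _ in range(steps):
        diff = features_a - features_b                 # (n, d)
        p = 1.0 / (1.0 + np.exp(-diff @ w))            # P(a preferred)
        grad = diff.T @ (np.asarray(prefers_a) - p)    # log-likelihood gradient
        w += lr * grad / len(p)
    return w
```

Active querying, which APReL also covers, then amounts to choosing the next pair (a, b) expected to be most informative about w.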

Jaco: An Offline Running Privacy-aware Voice Assistant

With recent advances in speech technology, smart voice assistants have improved and are now used by many people. However, these assistants often run online as cloud services and are not always known for protecting users' privacy well. This paper presents the architecture of a novel voice assistant, called Jaco, with the following features: (a) It can run completely offline, even on low-resource devices like a Raspberry Pi. (b) Through a skill concept, it can be easily extended. (c) The architectural focus is on protecting users' privacy, without restricting capabilities for developers. (d) It supports multiple languages. (e) It is competitive with other voice assistant solutions. In this respect, the assistant combines and extends the advantages of other approaches.

Projecting Robot Navigation Paths: Hardware and Software for Projected AR

For mobile robots, mobile manipulators, and autonomous vehicles to safely navigate around populous places such as streets and warehouses, human observers must be able to understand their navigation intent. One way to enable such understanding is by visualizing this intent through projections onto the surrounding environment. But despite the demonstrated effectiveness of such projections, no open codebase with an integrated hardware setup exists. In this work, we detail the empirical evidence for the effectiveness of such directional projections, and share a robot-agnostic implementation of such projections, coded in C++ using the widely-used Robot Operating System (ROS) and rviz. Additionally, we demonstrate a hardware configuration for deploying this software, using a Fetch robot, and briefly summarize a full-scale user study that motivates this configuration. The code, configuration files (roslaunch and rviz files), and documentation are freely available on GitHub at https://github.com/umhan35/arrow_projection.
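
The released implementation is in C++; to give a flavor of the idea, the sketch below publishes a direction arrow as an rviz marker from a ROS 2 Python node. The topic and frame names are illustrative, not those of the released package.

```python
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker
from geometry_msgs.msg import Point

class IntentArrow(Node):
    """Publishes an arrow marker along the robot's next path segment,
    which a downward-facing projector can then render on the floor."""
    def __init__(self):
        super().__init__('intent_arrow')
        self.pub = self.create_publisher(Marker, 'nav_intent', 10)
        self.create_timer(0.2, self.publish_arrow)

    def publish_arrow(self):
        m = Marker()
        m.header.frame_id = 'base_link'      # hypothetical robot frame
        m.type, m.action = Marker.ARROW, Marker.ADD
        m.points = [Point(x=0.0, y=0.0), Point(x=1.0, y=0.0)]  # 1 m ahead
        m.scale.x, m.scale.y = 0.05, 0.1     # shaft and head width
        m.color.g, m.color.a = 1.0, 1.0      # opaque green
        self.pub.publish(m)

rclpy.init()
rclpy.spin(IntentArrow())
```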

A New VR Kitchen Environment for Recording Well Annotated Object Interaction Tasks

This paper presents the Virtual Annotated Cooking Environment (VACE), a new open-source virtual reality dataset (https://sites.google.com/view/vacedataset) and simulator (https://github.com/michaelkoller/vacesimulator) for object interaction tasks in a rich kitchen environment. We use the Unity-based VR simulator to create thoroughly annotated video sequences of a virtual human avatar performing food preparation activities. Based on the MPII Cooking 2 dataset, it enables the recreation of recipes for meals such as sandwiches, pizzas, and fruit salads, and of smaller activity sequences such as cutting vegetables. For complex recipes, multiple samples are present, following different orderings of valid partially ordered plans. The dataset includes RGB and depth camera views, bounding boxes, object segmentation masks, human joint poses and object poses, as well as ground-truth interaction data in the form of temporally labeled semantic predicates (holding, on, in, colliding, moving, cutting). Because the simulator is accessible as an open-source tool, researchers can expand the setting and annotations to create additional data samples.

Gender Neutrality in Robots: An Open Living Review Framework

Gender is a primary characteristic by which people organize themselves. Previous research has shown that people tend to unknowingly ascribe gender to robots based on features of their embodiment. Yet, robots are not necessarily ascribed the same, or any, gender by different people. Indeed, robots may be ascribed non-human genders or used as "genderless" alternatives. This underlies the notion of gender neutrality in robots: neither masculine nor feminine but somewhere in between or even beyond gender. Responding to calls for gender as a locus of study within robotics, we offer a framework for conducting an open living review to be updated periodically as work emerges. Significantly, we provide an open, formalized submission process and open access dataset of research on gender neutrality in robots. This novel and timely approach to consensus-building is expected to pave the way for similar endeavours on other key topics within human-robot interaction research.

DReyeVR: Democratizing Virtual Reality Driving Simulation for Behavioural & Interaction Research

Simulators are an essential tool for behavioural and interaction research on driving, due to the safety, cost, and experimental control issues of on-road driving experiments. The most advanced simulators use expensive 360-degree projection systems to ensure visual fidelity, full field of view, and immersion. However, similar visual fidelity can be achieved affordably using a virtual reality (VR) based visual interface. We present DReyeVR, an open-source VR-based driving simulator platform designed with behavioural and interaction research priorities in mind. DReyeVR (read "driver") is based on Unreal Engine and the CARLA autonomous vehicle simulator and has features such as eye tracking, a functional driving heads-up display (HUD) and vehicle audio, custom definable routes and traffic scenarios, experimental logging, replay capabilities, and compatibility with ROS. We describe the hardware required to deploy this simulator for under 5000 USD, much cheaper than commercially available simulators. Finally, we describe how DReyeVR may be leveraged to answer an interaction research question in an example scenario. DReyeVR is open source at https://github.com/HARPLab/DReyeVR

An Analysis of Metrics and Methods in Research from Human-Robot Interaction Conferences, 2015-2021

Standardized metrics and methods are critical towards wider adoption of HRI technologies in real-world applications. However, the interdisciplinary nature of HRI creates an inherently decentralized research paradigm that limits the use of standardized metrics for baseline comparisons among studies. This limitation restricts both the real-world adoption and academic replicability of HRI solutions developed by the research community. To identify specific opportunities for reuse of metrics and methods in HRI, this paper presents a comprehensive survey of 1464 papers from the ACM/IEEE International Conference on Human-Robot Interaction (HRI) and the IEEE International Conference on Robot and Human Interactive Communication (Ro-Man) over seven years. By providing a holistic perspective of the metrological tools leveraged in the current state-of-practice of HRI research, we find that a significant portion of HRI studies use custom surveys, thus limiting baseline comparison. Hence, the analysis in this work aims to advance the field of HRI by identifying specific barriers to adoption of HRI technologies in addition to proposing solutions to overcome existing limitations in the context of metrics and methodologies.

SESSION: Late-Breaking Reports

Gamer Breakbots: Exploring Robots as a Way for Gamers to Manage Break Time and Alleviate Potential Health Issues

As more than half of the U.S. population identifies as video gamers, a growing number of studies have investigated the health issues of gamers who spend prolonged time playing games. Potential health issues of gamers include eye fatigue, neck/back pain, wrist pain, mental stress, and other issues. To alleviate these potential health issues, this paper investigates how robots can help gamers better manage their health. We conducted three studies with gamers: an informal interview, an online survey, and a co-design session. Across the three studies, we found that gamers focused on two robot design factors: how robots get gamers' attention, and how they force gamers to pause their play while still seeming friendly. Our preliminary results indicate substantial promise regarding the role of robots in gamers' health management, which we will investigate in future research.

Development of a Snuggling Robot That Relieves Human Anxiety

Interacting with animals can relieve human anxiety and stress. In this research, we focus on the "snuggling behaviors" of such animals (e.g., cats rubbing their bodies against a wall). We implement cat-inspired snuggling behaviors in a robot and investigate whether such behaviors are effective in reducing human anxiety. In this paper, we report on our initial attempt and early results.

"My Robot Friend": Application of Intergroup Contact Theory in Human-Robot Interaction

We present pilot data for one of the first comprehensive investigations of Intergroup Contact Theory [1], [2] in the context of human-robot interaction. Applying an actual intergroup contact procedure known to affect intergroup attitudes among humans (e.g., [3]), we examined whether human-robot interaction as a positive intergroup contact would change participants' evaluation of robots. Our data from 28 student participants (N = 15 in the interaction condition and N = 13 in the no-interaction condition) suggest that after the participant and robot self-disclosed to each other (Fast Friendship Task), participants (1) felt more positive emotions towards robots, (2) perceived robots as warmer, and (3) identified robots as more similar to humans. These preliminary findings invite further research on the application of Intergroup Contact Theory in examining social human-robot interaction and its possible contributions to understanding human psychology.

AMIGUS: A Robot Companion for Students

The pandemic has made everybody stay home to study, and there have always been isolated people who need a little push to study or to interact with others. AMIGUS is a social robot that supports students dealing with online classes through companionship, motivational messages, and help with tasks.

Robot-Assisted Language Learning Increases Functional Connectivity in Children's Brain

The current study investigated how robot tutors influence brain activity during child-robot interaction (CRI) for second language vocabulary learning. We gathered EEG signals from two groups of children: 1) a Robot group (N=21), who listened to a storytelling social robot and learned French words, and 2) a Display group (N=20), who listened to the same story in French mediated by only a computer screen. To measure learning-induced changes in the brain, functional connectivity analysis was conducted on the EEG signals, quantifying the communication between brain regions during the learning phase. Results showed significantly higher functional brain connectivity for the Robot group in the theta frequency band, which has previously been associated with language functions in the neuroscience literature. Our results provide neurophysiological evidence for the benefit of robot tutors in second language learning in children.

Drone Brush: Mixed Reality Drone Path Planning

In this paper we present Drone Brush, a prototype mixed reality interface for immersive planning of drone paths for tasks such as collaborative photogrammetry and inspection. This interface employs Microsoft's HoloLens 2 to allow users to draw paths for drone navigation in 3D using hand gestures. Users can place waypoints with a simple pinch gesture, and similarly, delete and move existing waypoints. To validate paths, we leverage the HoloLens spatial map to check for potential collisions ahead of time, greatly reducing the likelihood of a collision during drone navigation. Paths are simplified and cleaned up using density-based clustering to prevent complex or redundant drone movement. In this Late-Breaking Report, we present the design and implementation of our system that integrates mixed reality, natural hand gestures, and drone path planning, which we plan to evaluate in a user study in the near future.
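
The waypoint clean-up step can be illustrated with off-the-shelf density-based clustering. The sketch below collapses nearby hand-drawn waypoints into ordered centroids; the parameter values are assumptions, and the HoloLens-side implementation differs.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def simplify_waypoints(points, eps=0.3):
    """Collapse clusters of nearby hand-drawn waypoints into their
    centroids, ordered by first appearance, to remove redundant drone
    movement (a guess at the clustering step described above)."""
    pts = np.asarray(points)
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(pts)
    order, seen = [], set()
    for lbl in labels:                      # preserve drawing order
        if lbl not in seen:
            seen.add(lbl)
            order.append(lbl)
    return [pts[labels == lbl].mean(axis=0) for lbl in order]
```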

Inducing Changes in Breathing Patterns Using a Soft Robot

In this study, we examine whether touching a soft robot while doing different tasks can make participants synchronize their breathing rhythm with the robot. 28 participants interacted with the robot, which either was inflated and deflated, thus simulating breathing, or remained inactive. During the experiment, data were collected through two breathing belts and an EEG device. The findings of the study suggest higher arousal associated with positive emotional valence for participants in the breathing robot condition compared to the inactive robot condition. The participants in the breathing robot condition also breathed more deeply and regularly and blinked fewer times, a finding that suggests lower stress levels in comparison with people who interacted with the inactive robot. The analysis of the data suggests that touching the breathing robot led to some degrees of stress reduction, yet without leading to synchronization with the robot's inhalation rhythm.

Force and Gesture-based Motion Control of Human-Robot Cooperative Lifting Using IMUs

Cooperative lifting (co-lift) is an important application of HRI with use-cases in many fields such as manufacturing, assembly, and medical rehabilitation. Successful industrial implementation of co-lifting requires the operations of approaching, attaching, lifting, carrying, and placing the object to be handled as a whole rather than individually. In this paper, we target all stages of cooperative lifting in a holistic approach and extend the previous results of [1], which used IMU-based human motion estimates, by introducing force-based control. We demonstrate through experiments on a UR5e robot how the force-based approach significantly improves on the position-based approach of [1]. Additionally, we improve the real-time control capabilities of the system by using a real-time data exchange communication interface. We believe that our system can be a starting point for more human motion/gesture-based HRI applications, as well as for increasing the uptake of human-robot co-lifting systems in industrial settings.

Human-Robot Conflict Resolution at an Elevator – The Effect of Robot Type, Request Politeness and Modality

Human-robot conflicts might occur in the future, for instance, if a robot requests a public resource (e.g., an elevator). How such a request can be designed to be acceptable and effective, with respect to robot type, modality, and politeness, needs to be investigated. In this interactive video-based online study (N = 390), a robot requested priority over an elevator using either a polite or an assertive conflict resolution strategy, presented via speech or on the robot's display. The robot was either humanlike, zoomorphic, or mechanoid. The mechanoid robot achieved more compliance than the humanoid robot when it used a verbal command. When the humanoid robot displayed the command instead of using its voice, more participants granted the robot priority. This might indicate that politeness norms are triggered more by a humanoid design and that, if a robot makes a command, the modality should match the robot type.

Handheld Augmented Reality: Overcoming Reachability Limitations by Enabling Temporal Switching to Virtual Reality

The paper presents an approach for handheld augmented reality in constrained industrial environments, where it might be hard or even impossible to reach certain poses within a workspace. As a result, a user might not be able to see or interact with some digital content in applications like visual robot programming, robotic program visualization, or workspace annotation. To overcome this limitation, we propose temporal switching to a non-immersive virtual reality, enabling the user to see the workspace from any angle and distance. To explore how people would use it and what the benefits would be over pure augmented reality, we chose a representative object alignment task and conducted a study. The results revealed that physical demand, which is often a limiting factor for handheld augmented reality, could be reduced, and that the usability and utility of the approach are rated as high. In the next iteration, we want to investigate other possibilities for controlling the viewpoint in the virtual environment, as the current approach has potential for improvement.

Learning from Carers to inform the Design of Safe Physically Assistive Robots - Insights from a Focus Group Study

This research investigates how professional carers physically assist frail older adults. Carers were asked to discuss their approach and steps for providing safe physical assistance and to highlight hazards that assistive robots would have to deal with in such situations. The aspects raised by the carers indicate that, irrespective of the degree of vulnerability of the older adults, carers can evaluate trust and the older adults' ability and willingness to collaborate during the assistive task through multiple modalities. These include tactile, visual, and verbal cues, which the carers use to discern a measure of collaboration and adapt their assistance accordingly. Understanding these hazards and how carers achieve effective collaboration is a vital step towards developing safe physically assistive robots.

Initial Test of "BabyRobot" Behaviour on a Teleoperated Toy Substitution: Improving the Motor Skills of Toddlers

This article introduces "Baby Robot", a robot designed to improve infants' and toddlers' motor skills. The robot is a car-like toy that moves autonomously using reinforcement learning and computer vision. Its behaviour consists of escaping from a target infant that has been previously recognized, or at least detected, while avoiding obstacles so as not to compromise the infant's safety. Among robots that share this purpose, a variety of commercial toys is available on the market; however, none offers intelligent autonomous movement, as they tend to repeat simple, repetitive motions. To examine how such autonomous movement may improve infants' mobility, two crawling toys, one standing in for "Baby Robot", were tested in a real environment. These real-life experiments were conducted with a safe and approved surrogate of our proposed robot in a kindergarten, where a group of infants interacted with the toys. Improvements in the efficiency of the play session were detected.

Determining Success and Attributes of Various Feeding Approaches with a Mobile Robot

Robot feeding is a new but growing application, with work thus far mostly focusing on mechanical aspects and the ability to acquire food, rather than socially-inspired behaviors or human-centric evaluations. As a precursor to evaluating the ways in which robot arms could expressively offer food to a person in robot-feeding applications, this study explores socially-inspired variables and how they intersect with feeding tools for a distracted participant. The results illustrate the human impact of the robot's approach path in terms of likelihood to take the food and attributions made about the robot. Specifically, this work evaluates participant reactions to a mobile robot approaching a human to offer food with varied expressive pathways and utensils, implementing six different approaches, each a combination of delivery tool and approach path (N=5, within subjects, one-week data collection period). Future designers of robots that feed humans may be interested to know that our participants often associated direct approaches with aggressiveness and indirect approaches with confusion, but found semi-direct approaches to be the most helpful, perhaps because they were seen as polite but intentionally clear. Moreover, utensil choice can emphasize the path's directionality and the robot's perceived aggressiveness. The results indicate that future work on robot arms would benefit from considering the social attributes of how the robot is perceived, as, depending on the desired effect, certain approach styles and implements may be recommended over others.

Instruct or Evaluate: How People Choose to Teach Norms to Social Robots

Robots deployed in social settings must act appropriately, that is, in compliance with social and moral norms. However, efforts to teach norms to robots have typically relied on a single teaching method (e.g., instruction, reward). By contrast, humans may naturally use more than one teaching method when training a novice. To test this claim in the domain of human-robot teaching, we present a novel paradigm in which participants interactively teach a simulated robot to behave appropriately in a healthcare setting, choosing either to instruct the robot or to evaluate its proposed actions. We demonstrate that 89% of human teachers naturally adopt mixed teaching strategies. We further identify some of the factors that influence people's choices. Results reveal that human teachers dynamically update their impression of the robot from early to late in the teaching session, and they choose their teaching strategy based on the robot's specific actions and their accumulated perceptions of the robot's learning progress.

Speech Impact in a Usability Test - A Case Study of the KUBO Robot

In interactions with robots, verbal communication is important and can affect how the robot is perceived. Experiences from a pilot study showed that the KUBO robot was not intuitive to use. To investigate whether tailored verbal utterances could make the robot more intuitive to use, we carried out a new usability test combined with a Wizard of Oz method, in which a facilitator played verbal utterances when KUBO drove over specific TagTiles. The test shows that the tailored verbal utterances helped participants understand KUBO and its actions when passing over the TagTiles, which made the robot more intuitive to use. Even though the verbal utterances helped the users, there were still some TagTiles they did not understand. The test also showed a considerable mismatch between what participants stated they understood and what they were observed to actually understand.

Authoring Human Simulators via Probabilistic Functional Reactive Program Synthesis

One of the core challenges in creating interactive behaviors for social robots is testing. Programs implementing interactive behaviors require real humans for testing, and this requirement makes testing extremely expensive. To address this problem, human-robot interaction researchers have proposed using human simulators. However, human simulators are tedious to set up and context-dependent, and therefore are not widely used in practice. We propose a program synthesis approach to building human simulators for the purpose of testing interactive robot programs. Our key ideas are (1) representing human simulators as probabilistic functional reactive programs and (2) using probabilistic inference to synthesize human simulator programs. Programmers will then be able to build human simulators by providing interaction traces between a robot and a human, or between two humans, which they can later use to test interactive robot programs and improve or tweak as needed.

How to Make Robots' Optimal Anthropomorphism Level: Manipulating Social Cues and Spatial Context for an Improved User Experience

With the growing interest in robot-related research and industry, there is a demand to shape user experience in human-robot interaction with greater sophistication. The purpose of this study is to define the elements for manipulating a robot's verbal anthropomorphism and to investigate its influence on user experience in association with spatial context. Based on the identified elements, we divided the robot's anthropomorphism into three levels (high, medium, low) and combined them with two spatial contexts (open, closed). The results revealed that a higher level of verbal anthropomorphism mostly induced positive user experiences; however, people sometimes tended to prefer a medium level, especially in terms of usefulness. Further, privacy concerns were significantly higher in the open space. Consequently, we propose that designers and researchers move beyond the two levels of anthropomorphism (e.g., high or low, present or absent) generally used in prior studies to a new perspective that also considers spatial context.

Towards using Behaviour Trees for Long-term Social Robot Behaviour

This paper introduces a Behaviour Tree based design of long-term social robot behaviour in the context of the SHAPES project, using ROS-compatible libraries. It covers two types of behaviours: an idle behaviour, in which the human approaches the robot and begins the interaction, and a second behaviour, in which the robot actively navigates and searches for a specific user to deliver a reminder (see the sketch below). The behaviours will be tested on-site as part of the SHAPES pilots and adjusted based on feedback and needs; the work focuses on long-term robot acceptance.
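
For readers unfamiliar with the pattern, the reminder behaviour can be pictured as a small tree of tickable nodes. The sketch below is a self-contained toy version in Python, not the SHAPES implementation or any particular ROS library.

```python
class Behaviour:
    def tick(self):  # returns "SUCCESS", "FAILURE", or "RUNNING"
        raise NotImplementedError

class Sequence(Behaviour):
    """Ticks children in order; fails or keeps running as soon as a
    child does. A minimal stand-in for a behaviour-tree library."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "SUCCESS":
                return status
        return "SUCCESS"

class FindUser(Behaviour):
    def tick(self):
        return "RUNNING"              # navigate and search for the user

class DeliverReminder(Behaviour):
    def tick(self):
        return "SUCCESS"              # speak the reminder

reminder = Sequence(FindUser(), DeliverReminder())
print(reminder.tick())                # "RUNNING" until the user is found
```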

Robot Self-defense: Robot, Don't Hurt Me, No More

Would it be okay for a robot to hurt a human, if by doing so it could protect someone else? Such ethical questions could be vital to consider, as the market for social robots grows larger and robots become increasingly prevalent in our surroundings. Here we introduce the topic of "robot self-defense", which involves the use of force by a robot in response to violence, to protect a human in its care. To explore this topic, we conducted a preliminary analysis of the literature, as well as brainstorming sessions, which led us to formulate an idea about how people will perceive robot self-defense based on the perceived risk of loss. Additionally, we propose a study design to investigate how the general public will perceive the acceptability of a robot using self-defense techniques. As part of this, we describe some hypotheses based on the assumption that perceived acceptability will be affected by both the entities involved in a violent situation and the amount of force that is applied. The proposed scenarios will be used in a future survey to evaluate participants' perception of a social robot using self-defense techniques under varying circumstances, toward stimulating ideation and discussion on how robots will be able to help people live better lives.

Fluid Sex Robots: Looking to the 2LGBTQIA+ Community to Shape the Future of Sex Robots

As sex robots continue to be developed by industry, portrayed by media, and studied by researchers, it is common to conceptualize these robots from a cisgender and heterosexual (cishet) or feminist perspective. We advocate for an increased shift toward the 2LGBTQIA+ community for inspiration and a path forward for more inclusive, successful, and socially responsible sex robots. In addition to the intrinsic value of being inclusive, looking to the 2LGBTQIA+ community can help us break away from traditional ideas of gender and sexuality and unlock the full potential of this technology to be flexible and offer new possibilities. Further, we reflect on the importance of considering how the designs of sex robots, as politically charged technological artifacts, can contribute to reinforcing ideas about heteronormativity; instead, sex robots have the potential to positively contribute to breaking down traditional barriers surrounding gender and sex. We envision a future of sex robots that reach their full potential as fluid, individualized companions that enable people to comfortably engage with their interests and identity.

Robot Teleoperation Interfaces for Customized Therapy for Autistic Children

Socially Assistive Robots are effective at supporting autistic children in a variety of different therapies. Therapists can control the robots' motions and verbalizations to engage children and deliver therapeutic interventions based on their needs. We present teleoperation capabilities to support therapists in customizing therapy to their clients' needs. Specifically, we introduce a documentation sidebar that aims to prime therapists using their clients' documented needs, and a session summary report that helps therapists reflect on the session with the child. We present preliminary designs for these capabilities and describe future work to build upon them.

Role of Socially Assistive Robots in Reducing Anxiety and Preserving Autonomy in Children

Anxiety in children is gradually becoming a problem that needs to be addressed urgently. Socially Assistive Robots (SARs) have shown great potential in anxiety treatment among adults and elders. However, their application to childhood anxiety has scarcely been tested. Autonomy is also an influential factor in psychological therapy sessions with children. This study, using state anxiety as a starting point, aims to investigate the effectiveness of SARs and the role of autonomy in reducing children's anxiety using Progressive Muscle Relaxation (PMR). Participants were 69 Chinese children aged from 10 to 12. We found that SARs significantly reduced state anxiety levels in all conditions, but no differences between the levels of intervention and autonomy were found. Further research from various perspectives is suggested.

An Intelligent Human Avatar to Debug and Challenge Human-aware Robot Navigation Systems

Experimenting with, testing, and debugging robot social navigation systems is a challenging task. While simulation is generally well suited to a first level of debugging and evaluation of robotic controllers and planners, the social navigation field lacks satisfactory simulators of humans that act, react, and interact rationally and naturally. To facilitate the development of human-aware navigation systems, we propose a system that simulates an autonomous human agent that is both reactive and rational, specifically designed to act and interact with a robot in navigation problems and potential conflicts. It also provides metrics to partially evaluate such interactions, as well as data logs for further analysis. We show the limitations of overused reactive-only approaches. Then, using two different human-aware navigation planners, we show how our system can help address the lack of intelligent human avatars for tuning and debugging social navigation systems before their final evaluation with real humans.

Kinematically-consistent Real-time 3D Human Body Estimation for Physical and Social HRI

We present a software tool, fully integrated with ROS, that enables robots to perceive people's full bodies in 3D. The system works with either a simple RGB camera or an RGB-D camera for better absolute 3D position estimation. It is based on Google MediaPipe and runs at > 8 Hz on CPU. The consistency of the human kinematic model is ensured by relying on a URDF-defined kinematic model, which can be adjusted to each person's anthropometric characteristics.
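The core perception step described above can be approximated with public APIs. As a rough illustration only (the authors' ROS integration and URDF fitting are not shown, and the legacy mp.solutions interface is assumed), a minimal MediaPipe Pose loop extracting metric 3D body landmarks from an RGB stream might look like this:

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose

    # Stream webcam frames through MediaPipe Pose and print one 3D landmark.
    # pose_world_landmarks are metric coordinates centered between the hips.
    with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
        cap = cv2.VideoCapture(0)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_world_landmarks:
                nose = results.pose_world_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                print(f"nose at ({nose.x:.2f}, {nose.y:.2f}, {nose.z:.2f}) m")
        cap.release()

Feeding such landmarks into a URDF-constrained kinematic model, as the paper does, would then enforce consistent limb lengths across frames.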

Open Source System Integration Towards Natural Interaction with Robots

Speech is an intuitive way to interact with social robots: spoken language dialogues can help users express their intents in a natural and flexible manner. In recent years, there has been remarkable progress in artificial intelligence related to spoken dialogue technology, including speech recognition and natural language processing. In this paper, we present the integration of open source speech recognition, natural language processing, and dialogue management components into a robot software platform, and report on a preliminary experiment of the integrated system with real users. The robot's gesturing, which is also important in human-robot interaction, is combined with the spoken content of the robot's utterances and included in the dialogue management component. As the dialogue domain we chose mealtime discussions on food and recipes, since spoken communication with a companion robot in such scenarios is considered natural and useful.

Perceived Trustworthiness of an Interactive Robotic System

This paper compares the level of trust participants place in a robotic system with their trust in a group of people. The experiment focuses on trust in robot knowledge versus trust in collective human knowledge, tested with the help of a quiz. It revealed that the perceived intelligence of the robot depends not only on its in-game performance but also on attributes such as its gender. During the experiments, participants competed with each other, gaining or losing points based on their decisions. A joker system was designed such that, if chosen, participants rely completely on either the robot or the human group. In this setup, perceived intelligence is therefore highly correlated with trustworthiness. Within the scope of this joker-picking system, overall results showed that trust in the robotic system is higher than trust in collective human knowledge.

Measuring Users' Attitudinal and Behavioral Responses to Persuasive Communication Techniques in Human Robot Interaction

Many social robots have been developed to support the needs of users, such as tour guide or sales robots. In these systems, the main purpose of the human-robot interaction is to support the user's needs. However, what if, in addition to these capabilities, the robot had the goal of persuading the user to do something of which the user had no knowledge? What would the user's perceptions of the interaction be? We developed a social robot able to employ six types of persuasive conversation logic, namely scarcity, emotion, social identity, commitment, concreteness, and no persuasion, and measured users' attitudinal and behavioral responses when interacting with our robot. In this pilot study we describe our initial results, with success rates varying across the six persuasion techniques. Particular techniques demonstrated success rates as high as 75% at directing users towards a secret task.

A Task Design for Studying Referring Behaviors for Linguistic HRI

In many domains, robots must be able to communicate to humans through natural language. One of the core capabilities needed for task-based natural language communication is the ability to refer to objects, people, and locations. Existing work on robot referring expression generation has focused nearly exclusively on generation of definite descriptions to visible objects. But humans use many other linguistic forms to refer (e.g., pronouns) and commonly refer to objects that cannot be seen at time of reference. Critically, existing corpora used for modeling robot referring expression generation are insufficient for modeling this wider array of referring phenomena. To address this research gap, we present a novel interaction task in which an instructor teaches a learner in a series of construction tasks that require repeated reference to a mixture of present and non-present objects. We further explain how this task could be used in principled data collection efforts.

Does Encouraging Self-touching Behaviors with Supportive Voices Increase Stress-buffering Effects?

Due to the current worldwide COVID-19 pandemic, measures such as social distancing are being promulgated to reduce community spread. Unfortunately, such steps greatly limit physical touching, and this touch starvation increases people's stress. We address this problem by focusing on touch interactions with oneself: self-touch. We developed a system consisting of a touch sensor and a supportive voice function to encourage self-touch behaviors. We experimentally evaluated whether our system increased self-touch behaviors and investigated its stress-buffering effects in stressful settings. The experimental results showed that the system increased self-touch behaviors, although only male participants showed significant stress-buffering effects.

Development of a Training Robot for Slander Suppression

Slander on social networking sites has become a social problem, and measures such as legislation and stricter regulations have been taken. However, there are limits to these measures, and we have not seen any system that encourages fundamental changes in the minds and behavior of contributors. We therefore propose a training robot that contributes to suppressing slander, targeting people who regularly post negative words. To change their minds and behavior, the proposed robot system serves as an intermediary that helps people break out of the addiction of feeling good about slander, by converting the negative words in negative messages into positive words, generating positive messages, and presenting them to the user. In this report, we describe the concept and initial development of the proposed system to be implemented in a real robot system.

What Will It Take to Help a Stuck Robot?: Exploring Signaling Methods for a Mobile Robot

Our everyday living environment is created for people. When mobile robots negotiate this space, they might get stuck on hard-to-solve obstacles, thus requiring people's help. The robot needs to persuade passers-by to assist it in overcoming these obstacles, to reduce the need for maintenance interventions, which can be costly. In our study, we enabled a mobile robot to communicate its need for assistance in multiple ways, including beeping, synthesized voice, and movement. These behaviors were tested in combination to ascertain which is the most effective. We found that a robot that communicates being stuck using movement and voice gets the most help, while static, beeping, and silent robots get much less. However, subjective data show that people might feel even more empathy towards a robot expressing its problem by beeping rather than by spoken messages.

The Role of Empathic Traits in Emotion Recognition and Emotion Contagion of Cozmo Robots

In this online study, we investigated how well people could recognize emotions displayed by video recordings of a Cozmo robot, and the extent to which emotion recognition is shaped by individuals' empathic traits. We also explored whether participants who report more empathic tendencies experienced more emotional contagion when watching Cozmo's emotional displays, since emotion contagion is a core aspect of empathy. We tested participants' perceptions of Cozmo's happiness, anger, sadness, surprise, and neutral displays. Across 103 participants, we report high recognition rates for most emotion categories except neutral animations. Furthermore, the mixed effects modelling revealed that an empathy subtype (the empathic concern subscale from the Interpersonal Reactivity Index) significantly impacted emotional contagion. Contrary to predictions, participants with high empathic concern subscale scores were less likely to find the robot's videos emotionally contagious. The study validates the utility of Cozmo robots to display emotional cues recognizable to human users, and further suggests that empathic traits could shape our affective interactions with robots, though perhaps in a counterintuitive way.

Learning Reward Functions from a Combination of Demonstration and Evaluative Feedback

As robots become more prevalent in society, they will need to learn to act appropriately under diverse human teaching styles. We present a human-centered approach for teaching robots reward functions by using a mixture of teaching strategies when communicating action appropriateness and goal success. Our method incorporates two teaching strategies for learning: explicit action instruction and evaluative, scalar-based feedback. We demonstrate that a robot instantiating our method can learn from humans who use both kinds of strategies to train the robot in a complex navigation task that includes norm-like constraints.

A Machine Learning Approach to Model HRI Research Trends in 2010~2021

The present study collects a large number of HRI-related research studies and analyzes research trends from 2010 to 2021. Through topic modeling, our ML model is able to retrieve the dominant research factors. The preliminary results reveal five important topics: handover, privacy, robot tutors, skin deformation, and trust. Our results show that research in the HRI domain can be divided into two general directions, namely technical and human aspects of the use of robotic applications. We are currently enlarging the research pool to collect more studies and advancing our ML model to strengthen the robustness of the results.
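The abstract does not name the specific topic modeling technique, so as a hedged sketch only, latent Dirichlet allocation (LDA) over a corpus of abstracts, a common choice for this kind of trend analysis, could be set up as follows (the toy corpus is a placeholder, not the authors' data):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Placeholder corpus; in practice, load the collected HRI abstracts.
    abstracts = [
        "robot to human handover of objects with grasp timing",
        "privacy concerns for social robots in the home",
        "a robot tutor improves second language vocabulary learning",
        "skin deformation feedback for robot teleoperation",
        "measuring human trust in autonomous robots",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)

    # Print the top terms per topic to label the dominant research factors.
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = weights.argsort()[-5:][::-1]
        print(f"topic {k}:", ", ".join(terms[i] for i in top))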

Pick-me-up Strategy for a Self-recommendation Agent: A Pilot Field Experiment in a Convenience Store

Research on self-recommending product agents is under way. Compared with product promotion by humanoid robot agents, self-recommending agents can call a passing customer's attention to the product itself. As the customer's interest is focused on the product, the self-recommending agent can then give instructions, such as "pick me up." It has been found that customers tend to follow such instructions, and touching the product is known to effectively support sales promotion. In two experiments in a convenience store, the self-recommending agent in this study succeeded in attracting customer interest, including handling of the products. However, we also found that, after a product was picked up, its monologue resulted in customers leaving it behind on the shelf. Herein, we examine the reasons why.

"Cool glasses, where did you get them?": Generating Visually Grounded Conversation Starters for Human-Robot Dialogue

Visually situated language interaction is an important challenge in multi-modal Human-Robot Interaction (HRI). In this context we present a data-driven method to generate situated conversation starters based on visual context. We take visual data about the interactants and generate appropriate greetings for conversational agents in the context of HRI. For this, we constructed a novel open-source data set consisting of 4000 HRI-oriented images of people facing the camera, each augmented by three conversation-starting questions. We compared a baseline retrieval-based model and a generative model. Human evaluation of the models using crowdsourcing shows that the generative model scores best, specifically at correctly referencing visual features. We also investigated how automated metrics can be used as a proxy for human evaluation and found that common automated metrics are a poor substitute for human judgement. Finally, we provide a proof-of-concept demonstrator through an interaction with a Furhat social robot.

Adolescents' Perceptions of the Role of Social Robots in Civic Participation: An Exploratory Study

Civic technologies are aimed at supporting citizens to participate in democratic processes. Civic robots - social robots that are designed to support people in civic participation - have potential to lower the barriers of participation especially for youth who face obstacles in participating through formal channels. We conducted an exploratory study with adolescents to investigate their perceptions of possible roles for civic robots. In the study, we asked participants of a youth event (n=24, age range 14-21) to ideate civic participation related purposes and concepts for social robots. The findings suggest that scenarios in which a robot serves as social facilitator or provides decision support for civic activities seem promising. In addition, civic robots could indirectly increase participation ability by providing emotional or social support.

A Communication Robot for Playing Video Games Together to Boost Motivation for Daily-use

Opportunities for people to have daily conversations are decreasing, due to the growing number of withdrawn young people and single-person households. The lack of daily conversation has been pointed out as a risk that can lead to mental problems such as depression and, for the elderly, serious health problems such as dementia. Efforts to encourage daily communication through communication robots that act as talking partners are attracting attention as a way to solve these problems. One of the challenges for communication robots is the difficulty of maintaining users' motivation to continue using them. In this study, we propose a communication robot that plays a video game together with the user as an approach to keeping the user's motivation to use the robot high. The proposed method controls not only the dialogue content of the robot but also the video game situation, by manipulating a video game character. With this system, we aim to create an atmosphere in which users enjoy playing video games together with the robot, so that the proposed game communication robot can sustain their motivation to use it.

Improved Indirect Virtual Objects Selection Methods for Cluttered Augmented Reality Environments on Mobile Devices

The problem of selecting virtual objects within augmented reality on handheld devices has been tackled multiple times. However, evaluations have been carried out on purely synthetic tasks with uniformly placed homogeneous objects, often located on a plane and with no or low occlusion. This paper presents two novel approaches to indirect object selection that deal with highly occluded objects with large spatial distribution variability and heterogeneous size and appearance. The methods are designed to enable long-term usage with a tablet-like device. One method is based on a spatially anchored hierarchical menu, and the other utilizes a crosshair and a side menu that shows candidate objects according to a custom-developed metric. The proposed approaches are compared with direct touch in the context of spatial visual programming of collaborative robots, on a realistic workplace and a common robotic task. The preliminary evaluation indicates that the main benefits of the proposed indirect methods could be their higher precision and higher selection confidence for the user.

Community-Based Data Visualization for Mental Well-being with a Social Robot

Social robots have been used to support mental health. In this work, we explored their potential as community-based tools. Visualizing a community's mood data patterns with a social robot might help the community raise awareness of the emotions people feel and the life events that affect them. This could potentially lead to the adoption of suitable coping skills, enhancing the sense of belonging and support among community members. We present preliminary findings and ongoing plans for this human-robot interaction (HRI) research on data visualizations supporting community mental health. In a two-day study, twelve participants recruited from a university community engaged with a robot displaying mood data. Given the feedback from the study, we improved the robot's data visualization to increase its accessibility, universality, and usefulness. In the future, we plan to conduct studies with this improved version and deploy a social robot in a community setting.

A Virtual Agent That is Equipped With Internal Movable Weights for Enhanced Credibility

When interacting with humans, virtual agents use both visual and auditory information. We developed a device that can present the facial expressions of a virtual agent while providing tactile information through the movement of internal weights. Using this device, we conducted a pilot test to investigate how the trustworthiness of the virtual agent changed when haptic information from the movement of the internal weights was added to the agent's expression.

What Can We Do with a Robot for Family Playtime?

As robots become easier to operate, more families are adopting them as companions. Considering that family playtime has a positive influence on all family members, it is important to provide various activities for families to play together, and robots might mediate fun activities that increase family playtime. This research was therefore designed to compare play activities between dyadic child-robot interactions and family-robot interactions. We explored possible activities for family playtime with a NAO robot. The results from three family groups showed that verbal activities, such as giving an instruction to the robot and talking to it, increased when family members played together with the robot. Physical activities involving all family members, such as giving and receiving balls, also increased. It will therefore be necessary for a robot to recognize the voices of multiple family members and respond to each member appropriately. More importantly, a robot's allocation of time to each family member, and its encouragement of all members' participation, will be necessary to maintain or increase family playtime.

Feasibility of Using the Robot Sphero to Promote Perceptual-Motor Exploration in Infants

Infant-robot interaction has been gaining increasing attention, yet there are limited studies on the development of robot-assisted environments that promote perceptual-motor development in infants. This paper assesses the feasibility of operating a spherical mobile robot, Sphero, to engage infants in perceptual-motor exploration of an open area. Two case scenarios were considered. In the first, Sphero was the only robot providing stimuli in the environment. In the second, two additional robots provided stimuli along with Sphero. Pilot data from two infants were analyzed to extract information on their visual attention to and physical interaction with Sphero, as well as their motor actions. Overall, infants (i) expressed a preference for Sphero regardless of stimulation level, and (ii) moved out of stationary postures in an effort to chase and approach Sphero. These preliminary findings support the future implementation of Sphero in robot-assisted learning environments to promote perceptual-motor development in infants.

KURT: A Household Assistance Robot Capable of Proactive Dialogue

In this work, we present a robot-dialogue framework to handle sophisticated robot-initiated interaction. We introduce a robotic assistant equipped with a dialogue system in a household assistance context. To become a truly collaborative companion, the assistant is able to engage in a proactive conversation for task assistance. The system actions are triggered by the recognition of persons or specific objects. To evaluate our system, we conducted a user study with 17 participants in a laboratory environment where users were able to interact with the system via natural language. The results showed that the behaviour of the system was accepted and perceived as trustworthy by the users.

IVO Robot: A New Social Robot for Human-Robot Collaboration

We present a new social robot named IVO, capable of collaborating with humans and solving different tasks. The robot is intended to cooperate and work with humans in a useful and socially acceptable manner, and to serve as a research platform for long-term social human-robot interaction. In this paper, we describe this new platform, its communication skills, and its current capabilities, such as handing an object over to or receiving one from a person, or guiding a human through physical contact. We describe the social abilities of the IVO robot and present the experiments performed for each of the robot's capabilities using its current version.

Friendly Elevator Co-rider: An HRI Approach for Robot-Elevator Interaction

In-building cross-floor delivery has always been a common daily task for businesses, but its robot-oriented automation has never become a popular convention, due to the high infrastructural or developmental cost and the difficulty common robots have in physically riding an elevator. This paper therefore aims to contribute a universal robot-oriented elevator-ride workflow that adopts different visual and audible human-like touchpoints to achieve automated robot-elevator interaction, while ensuring the physical safety and emotional security of surrounding pedestrians and elevator passengers. Investigating the handling of potential elevator-ride scenarios and observing the interactions between humans and the robot under the influence of such human-like touchpoints, this paper offers empirical suggestions on how the in-building travel of robots can be made friendly to pedestrians and passengers.

Robot Musical Theater for Climate Change Education

The use of social robots has recently been investigated in various areas, including STEM (Science, Technology, Engineering, and Mathematics) education and artistic performances. To inform children of the seriousness of climate change and raise awareness that they can make a change, we created the Robot Musical Theater performance. In this project, natural elements (wind, earth, fire, and water) were anthropomorphized and represented by humanoid robots (Pepper, Milo, and Nao). The robots were designed to motivate the audience to take action against climate change. Because of COVID, only fourteen visitors, as a single group, were allowed to participate in real time; the performance was also posted to YouTube, where, at the time of submission, 141 people had viewed it. The participants provided positive comments on the performance, showed their willingness to participate in the movement against climate change, and expressed further interest in STEM learning. This performance is expected to contribute to enhancing informal STEM and robotics learning, as well as to advancing robotic arts.

Alzheimer's Dementia Detection through Spontaneous Dialogue with Proactive Robotic Listeners

As the aging of society continues to accelerate, Alzheimer's Disease (AD) has, over the past decade, received more and more attention not only from medicine but also from other fields, such as computer science. Since speech is considered an effective way to diagnose cognitive decline, AD detection from speech has emerged as a hot topic. Nevertheless, such approaches fail to tackle several key issues: 1) AD is a complex neurocognitive disorder, which means it is inappropriate to conduct AD detection using utterance information alone while ignoring dialogue information; 2) utterances of AD patients contain many disfluencies that affect speech recognition yet are helpful to diagnosis; 3) AD patients tend to speak less, causing dialogue breakdown as the disease progresses, which leads to a small number of utterances and may cause detection bias. Therefore, in this paper, we propose a novel AD detection architecture consisting of two major modules: an ensemble AD detector and a proactive listener. This architecture can be embedded in the dialogue system of conversational robots for healthcare.

Exploring the Effect of Mass Customization on User Acceptance of Socially Assistive Robots (SARs)

This study examines whether allowing a certain level of user customization of a socially assistive robot (SAR) affects users' perception of the robot and improves acceptance. Two hundred thirty-nine respondents participated in an online study of a telepresence-assistive robot. We formed three groups of respondents who differed in their ability to manipulate the SAR design. Results suggested that allowing mass customization positively affects users' acceptance of the SAR, perceived enjoyment, intention for future use, and perceived robot usefulness.

Using Robots to Facilitate and Improve Social Interaction Between Humans: An Exploratory Qualitative Study with Adults 50+ in the US and Japan

Many people, particularly older adults, are negatively affected by social isolation and loneliness. Robots can facilitate social interaction between persons. In this ongoing study, we interview middle-aged and older adults in the US (n=20) and Japan (n=4) regarding their opinions of socially facilitative robots and how they want robots to assist in their social lives. Participants' desires included robots that could act as an embodied avatar, mobile and hands-free teleconferencing robots, and automated management of their social schedule. Some participants were opposed to the idea of socially facilitative robots, expressed concern about robot involvement in human social life, felt that robots were incapable of promoting social interactions between humans, or felt that robots were only for the very elderly and disabled and could not currently assist them. One emerging difference is that, thus far, only US participants have expressed a desire for pet robots or organizational robots, and all four Japanese participants expressed the belief that robots are for the very elderly and disabled.

EXOSMOOTH: Test of Innovative EXOskeleton Control for SMOOTH Assistance, With and Without Ankle Actuation

This work presents a description of the EXOSMOOTH project, oriented to benchmarking the performance of lower limb exoskeletons. In the field of walking assisted by powered lower limb exoskeletons, the EXOSMOOTH project proposes an experiment that targets two scientific questions. The first concerns the effectiveness of a novel control strategy for smooth assistance. Current assistance strategies are based on controllers that switch the assistance level according to the gait segmentation provided by a finite state machine; the proposed strategy instead manages phase transitions to provide smoother assistance, thus increasing the device's transparency and comfort for the user. The second question concerns the role of ankle joint actuation in assisted walking. Many novel exoskeletons devised for industrial applications do not feature an actuated ankle joint. In the EXOSMOOTH project, ankle joint actuation will be one experimental factor, enabling a direct assessment of the role of an actuated joint in assisted walking. Preliminary results from 15 healthy subjects walking at different speeds while wearing a lower limb exoskeleton support the rationale behind this question: having an actuated ankle joint could reduce the torques applied by the user by up to 85 Nm. The two questions will be investigated in a protocol that includes walking on a treadmill and on flat ground, with or without slope, and with a load applied on the back. In addition, the interaction forces measured at the exoskeleton harnesses will be used to assess the comfort of the user and the effectiveness of the control strategy in improving transparency.
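To make the contrast between switched and smooth assistance concrete: a finite state machine applies a discrete torque profile per gait phase, while a smooth strategy blends across the transition. The following is a naive illustrative blend, not the EXOSMOOTH controller (whose details the abstract does not give); all names and values are placeholders:

    def blended_assist_torque(t, t_switch, blend_window, tau_prev, tau_next):
        """Linearly blend between two assistance torque levels (Nm) around a
        gait-phase transition at t_switch, instead of switching instantly."""
        if t <= t_switch:
            return tau_prev
        if t >= t_switch + blend_window:
            return tau_next
        alpha = (t - t_switch) / blend_window  # ramps 0 -> 1 over the window
        return (1.0 - alpha) * tau_prev + alpha * tau_next

    # Example: ramp ankle assistance from 5 Nm to 20 Nm over 150 ms at a transition.
    print(blended_assist_torque(t=0.075, t_switch=0.0, blend_window=0.15,
                                tau_prev=5.0, tau_next=20.0))  # -> 12.5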

Perception of Power and Distance in Human-Human and Human-Robot Role-Based Relations

The use and interpretation of social linguistic strategies such as politeness is influenced by multiple factors, e.g., the speaker-hearer relation. Such relations influence an interlocutor's expectations regarding the interaction and thus also its perception. This makes speaker-hearer relations, which constitute a partner model, highly relevant for the user experience in human-robot interaction as well. This paper presents a questionnaire-based study on the perception of human-robot relations in comparison to human-human relations across different roles (e.g., colleague, assistant) and spaces of interaction (home, work, public). It was found that participants perceive robots differently depending on the space, as they do for human-human relations in corresponding roles. Overall, humans were evaluated to have more power over, and more distance to, a robot interaction partner than another human. Our results provide insights into an intuitive, role-based partner model at the start of an interaction.

The Nature of Trust in Communication Robots: Through Comparison with Trusts in Other People and AI Systems

In this study, the nature of human trust in communication robots was experimentally investigated by comparing it with trust in other people and in artificial intelligence (AI) systems. The results showed that trust in robots is basically similar to trust in AI systems in a calculation task where a single solution can be obtained, and partly similar to trust in other people in an emotion recognition task where multiple interpretations are acceptable. This study will contribute to designing smooth interactions between people and communication robots.

T-Top, a SAR Experimental Platform

In recent years, Socially Assistive Robots (SARs) have been used to study their benefits for elderly people and people with dementia in healthcare settings. Yet almost all SARs have somewhat limited perception capabilities or respond using simple pre-programmed behaviors and reactions, providing limited or repetitive interaction modalities. To overcome these limitations and take into consideration the strengths and weaknesses of SARs in healthcare settings, this paper presents T-Top, a tabletop robot designed with advanced audio and vision sensors, deep-learning-based perceptual processing, and telecommunication capabilities. Designed as an open hardware/software platform, T-Top aims to provide an experimental platform for implementing richer interaction modalities and developing higher cognitive abilities from interacting with people.

Unfortunately, Your Task Allocation is in Need of Improvement

This study investigates whether the informative content of negative robot feedback influences the perceived adequacy of the feedback and the willingness to improve decisions made during an allocation task. For this purpose, 153 subjects received feedback from a robotic co-worker after an allocation task, which provided either process-level information, task-level information, self-regulation information, or person-specific feedback. The results indicate that negative robot feedback that relates to the task and contains information about current progress and potential for improvement is perceived as more appropriate, and leads to a higher willingness to improve, than feedback that is neither informative nor specific to the task at hand.

I enjoyed the chance to meet you and I will always remember you: Healthy Older Adults' Conversations with Misty the Robot

We conducted a 2x2 Wizard of Oz between-subject user study with sixteen healthy older adults. We investigated how to make social robots converse more naturally and reciprocally through unstructured conversation. We varied the level of interaction by changing the level of verbal and nonverbal communication the robot provided. Participants interacted with the robot for eight sessions engaging in an unstructured conversation. These conversations lasted thirty minutes to an hour. This paper will evaluate four questions from the post-interaction survey individuals completed after each session with the robot. The questions include: (i) I had fun talking to the robot; (ii) I felt I had a meaningful conversation; (iii) I was engaged the whole interaction; and (iv) I would consider the robot my friend. All participants reported they were engaged, had a meaningful conversation, and had fun during all eight sessions. Seven individuals felt the robot was their friend.

The Inversion Effect as a Measure of Social Acceptance of Robots

If robots could engage face-processing, they would increase the likelihood of being accepted as social companions. However, research has not examined whether and when robot "faces" engage face-processing. The current study examined whether facial width-to-height ratio (FWHR) modulated face-processing with robots using the "inversion task", a commonly used measure of face perception that leverages the finding that inverting face stimuli hurts recognition performance (i.e., inversion effects) compared to other types of stimuli. We predicted that recognition performance would be more affected by inversion when robots had a low rather than a high FWHR. While our statistical results were not significant, descriptive results trended in favor of our hypothesis: robots with a lower FWHR had larger inversion effects than robots with a higher FWHR. While more research is needed to clarify these results, the inversion task is a potentially useful tool for measuring the social acceptance of robots through the detection of facial processing.

Are Robots That Assess Their Partner's Attachment Style Better At Autonomous Adaptive Behaviour?

Interacting with partners who understand our desire for closeness or space and adapt their behavior accordingly is an important factor in social interaction, since the perception of others is a fundamental prerequisite for reliable interaction. In human-human interaction (HHI), this information can be inferred from a person's attachment style: their characteristic way of forming relationships, modulating behavior (i.e., ways to give or seek support) and, on a biological level, their hormone dynamics. Enabling robots to understand their partners' attachment styles could enhance robots' perception of partners and help them adapt their behaviors during an interaction. In this direction, we wish to use the relationship between attachment style and cortisol to equip the humanoid robot iCub with an internal cortisol-inspired framework that allows it to infer a participant's attachment style and drives it to adapt its behavior accordingly.

Multi-party Interaction with a Robot Receptionist

We introduce a situated interactive robot receptionist that can coordinate turn-taking and handle multi-party engagement and dialogue in dynamic environments, where users might enter or leave the scene at any time. The objective is to create a multi-user engagement policy that manages turn-taking using the robot's gaze, head pose, and verbal communication as parameters, and to analyse participants' perception of the robot. Participant feedback on the system was collected using an online survey that allowed a comparison of subjective feedback across four different interaction policies. The results confirm the hypothesis that a robot is perceived as more intelligent and conscious when it reacts with eye gaze or head pose once a new user enters the scene. Furthermore, we find that robots need to use a combination of verbal and non-verbal cues to coordinate turn-taking in order to be perceived as polite and aware of human social norms.

The Effect of a Robot's Listening Attitude Change on the Self-disclosure of the Elderly: A Preliminary Study

Self-disclosing one's life experiences is known to benefit the psychological health of older adults, from the viewpoint of integrity. It has been shown that people tend to self-disclose more to people they are fond of, and that the "gain effect" is effective for improving interpersonal liking. The "gain effect" refers to the finding that people like a person who initially has a negative attitude but gradually develops a positive one more than a person who is consistently positive. Based on these previous studies, our study aims to clarify the effect of a change in a robot's listening attitude on the self-disclosure of the elderly. We conducted a preliminary experiment wherein 15 elderly participants self-disclosed to a robot for approximately 20 minutes. The participants were assigned to one of three groups according to the type of robot they self-disclosed to: 1) the CN group, where participants interacted with a robot that consistently listened with neutral behavior; 2) the CP group, where participants interacted with a robot that consistently listened with positive behavior; and 3) the Ch group, where participants interacted with a robot that listened with neutral behavior first and subsequently with positive behavior. The results revealed that the Ch group's ratio of self-disclosure utterances was the highest among the three groups. Furthermore, the willingness to self-disclose tended to be higher for the Ch group than for the CN group, in terms of everyday experiences and loss experiences.

Furnituroid: Shape-Changing Mobile Furniture Robot for Multiple and Dynamic Affordances

This study introduces a shape-changing mobile furniture robot named "Furnituroid." It is based on a polysemous design approach that integrates multiple functions of furniture into a mobile robot, rather than simply extending traditional furniture into a robot. Furnituroid allows for dynamic affordances at furniture scale, and aims at room-scale dynamic affordances by arranging multiple Furnituroid units. This study describes an instance of Furnituroid and the design space considered through its design process.

Toward Adaptive Driving Styles for Automated Driving with Users' Trust and Preferences

As autonomous vehicles (AVs) become ubiquitous, users' trust will be critical to the successful adoption of such systems. Prior work has shown that the driving styles of AVs can impact how users trust and rely on them. However, users' preferred driving style may vary with changes in trust, road conditions, experience, and personal driving preferences. We explore methods to adapt the driving style of an AV to match the preferred driving style of users, to improve their trust in the vehicle. We conducted a pilot study (n=16) in a simulated urban environment, where users experienced various static and adaptive driving styles in different pedestrian- and traffic-related scenarios. Our results indicate that users trust AVs most when they closely match their preferences (p < 0.05). We believe that exploring the effects of AV driving style on users' trust and workload will provide necessary steps towards developing human-aware automated systems.

The Influence of Gaming Experience, Gender and Other Individual Factors on Robot Teleoperations in VR

A valid Human-Robot Interaction (HRI) should be effective for the majority of the population. However, gender, gaming experience, and other individual factors often affect users' performance when interacting with a robot. In the present study, we measured the performance and perceived workload of participants driving a robot through a pick-and-place task in Virtual Reality (VR) via controller buttons or physical actions. The following individual factors were considered in the analysis: gaming experience, gender, learnability skills, problem solving, and trust in technology. Results showed that all the considered individual factors impacted either performance or perceived demand, but only when guiding the robot via controller buttons. Our findings support the adoption of more natural ways of teleoperating robots, such as through physical actions, as these proved to be exempt from the influence of individual factors and are likely to be effective for a broader section of the population.

Sensitivity of Trust Scales in the Face of Errors

Trust between humans and robots is a complex, multifaceted phenomenon, and measuring it subjectively and reliably is challenging. It is also context-dependent, so choosing the right tool for a specific study can prove difficult. This paper aims to evaluate various trust measures and compare them in terms of sensitivity to changes in trust. This is done by comparing two validated trust questionnaires (TAS and MDMT) and one single-item assessment in a COVID-19 triage scenario. We found that the trust measures are equivalent in terms of sensitivity to changes in trust. Furthermore, the study showed that, in scenarios with distinct breaks in trust, trust can be measured with a single-item assessment similarly to the lengthier scales. This finding is useful for experiments where lengthy questionnaires are not appropriate, such as those in the wild.

Public Versus Private: How Teens Perceived Teen-robot Interactions in a School Setting

Social robots may be a promising social-emotional tool to support adolescent mental health. However, how might interactions with a social robot in a school setting be perceived by teens? From previous studies, we gathered qualitative data suggesting a design tension between teens wanting both public and private interactions with our social robot, EMAR. In our current study, we explored interactions between a social robot and a small group of adolescents in a semi-private, school library setting. We found: (1) Some teens preferred to have a friend present while they engaged with the social robot, (2) Teens found comfort in being physically visible, but audibly private during interactions, and finally (3) Strangers in the school environment were not disruptive of the teens' robot interactions, but unexpectedly friends were. After presenting these findings, we briefly discuss how these qualitative data can be situated and our next steps for further exploration.

Automated Care in New Zealand

With the growing speed of automation, robots are taking on social care roles. In retirement villages and activity centers for older adults in New Zealand, the robotic seal Paro has become a valuable unpaid staff member contributing to the social life of people who struggle with the effects of dementia. This study aimed to investigate the role of Paro as an agent of care at a time of a steady push towards the automation and digitalization of welfare services in New Zealand. As part of the study, four care workers, a family member of a former resident, and three researchers from a New Zealand-based robotics research group participated in in-depth interviews about their own experiences, opinions, and motivations for using or working with social robots. The results showed that Paro was used to increase residents' quality of life rather than for automation for profit.

K-Qbot: Language Learning Chatbot Based on Reinforcement Learning

The application of Reinforcement Learning (RL), an emergent field of Machine Learning, has shown positive results in interdisciplinary fields. Although research has proven its effectiveness in language education through various agents (e.g., chatbots, robots, talking avatars), its application to letter acquisition is relatively new. In light of the alphabet transition from Cyrillic to Latin for the Kazakh language, potential challenges may be associated with learning and memorizing the new alphabet. Specifically, students with knowledge of the existing alphabet might struggle to use the later-learnt new alphabet without sufficient practice. In this paper, we present a chatbot based on Reinforcement Learning that is anticipated to assist university students in learning the Kazakh Latin alphabet through interaction in a letter acquisition scenario. We attempt to identify whether the RL chatbot is efficient for this learning scenario through an online survey study involving a pre-test, chatbot interaction, and a post-test.
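The abstract does not detail the RL formulation, so the following is only a hypothetical sketch of how a tabular Q-learning agent might pick which Latin letter to drill next; the state encoding, reward, and letter set are placeholders, not the paper's design:

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration
    Q = defaultdict(float)              # Q[(state, letter)] -> estimated value

    def choose_letter(state, letters):
        """Epsilon-greedy choice of the next letter to practice."""
        if random.random() < EPS:
            return random.choice(letters)
        return max(letters, key=lambda a: Q[(state, a)])

    def update(state, letter, reward, next_state, letters):
        """Standard Q-learning update after observing the learner's answer."""
        best_next = max(Q[(next_state, a)] for a in letters)
        Q[(state, letter)] += ALPHA * (reward + GAMMA * best_next - Q[(state, letter)])

    # Example step: the learner answers one exercise incorrectly (reward -1).
    letters = ["ä", "ö", "ü", "ğ", "ş"]          # placeholder letter set
    s = "early_session"                          # placeholder state
    a = choose_letter(s, letters)
    update(s, a, reward=-1.0, next_state=s, letters=letters)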

"How Would You Communicate With a Robot?": People with Neourodevelopmental Disorder's Perspective

Neurodevelopmental disorders (NDDs) are characterised by impairments in communication. Socially assistive robots have been identified as a promising avenue to alleviate the burden on people with NDDs. Since people with NDDs have different needs, their preferred way of communicating with a robot (e.g., speech-based) can differ among individuals. This paper investigates the most suitable modality for people with NDDs to communicate with a robot, among voice, cards, and buttons, and explores their opinions on the matter. We ran an exploratory study involving 29 participants with NDDs: 13 of them could freely communicate with an autonomous QT robot, 9 took part in a group discussion, and 7 first interacted individually with the robot and then participated in a group discussion. Our results showed that i) cards were the most used communication modality, ii) voice may suit counting games, buttons multiple-choice games, and cards memory-like games, and iii) opinions did not differ much among groups.

User Perception on Personalized Explanation by Science Museum Docent Robot

As the number of docent robots in museums has increased, robot personalization services have become important. A survey-based experiment was conducted to capture differences in exhibition visitors' perceptions of personalized service. We found that the background knowledge of visitors listening to an explanation affected their perception of the personalized service. This finding gives us a set of design criteria for personalized museum guide robot services.

Design Implications for Effective Robot Gaze Behaviors in Multiparty Interactions

Human-robot non-verbal communication has been a growing focus of research, as we realize its importance for achieving interaction goals (e.g., modulating turn-taking) and managing human perception of the interaction. Consequently, the development of models for robot non-verbal behavior, such as gaze, should be informed by studies of human reaction and perception to that behavior. Here, we look at data from two studies in which two humans describe words to a robot. The robot tries to balance the participation of the two players through a combination of gaze aversion, looking at the listener, and looking at the speaker. We analyze how momentary gaze patterns are reflected in participants' turn lengths and perception of the robot, as well as in the participation imbalance. Our findings may serve as recommendations for crafting robot gaze behaviors in multiparty interactions.
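As a toy illustration of the participation-balancing idea only (the paper's actual gaze model is not reproduced; the function name and aversion probability are invented), a policy that mostly looks at the player who has spoken less might be:

    import random

    def next_gaze_target(speech_time, p_avert=0.2):
        """Pick the robot's next gaze target from accumulated speaking time
        per player (seconds), occasionally averting gaze."""
        if random.random() < p_avert:
            return "avert"                            # brief gaze aversion
        return min(speech_time, key=speech_time.get)  # favor the quieter player

    # Example: player_b has spoken less, so the robot will usually look at them.
    print(next_gaze_target({"player_a": 42.0, "player_b": 17.5}))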

A Demonstration of the Taxonomy of Functional Augmented Reality for Human-Robot Interaction

With the rising use of Augmented Reality (AR) technologies in Human-Robot Interaction (HRI), it is crucial that HRI research examines the role of AR in HRI, to better define AR-HRI systems and identify potential areas for future research. A taxonomy for AR in HRI has recently been proposed, but it was limited to the definition of the framework, and an exemplification of its use was missing. In this paper, we demonstrate how this taxonomy can be used to analyse an existing AR-HRI system and to generate questions about alternative ways AR-HRI systems could be designed and extended.

A Novel Online Robot Design Research Platform to Determine Robot Mind Perception

A common issue in Human-Robot Interaction is a gap in understanding how robot designs are perceived by users. A common issue encountered by practitioners of Machine Learning (ML) is a lack of salient data for training. The "Build-A-Bot" project is developing a novel research platform, implemented as a web-accessible 3D game, that affords the collection of many user-provided robot designs. The designs are used to train ML models to better evaluate robot designs, predict how a design will be perceived using Convolutional Neural Networks (CNNs), and create new robot designs using Generative Adversarial Networks (GANs). This paper outlines the work accomplished so far, and the planned next steps, of the interdisciplinary undergraduate student team at the University of Denver, spanning Computer Science, Music, Psychology, and other related STEM fields, that created Build-A-Bot.

Is Deep Learning a Valid Approach for Inferring Subjective Self-Disclosure in Human-Robot Interactions?

One limitation of social robots has been the ability of the models they operate on to infer meaningful social information about people's subjective perceptions, specifically from non-invasive behavioral cues. Accordingly, our paper aims to demonstrate how different deep learning architectures trained on data from human-robot, human-human, and human-agent interactions can help artificial agents extract meaning, in terms of people's subjective perceptions, from speech-based interactions. Here we focus on identifying people's perceptions of their subjective self-disclosure (i.e., to what extent one perceives oneself to be sharing personal information with an agent). We approached this problem in a data-first manner, prioritizing high-quality data over complex model architectures. In this context, we aimed to examine the extent to which relatively simple deep neural networks could extract non-lexical features related to this kind of subjective self-perception. We show that five standard neural network architectures and one novel architecture, which we call a Hopfield Convolutional Neural Network, are all able to extract meaningful features from speech data relating to subjective self-disclosure.
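To make concrete what a "relatively simple deep neural network over non-lexical speech features" could look like, here is a minimal sketch in PyTorch (which the paper does not necessarily use; the Hopfield Convolutional Neural Network is not reproduced): a 1-D CNN mapping per-frame acoustic features, such as MFCCs, to a self-disclosure score.

    import torch
    import torch.nn as nn

    class SelfDisclosureCNN(nn.Module):
        """Toy 1-D CNN over (batch, n_features, time) acoustic feature maps."""
        def __init__(self, n_features=40, n_classes=2):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # pool over time -> fixed-size vector
            )
            self.fc = nn.Linear(64, n_classes)

        def forward(self, x):
            return self.fc(self.conv(x).squeeze(-1))

    model = SelfDisclosureCNN()
    mfccs = torch.randn(8, 40, 200)    # 8 clips, 40 MFCC bins, 200 frames
    logits = model(mfccs)              # (8, 2) self-disclosure scores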

Robotic Task Complexity and Collaborative Behavior of Children with ASD

Social interactions are essential in the everyday lives of humans. People with an autism spectrum disorder (ASD) display deficits in social skills, making their day-to-day encounters more difficult. This paper reports on two small-scale studies investigating whether the use of collaborative robot tasks in an educational setting stimulates the collaborative behavior of children with ASD, and whether robotic task complexity affects collaborative behavior. A total of 24 children participated in robotic tasks of varying complexity. The sessions were videotaped and analyzed, and the children's supervisors completed questionnaires evaluating participants' social behavior. The results demonstrate that children collaborated during the robot activities. The influence of robotic task complexity on collaboration skills was not significant, possibly due to the small number of participants. The results show the promise of using robots in education for children with ASD, although further research is needed to investigate the implementation of robots in special education.

To Transfer or Not To Transfer: Engagement Recognition within Robot-Assisted Autism Therapy

Social robots are increasingly being used as mediators in robot-assisted autism therapy to improve children's social and cognitive skills. Engagement is one of the key measurements used to evaluate a therapeutic intervention's effect on children. While "engagement" is broadly used, it has been challenging to find a consensus on its definition in the community. With this paper, we explore a data-driven approach to investigate the extent to which an engagement model built on one dataset transfers to another. We utilized two publicly available engagement recognition datasets, PInSoRo and Qamqor, attempting to achieve higher accuracy by taking transferred knowledge into account. An accuracy of 83.18% was obtained on the PInSoRo dataset of child-child interactions with face and body keypoints. We then used transfer learning to improve classification accuracy on the Qamqor dataset, where the best result obtained was 71.89% accuracy. This suggests that more data with similar keypoints is needed to achieve better accuracy when transferring knowledge from one dataset to another.
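As a generic sketch of the transfer-learning recipe applied here (the architecture, shapes, and file names are placeholders, not the authors' model), one can pretrain a keypoint classifier on the source dataset, freeze its feature layers, and fine-tune only a new head on the target dataset:

    import torch
    import torch.nn as nn

    # Feature extractor assumed pretrained on source (PInSoRo-like) data, e.g.
    # backbone.load_state_dict(torch.load("source_pretrained.pt"))  # hypothetical checkpoint
    backbone = nn.Sequential(
        nn.Linear(2 * 18, 128), nn.ReLU(),   # e.g. 18 (x, y) body keypoints
        nn.Linear(128, 64), nn.ReLU(),
    )
    head = nn.Linear(64, 3)                  # target engagement classes

    for p in backbone.parameters():          # freeze the transferred layers
        p.requires_grad = False
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)

    # One fine-tuning step on target (Qamqor-like) keypoints and labels.
    x, y = torch.randn(32, 36), torch.randint(0, 3, (32,))
    loss = nn.functional.cross_entropy(head(backbone(x)), y)
    loss.backward()
    opt.step()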

Effect of Human Involvement on Work Performance and Fluency in Human-Robot Collaboration for Recycling

Human-robot collaboration has significant potential in recycling due to the wide variation in the composition of recyclable products. Six participants performed a recyclable item sorting task collaborating with a robot arm equipped with a vision system. The effect of three different levels of human involvement or assistance to the robot (Level 1- occlusion removal; Level 2- optimal spacing; Level 3- optimal grip) on performance metrics such as robot accuracy, task time and subjective fluency were assessed. Results showed that human involvement had a remarkable impact on the robot's accuracy, which increased with human involvement level. Mean accuracy values were 33.3% for Level 1, 69% for Level 2 and 100% for Level 3. The results imply that for sorting processes involving diverse materials that vary in size, shape, and composition, human assistance could improve the robot's accuracy to a significant extent while also being cost-effective.

Perceptions of Explicitly vs. Implicitly Relayed Commands Between a Robot and Smart Speaker

Designers of smart-home systems must make decisions about the perceived identities and interconnectedness of their various devices. To inform these decisions, we performed an online study to examine whether people perceive multiple devices in a smart home as different interfaces for the same system, devices that talk to each other, or independent devices. We manipulated the types of devices in the system (heterogeneous/homogeneous), how the devices relayed commands to each other (implicit/explicit), and the task requested. Participants were flexible in how they interpreted the devices, presenting an opportunity for designers to select a suitable model.

Investigating Customers' Preferences of Robot's Serving Styles

This work investigated how robots with different service styles in the food industry are perceived by customers. We analysed responses from 53 participants who evaluated video interactions of a robot waiter taking orders and delivering a customer service experience in one of two interaction styles (service-focused vs. social-focused). Results indicated that social-focused robots are perceived to have more natural and pleasant behaviours than service-focused robots, and are therefore better accepted by customers. Social-focused robots were also perceived as having a more persuasive effect on users' choices. In contrast, service-focused robots were perceived as hasty, and participants did not feel in charge of their food choices.

MAPPO: The Assistance Pet for Oncological Children

MAPPO (Mascota Asistencial Para Pacientes Oncológicos) is a social robot designed to guide, help, and accompany children who are experiencing cancer, as well as their caregivers. It helps them face the daily challenges related to the side effects of treatments, medication compliance, healthy nutrition, mental health, and emergencies. It is designed to interact with the child and become part of their life as a pet and guide.

It's not all Bad - Worker Perceptions of Industrial Robots

The current discourse presented by mainstream media on industrial robots tends to focus on the negative consequences their introduction brings to the workforce. However, it is unclear whether industrial workers share this negative perspective. In this paper, we present the results of a survey study (N=94) investigating differences in perceptions of industrial robots depending on the presence or absence of exposure to them in the workplace. Our results show that while workers with robot experience acknowledge that robots can lead to job loss, they also hold stronger beliefs in robots' ability to create new job opportunities. Additionally, we found that first-hand experience with robots in the workplace can positively affect workers' perceptions of their advantages. Overall, our findings show that, in contrast to the bleak picture drawn by mainstream media, workers exposed to industrial robots have developed a more nuanced view of this new technology in the workplace.

Neither "Hear" Nor "Their": Interrogating Gender Neutrality in Robots

Gender is a social framework through which people organize themselves and non-human subjects, including robots. Research stretching back decades has found evidence that people tend to gender artificial agents unwittingly, even with the slightest cue of humanlike features in voice, body, role, and other social features. This has led to the notion of gender neutrality in robots: ways in which we can avoid gendering robots in line with human models, as well as explorations of extra-human genders. This rapid review critically surveyed the literature to capture the state of the art on gender neutrality in robots that interact with people. We present findings on theory, methods, results, and reflexivity. We interrogate the very idea that robot gender/ing can be neutral and explore alternate ways of approaching gender/ing through the design and study of robots interacting with people.

DualityBoard: An Asymmetric Remote Gaming Platform with Mobile Robots and the Digital Twins

Due to lifestyle changes caused by COVID-19 and other factors, gaming platforms that allow users to play together with distant friends and family members are becoming increasingly important. In general, it is difficult for users with widely different gaming skills to play together. Game designers therefore need to take asymmetry into account, for example by providing different roles and different operation interfaces for each user. In this study, we propose a foundation for an asymmetric tele-gaming platform using mobile robots and their digital twins. We conducted an experiment to evaluate the usability of this platform and found that the visual size of the digital twin may affect usability. Based on these results, we propose a model of the relationship between the size of the digital twin and the usability perceived by the user.

User-centered Exploration of Robot Design for Hospitals in COVID-19 Pandemic

To prevent COVID-19 infection in the hospital environment, medical staff handle many treatments without face-to-face contact, which reduces the efficiency of medical services. Robots may enable smooth non-face-to-face interactions between medical staff and patients by providing cognitive and physical support to the staff. In this paper, we identified medical staff's pain points and needs regarding robots that could help them. In addition, researchers and medical staff participated together as design participants in generating the design concept of a robot needed in times of COVID-19. We conducted qualitative interviews about robots with nurses working in a negative pressure isolation room (NPIR), where patients with COVID-19 are isolated during treatment. We discovered needs for supporting the increased workload, including inventory monitoring, waste management, meal delivery, and medicine delivery, as well as for supporting communication in emergencies involving either patients or medical staff. Based on the findings from the interviews, we propose a robot design concept that can satisfy medical staff's needs in the NPIR.

Perceptions of Social Robots as Motivating Learning Companions for Online Learning

The shift to online learning, accelerated during the COVID-19 pandemic, has highlighted the importance of self-regulated learning skills and the challenges many students face when learning online. Could social robots help by serving as motivating learning companions in online learning settings? In this study, we explore the perceived potential of motivating learning companion robots designed based on Self-Determination Theory. One hundred and eighty-five participants watched one of five videos displaying a simulated student-robot interaction and rated their perceptions of the intrinsic motivation of the student in the video and the type of support provided by the social robot. Overall, the motivating learning companion affected the perceived sense of relatedness and perceived level of anxiety of the student in the video, as well as the social robot's perceived support. Differences between the conditions are discussed.

Human-Aware Reinforcement Learning for Adaptive Human Robot Teaming

Mistakes in high-stress, critical multitasking environments, such as piloting an airplane or working in the NASA control room, can lead to catastrophic failures. The human's internal state (e.g., workload) may be used to facilitate a robot teammate's adaptations, such that the robot can interact with the human without negatively impacting overall team performance. Human performance correlates directly with workload; thus, the human's internal workload state may be leveraged to adapt a robot's interactions with the human in order to improve team performance. A reinforcement learning-based paradigm that incorporates human workload states to determine appropriate robot adaptations is presented, along with preliminary results using the proposed approach in a supervisory-based NASA MATB-II environment.
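
The abstract does not specify the algorithm, so as a rough, hypothetical sketch of how a workload state could enter a standard reinforcement learning loop, here is tabular Q-learning with the discretized workload folded into the state; all state, action, and reward names are invented for illustration:

```python
import random
from collections import defaultdict

# States pair a task context with a discretized human workload level;
# actions are robot adaptations. Names are illustrative assumptions.
ACTIONS = ["take_over_subtask", "alert_human", "do_nothing"]
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def choose_action(state):
    if random.random() < epsilon:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative step: workload is part of the state, and the reward
# could combine team performance with a penalty for overloading the human.
s = ("tracking_task", "high_workload")
a = choose_action(s)
update(s, a, reward=1.0, next_state=("tracking_task", "medium_workload"))
```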

Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception

Social robots can use biomimetic modalities such as gestures to convey emotions. The use of colored light for emotion expression is also possible, but rarely explored. In this paper, colored LEDs are used in addition to contextual gestures to communicate emotions in robotic storytelling. Results show that adding colored light to the storytelling did not improve the recipients' transportation into the story, and their cognitive absorption was significantly decreased. The users' perception of the NAO robot was not influenced by the colored lights, except for animacy, which was higher with only white LEDs. Possible problems include mismatches between colors and emotions, the lack of emotional gestures, and the weak light emission from NAO's eye LEDs. Since smart-room lighting can fill the user's entire visual field, it could enhance emotional effects. Future studies should therefore investigate the integration of a robotic storyteller in smart rooms that allow for smart light control.

Can an Empathetic Teleoperated Robot Be a Working Mate that Supports Operator's Mentality?

Customer service with teleoperated robots is susceptible to the same problems related to stress as is emotional labor in general, such as for in-person customer service representatives. In this study, we aimed to reduce that stress by constructing a buddy-like rapport between the robot, which is the target of teleoperation, and its operator. For this purpose, we designed an empathetic interaction between the robot and the operator and conducted a customer service experiment to verify its effectiveness. Our results demonstrate that the proposed interaction can build rapport between the robot and the operator and the operator can feel more reassured. Although the effect on stress could not be isolated directly from the data, detailed analyses of the response to the questionnaires indicated that the proposed interaction may be useful to relieve stress.

3D Head-Position Prediction in First-Person View by Considering Head Pose for Human-Robot Eye Contact

For a humanoid robot to make eye contact and initiate communication with a person, it is necessary to estimate the person's head position. However, eye contact becomes difficult when the person is moving, due to the mechanical delay of the robot. It is therefore important to predict the head position to mitigate the effect of this delay in the robot's motion. Based on the fact that humans turn their heads before changing direction while walking, we hypothesized that the accuracy of three-dimensional (3D) head-position prediction from a first-person view can be improved by considering the head pose. We compared our method with a conventional Kalman filter-based approach and found our method to be more accurate. The experimental results show that considering the head pose helps improve the accuracy of 3D head-position prediction.
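
For readers unfamiliar with the baseline, here is a minimal constant-velocity Kalman filter of the kind the paper compares against. The state layout, frame rate, and noise magnitudes are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

dt = 1.0 / 30.0                                 # assumed camera frame rate
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)       # state: [x y z vx vy vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we observe position only
Q = 1e-3 * np.eye(6)                            # process noise (assumed)
R = 1e-2 * np.eye(3)                            # measurement noise (assumed)

x = np.zeros((6, 1)); P = np.eye(6)

def kf_step(z):
    """Fuse one 3D head-position measurement z and return a one-step-ahead
    prediction, which is what compensates for the robot's motion delay."""
    global x, P
    x = F @ x; P = F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z.reshape(3, 1) - H @ x)        # update with measurement
    P = (np.eye(6) - K @ H) @ P
    return (F @ x)[:3].ravel()                   # predicted next position
```

The paper's contribution is to go beyond this baseline by additionally conditioning the prediction on head pose, which this sketch does not include.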

Who is that?! Does Changing the Robot as a Learning Companion Impact Preschoolers' Language Learning?

In child-robot interaction research, many studies pursue the goal of supporting children's language development. While research in human-human interaction suggests that changing human partners during children's language learning can reduce their recall of the learning content, little is known about whether a change in social robots as interaction partners influences children's learning in the same way. In this paper, we present findings from a word-learning study in which we changed the robotic partner for one group of children while the other group interacted with the same robot throughout. Contrary to work with human social partners, we found that children did not retrieve words differently when interacting with different humanoid robots as their social interaction partners.

Socially Assistive Robots in Smart Homes: Design Factors that Influence the User Perception

Despite the growing interest in smart homes and robotics in many domains, very few studies have explored how socially assistive robots (SAR) can be integrated into smart homes to control them while socially interacting with people. This paper explores two factors, embodiment and movement, that influence human-robot interaction in a domestic context. We conducted a within-subjects study with three conditions (disembodied-static, embodied-static, embodied-dynamic) involving participants from the general population. Participants (N = 10) interacted in two speech-based tasks with an autonomous Temi robot fully integrated with a smart home (e.g., lights, room temperature, music, oven control) and answered questions about their perceptions of the robot, including perceived sociability and social presence. The results indicated that participants perceived the embodied-dynamic robot as significantly more sociable and socially present than the static or disembodied ones.

Real-time Feasibility of a Human Intention Method Evaluated Through a Competitive Human-Robot Reaching Game

Predicting human behavior is a necessary robot ability for safe and fluent human-robot collaboration in shared workspaces. Robots should recognize humans' intended actions by combining information from ongoing movements and other environmental cues. In many cases, visual sensors are required to obtain information about human movement. While using cameras, especially a single one, offers several benefits, the information captured is noisy and of relatively low frequency given the requirements of intention prediction. The purpose of this study was to evaluate the feasibility of obtaining real-time intention predictions, and of using them in time for robot action, when human behavior is observed by a single RGB-D camera. Visual information is used to obtain human joint data using OpenPose. Based on this, we construct appropriate features and train several machine learning models. We evaluate the feasibility of timely robot action using a competitive human-robot game. The results show that a prediction available at about 288 ms is early enough to enable timely robot action, provided that the robot acts on objects that are no farther than 10 cm away.
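
A minimal sketch of such a pipeline, under stated assumptions: OpenPose yields per-frame 2D joints, we build simple motion features over a short window, and an off-the-shelf classifier predicts the intended reach target. The joint index, window length, feature set, and target count are all invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(joints):            # joints: (T, n_joints, 2) array
    wrist = joints[:, 4, :]             # wrist joint index is an assumption
    vel = np.diff(wrist, axis=0)        # frame-to-frame velocity
    return np.concatenate([
        wrist[-1],                      # latest wrist position
        vel.mean(axis=0),               # mean velocity over the window
        vel[-1],                        # latest velocity
    ])

rng = np.random.default_rng(0)          # stand-in for recorded game trials
X = np.stack([window_features(rng.normal(size=(10, 25, 2)))
              for _ in range(200)])
y = rng.integers(0, 3, size=200)        # intended target id (3 targets assumed)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:1]))               # per-window intention prediction
```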

An Exploration of Eye Gaze in Women During Reciprocal Self-Disclosure: Implications for Digital Human Design

Digital humans are a highly realistic form of conversational computer agent. Eye gaze is a salient social cue that digital humans could use to facilitate rapport-building during conversations. However, eye gaze tendencies vary by gender, and incorrect gaze patterns can have negative social implications. Analysis of observational data from human conversations can help inform the development of eye gaze models for digital humans. This study aimed to identify the eye gaze patterns of women dyads during a rapport-building conversation and to evaluate the effect of different gaze patterns on rapport, trust, and psychological outcomes. Thirty-six adult women (18 dyads) completed the Relationship Closeness Induction Task while wearing eye tracking glasses. Subjective rapport, trust, and psychological measures were collected. Gaze patterns were found to change as the conversation content became more intimate; specifically, gaze aversions for thinking (p=.042), turn-taking (p=.025), and intimacy modulation (p=.012) increased in duration. Furthermore, gaze patterns were associated with perceptions of the conversation partner: displaying fewer cognitive gaze aversions was associated with greater closeness (p=.029) and trust (p=.035), and longer periods of direct gaze while speaking were associated with greater rapport (p=.040). Results will inform the development of a humanlike gaze model for female digital humans during intimate conversations and may be applicable to social robots.

Conversational AI and Knowledge Graphs for Social Robot Interaction

The paper describes an approach that combines work from three fields with previously separate research communities: social robotics, conversational AI, and graph databases. The aim is to develop a generic framework in which a variety of social robots can provide high-quality information to users by accessing semantically-rich knowledge graphs about multiple different domains. An example implementation uses a Furhat robot with Rasa open source conversational AI and knowledge graphs in Neo4j graph databases.
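
Since the abstract names Rasa and Neo4j concretely, a sketch of the glue between them might look like the custom Rasa action below, which answers a user query from a Neo4j knowledge graph. The Cypher schema, slot name, action name, and credentials are assumptions for illustration, not the paper's implementation:

```python
from typing import Any, Dict, List, Text

from neo4j import GraphDatabase
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

class ActionLookupEntity(Action):
    def name(self) -> Text:
        return "action_lookup_entity"      # hypothetical action name

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        entity = tracker.get_slot("topic")  # slot name is an assumption
        with driver.session() as session:
            # Assumed graph schema: entities linked to description nodes.
            record = session.run(
                "MATCH (e {name: $name})-[:DESCRIBED_BY]->(d) "
                "RETURN d.text AS answer LIMIT 1",
                name=entity,
            ).single()
        answer = record["answer"] if record else "I don't know about that yet."
        dispatcher.utter_message(text=answer)  # rendered by the robot's TTS
        return []
```

Keeping domain knowledge in the graph rather than in the dialogue policy is what lets the same conversational front end serve multiple domains and robots.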

Affective Responses of Older Adults to the Anthropomorphic GenieConnect Companion Robot During Lockdown of the COVID-19 Pandemic

Anthropomorphic robots may reduce loneliness in older people; however, acceptance is a prerequisite for adoption. We collected the experiences of 10 people aged 80-92 who used a pre-market social robot, GenieConnect, for between 2 and 35 days during the COVID-19 pandemic restrictions. GenieConnect is a table-top robot with a large face and animated eyes, designed for support and companionship. The robot asked 'How are you feeling, [Name]?' each day and delivered lifestyle prompts such as medication reminders. We observed conflicting responses from the participants: five expressed positive responses, three negative (two of whom withdrew), and two neutral. Positive comments included 'feeling not alone', 'having someone to talk to', and enjoying being asked 'how are you feeling'. Negative comments mainly related to not liking the eyes. Design adaptations were made to increase acceptance. We conclude that robots like GenieConnect could reduce loneliness when a user-centred design approach is taken.

Cohesiveness of Robots in Groups Affects the Perception of Social Rejection by Human Observers

As robots increasingly become part of human social systems, how humans are psychologically affected by machine group behaviors, such as ostracism and prejudice, becomes a criterion for design. Parameters of a robot group can affect the dynamics of robot-robot-human interaction. The cohesiveness of robot groups, termed entitativity, affects humans' willingness to engage with the group and alters their perceptions of threat and cooperativity. To investigate how group composition affects how people perceive negative social intent from robots, we showed subjects videos of humans being socially rejected by robots in various ways under high- and low-entitativity conditions. The results reveal that when robotic groups are less cohesive, the sense of rejection is greater, implying that humans experience increased anxiety over being rejected by more diverse sets of machines. Understanding the social consequences of robot group dynamics can help us avoid unanticipated negative effects caused by machines.

Hugmon: Exploration of Affective Movements for Hug Interaction using Tensegrity Robot

Since ancient times, the hug has been one of the most basic ways to express emotions and has played an important role in building relationships between people. Meanwhile, social robots designed to provide mental health care to patients have been attracting great attention, and hugging between humans and robots is becoming more common. In this study, we propose a huggable robot that allows intimate interactions between humans and robots. Our robot is based on a tensegrity structure composed of rigid elements connected by springs, which allows the robot to respond flexibly to the external force of a hugging interaction and to express various emotions through its movements. In addition, we conducted user experiments to explore an interaction design for the affective movement of the proposed robot. Through the experiments, we confirmed that a robot with a tensegrity structure can be used for hug interaction and holds great potential for emotional interaction.

First Attempt of Gender-free Speech Style Transfer for Genderless Robot

Some robots for human-robot interaction are designed with a female or male physical appearance. Other robots, such as Pepper and NAO, are endowed with no gender characteristics, namely genderless robots. A robot with a male or female physical appearance should possess a matching speech gender style during natural human-robot interaction, which can be learned from human male or female speech. In this paper, we make a first attempt to synthesize gender-free speech for physically genderless robots, which is promising for more natural human-robot interaction with such robots. Our gender style-controlled speech synthesizer takes the speech text and a gender style embedding as inputs to generate speech audio. A speech gender encoder network extracts the gender style embedding from female and male speech inputs. Based on the distribution of the female and male gender style embeddings, we explore the gender-free region of the embedding space, where we sample gender-free embedding vectors to generate genderless speech audio. This is preliminary work in which we show how genderless speech audio can be synthesized from text.
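
One simple way to realize "sampling a gender-free region" of a style-embedding space, sketched here under assumptions (embedding dimension, jitter, and the synthetic stand-in embeddings are all illustrative, and the paper's actual sampling strategy may differ), is to draw vectors near the midpoint between the female and male class means:

```python
import numpy as np

# Stand-ins for style embeddings extracted by the gender encoder from
# reference female and male utterances; shape (n_utterances, dim).
emb_female = np.random.normal(0.5, 0.1, size=(100, 64))
emb_male = np.random.normal(-0.5, 0.1, size=(100, 64))

mu_f, mu_m = emb_female.mean(axis=0), emb_male.mean(axis=0)

def sample_gender_free(n, jitter=0.05):
    # Interpolate near the midpoint of the two class means, with small
    # noise so repeated samples are not identical.
    alphas = np.random.uniform(0.4, 0.6, size=(n, 1))
    base = alphas * mu_f + (1 - alphas) * mu_m
    return base + np.random.normal(0, jitter, size=(n, mu_f.shape[0]))

style = sample_gender_free(1)  # pass to the synthesizer along with the text
```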

Butsukusa: A Conversational Mobile Robot Describing Its Own Observations and Internal States

This paper presents Butsukusa, an autonomous conversational mobile robot that can describe its own observations and internal states during patrolling tasks. The robot observes its surroundings using recognition modules for objects, humans, the environment, localization, and speech, and moves autonomously around an indoor living space. Interaction skills via language are required for the robot to perform in such human-centered spaces. To investigate better communication protocols with users, we evaluate various language generation patterns based on different observations and interaction patterns. The evaluation results indicate that the importance of describing the robot's observation results and internal states, as well as the necessity of an appropriate description, depends on the situation.

Modeling the Interplay between Human Trust and Monitoring

In this work, we investigate and model how human trust affects monitoring. We present a web-based human-subject study in which the robot is a worker and the human plays the role of a supervisor. First, we evaluate the correlation between human trust and monitoring using statistical tests, and then we learn probabilistic models from the behavioral data collected through our user studies. These models can provide the likelihood of a human user monitoring a system given their level of trust. Such models can be leveraged in many systems, including those designed to be resilient to automation bias and complacency.
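
The abstract does not name the model family, but one plausible instance of "likelihood of monitoring given trust" is a logistic regression over (trust, monitored) pairs. The sketch below uses synthetic data as a stand-in for the behavioral data, and the assumed trend (higher trust, less monitoring) is illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
trust = rng.uniform(1, 7, size=300).reshape(-1, 1)   # e.g., 7-point scale
# Synthetic generating process: monitoring probability falls with trust.
p_monitor = 1 / (1 + np.exp(1.2 * (trust.ravel() - 4)))
monitored = rng.binomial(1, p_monitor)               # 1 = user checked the robot

model = LogisticRegression().fit(trust, monitored)
print(model.predict_proba([[2.0]])[0, 1])  # P(monitor | low trust)
print(model.predict_proba([[6.5]])[0, 1])  # P(monitor | high trust)
```

A system resilient to complacency could query such a model and, for instance, prompt the user when predicted monitoring falls below a threshold.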

Understanding Design Preferences for Robots for Pain Management: A Co-Design Study

There is growing interest in psychological interventions using socially assistive robots to mitigate distress and pain in the pediatric population. This work addresses the deficit in our understanding of what features and functionality young children and their parents desire for pain management support by using co-design, a common approach to exploring participants' imaginations and gathering design requirements. To close this gap, we carried out a co-design workshop involving seven families (children aged 4-6 and their parents) to understand their expectations and design preferences for a robot for pain management in children. Data were collected from surveys, video and audio recordings, interviews, and field notes. We present the robot prototypes constructed during the workshops and derive several preferences of the children (e.g., zoomorphic shape, and distractors and emotional expressions as behaviors). Additionally, we report methodological insights regarding the involvement of young children and their parents in the co-design process. Based on the findings of this co-design study, we discuss personalization as a possible design concept for future child-robot interaction development.

Robot Mediation of Performer-Audience Dynamics in Live-Streamed Performances

Live-streamed performances, in which the performers and the audience are simultaneously present in separate physical spaces, lack the emotional intensity present in in-person performances. Motivated by the social effects of robots and the potential synergy between robots and art, we conducted a between-subject study to explore robots as mediators in live-streamed performances. As a mediator between the performers and the audience, the robot can solicit audience input and direct performers according to that input. We held three interactive musical performances to compare the audiences' experiences: one with a chatbot mediator and two with a NAO robot mediator. We did not find significant differences in the audience's experiences between mediators, but survey responses and chat activity pointed to useful design considerations.

Preliminary Explorations of Conceptual Design Tools for Students Learning to Design Human-robot Interactions for the Case of Collaborative Drawing

Advancements in the technology, methodologies, and intelligence associated with Industry 4.0 have brought attention to the application of industrial robots in new fields. As we begin to design new usage scenarios for industrial six-axis robots, there are gaps in the design tools available in the human-robot collaboration (HRC) field that are specifically developed for interaction designers and students. We pursue generative and structured design tools that can expand the boundaries of the HRC community, attracting more people with a design background to participate in the development of HRC applications. We present a plan for a workshop for students without HRC design experience, centred on collaborative drawing as its theme, using reflection activities to inform the potential form of HRC design tools in the education domain.

Exploring Variables That Affect Robot Likeability

Like in human-human interaction, people tend to interact in human-robot settings with those they like. Therefore, it is important to understand what variables affect robot likeability. The present study aims at providing insights into how robots' anthropomorphism, voice, gestures, approaching behaviors as well as perceived warmth and competence play a role in robot likeability. We conducted an online survey (N=191) studying two humanoid robots with different characteristics. Our exploratory study empirically indicates that the investigated variables are significantly correlated with robot likeability for both robots but with differing strengths. Further, the likeability of the two robots is predicted by differing variables, with robot voice being the only common predictor for both robots.

SESSION: HRI Pioneers

Designing Psychological Conflict Resolution Strategies for Autonomous Service Robots

As autonomous service robots become increasingly ubiquitous in our daily lives, human-robot conflicts will become more likely when humans and robots share the same spaces and resources. This thesis investigates the resolution of everyday conflicts between robots and humans in domestic and public contexts, evaluating the acceptability, trustworthiness, and effectiveness of verbal and non-verbal strategies the robot can use to resolve a conflict in its favor. Based on the assumption of the Media Equation and the CASA paradigm that people interact with computers as social actors, robot conflict resolution strategies were derived from social psychology and human-machine interaction. Their effectiveness, acceptability, and trustworthiness were evaluated in online, virtual reality, and laboratory experiments. Future work includes determining the psychological processes of human-robot conflict resolution in further experimental studies.

Understanding and Influencing User Mental Models of Robot Identity

Research has shown that the relationship between robot mind, body, and identity is flexible and can be performed in a variety of ways. Our research explores how identity performance strategies used among robot groups may be presented through group identity observables (design cues), and how those strategies impact human-robot interactions. Specifically, we ask how group identity observables lead observers to develop different mental models of robot groups, and different perceptions of trust and group dynamics constructs.

Learning from Humans for Adaptive Interaction

Robots that will cooperate (or even compete) with humans should understand their goals and preferences. Humans leak and provide a lot of data: they take actions to achieve their goals, make choices between multiple options, and use language or gestures to convey information. And we, as humans, are usually very good at using all this available information: we can easily understand what another person is trying to do just by watching them for a while. The goal of my research is to equip robots with the capability of using multiple modes of information sources. For this, I propose a Bayesian learning approach and show how it is useful in a variety of applications, ranging from exoskeleton gait optimization to traffic routing.
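
As a minimal sketch of Bayesian preference learning from one such information source, the snippet below updates a posterior over candidate reward weights after observing a single choice between two options, using a Boltzmann-rational choice likelihood. The candidate set, feature vectors, and rationality coefficient are all illustrative assumptions, not the author's actual formulation:

```python
import numpy as np

# Candidate reward-weight vectors w and a uniform prior over them.
candidates = np.random.default_rng(2).normal(size=(50, 3))
posterior = np.ones(50) / 50

def update_from_choice(features_a, features_b, chose_a, beta=1.0):
    """Boltzmann-rational likelihood: P(choose A | w) is proportional to
    exp(beta * w . features_a), normalized over both options."""
    global posterior
    ra = candidates @ features_a
    rb = candidates @ features_b
    p_a = np.exp(beta * ra) / (np.exp(beta * ra) + np.exp(beta * rb))
    likelihood = p_a if chose_a else 1 - p_a
    posterior = posterior * likelihood
    posterior /= posterior.sum()            # renormalize after the update

update_from_choice(np.array([1.0, 0.0, 0.2]),
                   np.array([0.0, 1.0, 0.1]), chose_a=True)
w_est = posterior @ candidates              # posterior-mean preference estimate
```

The same posterior-update pattern extends to other evidence sources (demonstrated actions, language, gestures) by swapping in the appropriate likelihood term, which is what makes the Bayesian framing attractive for fusing multiple modes of information.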

Artificial Trust as a Tool in Human-AI Teams

Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it remains a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though these are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and promote appropriate trust from humans, but also know when to trust a human teammate to perform a certain task. In this project, we study trust as a tool for artificial agents to achieve better teamwork. In particular, we want to build mental models of humans so that agents can understand human trustworthiness in the context of human-AI teamwork, taking into account factors such as the characteristics of the human teammates, the task, and the environment.

Adaptive Robot Discourse for Language Acquisition in Adulthood

Acquiring a second language in adulthood differs considerably from the approach taken at younger ages. Learning rates tend to decrease during adolescence, and socio-emotional characteristics, like motivation and expectations, take on a different character for adults. In particular, acquiring communicative competence is a stronger objective for older learners, as an appropriate use of language in social contexts ensures better community immersion and well-being. This skill is best attained through interactions with proficient speakers, but when this option is not available, social robots present a good alternative. However, to obtain optimal results, a robot companion should continuously adapt to the learner's proficiency level and motivation to encourage speech production and increase fluency. Our work attempts to achieve this goal by developing an adaptive robot that modifies its spoken dialogue strategy and visual feedback to reflect a student's knowledge, proficiency, and engagement levels in situated interactions for long-term learning.

AR Indicators for Visually Debugging Robots

Programming robots is a challenging task exacerbated by software bugs, faulty hardware, and environmental factors. When coding issues arise, traditional debugging techniques are not always useful for roboticists. Robots often have an array of sensors that output complex data, which can be difficult to decipher as raw text. Augmented reality (AR) provides a unique medium for conveying data to the user by displaying information directly in the scene as their corresponding visual definition. In my research, I am exploring various design approaches towards AR visualizations for expert robotics debugging support. From my initial work, I developed design guidelines to inform two future bodies of work which investigate better ways of visualizing robot sensor and state data for debugging.

Empowering Robots for Object Handovers

This work investigates the collaborative task of object handovers between a human and a robot, a central aspect of human-robot collaboration. Our research contributes along three directions: first, designing robot controllers for previously unexplored human-robot handover scenarios; second, investigating gaze behaviors of a receiver in human-to-human and human-to-robot handovers; and third, investigating human behavior in bimanual and multiple sequential human-to-human handovers. Our contributions could help enable robots to perform the complex but essential tasks of handing objects to and receiving objects from humans.

Community-Situated Mixed-Methods Robotics Research for Children and Childhood Spaces

Robots are increasingly present in ethically fraught childhood spaces. In such contexts, HRI researchers should leverage mixed-methods approaches. This is especially true in domains where robots are teleoperated by adult experts, such as therapy. A mixed-methods approach can help researchers build a thorough qualitative understanding of adult experts' needs, incorporate stakeholder perspectives through participatory design, and motivate experimental evaluations with this insight. Through such a user-centered, mixed-methods approach, robotics researchers can ultimately improve the experience of both adults and children in these spaces.

Design Justice for Robot Design and Policy Making

Technology, such as robots, can entrench existing inequities and create new ones. Leveraging design justice, participatory design, and design fictions, we propose new ways of designing robots, and policies around social robots, that incorporate more voices and values into the design process. This work critically examines design justice in the context of human-robot interaction (HRI) and suggests a framework for engaging multiple stakeholders in the participatory design of robots and policies grounded in design justice. Overall, we promote discussion around how we can make robot technology design and policy design in HRI more equitable.

Music and Movement Based Dancing for a Non-Anthropomorphic Robot

The purpose of this research is to use human motion and musical features to generate dances for non-anthropomorphic robots. Many non-humanoid dancing robots are hard-coded by their choreographer/programmer. While some robots do generate dances based on music, most such robots are anthropomorphic and do not use human motion to enhance their dance. This research introduces novel ways of generating dances for a 7-degree-of-freedom (DoF) robotic arm based on a variety of musical features. It also develops new methods to capture human dance motion for non-humanoid robots. Lastly, it combines human-motion-influenced robotic dance with music-based generative dance to enhance the robot's artistic expression.

Non-Dyadic Human-Robot Interaction: Concepts and Interaction Techniques

With the increase in robot complexity and the diversity of domains in which we encounter robots, there is an increased need for research focusing on more varied aspects of human-robot interaction. While most research has focused on the dyadic interaction (one-to-one) between one human and one robot, we are currently observing a paradigm shift towards increased attention to HRI in non-dyadic systems. However, we still have limited knowledge of which interaction techniques work well for non-dyadic HRI combining human participants with multiple digital artefacts, including robots. We investigate what characterises non-dyadic HRI in various contexts, including the home and industry, and how the addition of robots affects how we interact in groups. This paper presents our research questions, preliminary results, and plans for future studies, thereby contributing to a better understanding of the concepts and interaction techniques in non-dyadic interaction in human-robot groups in various contexts.

Personalized Meta-Learning for Domain Agnostic Learning from Demonstration

For robots to perform novel tasks in the real-world, they must be capable of learning from heterogeneous, non-expert human teachers across various domains. Yet, novice human teachers often provide suboptimal demonstrations, making it difficult for robots to successfully learn. Therefore, to effectively learn from humans, we must develop learning methods that can account for teacher suboptimality and can do so across various robotic platforms. To this end, we introduce Mutual Information Driven Meta-Learning from Demonstration (MIND MELD) [12, 13], a personalized meta-learning framework which meta-learns a mapping from suboptimal human feedback to feedback closer to optimal, conditioned on a learned personalized embedding. In a human subjects study, we demonstrate MIND MELD's ability to improve upon suboptimal demonstrations and learn meaningful, personalized embeddings. We then propose Domain Agnostic MIND MELD, which learns to transfer the personalized embedding learned in one domain to a novel domain, thereby allowing robots to learn from suboptimal humans across disparate platforms (e.g., self-driving car or in-home robot).

Leveraging Non-Experts and Formal Methods to Automatically Correct Robot Failures

State-of-the-art robots are not yet fully equipped to automatically correct their policy when they encounter new situations during deployment. We argue that in common everyday robot tasks, failures may be resolved by knowledge that non-experts could provide. Our research aims to integrate elements of formal synthesis approaches into computational human-robot interaction to develop verifiable robots that can automatically correct their policy using non-expert feedback on the fly. Preliminary results from two online studies show that non-experts can indeed correct failures and that robots can use the feedback to automatically synthesize correction mechanisms to avoid failures.

Robots That Can Anticipate and Learn in Human-Robot Teams

Robots are moving from working in isolated chambers to working in close proximity with human collaborators as part of human-robot teams. In such situations, robots are increasingly expected to work with multiple humans and to effectively model both human-human and human-robot dynamics before taking timely actions. Working toward this goal, we have proposed new algorithms that model human intent and motion while being interpretable and scalable to multiple humans. Our current work builds upon these algorithms to 1) obtain a more holistic representation of the environment and 2) interleave robot perception and control. Our proposed algorithms have attained state-of-the-art performance over various benchmarks and learning scenarios. As future work, we aim to enhance our learning algorithms with the capability of acquiring knowledge continually, without overwriting past information.

SESSION: Video

AMIGUS: A Robot Companion for Students (Video Abstract)

The pandemic has forced everybody to study at home, but there have always also been isolated people who need a little push to study or interact with others. AMIGUS is a social robot that supports students dealing with online classes through good companionship, motivational messages, and tasks.

How to Make People Think You're Thinking if You're a Drawing Robot: Expressing Emotions Through the Motions of Writing

We developed a system to explore expressiveness for a robot playing Tic-Tac-Toe against a human. Our robot is based around a pen plotter which performs expressions through the modalities of motion and drawing, aiming to enhance the social engagement of the human-robot interaction.

ADioS: Angel and Devil on the Shoulder for Encouraging Human Decision Making

"Angel and Devil" expression has been used to cause a dilemma in various animations and comics.In this video demo, we implemented this expression with multiple robots. We supposed a scene in daily life, a student carrying a backpack and two robots riding on both shoulders to encourage the student to make decisions. This research concept can be the novelty use case of personal AI robots. In future works, the decision-making methodologies suggested by behavioral economics, such as nudging and boosting, can adopt to human-robot interaction by using multiple robots.

Interacting with a Conveyor Belt in Virtual Reality using Pointing Gestures

We present an interactive demonstration in which users are immersed in a virtual reality simulation of a logistics automation system. Using pointing gestures sensed by a wrist-worn inertial measurement unit, users select defective packages transported on conveyor belts. The demonstration allows users to experience a novel way to interact with automation systems and shows an effective application of virtual reality to human-robot interaction studies.

PopupBot, a Robotic Pop-up Space for Children: Origami-based Transformable Robotic Playhouse Recognizing Children's Intention

To help people use their limited space efficiently, we propose an origami-based transformable robotic space called "PopupBot." Specifically, we developed a robotic playhouse for children. A large origami structure with a bellows pattern is controlled by a servo motor and transforms into various types of furniture. For natural interaction between the child and the robotic space, PopupBot perceives the child's intention through speech recognition and provides an appropriate space by inferring the space type matching that intention. We expect PopupBot to provide a new space for people confined to limited spaces, especially due to the COVID-19 pandemic.

Two-way Human-Robot Interaction in 5G Tele-operation

The advancement of 5G technology has emerged as an enabler of progress across a wide range of industries in recent years, especially those with low tolerance for latency in high-speed data transmission. With a primary focus on Human-Robot Interaction (HRI), this paper proposes an intuitive and user-friendly design for a mobile robot tele-operation system that exploits 5G's advantages. Unlike traditional remote control, with its limited signal range and single-direction command control, our design demonstrates an interactive approach to 5G tele-operation, empowering active and instant two-way interaction between pilots and robots with the aid of control, visual, audible, and haptic cues to minimize the incident rate.

Demonstration of a Robo-Barista for In the Wild Interactions

We present a demonstration of a Robo-Barista: a social robot that takes hot beverage orders through verbal interaction and completes them via a Bluetooth-enabled coffee machine. The demonstration is highly robust, and the intention is that it could be installed as a permanent feature, enabling "in the wild" experimentation and long-term studies. In the demonstration video, we show a user interacting with a Furhat robot to order a coffee. The robot has a novel architecture that allows it to exhibit both verbal and non-verbal cues, such as shared attention and chitchat. Furthermore, it is equipped with a unique tiredness detector based on visual facial features.

Demonstration of a Robot Receptionist with Multi-party Situated Interaction

We present a demonstration of a Robot Receptionist: a situated interactive robot that can coordinate turn-taking and handle multi-party engagement and dialogue in dynamic environments, where users might enter or leave the scene at any time. We use a Furhat robot, which is highly expressive and can use verbal communication as well as non-verbal cues, such as facial expressions. The system demonstrated and described here is composed of several modules, including scene analysis, engagement policies, and a dialogue manager.

MAPPO: The Assistance Pet for Oncological Children (Video Abstract)

MAPPO (Mascota Asistencial Para Pacientes Oncológicos, "assistance pet for oncological patients") is a social robot designed to guide, help, and keep company with children experiencing cancer and their caregivers. It helps them face the daily challenges related to treatment side effects, medication compliance, healthy nutrition, mental health, and emergencies. It is designed to enter the child's life as a pet and guide.

A Haptic Multimodal Interface with Abstract Controls for Semi-Autonomous Manipulation

Even as autonomous capabilities improve, many robot manipulation tasks require humans in the loop to resolve high-level problems in uncertain environments or ambiguous situations. Prior work in highly autonomous applications tends to use interfaces with few human interface modalities, potentially missing out on the benefits that multimodal interfaces have demonstrated in lower-level operation. In this work, we demonstrate a system with a multimodal interface for controlling a robot at a high level of autonomy. This example highlights how multiple modalities can enable redundant and robust interactions, increased situational awareness, and compact representations of complex commands, such as how to grasp an object.

Reflections on "Rock, Paper, Scissors": Communicating Science to the Public through a Demonstrator

Communicating science to the public is increasingly important. Demonstrators are a valuable and established tool for communication in technology research and development. However, their role in communicating current science and technology to the public has received little attention in either research or practice. This paper reflects on the design and usage of the demonstrator "Rock, Paper, Scissors", which we developed to communicate current advances in Human-Robot Interaction to public audiences. We discuss two years of "Rock, Paper, Scissors" in action and its evolution within this period. We conclude with an outlook on future work regarding technology development and the evaluation of science communication.

A Robo-Pickup Artist Breaking Gender Norms

"Come Hither to Me!" is feminist robot theater with the objective to break down gender roles and provoke the audience to question accepted norms and stereotypes. Inspired by pick-up artist strategies, the female robotic performer wanders the art gallery and initiates conversation with an audience member using an exaggerated flirtatious comment, followed by a series of compliments, negs (backward compliments), humorous remarks, and both personal and general questions. The conversation design engages the participant in a performative, entertaining interaction with the robot, so she can gain information about the audience member and use it to ultimately ask them out on a date. The objective of the interactions is not only to make the participant laugh and engage but also to make them slightly uncomfortable in order to provoke thoughts and bring attention to one's gendered expectations in normalized sexist societal structures. This critical, yet humorous, robotic performance exemplifies how social justice-oriented design, critical computing, camp aesthetics, and theater staging techniques can be applied to human-robot interaction for social/gender studies, digital performance, and robotic research.

SESSION: Student-Design Competition

A Robot That Physically Snuggles to Humans

Robotic pets are expected to be a new approach to providing people with healing. To realize active touch from a robotic pet, we focus on the "snuggling behavior" in which a pet animal rubs its head or body against a person, and we propose a robotic pet that incorporates this behavior. In this paper, we introduce the developed prototype and describe future application scenarios for the snuggling robot.

Buzz! Deepening Human Connection to Plants Through Technology

In imagining a future where climate change forces more intimate relationships between humans and nature, social robots can be introduced to revolutionize the way humans understand and communicate with plants. Through the Double Diamond design method with plant owners, we uncovered different perspectives on the plant caretaking process and designed a social agent that aids in plant caretaking while fostering a positive human-robot interaction. Then, based on our takeaways from the design process, we crafted a story of an individual who interacts with a future version of our robot that overcomes the language barrier between plants and humans.

Lifecycle: A Speculative Fiction on Healthcare Automation

Set in the year 2036, this short story imagines the Healthcare Live-in Prognostic Robot (or HLPR). Resembling a diminutive drone, the HLPR is a nurse, nutritionist, dietician, and fitness coach all in one. As it follows the protagonist Sofia throughout her day-to-day life, questions are raised regarding the device's altruism: it is revealed to be owned by her insurance provider and, consequently, empowered to rescind benefits should her self-care routine ever lapse. Far from science fantasy, the HLPR builds on anxieties already nascent in smart home technologies, wearable medtech, and social credit systems.

Enhanced Facial Expressions of Avatars by Internal Movable Weights

When recognizing the facial expressions of avatars or virtual agents, visual information is used. We developed a device that can move internal weights according to facial expressions of a virtual agent on a screen. The device allowed the user to perceive the facial expressions with visual and tactile information. We conducted a pilot test, and the results suggested that moving the internal weights in accordance with the facial expressions enhanced the credibility of the virtual agent. We also created a future scenario where the device will be used.

The Limbot

This paper discusses how a possible future technology, called the Limbot, could change how society experiences loss and death by providing a replacement for loved ones or public figures after death. With advancements in surveillance and data gathering, it is possible in this future reality that a robotic doppelganger could be produced after a death.

IEUM: Bridging Transportation to Humans

IEUM, a small data cube, is a fundamental agent of transportation envisioned in a future shared mobility system. By communicating with its user, IEUM understands what the user needs at the moment and acts as a mediating agent among humans, cars, and other traffic infrastructure. While taking you along your personalized route of the day, IEUM will also enhance the traffic and energy efficiency of our transportation systems together with the other IEUMs out on the road. With your buddy IEUM, moving is full of fun.

Waiibot

What makes a robot different from smart devices? Can we make one system that contains the functionality of all smart devices? And if we can make such a system, what will happen to our existing devices? In our work, keeping the theme in mind, we tried to answer these three questions and came up with a system that can be a robot or a smart device according to your requirements. Waiibot consists of a core and multiple followers, which gives it flexibility of shape, and it controls existing smart devices to make our lives better.

SESSION: Workshops

Robot Curiosity in Human-Robot Interaction (RCHRI)

One of the fundamental modes of learning in children is curiosity. Children (and adults) interact with new people and learn about novel objects, activities, and other stimuli through curiosity and other intrinsic motivations. Creating autonomous robots that learn continually through intrinsic curiosity may lead to breakthroughs in artificial intelligence. Such robots could continue to learn about themselves and the world around them through curiosity, improving their abilities over their 'lifetime'. Although recent work on curiosity in different fields has produced significant results, most of it has focused on constrained simulated environments that do not involve human interaction. However, in real-world applications such as healthcare and home assistance, robots generally have to interact with humans on a regular basis. In these scenarios, it is imperative that curiosity is directed towards seeking out and learning important information from humans when needed, rather than simply learning in an unsupervised manner. Further, there is limited work on how humans perceive such curious robots and whether humans prefer curious robots that adapt over time, compared with robots that simply perform their assigned tasks. In this workshop, our goal is to bring together researchers and practitioners from multiple disciplines to discuss the role of robot curiosity in real-world applications and its implications for human-robot interaction (HRI).

Re-Configuring Human-Robot Interaction

The workshop investigates two major boundaries within HRI design and research. Firstly, we aim to cross the boundaries between such divergent disciplines as engineering, design, psychology, philosophy, and sociology by engaging in interdisciplinary collaboration. Secondly, we aim to cross the boundaries between HRI design and the social contexts of use, often referred to as 'real world' environments. This endeavor is not new; however, we aim to approach these two borders of HRI research and design more systematically, e.g., by providing new methodological impulses. The idea of "configuring" has a long tradition in Science and Technology Studies (STS), describing how potential users and use cases are shaped and in turn reshaped (configured) throughout technology design, be it explicitly or accidentally. Given that HRI is becoming more deeply integrated into 'real world' contexts, such as public spaces, homes, and care facilities, we argue for the need for a re-configuration. This includes a critical reflection on the material, procedural, and methodological implications that shape future users within HRI design practices, for and together with people.

Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI)

The 5th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) will bring together HRI, robotics, and mixed reality researchers to address challenges in mixed reality interactions between humans and robots. Topics relevant to the workshop include development of robots that can interact with humans in mixed reality, use of virtual reality for developing interactive robots, the design of augmented reality interfaces that mediate communication between humans and robots, social applications for virtual and mixed reality in HRI, investigations of mixed reality interfaces for robot learning, comparisons of the capabilities and perceptions of robots and virtual agents, and best design practices. Special topics of interest this year include VAM-HRI research during the ongoing COVID-19 pandemic as well as the ethical implications of VAM-HRI research. VAM-HRI 2022 will follow on the success of VAM-HRI 2018-21 and advance the cause of this nascent research community.

Website: https://vam-hri.github.io

Context-Awareness in Human-Robot Interaction: Approaches and Challenges

To be seamlessly integrated into human-centered environments, robots are expected to have intelligent social capabilities on top of their physical abilities. To this end, research in artificial intelligence and human-robot interaction faces two major challenges. Firstly, robots need to cope with uncertainty during interaction, especially when dealing with factors that are not fully observable and are hard to infer, such as the states representing the dynamic environment and human behavior (e.g., intents, goals, preferences). Secondly, robots need to communicate their behaviors to other agents (humans and other robots in the environment) in a clear and understandable manner. Therefore, robots need to be context-aware: able to perceive and understand their surroundings and adapt their functionalities accordingly.

Our workshop aims to gather the latest theoretical and practical research and expertise in intelligent social robotics investigating the applications and challenges of giving robots the ability to perceive and understand their environment in the context of human-robot interaction. In addition, the workshop will allow participants and renowned researchers from academia and industry to discuss, in a multidisciplinary panel, the current and long-term challenges of context-awareness in human-robot interaction.

Fairness and Transparency in Human-Robot Interaction

As robots become more ubiquitous across human spaces, it is becoming increasingly relevant for researchers to ask, "how can we ensure that we are designing robots to be sufficiently equipped to treat people fairly?". This workshop brings together researchers across the fields of Human-Robot Interaction (HRI), fairness in machine learning, design, and transparency in AI to shed light on the methodological challenges surrounding issues of fairness and transparency in HRI. In our workshop, we will attempt to identify synergies between these fields; in particular, we will focus on how HRI can leverage this existing rich body of work to guide the formalization of fairness metrics and methodologies. Another goal of the workshop is to foster a community of interdisciplinary researchers and encourage collaboration. The complexity of defining fairness lies in its context-sensitive nature; as such, we look to the influx of definitions from the fields of fairness in artificial intelligence, design, and organizational psychology to derive a set of definitions that can serve as guidelines for researchers in HRI.

Inclusive HRI: Equity and Diversity in Design, Application, Methods, and Community

Discrimination and bias are pressing issues in many AI and robotics applications. These outcomes may derive from limited datasets that do not fully represent society as a whole, or from the AI scientific community's western-male configuration bias. Despite being a pressing issue, understanding how robotic systems can replicate and amplify inequalities and injustice among underrepresented communities is still in its infancy in both the social science and technical communities. This workshop contributes to filling this gap by exploring the research question: what do diversity and inclusion mean in the context of Human-Robot Interaction (HRI)? Attention is directed to three different levels of HRI: the technical level, the community level, and the target-user level. Overall, this workshop will focus on the idea that AI systems can be made more attuned to inclusive societal needs, respect fundamental rights, and represent contemporary values in modern societies by integrating diversity and inclusion considerations.

Joint Action, Adaptation, and Entrainment in Human-Robot Interaction

Research in joint action focuses on the psychological, neurological, and physical mechanisms by which humans collaborate with other agents, and overlaps with several domains related to human-robot interaction. The development of artificial systems that can support or emulate the requisite aspects of joint action could lead to improved human-robot team performance as well as improvements on subjective metrics (e.g., trust). This workshop highlights theoretical and technical considerations in human-robot joint action and real-time adaptation, with a particular focus on socio-motor entrainment, showing how the emulation of psychological mechanisms (e.g., emotion, intention signaling, mirroring) can lead to improved performance. We will invite speakers with backgrounds in robotics, neuroscience, and psychology, as well as speakers focused on adjacent work, such as human-robot coordinated dance, alignment, or synchronization. We will call for papers that apply the theory of joint action in an interactive human-robot context, and for position papers on the application of the theory of joint action to robotics, with a heavy focus on psychological mechanisms that could be emulated or adapted to a human-robot context. Through breakout sessions, participants will have the opportunity to brainstorm considerations and techniques applicable to joint-action-inspired work, with the aim of fostering new and improved collaborations across fields.

Towards a Common Understanding and Vision for Theory-Grounded Human-Robot Interaction (THEORIA)

While the accumulation of practical knowledge has provided researchers with much insight into successful human-robot interaction (HRI), a broader discussion about the role of theoretical knowledge is still lacking. This is unfortunate, because explicitly considering theory and theorizing as crucial contributions is essential if this field of research is to develop into a mature science. With our proposed interactive half-day workshop, we aim to provide a vibrant setting for participants to discuss the what, why, and how of theoretical knowledge in HRI, as they share and learn from each other's experiences and competence. In the long term, the outcome of this workshop will lay the foundation for a supportive research community that encourages researchers to reflect and collaborate further on theory-grounded HRI work.

Longitudinal Social Impacts of HRI over Long-Term Deployments

The Longitudinal Social Impacts of HRI over Long-Term Deployments Workshop seeks to bring together researchers working on all aspects of thoroughly understanding such deployments. This includes researchers working in contributing areas such as longitudinal studies of human-robot interaction, long-term autonomy, and real-world deployments.

This workshop seeks to grow the study of how real-world, deployed robot systems impact the people who interact with them and the social structure of the places they inhabit. Historically, research in this area has been high-impact. As robots begin to inhabit places designed for people, such as delivery robots on city streets and robots with jobs in airports, shopping malls, and homes, we expect the importance of understanding these impacts to grow.

Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)

While most research in Human-Robot Interaction (HRI) studies one-off or short-term interactions in constrained laboratory settings, a growing body of research focuses on breaking through these boundaries and studying long-term interactions that arise through deployments of robots "in the wild". Under these conditions, robots need to incrementally learn new concepts or abilities (i.e., "lifelong learning") to adapt their behaviors to new situations and personalize their interactions with users to maintain their interest and engagement. The second edition of the "Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)" workshop aims to address the developments and challenges in these areas and create a medium for researchers to share their work in progress, present preliminary results, learn from the experience of invited researchers, and discuss relevant topics. The workshop focuses on studies of lifelong learning and adaptation to users, context, environment, and tasks in long-term interactions across a variety of fields, such as education, rehabilitation, elderly care, collaborative tasks, and service and companion robots.

Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions

Following the success of the first edition of Robo-Identity, the second edition will provide an opportunity to expand the discussion about artificial identity. This year, we are focusing on emotions that are expressed through speech and voice. Robots' synthetic voices can resemble, and are becoming indistinguishable from, expressive human voices. This is both an opportunity and a constraint: emotional speech can (falsely) convey a human-like identity that misleads people, raising ethical issues. How should we envision an agent's artificial identity? In what ways should robots maintain a machine-like stance, e.g., through robotic speech, and should increasingly human-like emotional expressions be seen as design opportunities? These are not mutually exclusive concerns. As this discussion needs to be conducted in a multidisciplinary manner, we welcome perspectives on challenges and opportunities from a variety of fields. For this year's edition, the special theme will be "speech, emotion, and artificial identity".

2nd International Workshop on Designerly HRI Knowledge. Reflecting on HRI practices through Annotated Portfolios of Robotic Artefacts

We propose a workshop stemming from ongoing conversations about the role of design methods and designed artefacts within the field of Human-Robot Interaction (HRI). Given the growing interest in understanding what the field can learn from design explorations, the workshop centers on a hands-on annotating activity in which participants (researchers and practitioners from HRI, Human-Computer Interaction, and Design Research) will analyze and reflect upon selected collections of robotic artefacts. The ultimate goal of the workshop is to explicate the values, concepts, and perspectives that usually remain tacitly embedded in designed artefacts and are, as such, hard to appreciate as proper HRI contributions. The expected outcome is a set of methodological recommendations and concrete examples of the kinds of knowledge that can be generated through robotic artefacts.

4th Annual Workshop on Test Methods and Metrics for Effective HRI

The drive toward increasing adoption of HRI technologies is evident in the research and development of manufacturing, social, medical, and service robot solutions. However, novel methods and metrics are required to overcome the barrier between fundamental HRI research and its adoption in real-world environments. Hence, the fourth installment of the annual workshop 'Test Methods and Metrics for Effective HRI' seeks to identify novel and emerging test methods and metrics for the holistic assessment and assurance of HRI performance. Specifically, the focus is on identifying innovative methods for evaluating HRI performance and on advancing the growth of the HRI community based on the principles of collaboration, data sharing, and repeatability. The goal of this workshop is to break the boundaries between the development and adoption of HRI technologies by promoting robust experimental design, test methods, and metrics for assessing interaction and interface designs. To accomplish these aims, the workshop will have participants from various sectors of the HRI research community, including academia, industry, and government.

Machine Learning in Human-Robot Collaboration: Bridging the Gap

This workshop aims to bring together researchers to explore and identify ways in which human-robot collaboration can reap the benefits of modern machine learning. The intended outcome is a roadmap that identifies key milestones on the way to fluent, effective human-robot teaming. In addition to focus groups and creative brainstorming exercises, this workshop will comprise invited talks, contributed paper talks, a poster session, and a debate. The papers, talks, posters, and roadmap will be made publicly available on our website: https://sites.google.com/view/mlhrc-hri-2022/home

Human-Interactive Robot Learning (HIRL)

With robots poised to enter our daily environments, we conjecture that they will not only need to work for people, but also to learn from them. An active area of investigation in the robotics, machine learning, and human-robot interaction communities is the design of teachable robotic agents that can learn interactively from human input. We use the umbrella term Human-Interactive Robot Learning (HIRL) to refer to these research efforts. While algorithmic solutions for robots learning from people have been investigated in a variety of ways, HIRL, as a fairly new research area, still lacks: 1) a formal set of definitions to classify related but distinct research problems and solutions, 2) benchmark tasks, interactions, and metrics to evaluate the performance of HIRL algorithms and interactions, and 3) clear long-term research challenges to be addressed by different communities. The main goal of this workshop is to consolidate relevant recent work under the HIRL umbrella into a coherent set of long-, medium-, and short-term research problems, and to identify the most pressing future research goals in this area. As HIRL is a developing research area, this workshop is an opportunity to break the existing boundaries between relevant research communities by developing and sharing a diverse set of benchmark tasks and metrics for HIRL, inspired by other fields including neuroscience, biology, and ethics research.

Workshop YOUR Study Design! Participatory Critique and Refinement of Participants' Studies

HRI is an interdisciplinary field that requires researchers to be knowledgeable in broad areas ranging from social sciences to engineering. Study design is a multifaceted aspect of HRI that is hard to develop and perfect. Thus, the second edition of the "Workshop Your Study Design" workshop aims to improve the quality of future HRI studies by training researchers and boosting the accessibility of HRI as a field. Participants will have the opportunity to receive guidance and feedback on their study from an expert mentor.

Researchers from all avenues of HRI will be invited to submit a 2-4 page paper on an HRI study they are currently designing, including a brief introduction and a complete methods section. Accepted submissions will be discussed in small groups led by mentors with relevant expertise. Prior to the workshop, papers will be shared within each group. Participants will be encouraged to read other submissions. During the workshop, attendees will work within their mentee-mentor groups to discuss each paper and provide feedback. There will also be a session where mentors lead mini discussions on topics important to study design, such as balancing qualitative and quantitative design, power analysis, and research ethics. The workshop will end with a session where all participants can share important lessons that they learned with fellow attendees.

The Road to a Successful HRI: AI, Trust and ethicS (TRAITS) Workshop

The aim of this workshop is to foster the exchange of insights on past and ongoing research towards effective and long-lasting collaborations between humans and robots. The workshop will provide a forum for representatives from the academic and industrial communities to analyse the different aspects of HRI that impact its success. We particularly focus on the AI techniques required to implement autonomous and proactive interactions, on the factors that enhance, undermine, or recover humans' acceptance of and trust in robots, and on the potential ethical and legal concerns related to the deployment of such robots in human-centred environments.

Website: https://sites.google.com/view/traits-hri-2022

Human-Robot Interaction in Public Spaces

There has been a recent trend to test robots and intelligent virtual agents as social interaction partners in public domains. Commercial solutions such as Pepper or Cruz are increasingly being tested in scenarios outside the lab. At the same time, however, customer value and business models for social robots in public spaces remain scarce, and with the recently halted production of Pepper, it seems evident that there is no killer application for social robots in public spaces yet. This workshop wants to break the boundaries between academia and business and give both sides a venue to exchange lessons learned and to develop a roadmap on the technical, legal, ethical, and business challenges of deploying social robots.

Participatory Design and End-User Programming for Human-Robot Interaction

The Participatory Design and End-User Programming for Human-Robot Interaction (HRI) workshop aims to advance research on how to design systems that end users can use to program robots. There tends to be a fracture in HRI between the technical designers of robot programs (often engineers or computer scientists) and the actual users of such robots. Developers have the capability to program robots but often lack the insights possessed by domain experts, sometimes leading to technically interesting but impractical systems. With this workshop, we aim to bridge two methods often used in isolation within the wider HRI community to involve end users in robot program design: Participatory Design (PD) and End-User Programming (EUP). Both methods empower end users to co-produce robots that address real-world needs. However, there have been limited opportunities to unite researchers who specialize in these areas and engage in mutual learning. We will address this shortcoming with a full-day workshop that puts the PD and EUP communities in touch, inviting speakers from both sides and welcoming a wide range of submissions, from descriptions of new end-user programming methods to insights compiled from conducting participatory design studies.

Interdisciplinary Explorations of Processes of Mutual Understanding in Interaction with Assistive Shopping Robots

The main goal of this workshop is to establish awareness of the emerging field of socially assistive shopping robots in human-robot interaction (HRI) and, at the same time, to foster interdisciplinary approaches that combine the development of social robots with a sequential, embodied perspective on mutual processes of understanding in shopping interactions with assistive robots.

Modeling Human Behavior in Human-Robot Interactions

This interdisciplinary workshop aims to break the boundaries between the researchers who develop human models (e.g., from the fields of human factors, cognitive psychology, and computational neuroscience) and the roboticists who use human models in different human-robot interaction (HRI) contexts. The keynote talks, contributed submissions, and interactive discussions will focus on questions such as: How can modeling humans help us understand and design human-robot interactions? What kinds of models are useful for which HRI contexts (physical/cognitive interactions) and purposes (behavior prediction, personalization, theory of mind, etc.)? What common lessons can be learned from human behavior modeling in HRI across different application domains? How can modeling humans in HRI tasks help us better understand human cognition and behavior? By stimulating an interdisciplinary conversation around these questions, we aim to raise awareness of the benefits of modeling, expose the wider HRI community to a variety of modeling approaches, and enable the HRI researchers who already engage in modeling to exchange views on modeling methodology and best practices from diverse fields.