Deep learning has dramatically improved the performance of speech recognition and image recognition. However, this does not mean that human intelligence has been reproduced. Nevertheless, research and development of robots that have human-like intelligence, interact like humans, and form social relationships so as to coexist with humans is expected to continue advancing quickly. The speaker has been working on the research and development of autonomous robots that interact with people in the JST ERATO Ishiguro Symbiotic Human Robot Interaction Project.
The research approach involved is called the constructive method. This methodology reproduces complicated social phenomena, whose underlying principles are unknown, with robots in order to investigate the mechanisms behind them. By combining the various elements necessary for the realization of intelligent robots, we will develop a robot that behaves and interacts like a human. If this robot conveys human meta-level cognitive functions, such as intelligence, emotions, and consciousness, through dialogue, the mechanisms of the robot will give us hints for clarifying the mechanisms underlying these meta-level cognitive functions.
In this talk, the speaker will introduce research on autonomous conversational androids which have intentions and desires and integrate various implementable technologies, tele-operated androids, and social conversational robots, as examples of the constructive method. We will then discuss the potential of this method both for scientific research and practical application.
Robots are a disruptive technology that promotes change. Human-Robot Interaction is an emerging field with exciting prospects for widespread transformation. Today, embedding social intelligence into human-robot interaction design is a moonshot. Moonshots can boost science and engineering by galvanizing and incentivizing research communities and industry in the pursuit of seemingly impossible challenges to accelerate progress and scale impact. Social intelligence is all about power and influence; privacy and trust; ethical decisions and happiness. What if robots could take the initiative and purposefully persuade you to believe something new? HRI with social intelligence has the potential to create breakthrough insights that will energize the reimagination of how humans and robots will collaborate in business and society.
Empathic Computing is an emerging research field that aims to use technology to create deeper shared understanding, or empathy, between people. The field sits at the junction of research in Natural Collaboration, Experience Capture, and Implicit Understanding. Technologies such as Augmented Reality (AR) and Virtual Reality (VR) can be combined with the sensing of human physiological signals to create new types of collaborative experiences. For example, Empathy Glasses use gaze and face tracking to share non-verbal communication cues and enhance remote collaboration. More complex tools, such as EEG, can measure brain activity synchronization and physiological states not normally perceived by humans.
This talk explores how lessons learned from Empathic Computing can be applied to the field of Human Robot Interaction. Previous research has shown how humans can develop empathy for robots, and how robots can be used as telepresence surrogates for real people. This shows that there is great potential to create robot-mediated Empathic Computing experiences that enhance face-to-face and remote collaboration.
For the success of automated vehicles, in addition to legal, safety, and technical aspects, user acceptance and trust in automation systems are considered to have a significant impact. Driving style can decisively influence these factors. Besides driving dynamics, such as velocity and acceleration, tactical decisions (e.g., for lane changes) also define the driving style on highways. In order to better understand the lane change behavior passengers expect during automated driving, participants (N=35) determined and initiated the desired point in time to perform lane changes in a study under real highway driving conditions. Subsequently, a logistic regression analysis determined the probability of a lane change considering different environmental variables as predictors. Thus, the most important factors influencing lane change decisions could be identified. The results indicate that for lane changes from the right to the middle lane as well as from the middle to the right lane, the preceding and approaching vehicles on the target lane as well as the preceding vehicle on the current lane have a significant influence on the lane change decision. In addition, vehicles entering the highway and the presence of more than one preceding vehicle on the right lane were revealed as significant predictors for the lane change decision to the right. Contrarily, for the lane change decision to the left, the relative velocity to the preceding vehicle as well as a speed limit equal to the target velocity have decisive influence. The results give a first important insight into user-centered automated lane change behavior. Going forward, individual aspects of these results can be evaluated in more targeted studies.
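As a rough illustration of the kind of analysis described above, the following sketch fits a logistic regression that predicts lane change probability from environmental variables. The predictor names, synthetic data, and coefficient values are illustrative assumptions, not the study's actual variables or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical environmental predictors (the study's exact variables differ):
gap_preceding = rng.uniform(5, 120, n)  # distance to preceding vehicle, current lane [m]
gap_target = rng.uniform(5, 120, n)     # gap to approaching vehicle, target lane [m]
rel_velocity = rng.uniform(-15, 5, n)   # relative velocity to preceding vehicle [m/s]

# Synthetic labels: lane changes are more likely with a small gap ahead
# and a large gap on the target lane (assumed effect directions)
logits = -0.04 * gap_preceding + 0.03 * gap_target - 0.2 * rel_velocity - 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([gap_preceding, gap_target, rel_velocity])
model = LogisticRegression().fit(X, y)

# Probability of a lane change in one hypothetical traffic situation
p = model.predict_proba([[20.0, 80.0, -8.0]])[0, 1]
print(f"P(lane change) = {p:.2f}")
```

The fitted coefficients can then be inspected (e.g., as odds ratios via `np.exp(model.coef_)`) to judge which predictors most influence the lane change decision, mirroring how the study identifies its significant factors.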
This work has developed an iteratively refined understanding of participants' natural perceptions of and responses to unmanned aerial vehicle (UAV) flight paths, or gestures. This includes both what they believe the UAV is trying to communicate to them and how they expect to respond through physical action. Previous work in this area has focused on eliciting gestures from participants to communicate specific states, or on leveraging gestures observed in the world, rather than on understanding what participants believe is being communicated and how they would respond. This work investigates previous gestures either created or categorized by participants to understand the perceived content of their communication or the expected response, through categories created from participant free responses and confirmed through forced-choice testing. The human-robot interaction community can leverage this work to better understand how people perceive UAV flight paths, inform future designs for non-anthropomorphic robot communication, and apply lessons learned to elicit informative labels from people who may or may not be operating the vehicle. We found that the Negative Attitudes towards Robots Scale (NARS) can be a good indicator of how we can expect a person to react to a robot. Recommendations are also provided: use motion approaching/retreating from a person to encourage following, motion perpendicular to their field of view for blocking, and either no motion or large altitude changes to encourage viewing.
Augmentative and alternative communication (AAC) devices enable speech-based communication. However, AAC devices do not support nonverbal communication, which allows people to take turns, regulate conversation dynamics, and express intentions. Nonverbal communication requires motion, which is often challenging for AAC users to produce due to motor constraints. In this work, we explore how socially assistive robots, framed as "sidekicks," might provide augmented communicators (ACs) with a nonverbal channel of communication to support their conversational goals. We developed and conducted an accessible co-design workshop that involved two ACs, their caregivers, and three motion experts. We identified goals for conversational support, co-designed prototypes depicting possible sidekick forms, and enacted different sidekick motions and behaviors to achieve speakers' goals. We contribute guidelines for designing sidekicks that support ACs according to three key parameters: attention, precision, and timing. We show how these parameters manifest in appearance and behavior and how they can guide future designs for augmented nonverbal communication.
Omni-directional robots have gradually become popular for social interactions with people in human environments. The characteristics of omni-directional bases allow the robots to change their body orientation freely while moving straight. However, human spectators dislike observing robots behave unnaturally. In this paper, we observed how humans naturally move to goals and then developed a motion planning algorithm that enables omni-directional robots to resemble human movements in a time-efficient manner. Instead of treating the translation and rotation of a robot separately, the proposed motion planner couples the two motions with constraints inspired by the observation of human behaviors. We implemented the proposed method on an omni-directional robot and conducted navigation experiments in a shop with shelves and narrow corridors 90 cm wide. Results from a within-participants study of 300 human spectators validated that the proposed human-inspired motion planner felt more natural and predictable to people compared to the common rotate-while-move or rotate-then-move strategies.
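The key idea of coupling translation and rotation can be illustrated with a minimal sketch: the robot's heading continuously tracks its direction of travel, subject to a maximum yaw rate, rather than being planned independently. The function name and parameters below are hypothetical; this is a toy illustration of the coupling constraint, not the authors' actual planner.

```python
import math

def plan_headings(waypoints, max_yaw_rate=1.0, dt=0.1):
    """Couple rotation to translation: at each step the heading turns
    toward the direction of travel (people tend to face where they are
    walking), limited by a maximum yaw rate. Returns one heading per
    waypoint, starting from heading 0."""
    headings = [0.0]
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        desired = math.atan2(y1 - y0, x1 - x0)  # face the travel direction
        # Smallest signed angular error, wrapped to (-pi, pi]
        err = (desired - headings[-1] + math.pi) % (2 * math.pi) - math.pi
        # Clamp the turn to what the yaw-rate limit allows in one step
        step = max(-max_yaw_rate * dt, min(max_yaw_rate * dt, err))
        headings.append(headings[-1] + step)
    return headings
```

Under this constraint, a straight path produces no rotation at all, while a turn in the path is spread over several steps, which is one way to avoid the abrupt rotate-then-move behavior the paper compares against.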
In the not-so-distant future, people in service experiences are likely to interact with more than a single intelligent system, often sequentially, including different robots and devices. However, there has been sparse work exploring the characteristics of transferring people from one intelligent system to another. This paper aims to create a context-independent taxonomy to differentiate and categorize the transfer of users across robots, devices, and human staff in service interactions. We conducted two sets of design workshops where participants generated scenarios of human-multi-robot interactions and existing person transfers. Using the outcomes of both workshops, we analyzed scenarios and constructed a taxonomy for person transfers with four dimensions: Rationale, Type, Design, and Information Shared. We showcase different ways to utilize the taxonomy, and, through it, we discuss the trade-offs and design considerations in the implementation of person transfers.
Can we influence how a robot is perceived by designing the sound of its movement? Drawing from practices in film sound, we overlaid a video depicting a robot's movement routine with three types of artificial movement sound. In a between-subjects study design, participants saw either one of the three designs or a quiet control condition and rated the robot's movement quality, safety, capability, and attractiveness. We found that, compared to our control, the sound designs both increased and decreased perceived movement quality. Coupling the same robotic movement with different sounds led to the motions being rated as more or less precise, elegant, jerky, or uncontrolled, among others. We further found that the sound conditions decreased perceived safety, and did not affect perceived capability or attractiveness. More unrealistic sound conditions led to larger differences in ratings, while the subtle addition of harmonic material was not rated differently from the control condition in any of the measures. Based on these findings, we discuss the challenges and opportunities regarding the use of artificial movement sound as an implicit channel of communication that may eventually be able to selectively target specific characteristics, helping designers create more refined and nuanced human-robot interactions.
Previous work has shown that people provide different moral judgments of robots and humans in the case of moral dilemmas. In particular, robots are blamed more when they fail to intervene in a situation in which they can save multiple lives but must sacrifice one person's life. Previous studies were all conducted with U.S. participants; the present two experiments provide a careful comparison of moral judgments among Japanese and U.S. participants. The experiments assess multiple ways in which cross-cultural differences in moral evaluations may emerge: in the willingness to treat robots as moral agents; the norms that are imposed on robots' behaviors; and the degree of blame that accrues to them when they violate the imposed norms. Even though Japanese and U.S. participants differ to some extent in their treatment of robots as moral agents and in the particular norms they impose on them, the two cultures show parallel patterns of greater blame for robots who fail to intervene in moral dilemmas.
Optimal performance of collaborative tasks requires consideration of the interactions between socially intelligent agents, such as social robots, and their human counterparts. The functionality and success of these systems lie in their ability to establish and maintain user trust; with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology, with the work in this paper focusing on the first step: investigating user trust as a behavioural prior. Two pilot studies (Study 1 and 2) are presented, the results of which inform the design of Study 3. Study 3 investigates whether trust can determine user decision making and task outcome during a human-agent collaborative task. Results demonstrate that trust can be behaviourally assessed in this context using an adapted version of the Trust Game. Further, an initial behavioural measure of trust can significantly predict task outcome. Finally, assistance type and task difficulty interact to impact user performance. Notably, participants were able to improve their performance on the hard task when paired with correct assistance, with this improvement comparable to performance on the easy task with no assistance. Future work will focus on investigating factors that influence user trust during human-agent collaborative tasks and providing a domain-independent model of trust calibration.
Can robots influence the moral behavior of humans by simply doing their job? Robots have been considered as replacements for humans in repetitive jobs in public spaces, but what that could mean for the behavior of surrounding people is not known. In this work, we were interested in how people change their behavior when they observe either a robot or a person do a morally laden task. In particular, we studied the influence of seeing a robot or a human picking up and discarding garbage on the observer's willingness to litter or to pick up garbage. The study was conducted as a video-based survey. Results show that while observing a person clean up does make people less keen to litter, this effect is not present when people watch a robot doing the same action. Moreover, people appear to feel less guilty about littering if they observed a robot doing the cleaning up than when they watched a human cleaner.
Trust in human-robot interactions (HRI) is measured in two main ways: through subjective questionnaires and through behavioral tasks. To optimize measurements of trust through questionnaires, the field of HRI faces two challenges: the development of standardized measures that apply to a variety of robots with different capabilities, and the exploration of social and relational dimensions of trust in robots (e.g., benevolence). In this paper we look at how different trust questionnaires (Lyons & Guznov, 2019; Schaefer, 2016; Ullman & Malle, 2018) fare given these challenges that pull in different directions (being general vs. being exploratory) by studying whether people think the items in these questionnaires are applicable to different kinds of robots and interactions. In Study 1 we show that after being presented with a robot (non-humanoid) and an interaction scenario (fire evacuation), participants rated multiple questionnaire items such as "This robot is principled" as "Non-applicable to robots in general" or "Non-applicable to this robot." In Study 2 we show that the frequency of these ratings changes (indeed, even for items rated as N/A to robots in general) when a new scenario is presented (game playing with a humanoid robot). Finally, while overall trust scores remained robust to N/A ratings, our results revealed potential fallacies in the way these scores are commonly interpreted. We conclude with recommendations for the development, use, and reporting of trust questionnaires in future studies, as well as theoretical implications for the field of HRI.
In this paper we apply the recent concept of robot Ethical Risk Assessment to an exemplar Socially Assistive Robot (SAR), specifically considering ethical risks posed by anthropomorphism in this context. We draw on two complementary studies to demonstrate that anthropomorphism is important to overall SAR function and poses relatively low ethical risk overall. As such, rather than avoiding anthropomorphism altogether (as suggested in a recently published standard on robot ethics), we suggest anthropomorphism in SARs should be a customisable trait that can be adapted to the user.
As human-robot interaction (HRI) researchers, like all scientists, we must demonstrate the reproducibility of findings, especially across robots. We present a three-study replication effort that illustrates the challenges and opportunities for replication science in HRI.
A recent human-robot trust study (Ullman & Malle, 2017) suggested that people "in the loop" with a robot via a simple button press trusted the robot (a Thymio) more than those who observed the robot complete the task autonomously. This intriguing finding was based on a small sample (n = 40) and was therefore greatly underpowered (observed power 1 - β = .35), prompting replication.
To test whether the in-the-loop effect generalizes to similar robots, we conducted a conceptual replication (Study 1) using the Create robot. The effect did not replicate, despite a large sample (n = 140) and expected power of .86. This result called for a direct replication (Study 2) using the original Thymio robot. The effect again did not replicate, despite a large sample (n = 200) and expected power of .96. We then conducted an online study (Study 3) with videos of both robots to examine whether different expectations for each robot drove the divergent results, but the hypothesis was disconfirmed (n = 400). However, one finding held across all studies: Participants consistently trusted imagined future robots far less in social than nonsocial use contexts (effect sizes of ds = -0.71, -0.78, and -0.79 in the lab studies; d = -0.38 in the online study).
To get a better understanding of people's natural responses to humanlike robots outside the lab, we analyzed commentary on online videos depicting robots of different humanlikeness and gender. We built on previous work, which compared online video commentary of moderately and highly humanlike robots with respect to valence, uncanny valley, threats, and objectification. Additionally, we took into account the robot's gender, its appearance, its societal impact, the attribution of mental states, and how people attribute human stereotypes to robots. The results are mostly in line with previous work. Overall, the findings indicate that moderately humanlike robot design may be preferable over highly humanlike robot design because it is less associated with negative attitudes and perceptions. Robot designers should therefore be cautious when designing highly humanlike and gendered robots.
In two surveys of adults in the United States (N=723), we asked about perceptions of the degree to which a variety of behaviors, when engaged in with a sex robot or a human, would constitute monogamous relationship infidelity (Study 1), and also asked respondents to consider monogamous partner behavior when committed with a robot that was matched to sexual partner preferences (Study 2). Study 1 revealed that acts committed with sex robots were considered less severe and less likely to be judged as infidelity than those same acts committed with another human. Results further revealed that male survey respondents rated all partner behaviors with sex robots as less likely to constitute cheating behavior than their female counterparts did. This finding may be explained by the portrayal of sex robots as hyper-feminized female sexual partners for men, both in the way these technologies are presented and in how they are sold. However, when asked to consider a sex robot that was matched to males or females (Study 2), this difference disappeared. For all respondents, giving sex robots specificity as either male or female resulted in higher ratings of partner infidelity as compared to Study 1. This work allows us to empirically speak to a common concern at the center of many debates over the societal implications of sex robots---potential harm to human relationships.
Robots will increasingly collaborate with human partners, necessitating research into how robots negotiate negative collaborative outcomes. This study investigates the effect of blame attribution on trust assessments in human-robot collaboration. Participants (n = 60) collaboratively played a game with a humanoid robot in one of four conditions in a 2 (blame correctness: correct vs. incorrect) by 2 (blame target: human vs. robot) between-subjects experiment. Results show that people evaluate a robot more positively when it blames itself for collaborative failures, especially, it seems, in the case of incorrect self-blame. Our findings indicate a need for further research on effective communication strategies for robots that must negotiate collaborative failures without compromising trust relationships with their human partners.
As automation becomes more prevalent, the fear of job loss due to automation increases. Workers may not be amenable to working with a robotic co-worker due to a negative perception of the technology. The attitudes of workers towards automation are influenced by a variety of complex and multi-faceted factors such as intention to use, perceived usefulness, and other external variables. In an analog manufacturing environment, we explore how these various factors influence an individual's willingness to work with a robot over a human co-worker in a collaborative Lego building task. We specifically explore how this willingness is affected by: 1) the level of social rapport established between the individual and his or her human co-worker, 2) the anthropomorphic qualities of the robot, and 3) factors including trust, fluency, and personality traits. Our results show that a participant's willingness to work with automation decreased due to lower perceived team fluency (p=0.045), rapport established between a participant and their co-worker (p=0.003), the gender of the participant being male (p=0.041), and a higher inherent trust in people (p=0.018).
We explored different ways in which a multi-robot system might recover after one robot experiences a failure. We compared four recovery conditions: Update (a robot fixes its error and continues the task), Re-embody (a robot transfers its intelligence to a different body), Call (the failed robot summons a second robot to take its place), and Sense (a second robot detects the failure and proactively takes the place of the first robot). We found that trust in the system and perceived competence of the system were higher when a single robot recovered from a failure on its own (by updating or re-embodying) than when a second robot took over the task. We also found evidence that two robots that used the same socially interactive intelligence were perceived more similarly than two robots with different intelligences. Finally, our study revealed a relationship between how people perceive the agency of a robot and how they perceive the performance of the system.
With their ability to embody users in physically distant spaces, telepresence robots have gained popularity in environments including hospitals, schools, and offices. However, with platforms lacking in individuation and social presence, users often personalize telepresence robots with clothing and accessories to increase their recognizability and sense of embodiment. Toward understanding personalization preferences, as well as perceptions of personalized platforms, we conducted a series of five studies that investigate patterns in personalization of a telepresence robot and evaluate the impacts of common personalizations along five dimensions (robot uniqueness, humanness, pleasantness/unpleasantness, and people's willingness to interact with it). Finding a strong preference for the use of clothing and headwear in Studies 1-2 (N=52), we systematically manipulated a robot's appearance using these items and evaluated the qualitative and quantitative impacts on observer perceptions in Studies 3-4 (N=160). Observing that personalization increased perceptions of uniqueness and humanness, but also decreased positive responding, we then investigated the associations between personalization preferences and perceptions via a fifth study (N=100). Across the five studies, tensions emerged between operators' interest in using wigs and interlocutors' dislike of wigs. This result highlights a need to consider both operator and interlocutor perspectives when personalizing telepresence robots.
Mixed Reality visualizations provide a powerful new approach for enabling gestural capabilities on non-humanoid robots. This paper explores two different categories of mixed-reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). Specifically, we present the results of a within-subjects Mixed Reality HRI experiment (N=23) exploring the trade-offs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show a clear trade-off between performance and social perception, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability.
In this paper, we argue in favor of creating robots that both teach and learn. We propose a methodology for building robots that can learn a skill from an expert, perform the skill independently or collaboratively with the expert, and then teach the same skill to a novice. This requires combining insights from learning from demonstration, human-robot collaboration, and intelligent tutoring systems to develop knowledge representations that can be shared across all three components. As a case study for our methodology, we developed a glockenspiel-playing robot. The robot begins as a novice, learns how to play musical harmonies from an expert, collaborates with the expert to complete harmonies, and then teaches the harmonies to novice users. This methodology allows for new evaluation metrics that provide a thorough understanding of how well the robot has learned and enables a robot to act as an efficient facilitator for teaching across temporal and geographic separation.
When human students practise new skills with a teacher, they often display nonverbal behaviours (e.g., head and limb movements, gaze, etc.) to communicate their level of understanding and express their interest in the task. Similarly, a student robot's capability to provide human teachers with social signals expressing its internal state might improve learning outcomes. This could also lead to more successful social interactions between intelligent robots and human teachers. However, to design successful nonverbal communication for a robot, we first need to understand how human teachers interpret such nonverbal cues when watching a trainee robot practising a task. Therefore, in this paper, we study the effects of different gaze behaviours as well as manipulating the speed and smoothness of arm movement on human teachers' perception of a robot's (a) confidence, (b) eagerness to learn, and (c) attention to the task. In an online experiment, we asked 167 participants (as teachers) to rate the behaviours of a trainee robot in the context of learning a physical task. The results suggest that splitting the robot's gaze between the teacher and the task not only affects the perceived attention, but can also make the robot appear to be more eager to learn. Furthermore, perceptions of all three attributes tested were systematically affected by varying parameters of the robot's arm movement trajectory while performing task actions.
Learning from Demonstration (LfD) algorithms seek to enable end-users to teach robots new skills through human demonstration of a task. Previous studies have analyzed how robot failure affects human trust, but not in the context of the human teaching the robot. In this paper, we investigate how human teachers react to robot failure in an LfD setting. We conduct a study in which participants teach a robot how to complete three tasks, using one of three instruction methods, while the robot is pre-programmed to either succeed or fail at the task. We find that when the robot fails, people trust the robot less (p < .001) and themselves less (p = .004), and they believe that others will trust them less (p < .001). Human teachers also have a lower impression of the robot and themselves (p < .001) and found the task more difficult when the robot fails (p < .001). Motion capture was found to be a less difficult instruction method than teleoperation (p = .016), while kinesthetic teaching gave teachers the lowest impression of themselves compared to teleoperation (p = .017) and motion capture (p < .001). Importantly, a mediation analysis showed that people's trust in themselves is heavily mediated by what they think that others -- including the robot -- think of them (p < .001). These results provide valuable insights for improving the human-robot relationship in LfD.
When a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input, but they rely on handcrafted features. When the correction cannot be explained by these features, recent work in deep Inverse Reinforcement Learning (IRL) suggests that the robot could ask for task demonstrations and recover a reward defined over the raw state space. Our insight is that rather than implicitly learning about the missing feature(s) from demonstrations, the robot should instead ask for data that explicitly teaches it about what it is missing. We introduce a new type of human input in which the person guides the robot from states where the feature being taught is highly expressed to states where it is not. We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function. By focusing the human input on the missing feature, our method decreases sample complexity and improves generalization of the learned reward over the above deep IRL baseline. We show this in experiments with a physical 7DOF robot manipulator, as well as in a user study conducted in a simulated environment.
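The core idea above can be sketched in a simplified linear setting: the person provides "feature traces", state sequences that run from states where the missing feature is highly expressed to states where it is not, and the robot fits a feature function that decreases monotonically along each trace. This is a toy linear version under assumed notation (function names and the pairwise loss are illustrative); the paper learns the feature over the raw state space with richer function approximators.

```python
import numpy as np

def learn_feature(traces, dim, lr=0.1, epochs=200):
    """Learn a linear feature phi(s) = w @ s from human feature traces.
    Each trace is a list of states ordered from high to low expression
    of the missing feature; we enforce w @ s_t > w @ s_{t+1} with a
    pairwise logistic loss, trained by gradient descent."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for trace in traces:
            for s_hi, s_lo in zip(trace, trace[1:]):
                diff = w @ (s_hi - s_lo)
                # Gradient of -log(sigmoid(diff)) with respect to w
                grad = -(1 - 1 / (1 + np.exp(-diff))) * (s_hi - s_lo)
                w -= lr * grad
    return w

# Example: 2-D states where the (unknown) missing feature is the first
# coordinate; the human guides the robot from high to low values of it
traces = [[np.array([3.0, 0.0]), np.array([2.0, 0.0]), np.array([1.0, 0.0])]]
w = learn_feature(traces, dim=2)
```

Once learned, the new feature would be appended to the robot's feature vector and its reward weight re-estimated from the original corrections, so that subsequent behavior accounts for what the person was trying to teach.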
Most social robot behaviors in human-robot interaction are designed to be polite, but there is little research about how or when a robot could be impolite, and whether that may ever be beneficial. We explore the potential benefits and tradeoffs of different politeness levels for human-robot interaction in an exercise context. We designed impolite and polite phrases for a robot exercise trainer and conducted a 24-person experiment in which people squatted in front of the robot as it used (im)polite phrases to encourage them. We found participants exercised harder and felt competitive with the impolite robot, while the polite robot was found to be friendly, but sometimes uncompelling and disingenuous. Our work provides evidence that human-robot interaction should continue to aim for more nuanced and complex models of communication.
The practice of social distancing during the COVID-19 pandemic resulted in billions of people quarantined in their homes. In response, we designed and deployed VectorConnect, a robot teleoperation system intended to help combat the effects of social distancing in children during the pandemic. VectorConnect uses the off-the-shelf Vector robot to allow its users to engage in physical play while being geographically separated. We distributed the system to hundreds of users in a matter of weeks. This paper details the development and deployment of the system, our accomplishments, and the obstacles encountered throughout this process. Also, it provides recommendations to best facilitate similar deployments in the future. We hope that this case study about Human-Robot Interaction practice serves as an inspiration to innovate in times of global crises.
Robots that lend social and emotional support to their users have the potential to extend the quality of care that humans can provide. However, developing robotic aids to address symptoms of loneliness, anxiety, and social isolation can be especially challenging because the contributing factors are complex and multi-faceted. Using a user-centered approach, we developed a prototype therapeutic robot, TACO. The design of this robot was closely informed by a comprehensive needfinding process that included a detailed literature review, an ethical analysis, interviews with pediatric domain experts, and a site visit to a pediatric hospital. The prototype robot was evaluated over the course of several structured play sessions, using short interviews with children as well as a modified version of the SOFIT testing procedure. Results from early-stage testing suggest that TACO was well-liked; children found playing with it engaging and frequently exhibited affective behaviors such as cuddling and stroking. These findings motivate follow-on work to further advance its design and to test its effectiveness as a therapeutic tool.
The vision of responsive architecture predicts that human experience can be evoked through the dynamic orchestration of space-defining elements. Whereas recent studies have robotically actuated furniture for functional goals, little is known about how this capability can be deployed meaningfully at an architectural scale. We thus evaluated the spatial impact of a responsive wall on the inhabitants of ordinary apartments. To maintain safety during the COVID-19 pandemic, we developed a novel remote, semi-immersive cross-reality simulation evaluation methodology. Based on the orchestration of three space-defining operations, we define a theoretical framework that suggests how the position of a responsive wall can be determined through five distinct architectural qualities. This framework thus proposes how human-building interaction (HBI) could complement its functional goals by augmenting the well-being of occupants in the physical as well as the virtual realm.
This paper introduces and justifies (through an n=210 online human-subject study) Deconstructed Trustee Theory, a theory of human-robot trust that factors the representation of the trustee into robot body and robot identity in order to differentially model the perceived trustworthiness of each. This theory predicts (a) that different levels of trustworthiness can be attributed to a robot body and a robot identity, (b) that divergence between the perceived trustworthiness of body and identity may be effected by communication policies that reveal the potential for phenomena such as re-embodiment, co-embodiment, and agent migration in multi-robot systems, and (c) that the perceived trustworthiness of body and identity may further diverge and be refined through moral cognitive processes triggered by the observation of blameworthy actions.
In this work, we provide an argument and first empirical insights that existing technology acceptance models fall short when it comes to explaining spontaneous, unplanned, and unsolicited encounters between humans and delivery robots on the street. Because technology acceptance models are defined around the technology's perceived ease of use, perceived usefulness, and behavioural intention to use, they are not well suited to explaining acceptance in situations in which humans meet robots without any prior intention to use them. Nevertheless, acceptance of delivery robots might be a driving force for safe navigation. Thus, the concept of acceptance should not be limited to its current focus on (planned) usage. In consequence, we (i) expand the understanding of technology acceptance, (ii) propose the concept of Existence Acceptance for autonomous systems, and (iii) explore a new model for acceptance in an online study (n = 185). Theoretical considerations hint at the relevance of existence acceptance models for autonomous systems.
Human-robot interaction (HRI) research frequently explores how to design interfaces that enable humans to effectively teleoperate and supervise robots. One of the principal goals of such systems is to support data collection, analysis, and human decision making, which requires representing robot data in ways that support fast and accurate analyses by humans. However, the interfaces for these systems do not always use best-practice principles for effectively visualizing data. We present a new framework to scaffold reasoning about robot interface design that emphasizes the need to consider data visualization for supporting analysis and decision-making processes, detail several data visualization best practices relevant to HRI, identify a set of core data tasks that commonly occur in HRI, and highlight several promising opportunities for further synergistic activities at the intersection of these two research areas.
Games are often used to foster human partners' engagement and natural behavior, even when they are played with or against robots. Therefore, beyond their entertainment value, games represent ideal interaction paradigms in which to investigate natural human-robot interaction and to foster the diffusion of robots in society. However, most state-of-the-art games involving robots are driven with a Wizard of Oz approach. To address this limitation, we present an end-to-end (E2E) architecture that enables the iCub robotic platform to autonomously lead an entertaining magic card trick with human partners. We demonstrate that with this architecture the robot is capable of autonomously directing the game from beginning to end. In particular, the robot could detect in real time when the players lied in describing one card in their hands (the secret card). In a validation experiment the robot achieved an accuracy of 88.2% (against a chance level of 16.6%) in detecting the secret card while the social interaction naturally unfolded. The results demonstrate the feasibility of our approach and its effectiveness in entertaining the players and maintaining their engagement. Additionally, we provide evidence of the possibility of detecting important measures of the human partner's inner state, such as the cognitive load related to lie creation, with pupillometry in a short and ecological game-like interaction with a robot.
Many small-group activities, like working teams or study groups, depend heavily on the skill of each group member. Differences in skill level among participants can affect not only the performance of a team but also the social interaction of its members. In these circumstances, an active member could balance individual participation without exerting direct pressure on specific members by using indirect means of communication, such as gaze behaviors. In this study, we evaluate whether a social robot can balance the level of participation in a language-skill-dependent game played by a native speaker and a second-language learner. In a between-subjects study (N = 72), we compared an adaptive robot gaze behavior, targeted at increasing the contribution of the least active player, with a non-adaptive gaze behavior. Our results imply that, while overall levels of speech participation were influenced predominantly by the personal traits of the participants, the robot's adaptive gaze behavior could shape the interaction among participants, leading to more even participation during the game.
Robot-Robot-Human Interaction is an emerging field, holding the potential to reveal social effects involved in human interaction with more than one robot. We tested if an interaction between one participant and two non-humanoid robots can lead to negative feelings related to ostracism, and if it can impact fundamental psychological needs including control, belonging, meaningful existence, and self-esteem. We implemented a physical ball-tossing activity based on the Cyberball paradigm. The robots' ball-tossing ratio towards the participant was manipulated in three conditions: Exclusion (10%), Inclusion (33%), and Over-inclusion (75%). Objective and subjective measures indicated that the Exclusion condition led to an ostracism experience which involved feeling "rejected", "ignored", and "meaningless", with an impact on various needs including control, belonging, and meaningful existence. We conclude that interaction with more than one robot can form a powerful social context with the potential to impact psychological needs, even when the robots have no humanoid features.
More and more voice-user interfaces (VUIs), such as smart speakers like Amazon Alexa or social robots like Jibo or Cozmo, are entering multi-user environments including homes. VUIs can utilize multi-modal cues such as graphics, expressive sounds, and movement to convey social engagement, affecting how users perceive agents as social others. Reciprocal relationships with VUIs, i.e., relationships with give-and-take between the VUI and user, are of key interest as they are more likely to foster rapport and emotional engagement, and lead to successful collaboration. Through an elicitation study with three commercially available VUIs, we explore small group interactions (n = 33 participants) focused on the behaviors participants display to various VUIs to understand (1) reciprocal interactions between VUIs and participants and among small groups and (2) how participants engage with VUIs as the interface's embodiment becomes more socially capable. The discussion explores (1) theories of sociability applied to the users' behaviors seen with the VUIs, and (2) the group contexts where VUIs that build reciprocal relationships with users can become a powerful persuasive technology and a collaborative companion. We conclude the discussion with recommendations for promoting reciprocity from participants and, therefore, fostering rapport and emotional engagement in VUI interactions.
Humans interpret and predict the behavior of others with reference to mental states or, in other words, by adopting the intentional stance. The present study investigated to what extent individuals adopt the intentional stance towards two agents (a humanoid robot and a human). We asked participants to judge whether two different descriptions fit the behaviors of the robot/human displayed in photographic scenarios. We measured the acceptance/rejection rate of the descriptions (as an explicit measure) and response times in making the judgment (as an implicit measure). Our results show that at the explicit level, participants are more likely to use mentalistic descriptions for the human agent and mechanistic descriptions for the robot. Interestingly, at the implicit level, we found no difference in response times associated with the robotic agent. We argue that, at the implicit level, both stances are processed as "equally likely" to explain the behavior of a humanoid robot, while at the explicit level there is an asymmetry in the adopted stance. Furthermore, a cluster analysis of participants' individual differences in the likelihood of anthropomorphizing revealed that people with a high tendency to anthropomorphize accept the mentalistic description faster. This suggests that the decisional process leading to the adoption of one stance or the other is influenced by an individual's tendency to anthropomorphize non-human agents.
Intent recognition models, which classify a written or spoken input in order to guide an interaction, are an essential part of modern voice user interfaces, chatbots, and social robots. However, getting enough data to train these models can be very expensive and challenging, especially when designing novel applications such as real-world human-robot interactions. In this work, we first investigate how much training data is needed for high performance in an intent classification task. We train and evaluate BiLSTM and BERT models on various subsets of the ATIS and Snips datasets. We find that only 25 training examples per intent are required for our BERT model to achieve 94% intent accuracy, compared to 98% with the entire datasets, challenging the belief that large amounts of labeled data are required for high performance in intent recognition. We apply this knowledge to train models for a real-world HRI application, character strength recognition during a positive psychology interaction with a social robot, and evaluate against the Character Strength dataset collected in our previous HRI study. Our real-world HRI application results also confirm that our model can achieve 76% intent accuracy with 25 examples per intent, compared to 80% with 100 examples. In a real-world scenario, the difference is only one additional error per 25 classifications. Finally, we investigate the limitations of our minimal-data models and offer suggestions on developing high-quality datasets. We conclude with practical guidelines for training BERT intent recognition models with minimal training data and make our code and evaluation framework available for others to replicate our results and easily develop models for their own applications.
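The minimal-data setup this abstract describes, capping the training set at 25 examples per intent, amounts to a stratified subsampling step before fine-tuning. The sketch below is illustrative only; the dataset layout and intent labels are assumptions, not the authors' released code.

```python
import random
from collections import defaultdict

def subsample_per_intent(examples, n_per_intent=25, seed=0):
    """Cap the training set at n_per_intent examples for each intent label.

    `examples` is a list of (utterance, intent) pairs, as in ATIS- or
    Snips-style intent classification datasets.
    """
    by_intent = defaultdict(list)
    for utterance, intent in examples:
        by_intent[intent].append((utterance, intent))
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    subset = []
    for intent, items in by_intent.items():
        rng.shuffle(items)
        subset.extend(items[:n_per_intent])
    return subset

# Toy usage: 100 examples across two hypothetical intents;
# at most 25 of each survive the cap.
data = [(f"utterance {i}", "book_flight") for i in range(60)] + \
       [(f"utterance {i}", "get_weather") for i in range(40)]
small = subsample_per_intent(data, n_per_intent=25)
print(len(small))  # 50
```

A BERT model would then be fine-tuned on `small` instead of the full dataset; the abstract's finding is that this cap costs only a few points of accuracy.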
With the growing capabilities of intelligent systems, the integration of robots into our everyday life is increasing. However, when robots interact in such complex human environments, occasional failures are inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation, εerr, that explains the cause of an unexpected failure during an agent's plan execution to non-experts. For error explanations to be meaningful, we investigate which types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model, and generalized across environments. We investigate these questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations can generalize to an unseen office environment and are just as effective as the hand-scripted explanations.
We studied the programming and debugging processes of an autonomous mobile social robot with a focus on the programmers. This process is time-consuming in a populated environment where a mobile social robot is designed to interact with real pedestrians. From our observations, we identified two types of time-wasting behaviors among programmers: cherry-picking and a shortage of coverage in their testing. We developed a new tool, a test generator framework, to help avoid these testing time-wasters. This framework generates new testing scenarios to be used in a simulator by blending a user-prepared test with pre-stored pedestrian patterns. Finally, we conducted a user study to verify the effects of our test generator. The results showed that our test generator significantly reduced the programming and debugging time needed for autonomous mobile social robots.
Two people walking towards each other on a collision course face an everyday problem of human-human interaction. In spite of the environmental and individual factors that might jeopardise successful avoidance, people are generally skilled at not crashing into each other. However, it is not clear whether the same strategies apply when a human is on a collision course with a robot, nor which (if any) robot-related factors influence the human's decision to swerve. In this work, we present the results of an online study in which participants walked towards a virtual robot that differed in anthropomorphism and perceived autonomy and had to decide whether to swerve or continue straight. The experiment was inspired by the game-theoretic game of chicken. We found that people swerved more often when they believed the robot to be teleoperated by another participant. When they swerved, they also swerved closer to the robot with a high level of human-likeness and farther away from the robot with a low anthropomorphism score, suggesting higher uncertainty about the mechanical-looking robot's intentions. These results are discussed in the context of socially aware robot navigation and will be used to design novel algorithms for robot trajectories that take robot-related differences into account.
Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe negative effects on an individual's well-being. Based on previous research both within and outside of HRI, we propose six tenets ("commandments") of natural and enjoyable robotic hugging: a hugging robot should be soft, be warm, be human sized, visually perceive its user, adjust its embrace to the user's size and position, and reliably release when the user wants to end the hug. Prior work validated the first two tenets; the final four are new. We followed all six tenets to create a new robotic platform, HuggieBot 2.0, which has a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform in comparison to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. The results show that adding haptic reactivity definitively improves user perception of a hugging robot, largely verifying our four new tenets and illuminating several interesting opportunities for further improvement.
Interest in design methods and tools has been steadily growing in HRI. Yet design is not acknowledged as a discipline with its own epistemology and methodology. Designerly HRI work is validated through user studies, which, we argue, provide a limited account of the knowledge design produces. This paper aims to broaden the current understanding of designerly HRI work and its contributions by unpacking what designerly knowledge is and how to produce it. Through a critical analysis of the current HRI design literature, we identify a lack of work dedicated to understanding the conceptual implications of robotic artifacts. These, in fact, are implicit carriers of crucial HRI knowledge that can challenge established assumptions about how a robot should look, act, and be. We conclude by discussing a set of practices desirable to legitimize designerly HRI work and by calling for further research addressing the conceptual implications of designerly HRI work.
We present the design process of YOLO, a robot aimed at stimulating creativity in children. The robot was developed under a human-centered design approach with participatory design practices over two years, involving 142 children as active contributors at all design stages. The main contribution of this work is the development of methods and tools for child-centered robot design. We adapted existing participatory design practices used with adults to fit children's developmental stages. We followed the Double Diamond design process model and rested the design of the robot on the following principles: low floor and wide walls, creativity provocations, open-ended playfulness, and disappointment avoidance through abstraction. The final product is a social robot designed for and with children. Our results show that YOLO increases children's creativity during play, demonstrating a successful robot design project. We identified several guidelines that made the design process successful: the use of toys as tools, playgrounds as spaces, an emphasis on playfulness for child expression, and child policies as allies for design studies. The design process described empowers children in the design of robots.