Although robot adaptation and learning have been addressed by many researchers, the training of humans appears to have received comparatively little attention in the HRI literature, even though this area is very important. One reason for this apparent trend is that an often unstated goal of HRI is to produce systems that do not require significant training. This appears to hold partly because many robot systems are designed to be used in very specific domains for brief periods of time [165, 166]. Moreover, robot learning and adaptation are often treated as useful in behavior design and in task-specific learning, though adaptation is certainly a key element of long-term interactions between humans and robots [167].
On the one hand, it is important to minimize the amount of human training and adaptation required to interact with robots used in therapeutic or educational roles for children, autistic individuals, or mentally challenged individuals. On the other hand, it is important that HRI include proper training for problems such as handling hazardous materials; similarly, the goal of using robots in therapeutic and educational roles implies that humans should adapt and learn in response to interaction [168]. In this section, we discuss not only HRI domains that require minimal operator training, but also domains that require careful training. We also discuss efforts aimed at training HRI scientists and designers, and then conclude with a discussion of how the concept of training can be used to help robots evolve new skills in new application domains.
Minimizing Operator Training. Minimizing training appears to be an implicit goal for “edutainment” robots, which include robots designed for use in classrooms and museums, for personal entertainment, and for home use. These robots are typically designed to be manageable by a wide variety of humans, and training can range from instruction manuals to instruction from a researcher or from the robot itself [128, 169].
One relevant study explored how Roomba robots are used in practice without attempting to make operators use the robots in a specific way [170]. Such studies are important because they can be used to create training materials that guide expectations and alert humans to possible dangers. Other such studies include those that explore how children use educational robots in classroom settings [168], investigate how disabled children interact with robots in social settings [30], examine how robots support humans in the house [171], and identify interaction patterns with museum guide robots [169].
Complementing such studies are efforts to use archetypal patterns of behavior and well-known metaphors that trigger correct mental models of robot operation. An example is the often-stated hypothesis that people with “gaming experience” will be able to interact better (in some sense) with mobile robots than those with limited gaming experience [172]. We are not aware of any studies that directly support this hypothesis, but if it is true, then it would suggest that people with experience in video-conferencing, instant-messaging, and other computer-mediated forms of communication might interact more naturally with robots. Whether this hypothesis is true is a matter for future work, but it is almost certainly true that such experiences help people form mental models that influence interactions [173]. Designers are seeking (a) to identify interaction modes that invoke commonly held mental models [174], such as those invoked by anthropomorphic robots [175], or (b) to exploit fundamental cognitive, social, and emotional processes [176]. One possible caution for these efforts is that robots may reach an “uncanny valley,” where expectations evoked by the robot fall short of actual behavior, producing an interaction that can feel strangely uncomfortable to humans [88, 177]. The uncanny valley theory remains unproven, although researchers are now trying to verify its existence experimentally [178].
Efforts to Train Humans. In contrast to the goal of minimizing training for edutainment robots, some application domains involving remote robots require careful training because operator workload or risk is very high. Important examples of such training are found in military and police applications, space applications, and search and rescue applications. Training for military and police applications is typified by “bomb squad” robots, training for space applications is typified by telemanipulation tasks [179], and training for military and civilian search and rescue is typified by reconnaissance using small, “human-packable” robots [180]. In both the military and search and rescue domains, training efforts exist for both air and ground robots, and these efforts tend to emphasize the use of mobile robots in a mission context [51]. Training efforts include instruction on using the interface, interpreting video, controlling the robot, coordinating with other members of the team, and staying safe while operating the robot in a hostile environment. Such training is often given to people who are already experts in their fields (such as search and rescue), but is also given to people who may be relatively inexperienced. In the military, police, and space domains, training programs may be complemented by selection criteria to help determine which individuals are likely to be better (in some sense) at managing a robot [181]. Selection appears to have received more attention for air robots than for ground robots.
In contrast to interactions with remote robots, many applications involving proximate robots are designed to produce learning or behavioral responses in humans. Therapeutic and social robots are designed to change, educate, or train people, especially in long-term interactions [168, 182, 183]. People also adapt to service robots over the long term and over a wide range of tasks [184], and there is growing evidence that many long-term interactions require mutual adaptation, including by human bystanders [185-187]. Importantly, culture appears to influence both long-term and short-term adaptation, at least with respect to accepting interactions with a robot [60, 142, 188-191].
Training Designers. Importantly, an often overlooked area is the training of HRI researchers and designers in the procedures and practices of those whom they seek to help. Important examples of training researchers include Murphy’s workshops on search and rescue robotics [192], tutorials and workshops on methodologies for understanding a work-practice domain and conducting field studies [193], tutorials for young researchers on search and rescue [42, 43], and tutorials and workshops on metrics and experiment design for robot applications [194].
Training Robots. It is tempting to restrict training to the education of the human side of HRI, but this would be a mistake given current HRI research. In HRI, robots are also learning, both offline as part of the design process [31, 195] and online as part of interaction, especially long-term interaction [127, 196]. Such learning includes improving perceptual capabilities through efficient communication between humans and robots [127, 196-198], improving reasoning and planning capabilities through interaction [199, 200], and improving autonomous capabilities [201]. Approaches to robot learning include teaching or programming by demonstration [202-207], task learning [127, 195, 200], and skill learning, including social, cognitive, and locomotion skills [136, 199, 208-210]. Some researchers are exploring biologically inspired learning models, including how teaching among humans or social animals can be used to train a robot [208, 211]; others are exploring how learning can become more efficient if it leverages information about how the human brain learns in very few trials [212].
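To convey the flavor of programming by demonstration, the sketch below shows one of its simplest possible forms: a robot records state-action pairs while a human teacher operates it, then imitates the action demonstrated in the most similar recorded state. The class, the toy obstacle-avoidance task, and all identifiers are our own illustrative assumptions, not the method of any specific system cited above; real systems use far richer state representations and learning algorithms.

```python
# A minimal, hypothetical sketch of learning from demonstration.
# A human teleoperates a robot, logging (state, action) pairs; the
# robot later imitates the nearest demonstrated state (1-NN policy).

import numpy as np

class DemonstrationPolicy:
    """1-nearest-neighbor policy over recorded (state, action) pairs."""

    def __init__(self):
        self.states = []   # robot states observed during demonstration
        self.actions = []  # the human teacher's action in each state

    def record(self, state, action):
        """Store one demonstrated state-action pair."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(action)

    def act(self, state):
        """Return the action demonstrated in the most similar state."""
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s in self.states]
        return self.actions[int(np.argmin(dists))]

# Usage: a teacher demonstrates obstacle avoidance; the state is a
# pair of (left, right) range readings, larger meaning more clearance.
policy = DemonstrationPolicy()
policy.record(state=[1.0, 0.2], action="turn_left")   # obstacle on right
policy.record(state=[0.2, 1.0], action="turn_right")  # obstacle on left
policy.record(state=[1.0, 1.0], action="go_forward")  # path is clear

print(policy.act([0.9, 0.3]))  # -> "turn_left"
```

Even this toy version exhibits the key property of demonstration-based training: the robot's competence comes from the teacher's examples rather than from hand-coded control rules, so the same mechanism generalizes to new tasks simply by demonstrating them.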
Interestingly, it can be argued that providing support for efficient programming or knowledge management systems is an important aspect of training robots in HRI [39, 130]. Additionally, it can be argued that sensitizing a robot to issues of culture and etiquette allows it to adapt to slowly changing human norms of behavior [22, 213, 214].