7: Relation to Other Fields

Although we have framed HRI as a new field in this paper, HRI has strong ties to previous and ongoing work in telerobotics, intelligent vehicle systems, supervisory control, and aviation. In this section, we review many of the stronger ties to these fields. We begin with the most relevant: telerobotics and supervisory control.

A. Telerobotics and Teleoperation
Sheridan’s papers and books on telerobotics and supervisory control are perhaps the most influential in the field. In 1992, his book outlined the state of the art in human-robot interaction, with an emphasis on open problems, mathematical models, and information flow [35]. This book was followed by a 2002 updated survey and framework of human factors for the general human-machine interaction problem [34].

Even more influential than his books, perhaps, is Sheridan and Verplank’s levels of automation in human-machine interaction. These 10 levels of automation span the range from direct control through decision support to supervisory control [77]. More recently, Parasuraman and Wickens teamed with Sheridan to extend these 10 levels of automation beyond decision support to other aspects of human-machine interaction [225]. Levels of automation foreshadow more recent concepts of dynamic autonomy in all its forms [78, 298].
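The ten levels can be summarized as a simple lookup table. The sketch below paraphrases the levels informally (the wording is ours, not Sheridan and Verplank's exact text), and the `regime` function is our own coarse, illustrative mapping of levels onto the direct-control/decision-support/supervisory-control range described above.

```python
# Informal paraphrase of Sheridan and Verplank's ten levels of automation.
LEVELS_OF_AUTOMATION = {
    1: "Human does everything; computer offers no assistance.",
    2: "Computer offers a complete set of action alternatives.",
    3: "Computer narrows the alternatives to a few.",
    4: "Computer suggests a single action.",
    5: "Computer executes the suggestion if the human approves.",
    6: "Computer allows the human a limited time to veto before acting.",
    7: "Computer acts autonomously, then necessarily informs the human.",
    8: "Computer acts and informs the human only if asked.",
    9: "Computer acts and informs the human only if it decides to.",
    10: "Computer decides and acts fully autonomously, ignoring the human.",
}

def regime(level: int) -> str:
    """Coarsely map a level onto the control range the survey describes.
    The boundaries here are illustrative, not canonical."""
    if level <= 1:
        return "direct control"
    if level <= 5:
        return "decision support"
    return "supervisory control"
```

Encoding the taxonomy this way makes the span from manual to fully autonomous operation explicit: moving up the table shifts initiative, and the obligation to communicate, from the human to the machine.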

In addition to these seminal works, there are numerous examples of remote robot operation. One example comes from attempts during World War II to remotely control aircraft. This led to the study of remotely piloted vehicles [12], a precursor of more modern work on the human factors of unmanned/uninhabited aerial vehicles [299].

Complementing the work on remotely piloted aircraft is work on unmanned underwater vehicles (UUVs). This work includes both military and scientific applications, and spans topics of remote visualization, telepresence, and information display [292].

B. Human Factors and Automation Science
The field of human factors emerged as the confluence of engineering psychology, ergonomics, and accident analysis. Human factors work relevant to HRI includes important lessons from thought-provoking papers such as Bainbridge’s “Ironies of Automation” [216] and Hancock’s position paper on the make-up of HRI teams [151]. Human factors work is motivated by numerous stories, sometimes humorous and sometimes sobering, from years of humans interacting with automation in various forms [300].

The human factors literature has produced key concepts of interaction, such as mental workload [301, 302], situation awareness [105], mental models [303, 304], and trust in automation [305]. It also includes several themes, frameworks, and models that provide a solid foundation for describing and predicting responses to human-robot interaction. These contributions include the seminal work of Rasmussen, who presented a hierarchy of interaction comprising knowledge-based, rule-based, and skill-based interactions [306]. Rasmussen’s hierarchy is a human factors complement to hierarchical and intelligent control [63, 87, 307]. Contributions also include general principles of cognitive ergonomics, with particularly powerful ideas such as Wickens’s Multiple Resource Theory [36]. Complementing these models are interaction phenomena common enough that David Woods has elevated them to the status of laws [308].

Rich as these models and laws are, they cannot substitute for practical real-world observation. This point was strongly made by Hutchins’s book “Cognition in the Wild” [309]. In the spirit of real-world observation, the field of ethnography has developed a set of methodologies for recording observations in real-world settings, and some ethnographers have tried to translate these observation and summarization methods into tools for designing interventions [193, 310, 311].

Growing out of the need to understand the goals, tasks, and information flow of existing processes, a series of methodologies have emerged that produce formalized models for how “things get done.” These methodologies include Goal-Directed Task Analysis, Cognitive Task Analysis, and Cognitive Work Analysis [105, 215]. These methodologies produce models of goals, tasks, and information flow, which are being used in HRI [312]. Complementing these high-level models are cognitive models of the mental processes used to accomplish tasks, and activity analyses of existing work practices. The cognitive models and activity analyses are especially interesting to HRI, because they can be used not only as models of existing processes, but also as tools to generate behaviors such as perspective-taking and planning [199].

It is worth noting that cognitive psychology and social psychology offer perspectives and insights that are distinct from traditional human factors. There is a trend in HRI to include cognitive and social scientists in collaborative research efforts with roboticists, human factors engineers, and experts in human-computer interaction.

Given the rich history of human factors and the recent emergence of HRI, it is unfortunate and perhaps inevitable that some relevant human factors work is called by different names in different fields. Examples include adjustable autonomy versus Inagaki’s situation-adaptive autonomy [313, 314], and augmented reality/virtuality versus synthetic vision [315].

C. Aviation and Air Traffic Control
Modern aircraft are among the most capable semi-autonomous systems in use. Moreover, because of the safety-critical nature of aviation, aircraft systems must be extremely robust and reliable. Careful human factors analyses are often performed to justify the introduction of a new aircraft system or a change to an existing one. From one perspective, an aircraft is a very capable type of robot, albeit one that happens to carry the human operator.

As a result, HRI has many lessons that it can learn from aviation, both in terms of useful technologies and careful human factors analysis. Relevant examples include the ground proximity warning system, which uses multi-modal communications coupled with robust autonomy to prevent controlled flight into terrain [316]. Tunnel-in-the-sky displays can increase situation awareness by helping pilots to understand how control choices will affect the trajectory of the aircraft [317]. Problems caused by mode confusion, by the operator being out of the loop, by vigilance, by excessive workload, and by team coordination issues have all received attention and been mitigated by procedures and technologies.

As robots become more capable, an important issue is how many robots can be managed by a single human. This question makes another aspect of aviation relevant to HRI, namely, human factors work done with the air traffic control (ATC) problem. ATC is a problem that involves sequencing, deconflicting, and handing off multiple highly capable systems [318]. Indeed, the autonomy level of these aircraft is extremely high, since each combines a trained, intelligent human operator with onboard automation. Nevertheless, ATC imposes high workloads on operators. Careful human factors analyses have been performed and mitigating technologies have been developed [319, 320]. Because of the safety-critical nature of ATC, many potentially useful technologies have not been incorporated into ATC systems. Even so, some ATC-related research and development could serve as a prototype for HRI problems.

There are three other aspects of aviation and ATC that are very relevant to HRI. First, ATC training and certification programs have many desirable attributes that could be imitated in HRI. Second, because aviation incidents are relatively rare and, when they occur, can damage career prospects, the aviation industry has developed anonymous reporting procedures, with reports kept in a database of incidents. As HRI matures, it could be useful to create a standardized reporting system to identify and mitigate problems that frequently arise. Third, the aviation industry has a strong set of standards. There have been recent efforts to bring the standardization process to HRI [284], though it is important that these efforts do not impose undue restrictions on creativity and design.

D. Intelligent Vehicle Systems
The field of intelligent vehicle systems (IVS) has received considerable attention in recent decades, including the emergence of several conferences and journals [321-324]. IVS shares many problems with HRI, including designing autonomy that supports human behavior, creating attention-management aids, supporting planning and navigation under high-workload conditions, mitigating errors, and creating useful models and metrics [325-327]. Indeed, a strong case can be made that modern automobiles are just semi-autonomous robots that carry people.

IVS include not only automobiles but also trains, buses, semi-trucks, and other forms of public transit [328]. The users of IVS range from the highly trained to the untrained and sometimes even uninformed. Moreover, IVS must be designed to be safety-critical and time-critical, and to operate under high-workload conditions. The presence of untrained operators and high-demand tasks produces technologies that may be relevant for those aspects of HRI that require interaction with bystanders or naïve operators.

E. Human-Computer Interaction (HCI)
As the field of HRI has grown, it has seen many contributions from researchers in HCI and it has been nurtured by HCI organizations. For example, the first International Conference on Human-Robot Interaction was sponsored by ACM’s Computer-Human Interaction Special Interest Group [329]. HRI research is attractive to many members of the HCI community because of the unique challenges posed by the field. Of particular interest is the fact that robots occupy physical space. This offers unique challenges not found in desktop metaphors or even pervasive computing. Physical location in a 3D space imposes strong requirements on how information is displayed in remote operation, and even stronger requirements on how space is shared when robots and humans occupy the same space. HRI benefits from contributions from HCI researchers in methodologies, design principles, and computing metaphors.

F. Artificial Intelligence and Cybernetics
Because of their emphasis on designing intelligence for human-built systems, the fields of artificial intelligence (AI) and cybernetics have a great deal of relevance to the field of HRI. Intelligence and autonomy are closely aligned. Indeed, when experimenters want to give the illusion of truly intelligent robots, it is common to use a “Wizard of Oz” design wherein experiment participants believe that they are controlling an intelligent robot but where in reality the commands that they issue are received and translated into teleoperation commands by a hidden human [330].
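The routing at the heart of a Wizard-of-Oz study can be sketched in a few lines: the participant issues a command, a hidden human covertly translates it into a teleoperation command, and the exchange is logged for later analysis. The class name, command strings, and wizard function below are all hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class WizardOfOzRelay:
    """Minimal sketch of a Wizard-of-Oz loop: the participant believes the
    robot interprets commands autonomously, but each utterance is routed to
    a hidden human 'wizard' who chooses the actual teleoperation command."""
    wizard: Callable[[str], str]                      # hidden human: utterance -> teleop command
    log: List[Tuple[str, str]] = field(default_factory=list)

    def participant_command(self, utterance: str) -> str:
        teleop_cmd = self.wizard(utterance)           # wizard translates covertly
        self.log.append((utterance, teleop_cmd))      # record for post-hoc analysis
        return teleop_cmd                             # sent to the robot as if autonomous

# Example: a wizard who maps loose natural language onto a fixed command set.
relay = WizardOfOzRelay(
    wizard=lambda u: "DRIVE_FORWARD" if "go" in u.lower() else "STOP")
relay.participant_command("Go to the door")           # -> "DRIVE_FORWARD"
```

The design choice worth noting is that the participant-facing interface is identical to what a genuinely autonomous system would present; only the logged wizard channel distinguishes the study from real autonomy.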

HRI frequently uses concepts from AI in the design of autonomy algorithms. Moreover, AI techniques have informed and been informed by concepts from cognitive science. For example, the ACT-R system, a popular tool for modeling cognition, uses AI-like production rules. Such cognitive models have increasingly become relevant to HRI, both as tools for modeling how a human might interact and as the basis for generating robot behavior [199].
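A production-rule cycle of the kind used in ACT-R-style cognitive models can be illustrated with a toy match-fire loop over a working memory of facts. This is an illustrative sketch of the general idea, not the ACT-R architecture itself, and the example rules for a robot deciding whether to ask an operator for help are entirely hypothetical.

```python
# Toy production-rule cycle: rules match against a working-memory dict and
# fire actions that modify it, until no rule matches (quiescence).

def run_productions(memory: dict, rules, max_cycles: int = 10) -> dict:
    """Repeatedly fire the first rule whose condition matches memory."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break           # one rule fires per cycle
        else:
            break               # no rule matched: stop
    return memory

# Hypothetical rules for a robot deciding how to respond to an obstacle.
rules = [
    (lambda m: m.get("obstacle") and not m.get("plan"),
     lambda m: m.update(plan="ask-operator")),
    (lambda m: not m.get("obstacle") and not m.get("plan"),
     lambda m: m.update(plan="proceed")),
]

state = run_productions({"obstacle": True}, rules)
# state["plan"] is now "ask-operator"
```

Used as a model of the human, such rules predict what an operator will do next; used generatively, the same machinery can drive robot behavior, which is the dual role the survey notes for cognitive models.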

Although sometimes justifiably treated as a separate field from AI, augmented reality and telepresence have much relevance to HRI. Augmented reality techniques are used to support remote interactions in NASA’s Robonaut [27]. Augmented virtuality and mixed reality are variations of augmented reality that have found application in HRI [109]. Some suggest that telepresence, the natural extension of human awareness of a remote space, is a goal of interface design in HRI, though others note that a feeling of remote presence is not necessary provided that information is displayed in a way that supports intentional action in the remote space [109].

Another AI-related area that has developed into a separate field of study is computer vision. Computer vision algorithms are frequently used to translate camera imagery into percepts that support autonomy. Moreover, these algorithms are also used to provide enhanced awareness of information through the use of image stabilization, mosaics, automated target recognition, and image enhancement.

Many AI techniques are used in computer games. These games, some of which are very sophisticated, provide a probe into the levels of autonomy needed to support useful interactions. Given these levels of autonomy, information is integrated and presented to operators in several different forms; evaluating these forms of information presentation provides guidelines for interface designers in HRI [172]. Sophisticated multi-player online games may become useful in understanding how natural language can be used to support HRI and how human-robot teams should interact.

Finally, machine learning is a subfield of AI that is proving very useful in robotics and HRI. Machine learning can be used to develop robot behaviors, robot perception, and multi-robot interaction [85, 195, 196]. Interactive learning has received attention as a way to capture and encode useful robot behaviors, to provide robot training, and to improve perception. Interactive techniques with intelligent systems also appear elsewhere in AI. Interactive proof systems, interactive planners, and “programming by reward” in machine learning are all examples of how human input can be used in collaboration with AI algorithms.
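The "programming by reward" idea can be sketched as a simple interactive loop: the robot repeatedly picks an action, a human teacher supplies a scalar reward, and running-average value estimates shape future choices. This is a minimal epsilon-greedy sketch of the general technique, not drawn from the cited works; the action names and the teacher function are hypothetical.

```python
import random

def train_by_reward(actions, human_reward, episodes=200, epsilon=0.2, seed=0):
    """Interactive 'programming by reward' sketch: human feedback shapes
    per-action value estimates via an incremental running mean."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(actions)                 # explore occasionally
        else:
            a = max(actions, key=values.get)        # exploit current estimates
        r = human_reward(a)                         # teacher's scalar feedback
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]    # incremental mean update
    return values

# Hypothetical teacher who rewards 'wave' over 'idle'.
values = train_by_reward(["wave", "idle"],
                         lambda a: 1.0 if a == "wave" else 0.0)
# values["wave"] should exceed values["idle"]
```

In practice the teacher's feedback is noisy and delayed, which is exactly what makes interactive learning an HRI problem rather than a pure optimization problem.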

G. Haptics and Telemanipulation
Before concluding the paper, it is important to note that much of the field of haptics and telemanipulation is aligned with the goals and challenge problems of HRI. However, the current research culture tends to treat haptics/telemanipulation as separate from HRI, perhaps because of the longer history of the field of haptics. Since the two fields have much to learn from each other, it is desirable that the research communities increase their interactions.