Teams

HRI problems are not restricted to a single human and a single robot, though this is certainly one important type of interaction. Robots used in search and rescue, for example, are typically managed by two or more people, each with a specialized role in the team [144, 145]. Similarly, Unmanned/Uninhabited Air Vehicles (UAVs) are typically managed by at least two people: a “pilot”, who is responsible for navigation and control, and a sensor/payload operator, who is responsible for managing cameras, sensors, and other payloads [146, 147].

A question that has received considerable attention, but which few scientific studies address directly, is how many remote robots a single human can manage. In general, the answer depends on factors such as the robot's level of autonomy (teleoperation requires vastly more direct attention from the human), the task (which determines the type and quantity of data returned to the human and the attention and cognitive load demanded of the human), and the available modes of communication.

In the search and rescue domain, Murphy [144] asserts that the demands of the task, the form factor of the robot, and the need to protect robot operators require at least two operators, an observation that has received strong support from field trials using mature technologies [148] and partial support in search and rescue competitions using less mature but more ambitious technologies [145]. In other domains, some assert that, given sufficiently sophisticated autonomy and possibly coordinated control, a single human can manage more than one robotic asset [149, 150], though the task may still require another human to interpret sensor information. Still others assert that the problem is ill-formed when robots are used primarily as an information-gathering tool [151]. An intermediate position holds that the right question is not how many robots a single human can manage, but rather how many humans it takes to efficiently manage a fixed number of robots, allowing for adaptable autonomy and dynamic handoffs between humans [152].

One measure that has received some attention in the literature is fan-out, which represents an upper bound on the number of independent, homogeneous robots that a single person can manage [153, 154], and which is supported by a limited set of estimation techniques [69]. Some work has been done to refine fan-out so that it applies to teams of heterogeneous robots [155] and to tighten the bound by identifying various aspects of interaction [150]. In its present form, however, fan-out is only a design guideline; it is insufficient, for example, to provide a trigger strategy [78] for adaptive automation. Alternatives to fan-out include predicting the performance of a team of heterogeneous robots from measurements of neglect tolerance and interaction times [156].
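To make the fan-out bound concrete, the following sketch computes it from neglect time (how long a robot can be left unattended while maintaining acceptable performance) and interaction time (how long servicing one robot takes), following the common formulation in terms of those two quantities; the function name and the example numbers are illustrative only and are not drawn from the cited studies.

```python
# Illustrative sketch (not from the cited work): fan-out estimated from
# neglect time and interaction time for independent, homogeneous robots.

def fan_out(neglect_time_s: float, interaction_time_s: float) -> float:
    """Upper bound on the number of robots one operator can keep serviced."""
    return (neglect_time_s + interaction_time_s) / interaction_time_s

# Example: a robot that can be neglected for 60 s and needs 15 s of servicing
# suggests an operator could, at best, keep up with about 5 such robots.
print(fan_out(neglect_time_s=60.0, interaction_time_s=15.0))  # -> 5.0
```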

In addition to the number of humans and robots in a team, a key problem is the organization of the team [157, 158]. One important organizational question is who has authority to make certain decisions: robot, interface software, or human? Another important question is who has authority to issue instructions or commands to the robot and at what level: strategic, tactical, or operational? A third important question is how conflicts are resolved, especially when robots are placed in peer-like relationships with multiple humans. A fourth question is how roles are defined and supported: is the robot a peer, an assistant, or a slave; does it report to another robot, to a human, or is it fully independent?
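These organizational questions can be made concrete as a simple data model. The sketch below is purely illustrative: the role names, command levels, and field names are assumptions rather than a taxonomy from the cited work, but they show how roles, reporting lines, and decision authority might be represented explicitly in a team description.

```python
# Illustrative sketch only; names and fields are assumptions, not a standard schema.
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PEER = "peer"
    ASSISTANT = "assistant"
    SUBORDINATE = "subordinate"

class CommandLevel(Enum):
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"

@dataclass
class TeamMember:
    name: str
    role: Role
    reports_to: str | None             # another robot, a human, or None if independent
    decision_authority: set[str]       # decisions this member may make on its own
    command_levels: set[CommandLevel]  # levels at which it may issue commands
```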

Spanning all of these questions is whether the organizational structure is static or dynamic, with changes in responsibilities, authorities, and roles. In one study, operators managed multiple robots in a search and rescue domain under either manual or coordinated control, and the results strongly favored coordinated control [159]. In another study, a human working on a construction task with a team of heterogeneous robots managed four autonomy configurations, including two variations of sliding autonomy [152]; the tradeoffs between time to completion, quality of behavior, and operator workload were strongly evident. This result emphasizes the importance of using dynamic autonomy when the world is complex and varies over time. In a third study, researchers explored how making coordination between robots explicit can reduce failures and improve consistency relative to traditional interfaces [160]. In a fourth study, researchers explored the minimal amount of gestural information required to command various formations to a team of robots [161].
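As a rough illustration of how sliding autonomy might be triggered dynamically, the sketch below shifts the autonomy level based on an estimate of operator workload. The level names, thresholds, and workload measure are assumptions chosen for illustration and do not reproduce the mechanisms used in the cited studies.

```python
# Minimal sketch of a workload-based trigger for sliding autonomy; the levels,
# thresholds, and workload estimate are illustrative assumptions only.

AUTONOMY_LEVELS = ["teleoperation", "shared_control", "supervisory", "full_autonomy"]

def adjust_autonomy(current: str, operator_workload: float,
                    high: float = 0.8, low: float = 0.3) -> str:
    """Shift autonomy up when the operator is overloaded, down when underloaded."""
    i = AUTONOMY_LEVELS.index(current)
    if operator_workload > high and i < len(AUTONOMY_LEVELS) - 1:
        return AUTONOMY_LEVELS[i + 1]   # hand more of the task to the robot
    if operator_workload < low and i > 0:
        return AUTONOMY_LEVELS[i - 1]   # give the operator more direct control
    return current
```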

In many existing and envisioned problems, HRI will include not only humans and robots interacting with each other, but also coordinating with software agents. The simplest form of this is a three-agent problem that occurs when an intelligent interface is the intermediary between a human and a remote robot [162]. In this problem, the interface agent can monitor and categorize human behavior, monitor and detect problems with the robot, and support the human when workload levels, environmental conditions, and robot capabilities change. A more complicated form of this teaming arises in anticipated NASA applications, where multiple distributed humans will interact with robots and with software agents that coordinate mission plans, human activities, and system resources [163].
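A minimal sketch of this three-agent arrangement, assuming hypothetical monitoring hooks on the robot (get_status, set_autonomy) and a hypothetical operator model (estimated_workload), might look as follows; none of these names correspond to an actual system from the cited work, and the sketch only illustrates the mediation idea.

```python
# Sketch of an interface agent mediating between a human and a remote robot;
# the robot and operator-model interfaces are hypothetical placeholders.

class InterfaceAgent:
    def __init__(self, robot, operator_model):
        self.robot = robot                    # handle to the remote robot
        self.operator_model = operator_model  # estimates operator workload, intent, etc.

    def step(self):
        # Monitor the robot and flag anomalies before they reach the operator.
        status = self.robot.get_status()
        if status.get("fault"):
            self.notify_operator(f"Robot fault detected: {status['fault']}")

        # Adapt the division of labor to the operator's current workload.
        if self.operator_model.estimated_workload() > 0.8:
            self.robot.set_autonomy("supervisory")  # off-load low-level control

    def notify_operator(self, message: str) -> None:
        print(message)  # placeholder for the actual user interface
```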

A final issue that is starting to gain attention is the role of the human [164]. While much of the discussion up to this point has concerned humans and robots performing a task together, there are cases where the robot may have to interact with bystanders or with people who are not expecting to work with a robot. Examples include the urban search and rescue robot that comes across a person to be rescued, a military robot in an urban environment that must interact with civilians, and a health assistant robot that must help a patient and interact with visitors. In each case, the robot's role with respect to these humans must be taken into account. The role of the human will be discussed in more detail in Section IV.