In this section, we briefly survey events and work that have made modern HRI possible. Clearly, the development of robots was the essential first step. Although robot technology was primarily developed in the mid and late 20th century, it is important to note that the notion of robot-like behavior and its implications for humans have been around for centuries in religion, mythology, philosophy, and fiction. The word “robot” originates from the Czech word robota, which means work [2]. “Robot” appears to have first been used in Karel Čapek’s 1920 play Rossum’s Universal Robots, though this was by no means the earliest example of a human-like machine. Indeed, Leonardo da Vinci sketched a mechanical man around 1495, a design that has been evaluated for feasibility in modern times [3]. Pre-dating da Vinci’s humanoid robot are automata and mechanical creatures from ancient Egypt, Greece, and China. The Iliad refers to golden maids that behave like real people [4]. The idea of the golem, an “artificial being of Hebrew folklore endowed with life,” has been around for centuries [2] and was discussed by Wiener in one of his books [5]. Ancient Chinese legends and compilations mention robot-like creations, such as the story from the Western Zhou Dynasty (1066BC-771BC) that describes how the craftsman Yanshi presented a humanoid. The creation looked and moved so much like a human that when it winked at the concubines, it was necessary to dismantle it to prove that it was an artificial creation [6]. Similar robotic devices, such as a wooden ox and floating horse, were believed to have been invented by the Chinese strategist Zhuge Liang [7], and a famous Chinese carpenter was reported to have created a wooden/bamboo magpie that could stay aloft for up to three days [8]. The ancient scientist Zhang Heng (78AD-139AD) also invented a robotic cart that measured the distance traveled: a wooden humanoid pounded a drum every time the cart had traveled 10 li (equivalent to 6 km) and struck a bell at the 100 li mark [331]. During the Tang Dynasty (618AD-907AD), a craftsman named Yang Wulian created a humanoid resembling a monk, which was capable of begging for alms while holding a copper bowl; it even bowed after collecting money and put the money away when the bowl was full [332]. In European literature, the book Gulliver’s Travels describes how Gulliver was conceived to be “a piece of clockwork … contrived by some ingenious artist” when he was in the land of giants [333]. More recently, robotic-like automata, including Vaucanson’s duck, have been created [9]. Mechanical birds appear in the 1933 poem Byzantium by W. B. Yeats [10], and robots have had a large presence in science fiction literature, most notably Asimov’s works [11]. Indeed, Asimov’s Laws of Robotics appear to be the first designer guidelines for HRI.
Early robot implementations were remotely operated devices with no or minimal autonomy. In 1898, Nikola Tesla demonstrated a radio-controlled boat (see Figure 1), which he described as incorporating “a borrowed mind.” In fact, Tesla controlled the boat remotely. His invention, which he generalized to many different types of vehicles, was described in U.S. Patent 613,809, “Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles.” Tesla hypothesized, “…you see there the first of a race of robots, mechanical men which will do the laborious work of the human race.” He even envisioned one or more operators simultaneously directing fifty or a hundred vehicles.
Figure 1. Tesla’s boat. Available from: http://www.brotherhoodoflife.com/Tesla-boat.jpg. Used with permission.
Other examples include the Naval Research Laboratory’s “Electric Dog” robot from 1923, attempts to remotely pilot bombers during World War II, the creation of remotely piloted vehicles, and mechanical creatures designed to give the appearance of life. As technology has evolved, the capabilities of remotely operated robots have grown (see [12] for a brief history). This is perhaps nowhere more evident than in the very successful application of unmanned underwater vehicles, which have been used to explore beneath the ocean’s surface to find lost ships, explore underwater life, assist in underwater construction, and study geothermal activity [13].
Complementing the advances in robot mechanics, research in artificial intelligence has attempted to develop fully autonomous robots. The most commonly cited example of an early autonomous robot is Shakey, which was capable of navigating through a block world under carefully controlled lighting conditions at the glacially slow speed of approximately 2 meters per hour [14]. Many agree that these early works laid a foundation for much of what goes on in hybrid control architectures today [15, 16].
A breakthrough in autonomous robot technology occurred in the mid-1980s with work in behavior-based robotics [17, 18]. Indeed, it could be argued that this work is a foundation for many current robotic applications. Behavior-based robotics breaks with the monolithic sense-plan-act loop of a centralized system and instead uses distributed sense-response loops to generate appropriate responses to external stimuli. The combination of these distributed responses produces “emergent” behavior that can be very sophisticated and robust to changes in the environment. However, the real breakthrough for autonomy as it applies to HRI was the emergence of hybrid architectures; these architectures simultaneously allow sophisticated reactive behaviors that provide fundamental robot capabilities along with the high-level cognitive reasoning required for complex and enduring interactions with humans. Robot behaviors initially focused on mobility, but more recent contributions seek to develop lifelike anthropomorphic behaviors [19], acceptable behaviors for household robots [20], and desirable behaviors for robots that follow, pass, or approach humans [21-23].
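To make the contrast with a monolithic sense-plan-act loop concrete, the following Python sketch illustrates the flavor of a behavior-based controller with priority-based arbitration. It is a minimal, hypothetical example (the behaviors, sensor keys, and Command type are illustrative only), not a reproduction of any architecture cited above.

```python
# Minimal sketch of distributed sense-response loops with priority arbitration.
# All names (Command, sensor keys, behaviors) are hypothetical and illustrative.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Command:
    linear: float   # forward speed (m/s)
    angular: float  # turn rate (rad/s)


# Each behavior is an independent sense-response loop: it inspects the current
# sensor readings and either proposes a motor command or stays silent (None).
def avoid_obstacle(sensors: dict) -> Optional[Command]:
    if sensors["front_range_m"] < 0.5:            # obstacle close ahead
        return Command(linear=0.0, angular=0.8)   # stop and turn away
    return None


def follow_wall(sensors: dict) -> Optional[Command]:
    if sensors["right_range_m"] < 1.0:            # wall detected on the right
        return Command(linear=0.3, angular=0.0)   # keep moving along it
    return None


def wander(sensors: dict) -> Optional[Command]:
    return Command(linear=0.3, angular=0.1)       # default exploratory motion


# Behaviors listed from highest to lowest priority; the arbiter picks the first
# behavior that wants control, so safety-critical responses override the rest.
BEHAVIORS = [avoid_obstacle, follow_wall, wander]


def arbitrate(sensors: dict) -> Command:
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return Command(0.0, 0.0)  # no behavior active: stay still


if __name__ == "__main__":
    # One control cycle with a nearby obstacle: avoid_obstacle wins arbitration.
    print(arbitrate({"front_range_m": 0.3, "right_range_m": 2.0}))
```

The point of the sketch is that no central planner exists: the “emergent” behavior arises from which simple sense-response rules fire on a given cycle, which is what makes such controllers robust to environmental change.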
The development of robust robot platforms and communications technologies for extreme environments has been accomplished by NASA and other international space agencies. Space agencies have had several high-profile robotic projects, designed with an eye toward safely exploring remote planets and moons. Examples include the early successes of the Soviet Lunokhods [12] and NASA’s more recent success in exploring the surface of Mars [24, 25]. Importantly, many of the failures have been the result of software problems rather than mechanical failures. Complementing NASA’s fielded robots have been several robots developed and evaluated on Earth [26]. Robonaut is a well-known example of successful teleoperation of a humanoid robot [27], and this work is being extended at a rapid pace to include autonomous movement and reasoning. Autonomous robots that have anthropomorphic dimensions, mimic human-like behaviors, and include human-like reasoning are known as humanoid robots; work in this area has been ongoing for over a decade and is rapidly expanding [27-33].
Emerging from the early work in robotics, human factors experts have given considerable attention to two paradigms for human-robot interaction: teleoperation and supervisory control. At the teleoperation extreme, a human remotely controls a mobile robot or robotic arm. With supervisory control, a human supervises the behavior of an autonomous system and intervenes as necessary. Early work was usually performed by people who were interested not only in robotics but also in factory automation, aviation, and intelligent vehicles. Work in these areas is typified by Sheridan’s seminal contributions [34, 35] and other significant contributions from human factors researchers [36, 37].
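The distinction between the two paradigms can be caricatured in a few lines of code. The sketch below is purely illustrative (SimpleRobot and the operator callbacks are hypothetical, not any published interface): under teleoperation the human closes the low-level control loop on every cycle, whereas under supervisory control the human specifies a goal and intervenes only when the robot requests help.

```python
# Hypothetical contrast between teleoperation and supervisory control.
# SimpleRobot and the operator callbacks are illustrative stand-ins only.

import random


class SimpleRobot:
    """Toy 1-D robot: position moves toward a goal, occasionally asks for help."""

    def __init__(self):
        self.position = 0.0
        self.goal = None

    def execute(self, velocity: float) -> None:
        self.position += velocity

    def set_goal(self, goal: float) -> None:
        self.goal = goal

    def goal_reached(self) -> bool:
        return abs(self.goal - self.position) < 0.1

    def needs_help(self) -> bool:
        return random.random() < 0.05        # rarely, the robot requests help

    def step(self) -> None:
        self.execute(0.1 if self.goal > self.position else -0.1)


def teleoperation(robot: SimpleRobot, read_joystick, n_cycles: int) -> None:
    """Teleoperation: the human supplies every low-level motion command."""
    for _ in range(n_cycles):
        robot.execute(read_joystick())


def supervisory_control(robot: SimpleRobot, goal: float, ask_operator) -> None:
    """Supervisory control: human sets the goal, intervenes only on request."""
    robot.set_goal(goal)
    while not robot.goal_reached():
        if robot.needs_help():
            robot.execute(ask_operator(robot.position))  # human intervention
        else:
            robot.step()                     # autonomous progress toward goal


if __name__ == "__main__":
    robot = SimpleRobot()
    teleoperation(robot, read_joystick=lambda: 0.1, n_cycles=5)
    supervisory_control(robot, goal=2.0, ask_operator=lambda pos: 0.1)
    print(f"final position: {robot.position:.1f}")
```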
Every robot application appears to have some form of interaction, even those that might be considered “fully autonomous.” For a teleoperated robot, the type of interaction is obvious. For a fully autonomous robot, the interaction may consist of high-level supervision and direction of the robot, with the human providing goals and the robot maintaining knowledge about the world, the task, and its constraints. In addition, the interactions may be through observation of the environment and implicit communication, for example, by the robot responding to what its human peer is doing. Taking a very broad and general view of HRI, one might consider that it includes developing algorithms, programming, testing, refining, fielding, and maintaining the robots. In this case, interaction consists primarily of discovering and diagnosing problems, solving these problems, and then reprogramming (or rewiring) the robot. The difference between this type of “programming-based” interaction and modern HRI is that the field currently emphasizes efficient and dynamic interactions rather than just infrequent interactions. However, some researchers are addressing programming-based interaction by exploring efficient programming paradigms to support robot development [128, 327].