Plenary Speakers

Embodied Language Learning and Interaction with the Humanoid Robot iCub

Dr. Angelo Cangelosi, University of Plymouth

Recent theoretical and experimental research on action and language processing clearly demonstrates the close interaction between language and action, and the role of embodiment in cognition. These studies have important implications for the design of communication and linguistic capabilities in cognitive systems and robots, and have led to the new interdisciplinary approach of Cognitive Developmental Robotics. In the European FP7 project “ITALK” we follow this integrated view of action and language learning for the development of cognitive capabilities in the humanoid robot iCub. The robot’s cognitive development is the result of interaction with the physical world (e.g. learning to manipulate objects) and of cooperation and communication between robots and humans (Cangelosi et al., 2010). During the talk we will present ongoing results from iCub experiments. These include human-robot interaction experiments with the iCub on embodiment biases in early word acquisition (the “Modi” experiment; Morse et al., 2010), studies on word-order cues for lexical development and the sensorimotor bases of action words (Marocco et al., 2010), and recent experiments on action and language compositionality. The talk will also introduce the iCub simulator, an open-source software tool for performing cognitive modelling experiments in simulation (Tikhanoff et al., in press).
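For readers who would like to try the simulator themselves, the minimal sketch below shows one common way of talking to the open-source iCub simulator through the YARP Python bindings. The port name /icubSim/world, the local port name, and the world-command syntax are assumptions about a standard iCub_SIM installation and may differ between versions.

# Minimal sketch (assumed setup): create an object in the iCub simulator world via YARP RPC.
import yarp

yarp.Network.init()                                   # connect to the YARP name server

world = yarp.RpcClient()
world.open("/example/world:o")                        # arbitrary local port name (assumption)
yarp.Network.connect("/example/world:o", "/icubSim/world")  # simulator world port (assumed name)

cmd, reply = yarp.Bottle(), yarp.Bottle()
# "world mk sbox <size x y z> <pos x y z> <colour r g b>" -- assumed command syntax
cmd.fromString("world mk sbox 0.05 0.05 0.05 0.0 0.6 0.3 1 0 0")
world.write(cmd, reply)                               # send the command, wait for the reply
print(reply.toString())

world.close()
yarp.Network.fini()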

Professor Angelo Cangelosi is the director of the Centre for Robotics and Neural Systems of the University of Plymouth. Cangelosi’s main research expertise is in language and cognitive modelling in cognitive systems (e.g. the humanoid robot iCub and cognitive agents), in language evolution and grounding in multi-agent systems, and in the application of bio-inspired techniques to robot control (e.g. swarms of UAVs). He is currently the coordinator of the Integrating Project “ITALK: Integration and Transfer of Action and Language Knowledge in robots” (2009-2012, italkproject.org), the Marie Curie ITN “RobotDoC: Robotics for Development of Cognition” (2009-2013, robotdoc.org) and the UK EPSRC project “VALUE: Vision, Action, and Language Unified by Embodiment” (Cognitive Systems Foresight). Cangelosi has produced more than 150 scientific publications in the field, is Editor-in-Chief of the journal Interaction Studies, has chaired numerous workshops and conferences including serving as General Chair of the forthcoming IEEE ICDL-EpiRob 2011 Conference (Frankfurt, 24-27 August 2011), and is a regular speaker at international conferences and seminars.

References

Cangelosi A., Metta G., Sagerer G., Nolfi S., Nehaniv C.L., Fischer K., Tani J., Belpaeme T., Sandini G., Fadiga L., Wrede B., Rohlfing K., Tuci E., Dautenhahn K., Saunders J., Zeschel A. (2010). Integration of action and language knowledge: A roadmap for developmental robotics. IEEE Transactions on Autonomous Mental Development, 2(3), 167-195.

Marocco D., Cangelosi A., Fischer K., Belpaeme T. (2010). Grounding action words in the sensorimotor interaction with the world: Experiments with a simulated iCub humanoid robot. Frontiers in Neurorobotics, 4:7.

Morse A.F., Belpaeme T., Cangelosi A., Smith L.B. (2010). Thinking with your body: Modelling spatial biases in categorization using a real humanoid robot. Proceedings of the 2010 Annual Meeting of the Cognitive Science Society, Portland, pp. 1362-1368.

Tikhanoff V., Cangelosi A., Metta G. (in press). Language understanding in humanoid robots: iCub simulation experiments. IEEE Transactions on Autonomous Mental Development.


Embodied Object Recognition and Metacognition

Dr. Randall C. O’Reilly, University of Colorado

One of the great unsolved questions in our field is how the human brain, and simulations thereof, can achieve the kind of common-sense understanding that is widely believed to be essential for robust intelligence. Many have argued that embodiment is important for developing common-sense understanding, but exactly how this occurs at a mechanistic level remains unclear. In the process of building an embodied virtual robot that learns from experience in a virtual environment, my colleagues and I have developed several insights into this process. At a general level, embodiment provides access to a rich, continuous source of training signals that, in conjunction with the proper neural structures, naturally support the learning of complex sensory-motor abilities, which then provide the foundation for more abstract cognitive abilities. A specific instance is learning to recognize objects in cluttered scenes, which requires learning what is figure vs. (back)ground. We have demonstrated how visual learning in a 3D environment can provide training signals for learning weaker 2D depth and figure/ground cues. This learning process also requires bidirectional excitatory connectivity and associated interactive attractor dynamics, which we show provide numerous benefits for object recognition more generally. Finally, the virtual robot can extract graded signals of recognition confidence, and use these to select what it explores in the environment. These “metacognitive” signals can also be communicated to others so that they can better determine when to trust the robot or not (collaborative work with Christian Lebiere using the ACT-R framework).

Bio: Dr. O’Reilly is Professor of Psychology and Neuroscience at the University of Colorado, Boulder. He has authored over 50 journal articles and an influential textbook on computational cognitive neuroscience. His work focuses on biologically based computational models of learning mechanisms in different brain areas, including hippocampus, prefrontal cortex and basal ganglia, and posterior visual cortex. He has received significant funding from NIH, NSF, ONR, and DARPA. He is a primary author of the Emergent neural network simulation environment. O’Reilly completed a postdoctoral position at the Massachusetts Institute of Technology, earned his M.S. and Ph.D. degrees in Psychology from Carnegie Mellon University, and was awarded an A.B. degree with highest honors in Psychology from Harvard University.


Gesture, Language and Cognition

Dr. Sotaro Kita, University of Birmingham

We (humans) produce gestures spontaneously not only when we speak (“co-speech gestures”), but also when we think without speaking (“co-thought” gestures). I will present studies that shed light on the cognitive architecture for gesture production. I will first review the evidence that co-speech gestures are highly sensitive to what goes on in speech production. For example, gestural representation of motion events varies as a function of the linguistic structures used to encode motion events. Gestures are produced more frequently when it is difficult to organise ideas for linguistic expression. Despite these pieces of evidence for a tight link between gesture and language, there are indications that gesture production is dissociable from speech production. Furthermore, new evidence shows that there are important parallelisms between co-speech gestures and co-thought gestures, suggesting that these two types of gestures are produced by the same mechanism, which lies outside of speech production processes. I will conclude that gestures are produced by a mechanism that is inherently independent from, but highly interactive with, the speech production process. I will propose a cognitive architecture in which gesture production is related to action generation, spatial cognition, and speech production in an intricate way.

Dr. Sotaro Kita received a BA and MA at the University of Tokyo and a PhD at the University of Chicago. He has worked at the Max Planck Institute for Psycholinguistics and is now a Reader at the University of Birmingham.
