Keynote Speakers

James Kennedy

Creating Expressive and Engaging Robotic Characters

Abstract: Designing robots for entertainment scenarios presents many of the same challenges encountered in other Human-Robot Interaction (HRI) deployments, such as maintaining relationships over time, operating in unpredictable and noisy environments, and personalizing interactions for diverse users and user groups. However, the success of such interactions also hinges on additional factors such as personality, expressivity, and storytelling. Whether creating a new robotic character or bringing an existing character to life, a great deal of both technical and artistic collaboration is required to create compelling experiences. In this talk, I will showcase several robotic character projects that are at various stages of real-world deployment and testing. Using these projects as examples, I will discuss the process of going from a research prototype to a believable, expressive, and engaging character. This will include dynamic robots designed for large audiences, where nonverbal behavior is crucial, and robots that interact with smaller groups, relying more heavily on conversation and language understanding.

Bio: Dr. James Kennedy is currently a Senior Research Scientist at Disney Research in Glendale, California. His work focuses on applying new developments in dialogue management, Natural Language Understanding, and human perception to enhance user experiences with robots and artificial agents. Prior to his role at Disney Research, Dr. Kennedy completed his PhD in Human-Robot Interaction at the University of Plymouth, U.K., where his thesis examined the impact of robot tutor social behavior on children. He has also served as a Senior Software Engineer at Futronics (NA) Corporation in Pasadena, California, where he had a leading role in developing software for the Smart Elderly Care Solution for Nursing Homes, a project that won the 2023 Edison Bronze Award for Medtech. Dr. Kennedy’s research interests lie at the intersection of robotics, artificial intelligence, and human-computer interaction. His work aims to understand and improve the ways in which humans and robots interact, with a particular emphasis on the use of robots in the entertainment industry.

Carolina Parada

What do Foundation Models have to do with and for HRI?

Abstract: Foundation models have unlocked major advancements in AI. What do foundation models have to do with and for Human-Robot Interaction? And how can HRI help unlock more powerful foundation models for robot learning and embodied reasoning? In this talk, I will discuss examples of how foundation models could enable a step change in human-robot interaction research, including how to leverage foundation models for multimodal human-robot communication, enable non-expert users to teach robots new low-level skills and personalized high-level plans through natural interactions, and create new expressive robot behaviors. At the same time, foundation models still have significant gaps in human-robot interaction contexts. I will share early insights showing that HRI could be key to evolving the foundation models themselves, enabling even more powerful interactions and improving robot learning. The fields of HRI and robot learning have largely evolved and grown in parallel, but foundation models might be the breakthrough needed to bring them together. There is now a unique opportunity for HRI to unlock robot learning in the wild, not only because it will yield robots that are more useful and adaptable to humans, but because it will enable improving the foundation models that will likely affect every aspect of robot learning.

Bio: Dr. Carolina Parada is an Engineering Director at Google DeepMind Robotics who is passionate about developing useful robots through human-centered robot learning. Since 2019, she has led research groups in robot learning for mobility, perception, simulation, embodied reasoning, self-improving robots, and human-robot interaction. Prior to that, she led the camera perception team for self-driving cars at Nvidia for two years. She was also a lead with Speech @ Google for seven years, where she drove multiple research and engineering efforts that enabled Ok Google, the Google Assistant, and Voice Search.

Ryan Calo

Socio-Digital Vulnerability

Abstract: This talk describes the phenomenon of socio-digital vulnerability (SDV). SDV refers to the susceptibility of individuals and groups within mediated environments to decisional, social, or constitutive interference. Drawing from work in law and design, Professor Calo uses dark patterns, robots, generative artificial intelligence, and other examples to evidence the problem of SDV; he argues that vulnerability in mediated environments is best understood in context, rather than as a binary; and he suggests policy frameworks that go beyond harm mitigation to address the power imbalances that underpin SDV.

Bio: Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law. He is a founding co-director (with Batya Friedman and Tadayoshi Kohno) of the interdisciplinary UW Tech Policy Lab and a co-founder (with Chris Coward, Emma Spiro, Kate Starbird, and Jevin West) of the UW Center for an Informed Public. Professor Calo holds a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. Professor Calo’s research on law and emerging technology appears in leading law reviews (California Law Review, Columbia Law Review, Duke Law Journal, UCLA Law Review, and University of Chicago Law Review) and technical publications (MIT Press, Nature, Artificial Intelligence) and is frequently referenced by the national media. His work has been translated into at least four languages. Professor Calo has testified three times before the United States Senate and organized events on behalf of the National Science Foundation, the National Academy of Sciences, and the Obama White House.

Acknowledgment: This talk is derived from co-authored work with Daniella DiPaola, a PhD candidate in social robotics at MIT’s Media Lab.