The ACM/IEEE International Conference on Human-Robot Interaction is a premier, highly-selective venue presenting the latest advances in Human-Robot Interaction. The 15th Annual HRI conference theme is “Real World Human-Robot Interaction”. The conference seeks contributions from a broad set of perspectives, including technical, design, behavioural, theoretical, methodological, and metrological, that advance fundamental and applied knowledge and methods in human-robot interaction. Full papers will be archived in the ACM Digital Library and IEEE Xplore Digital Library.
Important Dates
1 October 2019 (23:59 PDT): Submission deadline
6 November 2019: Review notification; rebuttal period begins
11 November 2019: Rebuttal period ends
29 November 2019: Decision notification
8 January 2020: Camera-ready papers due
23-26 March 2020: Conference
Format and Submission
Full papers are up to eight camera-ready pages, including figures but excluding references. Submissions longer than eight pages of content excluding references will be desk rejected and not reviewed. Accepted full papers will be published in the conference proceedings and presented in an oral session. The HRI conference is highly selective, with a rigorous, two-stage review model that includes an in-person expert program committee meeting where papers are extensively discussed. As such, all submissions are expected to be mature, polished, and detailed accounts of cutting-edge research, described and presented in camera-ready style. In cases of equally qualified papers, positive consideration will be given to submissions that address this year’s theme, “Real World Human-Robot Interaction”.
Template: All papers for the conference must be submitted in PDF format and conform to the ACM SIG proceedings specifications. Please note that we are following the general ACM SIG format, not the SIGCHI format. Authors should use the sample-sigconf.tex or interim_layout.docx template files. In addition, the ACM has partnered with Overleaf, where authors can start writing using this link directly. (Note: despite previous announcements, we will not be moving to the new ACM workflow this year.)
Fonts: All submissions must use only “Type 1” (scalable) fonts, not bitmapped fonts; this is an ACM Digital Library requirement.
Anonymization: The HRI 2020 review process is double blind; submissions must be properly anonymized (see the anonymization guidelines).
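For LaTeX authors, the general SIG proceedings format and double-blind anonymization can both be selected through class options. A minimal preamble sketch, assuming the standard acmart class on which the ACM SIG templates (including sample-sigconf.tex) are built:

```latex
% Minimal sketch only -- assumes the standard acmart class used by the
% ACM SIG templates (sample-sigconf.tex is built on it).
% [sigconf]   selects the general ACM SIG proceedings layout.
% [review]    adds line numbers for reviewers.
% [anonymous] suppresses author and affiliation information for
%             double-blind review.
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}
\title{Paper Title}
\author{Author Name} % hidden in the PDF while [anonymous] is set
\maketitle
% ... body, up to eight pages excluding references ...
\end{document}
```

Authors should still consult the anonymization guidelines, as the class options do not remove identifying information from the body text, acknowledgments, or self-citations.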
Supplementary materials (such as videos) may be uploaded with the submission; alternatively, papers may include external links to videos and other supplementary material.
Reviewing: Authors will be expected to review a paper if asked by the Program Committee.
Posting articles publicly online (website / arXiv): To maintain the double-blind review process, we request that authors refrain from posting their articles online until 29 November (the notification date).
Submission site: Authors may submit their papers at this website.
Selecting a Theme
To facilitate quality interdisciplinary reviewing and to inform reviewer selection, authors will be required to select one main theme for their full paper submission; they may optionally also select a second theme. Authors should select the theme carefully, as it affects how the submission is judged and which reviewers are recruited. It is recognized that papers may not fit cleanly within one theme; consider the primary contribution when making the selection, and be sure to select the appropriate sub-theme. While authors suggest a primary theme, the program chairs may move a paper to a different theme to improve fit.
The HRI 2020 conference has five themes: User Studies, Technical Advances, Design, Theory and Methods, and Reproducibility (See details below). Papers may have overlap between themes, but authors are encouraged to consider the main contribution of the work using this brief rule of thumb:
Human-Robot Interaction User Studies: The primary contribution is human-focused, e.g., how humans perceive, interact with, or otherwise engage with robots.
Technical Advances in Human-Robot Interaction: The primary contribution is robot-focused, e.g., systems, algorithms, or computational methods supporting HRI.
Human-Robot Interaction Design: The primary contribution is design-focused, e.g., new morphologies, behavior paradigms, and interaction capabilities for robots.
Theory and Methods in Human-Robot Interaction: The primary contribution is methodology-focused, e.g., fundamental HRI principles beyond individual interfaces or projects, new theoretical concepts in HRI, etc.
Reproducibility in Human-Robot Interaction: The primary contribution is science-focused, e.g., reproduces, replicates, or re-creates prior HRI work (or fails to), provides new HRI artifacts (e.g., datasets, software), etc.
Across all themes: to support building a strong evidence base in HRI, and to encourage future reproducibility of published work, all submissions involving studies with human participants should clearly outline their methodology, including:
- ethical aspects considered and clearance obtained (cf. Geiskkovitch et al. 2016, Sections 5.2, 5.4)
- participant demographics and sampling approach (cf. de Graaf 2017, Section 2.3)
- data collection and analysis methods (cf. Paepcke and Takayama 2010, Section V)
- study environment and context (cf. Short et al. 2018, Section 3.5)
- if a Wizard-of-Oz paradigm was used, a detailed description of the robot, wizard, user, etc. (cf. Riek 2012, Table 2)
- if a robot was used, a detailed description of the platform, its level of autonomy, capabilities, etc. (cf. Beer et al. 2014, Figure 5)
1. Human-Robot Interaction User Studies
This theme targets research that provides data on and analysis of human-robot interaction, in laboratory or in-the-wild settings. Work can be quantitative, qualitative, or both. It may be formative or summative in nature, and can be hypothesis-driven or exploratory. Studies can employ robots across the autonomy spectrum. Video-based user study paradigms are acceptable with appropriate/sufficient motivation, though authors are encouraged to use in-person robots wherever possible. Successful submissions should reflect rigorous methodologies (quantitative or qualitative) and mature analyses that yield novel insights into human-robot interaction, and should discuss the limitations and generalizability of the methods used.
Quantitative Studies papers should include clear consideration of their methods’ internal, external, and ecological validity. For example, measures used should be validated either in prior work or within the given paper. For development of new measures, authors should instead consider submitting to the Theory and Methods theme (See details below).
Papers that provide novel interaction techniques or designs as a primary contribution, but include a detailed user study, may belong in the Design theme. Work that is primarily on methodological advancements or analysis may belong in the Theory and Methods theme. Work that reproduces, replicates, or re-creates a prior study (or fails to) as its main contribution may belong in the HRI Reproducibility theme.
Theme chairs: Ginevra Castellano and Maha Salem
Sample papers:
- Wojciechowska, et al. (2019). Collocated Human-Drone Interaction: Methodology and Approach Strategy. HRI 2019.
- Fraune, et al. (2019). Is Human-Robot Interaction More Competitive Between Groups Than Between Individuals? HRI 2019.
- Bremner, et al. (2016). Personality Perception of Robot Avatar Tele-operators. HRI 2016.
- Leite, et al. (2012). Modelling empathic behaviour in a robotic game companion for children: an ethnographic study in real-world settings. HRI 2012.
2. Technical Advances in Human-Robot Interaction
This theme targets research providing novel robot system designs, algorithms, interface technologies, and computational methods supporting human-robot interaction. This includes contributions that enable robots to better understand, interact with, and collaborate with people, including co-located interaction or teleoperation. Submissions must present full details of the proposed technological advance to facilitate in-depth review and enable future reproducibility, e.g., via formal descriptions, pseudocode, or open-sourced code. Successful papers will clearly demonstrate how the technology improves or enables human-robot interaction, and will include evaluation appropriate to the work (e.g., comparisons to other methods, standard machine learning or computer vision metrics, usability studies, etc.).
If the primary focus of the paper is on the evaluation of interaction, and not the specific technology, then it may belong in the HRI User Studies theme. If the primary focus is on a novel interaction design, and not the new technologies behind it, it may belong in the Design theme. If the primary focus is on new artifacts for HRI science (e.g., datasets, benchmarks, or open-source software releases), it may belong in the HRI Reproducibility theme.
Theme chair: Adriana Tapus
Sample papers:
- Petric, et al. (2019). Hierarchical POMDP Framework for a Robot-Assisted ASD Diagnostic Protocol. HRI 2019.
- Roesler, et al. (2019). Evaluation of Word Representations in Grounding Natural Language Instructions Through Computational Human-Robot Interaction. HRI 2019.
- Short, et al. (2019). SAIL: Simulation-Informed Active In-the-Wild Learning. HRI 2019.
- Clark-Turner, et al. (2018). Deep reinforcement learning of abstract reasoning from demonstrations. HRI 2018.
3. Human-Robot Interaction Design
This theme targets research that makes a design-centric contribution to human-robot interaction. This includes the design of new robot morphologies and appearances, behavior paradigms, interaction techniques and scenarios, and telepresence interfaces. The design research should support unique or improved interaction experiences or abilities for robots. Research on the design process itself is welcome. Submissions must fully describe their design outcomes or process to enable detailed review and replication of the work. Further, successful papers will have evaluation appropriate to the work, for example end-user evaluation or a critical reflection on the design process or methodology.
If the paper’s primary focus is on a technical system description or novel algorithms it may be a Technical Advances paper. If the main contribution is an in-depth study that reflects on a broader interaction question it may be a User Studies paper. If a paper’s primary contribution is to re-create or replicate an existing design concept or artifact, it may belong in the HRI Reproducibility theme.
Theme chair: Jodi Forlizzi
Sample papers:
- Moharana, et al. (2019). Robots for Joy, Robots for Sorrow: Community Based Robot Design for Dementia Caregivers. HRI 2019.
- Azenkot, et al. (2016). Enabling Building Service Robots to Guide Blind People: A Participatory Design Approach. HRI 2016.
- Sirkin, et al. (2015). Mechanical Ottoman: How Robotic Furniture Offers and Withdraws Support. HRI 2015.
- Pantofaru, et al. (2012). Exploring the role of robots in home organization. HRI 2012.
4. Theory and Methods in Human-Robot Interaction
This theme targets research contributing to the understanding and study of fundamental HRI principles that span beyond individual interfaces or projects. This includes detailing underlying interaction paradigms, theoretical concepts, new interpretations of known results, or new evaluation methodologies. Submissions may be derived from original or surveyed empirical research, analysis of existing research and methods, or may also be purely theoretical or philosophical. Successful papers will clearly detail how they extend our current fundamental understanding of human-robot interaction and why the work is significant and has potential for impact. As appropriate, work must be defended by clear and sound arguments, a systematic data collection strategy, supporting data, and/or a thorough reflective analysis of the research with respect to the existing state of the art.
Theme chair: Kerstin Fischer
Sample papers:
- Carpinella, et al. (2017). The robotic social attributes scale (RoSAS): Development and validation. HRI 2017.
- Baxter, et al. (2016). From characterising three years of HRI to methodology and reporting recommendations. HRI 2016.
- Sequeira, et al. (2016). Discovering Social Interaction Strategies for Robots from Restricted-Perception Wizard-of-Oz Studies. HRI 2016.
- Fischer, et al. (2012). Levels of Embodiment: Linguistic Analyses of Factors Influencing HRI. HRI 2012.
5. Reproducibility in Human-Robot Interaction
This theme targets research that contributes to the science of HRI by reproducing, replicating, or re-creating prior HRI or HRI-relevant work, or by providing artifacts for HRI research, to help our community build a strong and reliable evidence base. (Note: this refers to the entire field, not only papers published at the ACM/IEEE HRI conference.) To incentivize submission to this new track, accepted papers will this year receive ACM badges upon publication (see [ACM 2016]).
5.1. Reproducibility of prior quantitative HRI work: Authors may conduct reproductions spanning quantitative work across the spectrum of HRI (Studies, Technical, Methods, or Design), i.e., work whose original findings were obtained through primarily quantitative methodologies. There are two types of reproduction (as defined by [NSF 2018]):
- Direct Reproductions, where an author seeks to obtain the same results from an independently conducted study, using procedures and methods matched as closely to the original study as possible. For example: Study S shows result X with methodology M and robot R. The reproduction, Study SR, uses M and R to confirm (or not confirm) X. SR is conducted independently of S (e.g., with a different team or independent participant population).
The goal of a direct reproduction is to evaluate the reliability of a previously observed HRI finding.
- Conceptual Reproductions, where an author seeks to obtain the same results from an independently conducted study in which procedures and methods are systematically varied. For example: Study S shows result X with methodology M and robot R. The reproduction, Study SR, systematically varies the methodology (M1) and/or robot (R1) to confirm (or not confirm) X. SR is conducted independently of S (e.g., with a different team or independent participant population).
The goal of a conceptual reproduction is to build upon prior evidence to understand under what conditions, and for whom, an HRI finding holds true. In a conceptual reproduction, the research questions will help the author determine which aspects of the prior study are systematically varied (NSF 2018). Here are a few examples of conceptual reproductions per theme area:
Studies / Design Conceptual Reproduction Example: If an author’s goal is to see whether behavior previously observed with robot R similarly manifests with other robots, they might vary the platforms but employ the same method. If they are also curious about how the methods used in the original study affected the results, they may vary those methods as well.
Technical Conceptual Reproduction Example: If an author’s goal is to see whether a teaming algorithm presented in a prior paper yields the same results in experiments conducted on other robot platforms, they would vary the robot platform but employ the same method.
Theory and Methods Conceptual Reproduction Example: If an author’s goal is to see whether a theory or method presented in prior work as being suitable for culture C also holds true in cultures C1 and C2, they would vary the cultural context but employ the same method and/or robot.
For either type of reproduction (direct or conceptual), if the work yields a completely new HRI finding, it may be submitted to this track or another, depending on the author’s interpretation.
Authors seeking to reproduce, replicate, or repeat quantitative work are encouraged to follow guidelines developed by the US National Science Foundation and Department of Education on how to design, conduct, and report such studies (see [NSF 2018], pages 4-5). Authors should also provide clear motivations for choosing the specific work they are reproducing.
It is important to note that although the text above is framed in terms of successful reproductions of HRI science, this track also highly encourages sharing negative results (e.g., a researcher fails to reproduce or replicate another study’s findings). In such cases, the expectation is that the results are analysed and interpreted carefully, as absence of evidence is not evidence of absence.
5.2. Re-creation of prior HRI qualitative / design work: For qualitative or design-focused HRI work, authors may seek to explore an HRI paradigm within a new culture or context, or re-create or implement designs created by another. These papers may be framed as case studies, field reports, or updated design guidelines, and should clearly describe lessons learned and best practices.
5.3. Artifacts for HRI science: We encourage submissions that introduce a novel “artifact” as an enabler to reproducibility, replicability, and re-creation of HRI research, and/or to support new lines of HRI research. An artifact could be software, hardware, data sets, protocols, evaluation measures, etc. Submissions should contain a detailed description of the artifact introduced, proposed, or implemented, as well as information about how it is novel and different from other existing artifacts, and a link to an anonymized, live version of the artifact at time of submission for review.
Before submission, authors submitting artifacts must have obtained, and must report, Institutional Review Board (IRB) clearance to release any data collected from human participants, as well as any relevant organizational clearances to release software/hardware, etc.
Theme chair: Megan Strait
Sample papers:
Direct/Conceptual Reproduction: Here is a series of direct and conceptual reproduction studies from the same group:
- Study 1: Describes a novel method for measuring aversion and investigating the existence of uncanny valley within the current humanoids design space.
- Strait, et al. (2015). Too much humanness for human-robot interaction: exposure to highly humanlike robots elicits aversive responding in observers. CHI 2015.
- Study 2: After finding evidence indicative of an uncanny valley, Study 2 was designed as a direct reproduction, as well as an extension, of Study 1 (i.e., same experimental design, settings, and sampling, with minor modification).
- Strait, et al. (2017). Understanding the uncanny: both atypical features and category ambiguity provoke aversion toward humanlike robots. Frontiers in psychology, 2017.
- Study 3: After replicating the findings of Study 1 and identifying evidence of cognitive underpinnings to the uncanny valley, Study 3 served as a conceptual reproduction of Studies 1-2. Specifically, Study 3 necessitated and utilized an adaptation of the method from Studies 1-2 to test whether the uncanny valley manifests across cognitive developmental stages. That is, while aspects of the method differ (thus precluding direct comparisons between the findings of Study 3 and Studies 1-2), the findings of Study 3 reinforce prior interpretations of the data from Studies 1-2 and advance the overall understanding of the uncanny valley phenomenon.
- Strait, et al. (2019). Children’s responding to humanlike agents reflects an uncanny valley. HRI 2019.
Conceptual Reproduction: Here is a single paper describing a large-scale conceptual reproduction:
- Vogt, et al. (2019). Second Language Tutoring Using Social Robots: A Large-Scale Study. HRI 2019.
Artifact papers:
- Systems paper:
- Huang, et al. (2017). Code3: A system for end-to-end programming of mobile manipulator robots for novices and experts. HRI 2017.
- Dataset paper:
- Celiktutan, et al. (2017). Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement. IEEE Transactions on Affective Computing 2017.
- Benchmarking paper:
- Wisspeintner et al. (2010). RoboCup@Home: Results in Benchmarking Domestic Service Robots. RoboCup 2009.
References
Association for Computing Machinery. “Artifact Review and Badging.” URL: https://www.acm.org/publications/policies/artifact-review-badging. Last updated: April 2018.
National Science Foundation and Institute of Education Sciences, U.S. Department of Education. “Companion Guidelines on Replication & Reproducibility in Education Research.” URL: https://ies.ed.gov/pdf/CompanionGuidelinesReplicationReproducibility.pdf. Last updated: 28 November 2018.