THRI — ACM Transactions on Human-Robot Interaction, 2024

Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI)

Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, and Emmanuel Senft


1 Introduction

This special issue of the ACM Transactions on Human-Robot Interaction highlights, documents, and explores the interface between artificial intelligence (AI) and human-robot interaction (HRI). Applying AI to HRI domains has proven to be a powerful and effective means of achieving robust, interactive, and autonomous systems, with applications ranging from personalized tutors to smart manufacturing collaborators to healthcare assistants and nearly everything in between. Developing such systems often involves innovation and integration across many diverse technical areas, including but not limited to task and motion planning, learning from demonstration, dialogue synthesis, activity recognition and prediction, human behavior modeling, and shared control. For this special issue, we received high-quality, original articles presenting the design and/or evaluation of novel computational techniques and systems at the intersection of AI and HRI, brought together to showcase the state of the art in AI-HRI within a single issue of the world's leading journal of human-robot interaction research.


Additional Key Words and Phrases: artificial intelligence, human-robot interaction, teleoperation, human-robot collaboration, machine learning


2 Articles in This Issue

This special issue of ACM THRI presents a collection of 11 papers that bring to attention the many ways AI can support human-robot interaction across a wide diversity of paradigms. The collection covers a broad scope of application domains, robot designs, and evaluation methodologies. It starts with four articles exploring different aspects of teleoperation and shared control: a teleoperation system that anticipates operator commands to facilitate robot control (1), a shared control approach for multi-step teleoperation (2), a shared control framework for urban air mobility (3), and body-machine interfaces for controlling robots (4). We follow up with two articles on teaching robots: one on unified learning from demonstrations (5) and one on using verbal corrections during teaching (6). Then, we move on to monitoring autonomous robots through augmented reality (AR) interfaces (7). We conclude the special issue with four papers on improving human-robot collaboration: predicting team motion (8), planning for adaptation in collaborative tasks (9), automating gesture generation (10), and reviewing the impact of vulnerability on trust in HRI (11).

  1. Assistance in Teleoperation of Redundant Robots through Predictive Joint Maneuvering. In this article, Brooks et al. present two predictive models designed to anticipate operator commands during teleoperation. These models allow optimization over an expected trajectory of future motion rather than consideration of local motion alone.
  2. Experimental Assessment of Human-Robot Teaming for Multi-Step Remote Manipulation With Expert Operators. D’Arpino et al. explore the advantages of multiple methods for remote robot operation by experts. Through a study involving expert operators, including former operators from the DARPA challenges, they show that teleautonomy approaches with assisted planning can complete complex manipulation tasks as fast as direct teleoperation, but with significantly lower workload and fewer manipulation errors.
  3. Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario. In this article, Byeon et al. present a new personalized shared control framework in which an assistance model is learned from human experts and the shared control policy is a Gaussian mixture over a finite time horizon, based on the distance of the user’s trajectory from the expert trajectory. The framework is evaluated in an urban air mobility (UAM) simulation, where it is compared to a baseline approach in terms of performance change; a minimal illustrative sketch of this style of distance-based control blending appears after this list.
  4. Learning to Control Complex Robots Using High-Dimensional Body–Machine Interfaces. In this article, Lee et al. demonstrate that a population of uninjured participants can learn to control a high-degree-of-freedom robot arm through body-machine interfaces (BoMIs). They also investigate and discuss the effect of joint and task control spaces on learning, in terms of intuitiveness, learnability, and their consequences for cognitive load during learning.
  5. Unified Learning from Demonstrations, Corrections, and Preferences during Physical Human-Robot Interaction. In this paper, Mehta and Losey present a method that formalizes and unifies robot learning from demonstrations, corrections, and preferences. A single loss function is developed for training a variety of reward models from these three feedback types; the learned reward is then converted into a desired task trajectory. Through both simulations and a user study, the authors demonstrate that, compared to existing baselines, the new approach more accurately learns manipulation tasks from physical human interactions when the robot faces new or unexpected objectives. A toy version of such a unified objective is sketched after this list.
  6. “Do This Instead” – Robots That Adequately Respond to Corrected Instructions. In this article, Thierauf et al. present a system for easily incorporating verbal corrections into verbal task instruction. The system handles corrections issued before, during, and after verbally taught task sequences, and the evaluation demonstrates that the proposed methods enable fast corrections.
  7. Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments. Reardon et al. present an AR visualization solution to help humans interpret data from a mobile robot that autonomously detects novel changes in an environment. They experimentally investigate how 3D visualization in AR and human movement in the operational environment affect shared situational awareness in human-robot teams.
  8. IMPRINT: Interactional Dynamics-aware Motion Prediction in Teams using Multimodal Context. In this article, Yasar et al. present a multi-agent motion prediction framework that models interactional dynamics and incorporates multimodal context to accurately predict the motion of every agent in a team, whether human or robot.
  9. UHTP: A User-Aware Hierarchical Task Planning Framework for Communication-Free, Mutually-Adaptive Human-Robot Collaboration. Ramachandruni et al. propose the User-aware Hierarchical Task Planning (UHTP) framework for robot adaptation to humans in collaborative tasks. With UHTP, the robot selects its actions by monitoring its human partner’s current activity so as to maximize the efficiency of the collaborative task; in turn, the human partner benefits by completing collaborative tasks without having to wait for the robot (a small sketch of this style of user-aware action selection appears after this list). A user study shows that UHTP adapts to a wide range of human behaviors, requires no communication, reduces cognitive workload during collaboration, and is preferred over a non-adaptive baseline.
  10. Face2Gesture: Translating Facial Expressions Into Robot Movements Through Shared Latent Space Neural Networks. In this article, Suguitan et al. present a method to automatically generate affective robot movements in response to emotive facial expressions. Autoencoder neural networks compress robot movement data and facial expression images into a shared latent embedding space aligned by emotion class rather than by data modality, so that movements can be reconstructed directly from facial expressions (see the sketch after this list).
  11. A Meta-analysis of Vulnerability and Trust in Human-Robot Interaction. In this article, McKenna et al. explore the specific impact of vulnerability on trust building in HRI. While vulnerability is key to building bonds between humans, its impact is underexplored in HRI. The authors tackle this question through a meta-analysis and modeling, providing suggestions for building effective trust between humans and robots.
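
To make the shared control thread above concrete, here is a minimal, hypothetical sketch of the distance-based control blending mentioned for article (3): the further the trainee's trajectory drifts from an expert reference, the more authority shifts to the assistance policy. It is an illustration under our own assumptions (the exponential skill proxy and all names are invented), not Byeon et al.'s actual framework.

    import numpy as np

    def blend_commands(user_cmd, expert_cmd, user_traj, expert_traj):
        """Blend user and assistance commands based on how far the user's
        trajectory has drifted from the expert reference (illustrative)."""
        drift = np.linalg.norm(user_traj - expert_traj, axis=-1).mean()
        authority = np.exp(-drift)  # hypothetical skill proxy in (0, 1]
        return authority * user_cmd + (1.0 - authority) * expert_cmd

    # A skilled user (small drift) retains most of the control authority.
    cmd = blend_commands(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                         np.zeros((10, 3)), 0.05 * np.ones((10, 3)))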
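
The unified learning idea of article (5) can likewise be illustrated with a toy objective in which demonstrations, corrections, and preferences all reduce to ranked trajectory pairs scored by a learned reward model. This is a sketch under that simplifying assumption, not Mehta and Losey's formulation; the network shape and the perturbation scheme for demonstrations are hypothetical.

    import torch
    import torch.nn as nn

    class RewardNet(nn.Module):
        """Toy reward model scoring a flattened trajectory."""
        def __init__(self, traj_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.Tanh(),
                                     nn.Linear(64, 1))

        def forward(self, traj):
            return self.net(traj).squeeze(-1)

    def unified_loss(reward, demos, corrections, preferences):
        """Bradley-Terry-style loss: the human-favored trajectory in each
        pair should score a higher reward than the disfavored one."""
        pairs = []
        # Demonstrations: the shown trajectory beats a random perturbation.
        pairs += [(t, t + 0.1 * torch.randn_like(t)) for t in demos]
        # Corrections: the corrected trajectory beats the original one.
        pairs += [(fixed, orig) for orig, fixed in corrections]
        # Preferences: explicitly ranked (preferred, rejected) pairs.
        pairs += list(preferences)
        losses = [-torch.log(torch.sigmoid(reward(win) - reward(lose)))
                  for win, lose in pairs]
        return torch.stack(losses).mean()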
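
For article (9), the promised sketch of user-aware action selection is deliberately small: the robot picks whichever remaining subtask is cheapest given the human's observed activity, penalizing subtasks that would force it to wait. The task names, durations, and conflict table are invented for illustration; this is a stand-in for, not a reproduction of, the UHTP algorithm.

    # Hypothetical assembly task: subtask durations (s) and the robot
    # subtasks that clash with each observed human activity.
    DURATION = {"fetch_screws": 4.0, "hold_bracket": 6.0, "attach_panel": 5.0}
    CONFLICTS = {"screwing": {"attach_panel"}, "aligning": {"hold_bracket"}}
    WAIT_PENALTY = 3.0  # expected time lost if the robot must wait

    def choose_robot_action(remaining, human_activity):
        """Pick the cheapest remaining robot subtask given what the human
        is currently doing (a stand-in for tree-based plan evaluation)."""
        def cost(action):
            blocked = action in CONFLICTS.get(human_activity, set())
            return DURATION[action] + (WAIT_PENALTY if blocked else 0.0)
        return min(remaining, key=cost)

    # While the human is screwing, the robot avoids the panel they occupy.
    assert choose_robot_action({"hold_bracket", "attach_panel"},
                               "screwing") == "hold_bracket"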
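
Finally, the shared-latent-space idea behind Face2Gesture (10) can be sketched with two small autoencoders whose codes are pulled together for same-emotion face/movement pairs; translation then routes a face embedding through the movement decoder. The layer sizes, input dimensions, and unit loss weights are our assumptions, not the authors' design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AE(nn.Module):
        """Tiny autoencoder; both modalities share one latent dimension."""
        def __init__(self, in_dim, latent_dim=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
            self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    face_ae, move_ae = AE(in_dim=128), AE(in_dim=48)

    def training_loss(faces, moves):
        """Reconstruction plus alignment; row i of `faces` and `moves` is
        assumed to share an emotion label, so their codes are pulled
        together, organizing the space by emotion rather than by modality."""
        zf, zm = face_ae.enc(faces), move_ae.enc(moves)
        recon = (F.mse_loss(face_ae.dec(zf), faces)
                 + F.mse_loss(move_ae.dec(zm), moves))
        return recon + F.mse_loss(zf, zm)

    # Translation: facial expression -> shared code -> robot movement.
    movement = move_ae.dec(face_ae.enc(torch.randn(1, 128)))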

We are thrilled by the diversity of AI-assisted human-robot interaction paradigms covered in this Special Issue on Artificial Intelligence for Human-Robot Interaction, from shared control and teleoperation to learning and modeling. This diversity showcases how central AI and machine learning are to human-robot interaction. We extend our deepest gratitude to the reviewers, the editors-in-chief, and the associate managing editors of THRI who dedicated their time and effort to make this special issue possible.


Authors’ addresses: Jivko Sinapov, Tufts University, USA, jivko.sinapov@tufts.edu; Zhao Han, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL, 33620, USA, zhaohan@usf.edu; Shelly Bagchi, U.S. National Institute of Standards and Technology, 100 Bureau Dr., Stop 8230, Gaithersburg, MD, 20899, USA, shelly.bagchi@nist.gov; Muneeb Ahmad, Swansea University, UK, m.i.ahmad@swansea.ac.uk; Matteo Leonetti, King’s College London, UK, matteo.leonetti@kcl.ac.uk; Ross Mead, Semio, USA, ross@semio.ai; Reuth Mirsky, Bar Ilan University, Israel, mirskyr@cs.biu.ac.il; Emmanuel Senft, Idiap Research Institute, Martigny, Switzerland, esenft@idiap.ch.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

© 2024 Copyright held by the owner/author(s).
ACM 2573-9522/2024/7-ART
https://doi.org/10.1145/3672535

ACM Reference Format:

Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, and Emmanuel Senft. 2024. Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI). ACM Transactions on Human-Robot Interaction 13, 3 (September 2024), 5 pages. https://doi.org/10.1145/3672535

