Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data

Gérard Bailly, Frédéric Elisei


The main aim of artificial intelligence (AI) is to provide machines with intelligence. Machine learning is now widely used to extract such intelligence from data. Collecting and modeling multimodal interactive data is thus a major issue for fostering AI for HRI. We first discuss the chicken-and-egg problem of collecting ground-truth HRI data without having robots with mature social skills at our disposal. We also comment on particular issues raised by current multimodal end-to-end mapping frameworks. We then analyze the benefits and challenges of using immersive teleoperation to endow humanoid robots with such skills. Finally, we argue for establishing stronger gateways between the HRI and Augmented/Virtual Reality research domains.


DOI: 10.21437/AI-MHRI.2018-10

Cite as: Bailly, G., Elisei, F. (2018) Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data. Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 39-43. DOI: 10.21437/AI-MHRI.2018-10


@inproceedings{Bailly2018,
  author={Gérard Bailly and Frédéric Elisei},
  title={Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data},
  year=2018,
  booktitle={Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction},
  pages={39--43},
  doi={10.21437/AI-MHRI.2018-10},
  url={http://dx.doi.org/10.21437/AI-MHRI.2018-10}
}