Conversing with Social Agents That Smile and Laugh

Catherine Pelachaud


Our aim is to create virtual conversational partners. To this end, we have developed computational models that enrich virtual characters with socio-emotional capabilities communicated through multimodal behaviors. The approach we follow to build interactive and expressive agents relies on theories from the human and social sciences as well as on data analysis and user-perception-based design. We have explored specific social signals such as smiles and laughter, capturing their variation in production as well as their different communicative functions and their impact on human-agent interaction. Lately we have been interested in modeling agents with social attitudes; our aim is to model how social attitudes color the multimodal behaviors of the agents. We have gathered a corpus of dyads that was annotated along two layers: social attitudes and nonverbal behaviors. By applying sequence mining methods we have extracted behavior patterns involved in the change of perception of an attitude, and we are particularly interested in capturing the behaviors that correspond to such a change. In this talk I will present the GRETA/VIB platform where our research is implemented.
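To illustrate the kind of sequence mining mentioned above, the following is a minimal sketch of frequent contiguous-pattern extraction over annotated behavior sequences. The behavior labels, function name, and support threshold are all illustrative assumptions, not the actual method or annotations used in the corpus study.

```python
from collections import Counter

def mine_frequent_subsequences(sequences, min_support=2, max_len=3):
    """Count contiguous behavior patterns (n-grams up to max_len) across
    annotated sequences; keep patterns occurring in >= min_support sequences."""
    counts = Counter()
    for seq in sequences:
        seen = set()  # count each pattern at most once per sequence
        for n in range(1, max_len + 1):
            for i in range(len(seq) - n + 1):
                seen.add(tuple(seq[i:i + n]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical nonverbal-behavior annotations from three dyadic segments
dyads = [
    ["smile", "head_nod", "gaze_at"],
    ["smile", "head_nod", "gaze_away"],
    ["laugh", "smile", "head_nod"],
]
patterns = mine_frequent_subsequences(dyads, min_support=2)
# The pattern ("smile", "head_nod") is supported by all three sequences.
```

Patterns found this way could then be compared across segments rated with different attitudes, to surface behavior sequences associated with a shift in perceived attitude.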


Cite as: Pelachaud, C. (2017) Conversing with Social Agents That Smile and Laugh. Proc. Interspeech 2017, 2052.


@inproceedings{Pelachaud2017,
  author={Catherine Pelachaud},
  title={Conversing with Social Agents That Smile and Laugh},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2052}
}