Modeling Global Body Configurations in American Sign Language

Nicholas Wilkins, Max Cordes Galbraith, Ifeoma Nwogu

In this paper we consider the problem of computationally representing American Sign Language (ASL) phonetics. We specifically present a computational model inspired by the sequential phonological representation of ASL known as the Movement-Hold (MH) Model. Our computational model not only captures ASL phonetics but also has generative abilities. We present a Probabilistic Graphical Model (PGM) which explicitly models holds and implicitly models movement in the MH model. For evaluation, we introduce a novel data corpus, ASLing, compare our PGM to other models (GMM, LDA, and VAE), and show its superior performance. Finally, we demonstrate our model’s interpretability by computing various phonetic properties of ASL through inspection of the learned model.
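The explicit-hold / implicit-movement split described above can be illustrated with a toy sketch (the data, cluster centers, and feature dimensions here are hypothetical and not from the paper): holds are treated as discrete clusters of body-configuration features, while movement falls out implicitly as transitions between consecutive holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D "body configuration" features drawn around three hold clusters.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
seq = rng.integers(0, 3, size=200)                  # underlying hold sequence
X = centers[seq] + 0.1 * rng.normal(size=(200, 2))  # noisy observed features

# Explicit holds: assign each frame to its nearest hold center
# (a hard-assignment step, as in k-means / hard EM).
dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
holds = dists.argmin(axis=1)

# Implicit movement: estimated as transition frequencies between
# successive holds, rather than being modeled directly.
T = np.zeros((3, 3))
for a, b in zip(holds[:-1], holds[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)  # row-normalize into transition probabilities
```

This is only a schematic analogy: the paper's PGM is learned from real signing data, whereas here the hold clusters are fixed and the "movements" are reduced to a first-order transition table.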

DOI: 10.21437/Interspeech.2020-2873

Cite as: Wilkins, N., Galbraith, M.C., Nwogu, I. (2020) Modeling Global Body Configurations in American Sign Language. Proc. Interspeech 2020, 671-675, DOI: 10.21437/Interspeech.2020-2873.
