The current main trend in paralinguistic information recognition is so-called static classification, in which low-level descriptors are pooled together by means of statistical functionals and all, or almost all, information about the temporal structure and evolution of speech is lost. Although this approach represents the state of the art, we believe that dynamic classification, in which temporal information is preserved, still deserves attention because it can handle aspects that the static approach cannot. In this paper, the INTERSPEECH 2011 Speaker State Challenge is addressed using the Automatic Speech Recognition system developed at UPC, which has already been used in a similar task: emotion recognition. Although our results fall below the baseline, we believe they are close enough to be taken into account.
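The static pooling scheme mentioned above can be illustrated with a minimal sketch: frame-level low-level descriptors are collapsed into one fixed-length vector by per-feature statistical functionals, discarding frame order. The descriptor values and the choice of functionals here are illustrative assumptions, not the challenge's actual feature set.

```python
import numpy as np

# Hypothetical frame-level low-level descriptors (LLDs), e.g. pitch (Hz)
# and energy, sampled every 10 ms: shape (n_frames, n_features).
llds = np.array([
    [120.0, 0.51],
    [130.0, 0.62],
    [125.0, 0.55],
    [140.0, 0.70],
])

def static_pooling(frames):
    """Pool frame-level descriptors into one fixed-length vector by
    applying statistical functionals (mean, std, min, max) to each
    feature column. The temporal order of the frames is discarded."""
    functionals = [np.mean, np.std, np.min, np.max]
    return np.concatenate([f(frames, axis=0) for f in functionals])

vec = static_pooling(llds)
# One fixed-length vector regardless of utterance length:
# 2 features x 4 functionals = 8 values.
```

Note that reversing or shuffling the frames yields exactly the same vector, which is precisely the loss of temporal structure that a dynamic (e.g. HMM-based) classifier avoids.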
Bibliographic reference. Rodríguez, Albino Nogueiras (2011): "An HMM-based approach to the INTERSPEECH 2011 speaker state challenge", In INTERSPEECH-2011, 3289-3292.