Everyday Features for Everyday Listening

John Ashley Burgoyne


You are sitting on a commuter train. How many passengers are wearing headphones? What are they listening to? What else are they doing? Most importantly, amid the cornucopia of distractions, what exactly are they hearing? Much research in music cognition pits ‘musicians’, variously defined, against non-musicians. Recently, especially since the appearance of reliable measurement instruments for musicality in the general population (e.g., Müllensiefen et al., 2014), there has been growing interest in the space in between. Moreover, the ubiquity of smartphones has greatly enhanced the ability of techniques like gamification or Sloboda’s ‘experience sampling’ to reach this general population outside of the psychology lab. Music information retrieval (MIR) – and signal processing research more generally – can provide the final ingredients for understanding what is happening between our commuters’ earbuds: everyday features for studying everyday listening. Since Aucouturier and Bigand’s 2012 manifesto on the poor interpretability of traditional DSP measures, clever dimensionality reduction paired with feature sets like those from the FANTASTIC (Müllensiefen and Frieler, 2006) or CATCHY (Van Balen et al., 2015) toolboxes has sought a middle ground. This talk will present several uses of everyday features from the CATCHY toolbox for studying everyday listening, most notably a discussion of the Hooked on Music series of experiments (Burgoyne et al., 2013) and a recent user study of thumbnailing at a national music service. In conclusion, it will outline some areas where MIR expertise can go beyond recommendation to learn about and engage with listeners during their daily musical activities.


Cite as: Burgoyne, J.A. (2019) Everyday Features for Everyday Listening. Proc. SMM19, Workshop on Speech, Music and Mind 2019.


@inproceedings{Burgoyne2019,
  author    = {John Ashley Burgoyne},
  title     = {{Everyday Features for Everyday Listening}},
  year      = {2019},
  booktitle = {Proc. SMM19, Workshop on Speech, Music and Mind 2019}
}