e-space
Manchester Metropolitan University's Research Repository

    Audio emotion recognition using machine learning to support sound design

    Cunningham, S (ORCID: https://orcid.org/0000-0002-5348-7700), Ridley, H, Weinel, J and Picking, R (2019) Audio emotion recognition using machine learning to support sound design. In: Audio Mostly, 18 September 2019 - 20 September 2019, Nottingham, UK.

    Accepted Version: Download (621 kB)

    Abstract

    In recent years, the field of Music Emotion Recognition has become established. Less attention has been directed towards the counterpart domain of Audio Emotion Recognition, which focuses upon the detection of emotional responses provoked by non-musical sound. By better understanding how sounds provoke emotional responses in an audience, it may be possible to enhance the work of sound designers. The work in this paper uses the International Affective Digitized Sounds (IADS) set. Audio features are extracted and used as the input to two machine-learning approaches, regression modelling and artificial neural networks, to predict the emotional dimensions of arousal and valence. It is found that shallow neural networks perform better than a range of regression models. Consistent with existing research in emotion recognition, prediction of arousal is more reliable than that of valence. Several extensions of this research are discussed, including work related to improving data sets as well as the modelling processes.
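
    The sketch below illustrates the kind of pipeline the abstract describes: extracting summary audio features and fitting both a regression baseline and a shallow neural network to predict arousal and valence. It is not the authors' code; the paper's feature set, toolchain and ratings file are not given here, so librosa, scikit-learn, the feature choices and the file name "iads_ratings.csv" are all assumptions made purely for illustration.

    # Minimal sketch (assumed toolchain, not the authors' implementation).
    # Assumes a hypothetical CSV "iads_ratings.csv" with columns: file, arousal, valence,
    # where each "file" points to a locally available IADS sound file.
    import numpy as np
    import pandas as pd
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import r2_score

    def extract_features(path):
        # Summarise a sound file as a fixed-length vector: means of frame-level
        # MFCCs, spectral centroid and RMS energy (illustrative feature choice).
        y, sr = librosa.load(path, sr=None, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        rms = librosa.feature.rms(y=y)
        return np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1), rms.mean(axis=1)])

    ratings = pd.read_csv("iads_ratings.csv")  # hypothetical ratings file
    X = np.vstack([extract_features(f) for f in ratings["file"]])

    for target in ["arousal", "valence"]:
        y = ratings[target].values
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        # Regression baseline.
        lin = LinearRegression().fit(X_tr, y_tr)

        # Shallow neural network: one small hidden layer.
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

        print(target,
              "linear R2:", r2_score(y_te, lin.predict(X_te)),
              "MLP R2:", r2_score(y_te, mlp.predict(X_te)))

    Training a separate model per emotional dimension mirrors the paper's framing of arousal and valence as two prediction targets; in practice one would expect the arousal model to score higher, as the abstract reports.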
