e-space
Manchester Metropolitan University's Research Repository

    The Affective Audio Dataset (AAD) for non-musical, non-vocalized, audio emotion research

    Ridley, Harrison, Cunningham, Stuart (ORCID: https://orcid.org/0000-0002-5348-7700), Darby, John, Henry, John (ORCID: https://orcid.org/0000-0003-3674-8208) and Stocker, Richard (2024) The Affective Audio Dataset (AAD) for non-musical, non-vocalized, audio emotion research. IEEE Transactions on Affective Computing. pp. 1-12.

    Accepted Version. Available under License: In Copyright.

    Abstract

    The Affective Audio Dataset (AAD) is a novel dataset of non-musical, non-anthropomorphic sounds intended for use in affective research. Sounds are annotated for their affective qualities by sets of human participants. The dataset was created in response to a lack of suitable datasets within the domain of audio emotion recognition. A total of 780 sounds were selected from the BBC Sounds Library. Participants were recruited online and asked to rate a subset of sounds based on how the sounds made them feel; each sound was rated for arousal and valence. While ratings were broadly evenly distributed across the affect space, a bias towards the low-valence, high-arousal quadrant was found, and this quadrant displayed a greater range of ratings than the others. The AAD is compared with existing datasets to check its consistency and validity, with differences in data collection methods and intended use-cases highlighted. Using a subset of the data, the online ratings were validated against an in-person data collection experiment, with the findings strongly correlating. The AAD is used to train a basic affect-prediction model, and the results are discussed. Uses of this dataset include human-emotion research, cultural studies, other affect-based research, and industry applications such as audio post-production, gaming, and user-interface design.
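
    The abstract describes training a basic affect-prediction model on the arousal and valence ratings, but does not specify file formats or the model used. The minimal Python sketch below shows one plausible way to pair per-sound ratings with precomputed acoustic features and fit a simple regressor; the file names, column names, and choice of regressor are illustrative assumptions, not the authors' method.

        # Minimal affect-prediction sketch. All file and column names below are
        # hypothetical: the AAD's actual distribution format is not given here.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        ratings = pd.read_csv("aad_ratings.csv")    # assumed columns: sound_id, valence, arousal
        features = pd.read_csv("aad_features.csv")  # assumed per-sound acoustic features (e.g. MFCC means)

        data = ratings.merge(features, on="sound_id")
        X = data.drop(columns=["sound_id", "valence", "arousal"])
        y = data[["valence", "arousal"]]

        # Hold out 20% of sounds to estimate how well affect can be predicted.
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        pred = model.predict(X_test)

        print("valence R^2:", r2_score(y_test["valence"], pred[:, 0]))
        print("arousal R^2:", r2_score(y_test["arousal"], pred[:, 1]))

        # The paper's online vs in-person validation could be checked similarly,
        # e.g. with scipy.stats.pearsonr on paired rating columns, if provided.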
