e-space
Manchester Metropolitan University's Research Repository

    Unreliable Narrator: Reparative Approaches to Harmful Biases in AI Storytelling for the HE Classroom and Future Creative Industries

    Jackson, David (ORCID: https://orcid.org/0000-0002-7985-2104) and Courneya, Marsha (2023) Unreliable Narrator: Reparative Approaches to Harmful Biases in AI Storytelling for the HE Classroom and Future Creative Industries. Brazilian Creative Industries Journal, 3 (2). pp. 50-66. ISSN 2763-8677

    Published Version
    Available under License Creative Commons Attribution.

    Download (1MB)

    Abstract

    Generative AI has the potential to amplify marginalised storytellers and their narratives through powerful virtual production tools and the automation of processes such as artworking, scriptwriting and video editing (Ramesh et al., 2022; Brown et al., 2020; Esser et al., 2023). However, the adoption of generative AI into media workflows and outputs risks compounding cultural biases from dominant storytelling traditions. Generative AI systems typically require many millions of novels, screenplays, images and other media as input to generate their synthetic narrative output, and the stories they produce can carry the biases of those source texts in the form of stereotypical character tropes, dialogue, word-image associations and story arcs (Bianchi et al., 2022). Whilst there is significant discussion of these biases, little work exists to date on how to prepare storytellers for the problems generative AI poses in production. How can we engage without further isolating marginalised storytellers, and in a way that encourages new voices to be heard? This paper examines the potential issues that generative AI technologies raise for marginalised students in the creative education sector and presents case studies that chart a pathway towards a reparative approach for creative producers and educators. It introduces some of the issues arising from the biases reproduced by large language models (LLMs) and suggests strategies for incorporating awareness of these biases into the creative process. To evidence and illustrate our approach, two short case studies are provided: Algowritten, an AI short story project led by the authors with other volunteers as part of the Mozilla Foundation's Trustworthy AI initiative to identify patterns of bias in AI-written narratives, and Stepford, a novel reflective AI system designed to highlight instances of gender bias in generated text segments. Both case studies outline how reparative approaches to algorithmic creative production can highlight and mitigate the cultural biases endemic in generative media systems.
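    To make the idea of a reflective bias check concrete, the minimal sketch below flags occupation words whose surrounding context skews towards one set of gendered terms, in the spirit of what the abstract describes Stepford doing for generated text segments. The word lists, the `flag_gender_bias` function and the window-based skew test are illustrative assumptions for this sketch, not the published Stepford implementation.

```python
import re
from collections import Counter

# Illustrative lexicons -- our own assumptions for this sketch,
# not the lexicon used by the published Stepford system.
FEMININE = {"she", "her", "hers", "woman", "girl", "mother", "wife"}
MASCULINE = {"he", "him", "his", "man", "boy", "father", "husband"}
OCCUPATIONS = {"doctor", "nurse", "engineer", "secretary", "ceo", "assistant"}

def flag_gender_bias(text, window=3):
    """Flag occupation words whose surrounding context skews to one gender.

    Returns (occupation, feminine_count, masculine_count) tuples for each
    occupation word whose nearby gendered words are unevenly distributed.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    flags = []
    for i, tok in enumerate(tokens):
        if tok in OCCUPATIONS:
            # Look at a few tokens either side of the occupation word.
            nearby = tokens[max(0, i - window): i + window + 1]
            counts = Counter(t for t in nearby if t in FEMININE | MASCULINE)
            fem = sum(counts[t] for t in FEMININE)
            masc = sum(counts[t] for t in MASCULINE)
            if fem != masc:  # crude skew test; a real system would be statistical
                flags.append((tok, fem, masc))
    return flags

segment = "The doctor finished his rounds while the nurse checked her charts."
for occupation, fem, masc in flag_gender_bias(segment):
    print(f"'{occupation}' skews {'feminine' if fem > masc else 'masculine'}")
```

    Run on the sample segment, the sketch flags "doctor" as skewing masculine and "nurse" as feminine. A keyword-and-window heuristic like this is deliberately simple; a production system would need statistical tests and far richer lexicons, but it illustrates how a reflective layer can surface biased word-role associations back to the writer.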

    Impact and Reach

    Statistics

    Activity Overview (6 month trend)
    Downloads: 263
    Hits: 143

    Additional statistics for this record are available via IRStats2.

