e-space
Manchester Metropolitan University's Research Repository

    Deanthropomorphising NLP: can a language model be conscious?

    Shardlow, Matthew (ORCID: https://orcid.org/0000-0003-1129-2750) and Przybyła, Piotr (ORCID: https://orcid.org/0000-0001-9043-6817) (2024) Deanthropomorphising NLP: can a language model be conscious? PLoS One, 19 (12). e0307521. ISSN 1932-6203

    Published Version. Available under License Creative Commons Attribution. Download (820 kB).

    Abstract

    This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer architecture can be sentient. Such claims have been made concerning the LaMDA model and the current wave of LLM-powered chatbots, such as ChatGPT. If confirmed, this claim would have serious ramifications for the Natural Language Processing (NLP) community, given the widespread use of similar models. Here, however, we take the position that such a large language model cannot be conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it as conscious. We justify this by analysing the Transformer architecture through the lens of Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and to consider the ethical implications of the task. To make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.

    Impact and Reach

    Statistics (6-month trend): 8 downloads, 12 hits. Additional statistics for this record are available via IRStats2.

