Manchester Metropolitan University's Research Repository

Modelling Multiple Language Learning in a Developmental Cognitive Architecture

Giorgi, Ioanna and Golosio, Bruno and Esposito, Massimo and Cangelosi, Angelo and Masala, Giovanni L (2020) Modelling Multiple Language Learning in a Developmental Cognitive Architecture. IEEE Transactions on Cognitive and Developmental Systems. p. 1. ISSN 2379-8920




In this work, we model multiple natural language learning in a developmental, neuroscience-inspired architecture. The ANNABELL model (Artificial Neural Network with Adaptive Behaviour Exploited for Language Learning) is a large-scale neural network; however, unlike most deep learning methods that solve natural language processing (NLP) tasks, it is not an empirical engineering solution to specific NLP problems. Rather, its organisation complies with findings from cognitive neuroscience, particularly multi-compartment working memory models. The system is trained on a corpus of text-based exchanges of developmental complexity to assess the level of cognitive development required for language acquisition and the robustness achieved when learning four languages simultaneously. The selected languages (Greek, Italian, and Albanian, in addition to English) differ significantly in structure and complexity. The system was first validated in each language alone and then compared with open-ended cumulative training, in which the languages are learned jointly before being queried in a random language and in random order. We aimed to assess whether the model could learn the languages together to the same degree of skill as learning each apart. Moreover, we explored its generalisation skill on multilingual context questions and its ability to elaborate a short text of preschool literature. We also verified whether the system could follow a dialogue coherently and cohesively, keeping track of its previous answers and recalling them in subsequent queries. The results show that the architecture developed broad language processing functionalities, with satisfactory performance in each language trained individually, and maintained high accuracy when the languages were acquired cumulatively.
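The two evaluation regimes described above (each language validated alone versus open-ended cumulative training followed by querying in a random language and random order) can be sketched as follows. This is a minimal toy illustration of the experimental protocol only: the `ToyLearner` class, its `train`/`answer` interface, and the tiny corpora are hypothetical stand-ins and do not reflect the actual ANNABELL implementation or its training data.

```python
import random

class ToyLearner:
    """Hypothetical stand-in for a language-learning system (not ANNABELL's API)."""

    def __init__(self):
        self.seen = set()

    def train(self, language, corpus):
        # Record (language, sentence) pairs as a toy proxy for learning.
        for sentence in corpus:
            self.seen.add((language, sentence))

    def answer(self, language, query):
        # Count a query as answered correctly iff it was trained in that language.
        return (language, query) in self.seen

# Illustrative placeholder corpora, one per language in the study.
CORPORA = {
    "English": ["what is your name", "the cat sat"],
    "Greek": ["pos se lene", "i gata katse"],
    "Italian": ["come ti chiami", "il gatto sedeva"],
    "Albanian": ["si quhesh", "macja u ul"],
}

def accuracy(model, languages):
    # Query with a random language, in random order, as in the cumulative test.
    queries = [(lang, q) for lang in languages for q in CORPORA[lang]]
    random.shuffle(queries)
    correct = sum(model.answer(lang, q) for lang, q in queries)
    return correct / len(queries)

# Regime 1: validate the system in each language alone.
single_scores = {}
for lang, corpus in CORPORA.items():
    m = ToyLearner()
    m.train(lang, corpus)
    single_scores[lang] = accuracy(m, [lang])

# Regime 2: open-ended cumulative training, languages learned jointly.
joint = ToyLearner()
for lang, corpus in CORPORA.items():
    joint.train(lang, corpus)
cumulative_score = accuracy(joint, list(CORPORA))
```

Comparing `single_scores` against `cumulative_score` mirrors the paper's question of whether joint acquisition degrades per-language skill; here the toy learner trivially retains everything, whereas the actual study measures this on a real developmental corpus.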
