Manchester Metropolitan University's Research Repository

    Layer-wise partitioning and merging for efficient and scalable deep learning

    Akintoye, Samson B, Han, Liangxiu (ORCID: https://orcid.org/0000-0003-2491-7473), Lloyd, Huw (ORCID: https://orcid.org/0000-0001-6537-4036), Zhang, Xin (ORCID: https://orcid.org/0000-0001-7844-593X), Dancey, Darren, Chen, Haoming and Zhang, Daoqiang (2023) Layer-wise partitioning and merging for efficient and scalable deep learning. Future Generation Computer Systems, 149. pp. 432-444. ISSN 0167-739X

    Published Version
    Available under License Creative Commons Attribution.



    Deep Neural Network (DNN) models are usually trained sequentially from one layer to another, which causes forward, backward and update locking problems, leading to long training times. Existing parallel strategies that mitigate these problems deliver suboptimal runtime performance. In this work, we propose a novel framework that combines layer-wise partitioning and merging with parallel forward and backward passes to improve training performance. The novelty of the proposed work consists of (1) a layer-wise partitioning and merging model that minimises communication overhead between devices without incurring the memory cost of existing strategies during training; and (2) a forward pass and backward pass parallelisation that addresses the update locking problem and minimises the total training cost. Experimental evaluation on real use cases shows that the proposed method outperforms state-of-the-art approaches in training speed, achieving almost linear speedup without compromising the accuracy of the non-parallel approach.
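    The layer-wise partitioning idea in the abstract can be illustrated with a small sketch. This is not the paper's implementation; it is a hypothetical greedy heuristic that merges adjacent layers into contiguous groups, one group per device, balancing estimated per-layer compute cost so that no device becomes a straggler. All names and cost figures are illustrative assumptions.

    ```python
    def partition_layers(layer_costs, num_devices):
        """Split a sequence of layers (given per-layer cost estimates) into
        contiguous groups, one per device, keeping each group's total cost
        near the overall average (illustrative heuristic, not the paper's
        algorithm)."""
        target = sum(layer_costs) / num_devices
        groups, current, load = [], [], 0.0
        for i, cost in enumerate(layer_costs):
            current.append(i)
            load += cost
            remaining_devices = num_devices - len(groups) - 1
            remaining_layers = len(layer_costs) - i - 1
            # Close the group when it reaches the per-device target, or when
            # exactly one layer remains per unfilled device.
            must_close = remaining_layers == remaining_devices
            if remaining_devices > 0 and (load >= target or must_close):
                groups.append(current)
                current, load = [], 0.0
        if current:
            groups.append(current)
        return groups

    # Example: 8 layers with uneven costs, merged into 3 device groups.
    costs = [4, 1, 1, 6, 2, 2, 3, 5]
    groups = partition_layers(costs, 3)
    ```

    Grouping adjacent layers this way keeps inter-device communication limited to the group boundaries, which is the intuition behind merging layers before assigning them to devices.
    
    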
