e-space
Manchester Metropolitan University's Research Repository

A hybrid parallelization approach for distributed and scalable deep learning

Akintoye, Samson B, Han, Liangxiu (ORCID: https://orcid.org/0000-0003-2491-7473), Zhang, Xin (ORCID: https://orcid.org/0000-0001-7844-593X), Chen, Haoming and Zhang, Daoqiang (2022) A hybrid parallelization approach for distributed and scalable deep learning. IEEE Access. ISSN 2169-3536

Published Version
Available under License Creative Commons Attribution.


Abstract

Recently, Deep Neural Networks (DNNs) have recorded significant success in handling medical and other complex classification tasks. However, as the sizes of DNN models and the available datasets increase, the training process becomes more complex and computationally intensive, usually taking longer to complete. In this work, we propose a generic, full end-to-end hybrid parallelization approach that combines model and data parallelism for efficient, distributed and scalable training of DNN models. We also propose a Genetic Algorithm Based Heuristic Resources Allocation (GABRA) mechanism for the optimal distribution of model partitions across the available GPUs to optimize computing performance. We have applied the proposed approach to a real use case based on a 3D Residual Attention Deep Neural Network (3D-ResAttNet) for efficient Alzheimer's Disease (AD) diagnosis on multiple GPUs, and compared it with existing state-of-the-art parallel methods. The experimental evaluation shows that our approach is on average 20% better than existing parallel methods in terms of training time, and achieves almost linear speedup with little or no loss of accuracy compared with existing non-parallel DNN models.
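The GABRA mechanism described above searches for an assignment of model partitions to GPUs using a genetic algorithm. The following is a minimal, self-contained sketch of that idea, not the authors' implementation: it assumes each partition has a known compute cost and each GPU a relative capacity, and evolves assignments that minimize the peak relative load on any GPU. All names and parameters (`gabra_sketch`, population size, mutation rate) are illustrative.

```python
import random

def gabra_sketch(partition_costs, gpu_capacities,
                 pop_size=30, generations=60, seed=0):
    """Toy GA: assign each model partition to a GPU so that the
    most-loaded GPU (load relative to its capacity) is minimized."""
    rng = random.Random(seed)
    n_parts, n_gpus = len(partition_costs), len(gpu_capacities)

    def peak_load(assign):
        # relative load per GPU; lower peak = fitter individual
        loads = [0.0] * n_gpus
        for part, gpu in enumerate(assign):
            loads[gpu] += partition_costs[part]
        return max(load / cap for load, cap in zip(loads, gpu_capacities))

    # initial population: random partition-to-GPU assignments
    pop = [[rng.randrange(n_gpus) for _ in range(n_parts)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_load)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_parts)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # random mutation
                child[rng.randrange(n_parts)] = rng.randrange(n_gpus)
            children.append(child)
        pop = survivors + children
    return min(pop, key=peak_load)

# Example: 6 partitions of differing cost, 3 equally capable GPUs
best = gabra_sketch([4, 3, 3, 2, 2, 1], [1.0, 1.0, 1.0])
```

The fitness function here balances load only; the paper's mechanism targets overall computing performance, which may involve additional factors such as communication cost between partitions.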
