Towards accelerating model parallelism in distributed deep learning systems.
Choi, Hyeonseong; Lee, Byung Hyun; Chun, Se Young; Lee, Jaehwan.
Affiliation
  • Choi H; Department of Computer Engineering, Korea Aerospace University, Goyang, South Korea.
  • Lee BH; Department of Electrical and Computer Engineering, Seoul, South Korea.
  • Chun SY; Department of Electrical and Computer Engineering, Seoul, South Korea.
  • Lee J; INMC & IPAI, Seoul National University, Seoul, South Korea.
PLoS One ; 18(11): e0293338, 2023.
Article in English | MEDLINE | ID: mdl-37917655
ABSTRACT
Modern deep neural networks often cannot be trained on a single GPU due to large model and data sizes. Model parallelism splits a model across multiple GPUs, but making it scalable and seamless is challenging because of the communication overhead of sharing information among GPUs. Specifically, we identify two key issues that make such parallelism inefficient and inaccurate: an efficient pipelining technique is crucial to maximize GPU utilization, and normalization layers in deep neural networks may affect performance because mini-batch statistics are shared differently across GPUs. In this work, we address these issues by investigating efficient pipelining for model parallelism and effective normalizations in model and data parallelism when training a model with a large mini-batch on multiple GPUs, so that model accuracy is not compromised. First, we propose a novel method to search for an optimal micro-batch size for model parallelism, considering the number of GPUs and the memory size. For efficient pipelining, a mini-batch is usually divided into smaller batches called micro-batches; to maximize the utilization of GPU computing resources, training should be performed with the optimal micro-batch size. Our proposed micro-batch size search algorithm increased image throughput by up to 12% and improved the trainable mini-batch size by 25% compared with the conventional model parallelism method. Second, we investigate normalizations in distributed deep learning training for different parallelisms. Our experiments with different normalization methods suggest that performance with batch normalization can be improved by sharing batch information among GPUs when performing data parallelism. We also confirmed that group normalization helped minimize accuracy degradation when performing model parallelism with pipelining and yielded consistent accuracy across diverse mini-batch sizes.
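The abstract does not reproduce the paper's search algorithm, but the trade-off it describes can be sketched with the standard GPipe-style fill/drain pipeline model: splitting a mini-batch of size N into m micro-batches on p pipeline stages takes roughly (m + p - 1) stage slots, so smaller micro-batches reduce the pipeline bubble but add per-launch overhead, while GPU memory caps the micro-batch size. The cost constants (`overhead`, `per_sample`) and the `mem_limit` parameter below are illustrative assumptions, not values from the paper.

```python
def divisors(n: int) -> list[int]:
    """Micro-batch sizes that evenly divide the mini-batch."""
    return [d for d in range(1, n + 1) if n % d == 0]

def pipeline_time(mini_batch: int, micro_batch: int, num_gpus: int,
                  overhead: float, per_sample: float) -> float:
    """GPipe-style fill/drain estimate: (m + p - 1) stage slots,
    each costing a fixed launch overhead plus per-sample compute."""
    m = mini_batch // micro_batch
    return (m + num_gpus - 1) * (overhead + micro_batch * per_sample)

def search_micro_batch(mini_batch: int, num_gpus: int, mem_limit: int,
                       overhead: float = 1.0, per_sample: float = 0.1) -> int:
    """Pick the feasible micro-batch size (divides the mini-batch and
    fits in per-GPU memory) that minimizes the modeled pipeline time."""
    feasible = [b for b in divisors(mini_batch) if b <= mem_limit]
    return min(feasible, key=lambda b: pipeline_time(
        mini_batch, b, num_gpus, overhead, per_sample))
```

With these toy constants, `search_micro_batch(256, num_gpus=4, mem_limit=64)` settles on an intermediate size (32) rather than the smallest or largest feasible one, illustrating why an explicit search is worthwhile.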
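The robustness of group normalization to micro-batching can be seen directly from its definition: statistics are computed per sample and per channel group, never across the batch dimension, so splitting a mini-batch into micro-batches cannot change the result. A minimal NumPy sketch (not the paper's implementation; learnable scale/shift parameters are omitted):

```python
import numpy as np

def group_norm(x: np.ndarray, num_groups: int, eps: float = 1e-5) -> np.ndarray:
    """Group normalization over an (N, C, H, W) tensor: mean/variance
    are taken per sample and per channel group, independent of N."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)
```

Because no statistic crosses samples, normalizing two micro-batches separately and concatenating the results is numerically identical to normalizing the whole mini-batch at once; batch normalization lacks this property unless batch statistics are explicitly synchronized across GPUs.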
Subjects

Full text: 1 Database: MEDLINE Main subject: Deep Learning Language: English Year of publication: 2023 Document type: Article