On the Performance and Memory Footprint of Distributed Training: An Empirical Study on Transformers

Z Lu, F Wang, Z Xu, F Yang, T Li - arXiv preprint arXiv:2407.02081, 2024 - arxiv.org
Transformer models have emerged as powerful solutions to a wide array of multidisciplinary challenges, yet their deployment is significantly hindered by extensive computational and memory requirements, necessitating advanced and efficient distributed training methodologies. Prior research has examined the performance bottlenecks of distributed training and suggested optimization directions. However, such analyses often overlook three aspects unique to Transformer models: the specialized architecture, the dependency on various distributed strategies, and the requirement to balance computational and memory overhead. This paper bridges this gap by offering a comprehensive examination of the performance bottlenecks inherent in distributed training of Transformer models, leveraging both theoretical analysis and empirical investigation. We propose an analytical framework tailored to these unique aspects of Transformers, enabling a holistic evaluation of model architectures, distributed strategies, and resource consumption. Based on this framework, we conduct a comparative analysis of theoretical performance and systematically explore how various distributed training strategies fare in real-world scenarios. Most of the experimental results are well explained by the framework's analytical outcomes. Notably, our findings suggest an advantage of pipeline parallelism over data parallelism for Transformer models. We also shed light on some unexpected outcomes, such as increased total memory overhead caused by suboptimal model partitioning within pipeline parallelism, and we underscore the significance of communication block size and waiting time for further performance gains.
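To make the memory trade-off mentioned in the abstract concrete, the following is a minimal back-of-the-envelope sketch, not the paper's analytical framework. It assumes mixed-precision Adam training (roughly 16 bytes of model state per parameter), a fixed per-GPU activation budget, and a simple parameter-count estimate of about 12*h^2 per Transformer layer; the function names and all constants are illustrative assumptions. It contrasts per-GPU model-state memory under data parallelism, where every rank holds a full replica, with pipeline parallelism, where each stage holds roughly 1/N of the layers.

```python
# Illustrative sketch only (assumed constants, not the paper's framework):
# per-GPU memory for a Transformer trained with mixed-precision Adam,
# comparing pure data parallelism (DP) with pipeline parallelism (PP).

def transformer_params(layers: int, hidden: int, vocab: int) -> float:
    """Rough parameter count: ~12*h^2 per layer (attention + MLP) plus embeddings."""
    return layers * 12 * hidden**2 + vocab * hidden

def per_gpu_memory_gb(params: float, gpus: int, strategy: str,
                      activation_per_gpu_gb: float = 10.0) -> float:
    """Model-state memory per GPU in GB, plus an assumed activation budget.

    Mixed-precision Adam keeps roughly 16 bytes per parameter
    (fp16 weights + fp16 grads + fp32 master weights, momentum, variance).
    """
    bytes_per_param = 16
    if strategy == "dp":        # every rank holds the full model replica
        model_state = params * bytes_per_param
    elif strategy == "pp":      # each pipeline stage holds ~1/gpus of the layers
        model_state = params * bytes_per_param / gpus
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return model_state / 2**30 + activation_per_gpu_gb

if __name__ == "__main__":
    p = transformer_params(layers=32, hidden=4096, vocab=50_000)  # ~6.6B parameters
    for s in ("dp", "pp"):
        print(f"{s}: ~{per_gpu_memory_gb(p, gpus=8, strategy=s):.1f} GB per GPU")
```

Under these assumptions the data-parallel replica alone exceeds a single accelerator's memory, while an even 8-way pipeline split fits comfortably; uneven stage partitioning would raise the peak per-stage memory, which is the kind of effect the paper's partitioning analysis targets.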