Computer Science > Machine Learning

arXiv:2402.15627 (cs) [Submitted on 23 Feb 2024]

Title: MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs

Authors: Ziheng Jiang, Haibin Lin, Yinmin Zhong, Qi Huang, Yangrui Chen, Zhi Zhang, Yanghua Peng, Xiang Li, Cong Xie, Shibiao Nong, Yulu Jia, Sun He, Hongmin Chen, Zhihao Bai, Qi Hou, Shipeng Yan, Ding Zhou, Yiyao Sheng, Zhuo Jiang, Haohan Xu, Haoran Wei, Zhang Zhang, Pengfei Nie, Leqi Zou, Sida Zhao, Liang Xiang, Zherui Liu, Zhe Li, Xiaoying Jia, Jianxi Ye, Xin Jin, Xin Liu

Abstract: We present the design, implementation, and engineering experience of building and deploying MegaScale, a production system for training large language models (LLMs) at the scale of more than 10,000 GPUs. Training LLMs at this scale brings unprecedented challenges to training efficiency and stability. We take a full-stack approach that co-designs the algorithmic and system components across model block and optimizer design, computation and communication overlapping, operator optimization, data pipeline, and network performance tuning. Maintaining high efficiency throughout the training process (i.e., stability) is an important consideration in production, given the long duration of LLM training jobs. Many hard stability issues emerge only at large scale, and in-depth observability is the key to addressing them. We develop a set of diagnosis tools to monitor system components and events deep in the stack, identify root causes, and derive effective techniques to achieve fault tolerance and mitigate stragglers. MegaScale achieves 55.2% Model FLOPs Utilization (MFU) when training a 175B LLM on 12,288 GPUs, improving the MFU by 1.34x compared to Megatron-LM. We share our operational experience in identifying and fixing failures and stragglers. We hope that by articulating the problems and sharing our experience from a systems perspective, this work can inspire future LLM systems research.

Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC)

Cite as: arXiv:2402.15627 [cs.LG] (or arXiv:2402.15627v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2402.15627

Submission history
From: Ziheng Jiang
[v1] Fri, 23 Feb 2024 22:10:59 UTC (2,516 KB)
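For context on the headline 55.2% figure: Model FLOPs Utilization (MFU) is the ratio of the model FLOPs a training run actually sustains to the aggregate peak FLOPs of the hardware. Below is a minimal sketch of the commonly used MFU formula, assuming the standard approximation of roughly 6 * N parameters FLOPs per token for a combined forward and backward pass; the hardware throughput and token-rate numbers in the example are illustrative assumptions, not figures taken from the MegaScale paper.

```python
def mfu(n_params: float, tokens_per_second: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """MFU = achieved model FLOPs/s divided by aggregate peak FLOPs/s.

    Uses the common approximation that a transformer spends about
    6 * n_params FLOPs per token for forward plus backward
    (attention FLOPs ignored for simplicity).
    """
    achieved_flops_per_s = 6 * n_params * tokens_per_second
    peak_flops_per_s = num_gpus * peak_flops_per_gpu
    return achieved_flops_per_s / peak_flops_per_s


# Illustrative numbers only (assumed, not from the paper):
# a 175B-parameter model on 12,288 GPUs, with a hypothetical
# 312 TFLOPS peak BF16 per GPU and a hypothetical cluster-wide
# throughput of 2 million tokens per second.
example = mfu(
    n_params=175e9,
    tokens_per_second=2.0e6,
    num_gpus=12_288,
    peak_flops_per_gpu=312e12,
)
print(f"MFU ~ {example:.1%}")  # prints roughly 55% with these assumed inputs
```

With these assumed inputs the ratio lands near the paper's reported range, which illustrates why MFU is a convenient scale-independent efficiency metric: it folds model size, throughput, and cluster size into a single percentage.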