Systems for deep learning training. Currently, I only summarize some arXiv papers here and put accepted papers into the conference section.
- Mayer, Ruben, and Hans-Arno Jacobsen. "Scalable Deep Learning on Distributed Infrastructures: Challenges, Techniques, and Tools." ACM Computing Surveys (CSUR) 53.1 (2020): 1-37. [Paper]
- Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning [arxiv] [GitHub]
- Qiao, Aurick, Willie Neiswanger, Qirong Ho, Hao Zhang, Gregory R. Ganger, and Eric P. Xing
- arXiv preprint arXiv:2008.12260 (2020).
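Pollux's central quantity is "goodput": system throughput multiplied by statistical efficiency, co-adapted with the batch size. Below is a minimal sketch of that objective; the throughput model and the gradient-noise-scale efficiency heuristic are illustrative assumptions, not Pollux's fitted models or API.

```python
# Hedged sketch of the Pollux "goodput" objective: goodput = throughput x statistical
# efficiency. The throughput model, noise scale, and candidate batch sizes are
# illustrative assumptions, not Pollux's fitted models.

def throughput(num_gpus: int, batch_size: int, per_gpu_rate: float = 100.0) -> float:
    """Hypothetical examples/sec: scales with GPUs, saturates with per-GPU batch size."""
    per_gpu_batch = batch_size / num_gpus
    return num_gpus * per_gpu_rate * per_gpu_batch / (per_gpu_batch + 32.0)

def statistical_efficiency(batch_size: int, noise_scale: float, base_batch: int) -> float:
    """Gradient-noise-scale heuristic: per-example progress shrinks as the batch grows."""
    return (base_batch + noise_scale) / (batch_size + noise_scale)

def goodput(num_gpus: int, batch_size: int, noise_scale: float = 512.0,
            base_batch: int = 128) -> float:
    return throughput(num_gpus, batch_size) * statistical_efficiency(
        batch_size, noise_scale, base_batch)

if __name__ == "__main__":
    # Co-adapt the batch size for a fixed 8-GPU allocation, as Pollux's agent would.
    best = max((goodput(8, m), m) for m in (128, 256, 512, 1024, 2048))
    print(f"best goodput {best[0]:.1f} at batch size {best[1]}")
```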
- Themis: Fair and Efficient GPU Cluster Scheduling. [Paper]
- Mahajan, K., Balasubramanian, A., Singhvi, A., Venkataraman, S., Akella, A., Phanishayee, A. and Chawla, S., 2020.
- In 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20) (pp. 289-304).
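Themis's fairness metric is finish-time fairness: a job's finish time under the shared allocation divided by its finish time with an exclusive 1/N cluster share, with the scheduler favoring the worst-off jobs. A minimal sketch of that metric follows; the job numbers are hypothetical, and Themis's actual partial-allocation auction is not reproduced.

```python
# Hedged sketch of Themis-style finish-time fairness: rho = T_shared / T_independent,
# where T_independent assumes an exclusive 1/N cluster share. Job numbers are
# illustrative assumptions; Themis's real mechanism is a partial-allocation auction.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    t_shared: float        # predicted finish time under current shared allocation (hours)
    t_independent: float   # predicted finish time with an exclusive 1/N cluster share

    @property
    def rho(self) -> float:
        return self.t_shared / self.t_independent

jobs = [Job("bert", 12.0, 8.0), Job("resnet", 6.0, 5.0), Job("gpt", 30.0, 10.0)]
# Offer resources to the most unfairly treated jobs (largest rho) first.
for job in sorted(jobs, key=lambda j: j.rho, reverse=True):
    print(f"{job.name}: rho = {job.rho:.2f}")
```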
- Tiresias: A GPU cluster manager for distributed deep learning. [Paper] [GitHub]
- Gu, J., Chowdhury, M., Shin, K.G., Zhu, Y., Jeon, M., Qian, J., Liu, H. and Guo, C., 2019.
- In 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19) (pp. 485-500).
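Tiresias prioritizes jobs by least attained service measured in two dimensions (GPUs × time). A minimal sketch of that priority rule; the job fields are illustrative assumptions, and the discretized multi-level queues and Gittins-index variant are omitted.

```python
# Hedged sketch of Tiresias-style 2D least-attained-service priority: attained service
# is GPUs x time consumed so far, and jobs with less of it are scheduled first.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    num_gpus: int
    seconds_run: float

    @property
    def attained_service(self) -> float:
        return self.num_gpus * self.seconds_run   # 2D: GPUs x time

def pick_next(jobs):
    return min(jobs, key=lambda j: j.attained_service)

jobs = [Job("resnet", 4, 3600), Job("bert", 8, 600), Job("gan", 2, 0)]
print(pick_next(jobs).name)   # "gan": least attained GPU-time so far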
- Microsoft OpenPAI HiveDScheduler: As a standalone component of Microsoft OpenPAI, HiveD is designed to be a Kubernetes Scheduler Extender for multi-tenant GPU clusters. [Project]
- Gandiva: Introspective cluster scheduling for deep learning. [Paper]
- Xiao, Wencong, et al. (OSDI 2018)
- Summary: Improves cluster efficiency for deep learning jobs (e.g., hyper-parameter tuning workloads) through introspective, hardware-utilization-aware scheduling such as time-slicing, packing, and migration; see the sketch below.
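A minimal sketch of the time-slicing idea: when jobs outnumber GPUs, each GPU round-robins over its assigned jobs in fixed quanta, suspending and resuming at iteration boundaries. Job names, the static job-to-GPU partition, and the quantum count are illustrative assumptions, not Gandiva's actual mechanism.

```python
# Hedged sketch of Gandiva-style time-slicing across oversubscribed GPUs.
from collections import deque

def time_slice(jobs, num_gpus, quanta):
    queues = [deque(jobs[g::num_gpus]) for g in range(num_gpus)]   # static partition of jobs
    for t in range(quanta):
        for gpu, q in enumerate(queues):
            if not q:
                continue
            job = q.popleft()
            print(f"quantum {t}: GPU {gpu} runs {job}")            # run for one time quantum
            q.append(job)                                          # suspend and requeue

time_slice(["job-a", "job-b", "job-c", "job-d"], num_gpus=2, quanta=2)
```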
- Optimus: an efficient dynamic resource scheduler for deep learning clusters [Paper]
- Peng, Yanghua, et al. (EuroSys 2018)
- Summary: Job scheduling on deep learning clusters, with job completion time as the metric; see the sketch below.
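A minimal sketch of the greedy, marginal-gain allocation idea: repeatedly give the next worker to the job whose estimated remaining completion time drops the most. The speed and remaining-epoch estimators here are hypothetical placeholders, not Optimus's fitted performance models.

```python
# Hedged sketch of Optimus-style greedy allocation by marginal completion-time gain.

def remaining_time(job, workers):
    # Hypothetical: remaining_epochs / (epochs per hour at this worker count),
    # with diminishing returns as workers are added.
    speed = job["base_speed"] * workers / (1 + 0.1 * workers)
    return job["remaining_epochs"] / speed

def allocate(jobs, total_workers):
    alloc = {j["name"]: 1 for j in jobs}                          # one worker each to start
    for _ in range(total_workers - len(jobs)):
        gains = {j["name"]: remaining_time(j, alloc[j["name"]]) -
                            remaining_time(j, alloc[j["name"]] + 1)
                 for j in jobs}                                    # marginal gain of +1 worker
        best = max(gains, key=gains.get)
        alloc[best] += 1
    return alloc

jobs = [{"name": "vgg", "remaining_epochs": 40, "base_speed": 2.0},
        {"name": "lstm", "remaining_epochs": 10, "base_speed": 1.0}]
print(allocate(jobs, total_workers=8))
```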
- Multi-tenant GPU clusters for deep learning workloads: Analysis and implications. [Paper] [dataset]
- Jeon, Myeongjae, Shivaram Venkataraman, Junjie Qian, Amar Phanishayee, Wencong Xiao, and Fan Yang
- Slurm: A Highly Scalable Workload Manager [GitHub]
- ZeRO: Memory Optimizations Toward Training Trillion Parameter Models (Microsoft). [Paper] [GitHub]
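ZeRO's stage-1 idea is to shard optimizer state across data-parallel ranks so each rank stores only 1/N of it, exchanging gradient shards and updated parameter shards instead of replicating everything. A minimal single-process numpy sketch of that partitioning; the flat layout, rank count, and simplified Adam update (bias correction omitted) are assumptions, and DeepSpeed's real implementation also partitions gradients/parameters (stages 2-3) and overlaps communication.

```python
# Hedged sketch of ZeRO stage-1: shard Adam moments across data-parallel ranks.
import numpy as np

world_size = 4
params = np.random.randn(1024).astype(np.float32)        # flattened model parameters
shards = np.array_split(np.arange(params.size), world_size)

# Each rank keeps Adam moments only for its own shard instead of the full vectors.
opt_state = {rank: {"m": np.zeros(len(idx)), "v": np.zeros(len(idx))}
             for rank, idx in enumerate(shards)}

def step(rank: int, full_grad: np.ndarray, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    idx = shards[rank]
    g = full_grad[idx]                    # a reduce-scatter would deliver exactly this shard
    st = opt_state[rank]
    st["m"] = b1 * st["m"] + (1 - b1) * g
    st["v"] = b2 * st["v"] + (1 - b2) * g * g
    params[idx] -= lr * st["m"] / (np.sqrt(st["v"]) + eps)   # updated shard is then all-gathered

step(rank=0, full_grad=np.random.randn(params.size).astype(np.float32))
```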
- Class materials for a distributed systems lecture series [GitHub]
- A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters [Paper] [GitHub]
- Yimin Jiang, Yibo Zhu, Chang Lan, Bairen Yi, Yong Cui, Chuanxiong Guo. (OSDI 2020)
- Summary: State-of-the-art parameter-server design (BytePS) that exploits spare CPU and bandwidth resources in heterogeneous GPU/CPU clusters; see the sketch below.
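A minimal sketch of the push/pull aggregation loop that a parameter server (or BytePS's CPU-side summation) performs: workers push local gradients, the server averages and applies them, and workers pull the updated parameters. Single-process and synchronous for clarity; the class and toy values are assumptions, not BytePS's API.

```python
# Hedged sketch of a synchronous parameter-server push/pull loop.
import numpy as np

class ParameterServer:
    def __init__(self, init_params: np.ndarray, lr: float = 0.1):
        self.params, self.lr, self._buf = init_params.copy(), lr, []

    def push(self, grad: np.ndarray):
        self._buf.append(grad)                          # one gradient per worker

    def pull(self) -> np.ndarray:
        if self._buf:                                   # aggregate once workers have pushed
            self.params -= self.lr * np.mean(self._buf, axis=0)
            self._buf = []
        return self.params.copy()

ps = ParameterServer(np.zeros(4))
for worker_grad in (np.ones(4), 2 * np.ones(4), 3 * np.ones(4)):
    ps.push(worker_grad)                                # each worker pushes its local gradient
print(ps.pull())                                        # workers then pull the averaged update
```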
- PipeDream: Generalized Pipeline Parallelism for DNN Training (SOSP 2019) [Paper] [GitHub]
- Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks. [Paper] [GitHub]
- Zhihao Jia, Sina Lin, Charles R. Qi, and Alex Aiken. (ICML 2018)
- Mesh-TensorFlow: Deep Learning for Supercomputers [Paper] [GitHub]
- Shazeer, Noam, Youlong Cheng, Niki Parmar, Dustin Tran, et al. (NIPS 2018)
- Summary: Generalizes data parallelism by letting arbitrary tensor dimensions be split across a mesh of devices (model parallelism), used to train large language models; see the sketch below.
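A minimal numpy sketch of the underlying idea: a layer's output dimension is split across two simulated devices and the partial results are concatenated, exactly as splitting the batch dimension would recover data parallelism. This mimics the concept only, not the mtf API.

```python
# Hedged sketch of the Mesh-TensorFlow idea: map a tensor dimension onto a device mesh.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))           # [batch, d_in], replicated on both devices
w = rng.standard_normal((16, 32))          # [d_in, d_out]

# Layout: the d_out dimension is split across a 2-device mesh (model parallelism).
w_shards = np.split(w, 2, axis=1)          # each device holds [d_in, d_out/2]
partials = [x @ w_shard for w_shard in w_shards]    # per-device partial outputs
y = np.concatenate(partials, axis=1)       # "all-gather" along the split dimension

assert np.allclose(y, x @ w)               # identical to the unsharded computation
```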
- PyTorch-BigGraph: A Large-scale Graph Embedding System [Paper] [GitHub]
- Lerer, Adam and Wu, Ledell and Shen, Jiajun and Lacroix, Timothee and Wehrstedt, Luca and Bose, Abhijit and Peysakhovich, Alex (SysML 2019)
- Beyond data and model parallelism for deep neural networks [Paper] [GitHub]
- Jia, Zhihao, Matei Zaharia, and Alex Aiken. (SysML 2019)
- Summary: SOAP (sample, operator, attribute, and parameter) parallelism. Takes the operator graph and device topology as input to an execution optimizer, which runs an MCMC search algorithm guided by an execution simulator; see the sketch below.
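A minimal sketch of that search loop: randomly perturb one operator's parallelization choice, estimate the cost with a simulator, and accept or reject with a Metropolis criterion. The toy strategy space and cost function stand in for FlexFlow's execution simulator and are assumptions.

```python
# Hedged sketch of FlexFlow-style MCMC search over parallelization strategies.
import math
import random

random.seed(0)
ops = ["conv1", "conv2", "fc"]
degrees = [1, 2, 4, 8]                       # candidate parallelism degrees per operator

def simulated_cost(strategy):
    """Stand-in for the execution simulator: compute time plus a communication penalty."""
    compute = sum(100.0 / d for d in strategy.values())
    comm = sum(5.0 * (d - 1) for d in strategy.values())
    return compute + comm

current = {op: 1 for op in ops}
best = dict(current)
temperature = 10.0
for step in range(500):
    proposal = dict(current)
    proposal[random.choice(ops)] = random.choice(degrees)    # perturb one operator
    delta = simulated_cost(proposal) - simulated_cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = proposal                                    # Metropolis accept
    if simulated_cost(current) < simulated_cost(best):
        best = dict(current)
    temperature *= 0.99
print(best, simulated_cost(best))
```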
- Device placement optimization with reinforcement learning [Paper]
- Mirhoseini, Azalia, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. (ICML 17)
- Summary: Uses REINFORCE to learn a device placement policy over grouped operations; the search requires a large number of GPUs. See the sketch below.
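A minimal sketch of REINFORCE applied to placement: a softmax policy picks a device per operation group, the reward is the negative (simulated) runtime, and the logits get a score-function gradient update. The tabular policy and toy runtime model are assumptions; the paper uses a sequence-to-sequence policy and real measured runtimes.

```python
# Hedged sketch of REINFORCE for device placement over grouped operations.
import numpy as np

rng = np.random.default_rng(0)
num_groups, num_devices = 6, 2
group_cost = rng.uniform(1.0, 4.0, size=num_groups)   # hypothetical per-group compute cost
logits = np.zeros((num_groups, num_devices))          # tabular policy parameters

def simulated_runtime(placement):
    """Makespan of the most loaded device (communication ignored for brevity)."""
    loads = np.zeros(num_devices)
    for g, d in enumerate(placement):
        loads[d] += group_cost[g]
    return loads.max()

lr, baseline = 0.1, 0.0
for step in range(300):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    placement = [rng.choice(num_devices, p=probs[g]) for g in range(num_groups)]
    reward = -simulated_runtime(placement)
    baseline = 0.9 * baseline + 0.1 * reward            # moving-average baseline
    for g, d in enumerate(placement):                    # grad log pi = (onehot - probs)
        grad = -probs[g]
        grad[d] += 1.0
        logits[g] += lr * (reward - baseline) * grad
print(simulated_runtime(np.argmax(logits, axis=1)))      # greedy placement after training
```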
- Spotlight: Optimizing device placement for training deep neural networks [Paper]
- Gao, Yuanxiang, Li Chen, and Baochun Li (ICML 18)
- GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism [Paper] [GitHub] [News]
- Huang, Yanping, et al. (arXiv preprint arXiv:1811.06965 (2018))
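GPipe splits each mini-batch into micro-batches and streams them through sequential stages, leaving a pipeline "bubble" whose idle fraction shrinks as the micro-batch count grows. A minimal sketch that prints a forward-only fill/drain schedule and the bubble fraction; GPipe additionally re-materializes activations and runs the symmetric backward pipeline, which this sketch does not model.

```python
# Hedged sketch of a GPipe-style forward pipeline schedule: with S stages and M
# micro-batches, stage s works on micro-batch (t - s) at tick t, and the bubble
# (idle) fraction is (S - 1) / (M + S - 1).

def schedule(num_stages: int, num_micro: int):
    ticks = num_micro + num_stages - 1
    for t in range(ticks):
        row = []
        for s in range(num_stages):
            mb = t - s
            row.append(f"F{mb}" if 0 <= mb < num_micro else "--")
        print(f"t={t:2d}  " + "  ".join(row))
    bubble = (num_stages - 1) / (num_micro + num_stages - 1)
    print(f"bubble fraction = {bubble:.2f}")

schedule(num_stages=4, num_micro=8)
```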
- Horovod: Distributed training framework for TensorFlow, Keras, and PyTorch. [GitHub]
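Horovod's API follows a small pattern: initialize, pin each process to its local GPU, scale the learning rate by the number of workers, wrap the optimizer, and broadcast the initial state from rank 0. A minimal PyTorch sketch of that pattern (the model and data are placeholders), launched with `horovodrun -np 4 python train.py`:

```python
# Minimal Horovod + PyTorch data-parallel pattern (model and data are placeholders).
import torch
import horovod.torch as hvd

hvd.init()                                              # 1. initialize Horovod
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())             # 2. pin this process to one GPU

model = torch.nn.Linear(32, 10)                         # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())   # 3. scale LR

# 4. wrap the optimizer so gradients are averaged via ring all-reduce
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
# 5. start all workers from identical weights
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(10):
    x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))   # placeholder batch per worker
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    if hvd.rank() == 0 and step % 5 == 0:
        print(f"step {step} loss {loss.item():.3f}")
```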
- Distributed machine learning infrastructure for large-scale robotics research [GitHub] [Blog]
- A Generic Communication Scheduler for Distributed DNN Training Acceleration [Paper] [BytePS]
- Peng, Y., Zhu, Y., Chen, Y., Bao, Y., Yi, B., Lan, C., Wu, C. and Guo, C. (SOSP 2019)
- Summary: Communication scheduler (ByteScheduler) that partitions and prioritizes tensor transmissions so communication overlaps better with computation; see the sketch below.
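A minimal sketch of the partition-and-prioritize idea: large gradient tensors are split into chunks, and chunks belonging to front layers (needed first by the next forward pass) are sent first. The chunk size, layer indices, and tensor sizes are illustrative assumptions, not ByteScheduler's implementation.

```python
# Hedged sketch of ByteScheduler-style tensor partitioning and priority scheduling.
import heapq

CHUNK_BYTES = 4 << 20                      # 4 MB partitions

def enqueue(queue, layer_index, tensor_bytes):
    """Lower layer_index = needed earlier by the next forward pass = higher priority."""
    for part, offset in enumerate(range(0, tensor_bytes, CHUNK_BYTES)):
        size = min(CHUNK_BYTES, tensor_bytes - offset)
        heapq.heappush(queue, (layer_index, part, size))

queue = []
enqueue(queue, layer_index=23, tensor_bytes=96 << 20)   # gradients become ready back-to-front,
enqueue(queue, layer_index=0, tensor_bytes=2 << 20)     # but front-layer chunks jump the queue
while queue:
    layer, part, size = heapq.heappop(queue)
    print(f"send layer {layer} chunk {part} ({size >> 20} MB)")
```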
- Oobleck: Resilient Distributed Training of Large Models Using Pipeline Templates [Paper] [GitHub]
- Jang, Insu and Yang, Zhenning and Zhang, Zhen and Jin, Xin and Chowdhury, Mosharaf, (SOSP 2023)
- Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs [Paper] [GitHub]
- John Thorpe, Pengzhan Zhao, Jonathan Eyolfson, Yifan Qiao, Zhihao Jia, Minjia Zhang, Ravi Netravali and Guoqing Harry Xu, (NSDI 2023)
- Varuna: scalable, low-cost training of massive deep learning models [Paper] [GitHub]
- Athlur, Sanjith and Saran, Nitika and Sivathanu, Muthian and Ramjee, Ramachandran and Kwatra, Nipun, (EuroSys 2022)
- Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training [Paper] [GitHub] [ml.energy] [The ML.ENERGY Initiative]
- Jie You, Jae-Won Chung, and Mosharaf Chowdhury (NSDI 2023)
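Zeus trades off energy against training time with a cost of the form eta * energy + (1 - eta) * max_power * time, and picks the (batch size, GPU power limit) configuration minimizing it. A minimal sketch under that reading; the measured time/energy numbers and power cap below are hypothetical, whereas Zeus gathers them online across recurring training jobs.

```python
# Hedged sketch of a Zeus-style energy/time cost used to pick a training configuration.

MAX_POWER_W = 300.0    # GPU power cap, making the time term comparable to energy (Joules)

def zeus_cost(energy_j: float, time_s: float, eta: float) -> float:
    return eta * energy_j + (1 - eta) * MAX_POWER_W * time_s

# (batch_size, power_limit_W) -> hypothetical measured (time-to-accuracy s, energy J)
measurements = {
    (128, 300): (3600.0, 9.5e5),
    (128, 200): (4000.0, 7.2e5),
    (256, 250): (3300.0, 7.9e5),
}

eta = 0.5              # 0 = care only about time, 1 = care only about energy
best = min(measurements,
           key=lambda k: zeus_cost(measurements[k][1], measurements[k][0], eta))
print("chosen (batch size, power limit):", best)
```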