Haibin Lin
Bytedance
Verified email at bytedance.com
Title · Cited by · Year
ResNeSt: Split-Attention Networks
H Zhang, C Wu, Z Zhang, Y Zhu, Z Zhang, H Lin, Y Sun, T He, J Mueller, ...
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (ECV), 2022
Cited by 2014
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
International Conference on Learning Representations, 2019
Cited by 834
Self-Driving Database Management Systems
A Pavlo, G Angulo, J Arulraj, H Lin, J Lin, L Ma, P Menon, TC Mowry, ...
Conference on Innovative Data Systems Research (CIDR), 2017
Cited by 366
GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing
J Guo, H He, T He, L Lausen, M Li, H Lin, X Shi, C Wang, J Xie, S Zha, ...
Journal of Machine Learning Research, 2019
Cited by 237
Is Network the Bottleneck of Distributed Training?
Z Zhang, C Chang, H Lin, Y Wang, R Arora, X Jin
SIGCOMM NetAI, 2020
Cited by 82
Temporal-Contextual Recommendation in Real-Time
Y Ma, BM Narayanaswamy, H Lin, H Ding
KDD, 2020
Cited by 79
MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
Z Jiang, H Lin, Y Zhong, Q Huang, Y Chen, Z Zhang, Y Peng, X Li, C Xie, ...
21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), 2024
Cited by 70
Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates
C Xie, O Koyejo, I Gupta, H Lin
NeurIPS 2020 Workshop on Optimization for Machine Learning, 2019
Cited by 49
CSER: Communication-efficient SGD with Error Reset
C Xie, S Zheng, OO Koyejo, I Gupta, M Li, H Lin
Advances in Neural Information Processing Systems 33, 2020
Cited by 43
Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies
Z Wang, H Lin, Y Zhu, TSE Ng
Proceedings of the Eighteenth European Conference on Computer Systems, 867-882, 2023
Cited by 25*
Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources
H Lin, H Zhang, Y Ma, T He, Z Zhang, S Zha, M Li
arXiv preprint arXiv:1904.12043, 2019
Cited by 22
Accelerated Large Batch Optimization of BERT Pretraining in 54 minutes
S Zheng, H Lin, S Zha, M Li
arXiv preprint arXiv:2006.13484, 2020
Cited by 20
dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training
H Hu, C Jiang, Y Zhong, Y Peng, C Wu, Y Zhu, H Lin, C Guo
Proceedings of Machine Learning and Systems 4, 623-637, 2022
Cited by 12
SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
Y Chen, C Xie, M Ma, J Gu, Y Peng, H Lin, C Wu, Y Zhu
Advances in Neural Information Processing Systems 35, 17981-17993, 2022
Cited by 10
LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization
J Zhao, B Wan, Y Peng, H Lin, C Wu
arXiv preprint arXiv:2403.01136, 2024
Cited by 8
Compressed Communication for Distributed Training: Adaptive Methods and System
Y Zhong, C Xie, S Zheng, H Lin
arXiv preprint arXiv:2105.07829, 2021
Cited by 8
LEMON: Lossless Model Expansion
Y Wang, J Su, H Lu, C Xie, T Liu, J Yuan, H Lin, R Sun, H Yang
arXiv preprint arXiv:2310.07999, 2023
Cited by 7
Deep Graph Library
M Wang, L Yu, Q Gan, D Zheng, Y Gai, Z Ye, M Li, J Zhou, Q Huang, ...
2018
Cited by 7
Just-in-Time Dynamic-Batching
S Zha, Z Jiang, H Lin, Z Zhang
Conference on Neural Information Processing Systems, 2018
Cited by 6
Flux: Fast Software-Based Communication Overlap on GPUs Through Kernel Fusion
LW Chang, W Bao, Q Hou, C Jiang, N Zheng, Y Zhong, X Zhang, Z Song, ...
arXiv preprint arXiv:2406.06858, 2024
Cited by 4