Basic Information
Biography
I am developing ways to change the neural network training algorithm to improve the efficiency of training. The cost of training state-of-the-art neural networks is increasing exponentially, and hardware and compiler improvements alone are insufficient to counterbalance this trend. Instead, I believe we need to fundamentally change the underlying training algorithms. Training is an approximate computing problem; there is nothing sacred about the math or training recipes we use today. Instead, this line of work leverages empirical analysis of the training dynamics of real-world networks to change the math behind training in ways that improve efficiency without affecting quality.
At MosaicML, we have developed dozens of speedup methods that improve the efficiency of training standard models for computer vision and natural language processing. All of these methods are available open source in our Composer PyTorch trainer, and each one is described in the Composer documentation. You can interactively explore the results of applying these speedup methods to standard training benchmarks in the MosaicML Explorer. Our best recipes speed up ResNet-50 on ImageNet by 7x, DeepLabv3 on ADE20K by 5x, BERT pre-training by 2x, and GPT language modeling by 2x while maintaining the same quality as the baselines.
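As an illustration of how these methods compose in Composer, here is a minimal sketch rather than one of our tuned recipes; the synthetic data, batch size, training duration, and the particular subset of algorithms below are placeholder assumptions.

```python
# Minimal sketch: training a ResNet-50 with a few Composer speedup methods.
# The synthetic dataset and hyperparameters are placeholders, not a tuned recipe.
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

from composer import Trainer
from composer.algorithms import BlurPool, ChannelsLast, LabelSmoothing, ProgressiveResizing
from composer.models import ComposerClassifier

# Wrap a standard torchvision ResNet-50 so Composer can train it.
model = ComposerClassifier(module=torchvision.models.resnet50(), num_classes=1000)

# Placeholder data standing in for ImageNet; use a real DataLoader in practice.
train_dataloader = DataLoader(
    TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 1000, (64,))),
    batch_size=16,
)

# Speedup methods are composable: the trainer applies each one by modifying
# the model or the training loop (module surgery, data-pipeline changes, etc.).
trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="1ep",  # placeholder; real recipes train much longer
    algorithms=[
        BlurPool(),                     # anti-aliased downsampling
        ChannelsLast(),                 # NHWC memory format for faster convolutions
        LabelSmoothing(smoothing=0.1),  # soften one-hot targets
        ProgressiveResizing(),          # smaller images early in training
    ],
)
trainer.fit()
```

The design point is that each method is a self-contained intervention in the training loop, so methods can be stacked and ablated independently.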
Research Interests
Publications (45 in total)
Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov
CVPR 2024 (2024)
Dan Biderman, Jose Gonzalez Ortiz, Jacob Portes, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, John P. Cunningham
arXiv (2024)
CoRR (2024)
Elliot Bolton, Abhinav Venigalla, Michihiro Yasunaga, David Hall, Betty Xiong, Tony Lee, Roxana Daneshjou, Jonathan Frankle, Percy Liang, Michael Carbin, Christopher D. Manning
CoRR (2024)
Conference of the European Chapter of the Association for Computational Linguistics (2023): 477-487
Jacob Portes, Alexander R. Trott, Sam Havens, Daniel King, Abhinav Venigalla, Moin Nadeem, Nikhil Sardana, Daya Khudia, Jonathan Frankle
NeurIPS 2023 (2023)
Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, Volodymyr Kuleshov
CoRR (2023)
arXiv (2023)