Basic Information
Bio
My primary research interests cover the following areas:
Deep Learning (yes, everyone works on this now). What particularly interests me are algorithms for state updates, invariances, and statistical testing.
Scalability of algorithms. This means pushing algorithms to Internet scale, distributing them across many (possibly faulty) machines, showing convergence, and modifying models to fit these requirements. Randomized techniques, for instance, are quite promising in this context. In other words, I am interested in big data.
Kernel methods are an effective means of making linear methods nonlinear and nonparametric. My interests include support vector machines, Gaussian processes, and conditional random fields. Kernels are also very useful for representing distributions, that is, for two-sample tests, independence tests, and many applications in unsupervised learning.
Statistical modeling, primarily with Bayesian nonparametrics, is a great way of addressing many modeling problems. Quite often these techniques overlap with kernel methods and scalability in rather delightful ways.
Applications, primarily user modeling, document analysis, temporal models, and modeling data at scale, are a great source of inspiration. That is: how can we find principled techniques to solve a problem, what are the underlying concepts, and how can we solve things automatically?
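The kernel two-sample tests mentioned above can be illustrated with the Maximum Mean Discrepancy (MMD), which compares two samples entirely through kernel evaluations. A minimal NumPy sketch of the standard biased MMD² estimator with an RBF kernel (the bandwidth `gamma` and the toy Gaussian data are illustrative choices, not taken from this page):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between samples X and Y."""
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
# Same distribution: statistic near zero; shifted distribution: clearly larger.
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))
print(same < diff)
```

In practice the statistic is compared against a permutation-based null distribution to obtain a p-value; the sketch only shows the kernel computation itself.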
Papers (465 total)
Author Statistics
#Papers: 468
#Citations: 133005
H-Index: 130
G-Index: 364
Sociability: 7
Diversity: 2
Activity: 94
Data Disclaimer
The data on this page come from open Internet sources, cooperative publishers, and automatic analysis by AI technology. We make no commitments or guarantees regarding the validity, accuracy, correctness, reliability, completeness, or timeliness of the page data. If you have any questions, please contact us by email: report@aminer.cn