My primary research interests cover the following areas:
Deep learning (yes, everyone works on this now). I am particularly interested in algorithms for state updates, invariances, and statistical testing.
Scalability of algorithms. This means pushing algorithms to internet scale, distributing them across many (possibly faulty) machines, proving convergence, and modifying models to fit these requirements. Randomized techniques, for instance, are quite promising in this context. In other words, I'm interested in big data.
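One randomized technique in this spirit is the hashing trick, which maps an unbounded vocabulary into a fixed-size feature vector so that memory use stays constant no matter how large the data grows. A minimal sketch (the function name and use of Python's builtin `hash` are illustrative choices, not a reference implementation):

```python
import numpy as np

def hashed_features(tokens, dim=2**10, seed=0):
    """Hash tokens into a fixed-size feature vector (the 'hashing trick').

    Collisions are simply tolerated; memory stays bounded regardless of
    vocabulary size, which is what makes the method attractive at scale.
    """
    vec = np.zeros(dim)
    for tok in tokens:
        h = hash((seed, tok))                       # builtin hash as a stand-in
        idx = h % dim                               # bucket index
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0   # signed hashing reduces collision bias
        vec[idx] += sign
    return vec

v = hashed_features("the quick brown fox".split())
```

Because buckets are shared, two rare tokens may collide, but the signed updates keep the inner products between hashed vectors approximately unbiased.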
Kernel methods are quite an effective means of making linear methods nonlinear and nonparametric. My research interests include support vector machines, Gaussian processes, and conditional random fields. Kernels are also very useful for representing distributions, e.g. in two-sample tests, independence tests, and many applications to unsupervised learning.
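The kernel two-sample test compares two distributions by embedding them in a reproducing kernel Hilbert space and measuring the distance between the embeddings, the Maximum Mean Discrepancy (MMD). A minimal sketch of the biased MMD² estimate with a Gaussian RBF kernel (function names and the choice of bandwidth are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between samples X, Y."""
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))       # same distribution
diff = mmd2(rng.normal(size=(100, 2)), rng.normal(2.0, 1.0, (100, 2)))  # shifted mean
```

When the samples come from the same distribution the statistic is close to zero; shifting one distribution's mean makes it clearly larger, which is the basis of the test.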
Statistical modeling, primarily with Bayesian nonparametrics, is a great way of addressing many modeling problems. Quite often these techniques overlap with kernel methods and scalability in rather delightful ways.
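A canonical Bayesian nonparametric building block is the Chinese Restaurant Process, which assigns data points to clusters without fixing the number of clusters in advance. A minimal sketch of drawing cluster assignments (the function name is an illustrative choice):

```python
import numpy as np

def crp_sample(n, alpha, rng):
    """Draw table assignments for n customers from a Chinese Restaurant Process.

    Each customer joins an existing table with probability proportional to its
    occupancy, or opens a new table with probability proportional to alpha, so
    the number of clusters grows with the data rather than being fixed.
    """
    counts = []        # customers per table
    assignments = []
    for _ in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)           # open a new table
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments

tables = crp_sample(100, alpha=1.0, rng=np.random.default_rng(0))
```

The expected number of occupied tables grows only logarithmically in n, so model complexity adapts gracefully to the amount of data.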
Applications, primarily user modeling, document analysis, temporal models, and modeling data at scale, are a great source of inspiration. That is: how can we find principled techniques to solve a problem, what are the underlying concepts, and how can we solve things automatically?