I work at the interface of Machine Learning and Theoretical Computer Science. Modern machine learning tasks pose a unique challenge for theoreticians: traditional worst-case analysis leaves something to be desired and does not always accurately reflect the "practical" understanding of such problems. This disparity leads to the following questions:

- How does one explain the immense success of machine learning heuristics such as EM and Lloyd's algorithm?
- What structure exists in real data, and how can it be exploited to obtain near-optimal learning algorithms?
- Is it possible to learn efficiently in highly noisy, adversarial settings while still making optimal use of available resources?
- What are good models for formally studying learning problems that involve back-and-forth interaction, such as crowdsourcing?