I have conducted broad research in machine learning and natural language processing, including relational learning, sequence modeling, lexical semantics, and graph representation learning. My recent work extends representation learning models to capture key properties of multi-relational data, including transferability, uncertainty, and logical structure. Some of my work also addresses learning with scarce labels or data through semi-supervised co-training and joint/multi-task learning. I have also worked on deep learning for modeling sequence data and on extending such techniques to computational biology.