Bio
My research lies at the intersection of machine learning and natural language. In particular, I design unsupervised learning algorithms for processing text and speech, from discovering phonemes and words to higher-level semantics and understanding. My goal is to build human language technology (e.g., speech recognition, synthesis, and translation) using minimal human-annotated data. In summer 2018, I interned with the Tacotron team in Google Perception, working on improving the training data efficiency of end-to-end speech synthesis models. In summer 2019, I interned at Google Brain, working on self-supervised language representation learning for zero-shot abstractive text summarization.