My current research focuses on deep learning for NLP, including Neural Machine Translation (NMT), large-scale Pretrained Language Models (PLMs), Cross-lingual Language Models (XLMs), Spoken Language Understanding (SLU), Speech Translation (ST), and Sentiment Analysis (SA). More recently, I have been focusing on trustworthy foundation models for general NLP. Starting from data, models, training objectives, and better adaptation to various downstream tasks, we investigate how to efficiently and reliably transfer knowledge from large-scale data into the parameters of pretrained models. Our models have reached GPT-3 scale, i.e., 175B parameters.