For an up-to-date list, please refer to my Google Scholar page.
(*=equal contribution)
2021
-
Attention-guided generative models for extractive question answering
Xu, Peng*, Liang, Davis*, Huang, Zhiheng, and Xiang, Bing
arXiv preprint arXiv:2110.06393, 2021
-
Multiplicative Position-aware Transformer Models for Language Understanding
Huang, Zhiheng, Liang, Davis, Xu, Peng, and Xiang, Bing
arXiv preprint arXiv:2109.12788, 2021
2020
-
Embedding-based Zero-shot Retrieval through Query Generation
Liang, Davis*, Xu, Peng*, Shakeri, Siamak, Santos, Cicero Nogueira dos, Nallapati, Ramesh, Huang, Zhiheng, and Xiang, Bing
arXiv preprint arXiv:2009.10270, 2020
-
Masked language model scoring
Salazar, Julian, Liang, Davis, Nguyen, Toan Q., and Kirchhoff, Katrin
ACL 2020
-
Improve transformer models with better relative position embeddings
Huang, Zhiheng, Liang, Davis, Xu, Peng, and Xiang, Bing
Findings of EMNLP 2020
-
Decoding and Diversity in Machine Translation
Roberts, Nicholas, Liang, Davis, Neubig, Graham, and Lipton, Zachary C.
NeurIPS Resistance AI Workshop 2020
-
TRANS-BLSTM: Transformer with bidirectional LSTM for language understanding
Huang, Zhiheng, Xu, Peng, Liang, Davis, Mishra, Ajay, and Xiang, Bing
arXiv preprint arXiv:2003.07000, 2020
2018
-
Invariant representation learning for robust deep networks
Salazar, Julian, Liang, Davis, Huang, Zhiheng, and Lipton, Zachary C.
Workshop on Integration of Deep Learning Theories, NeurIPS 2018
2017
-
Deep automated multi-task learning
Liang, Davis, and Shu, Yan
IJCNLP 2017