Better Highlighting: Creating Sub-Sentence Summary Highlights

EMNLP 2020, pp. 6282–6300 (2020)


Abstract

Amongst the best means to summarize is highlighting. In this paper, we aim to generate summary highlights to be overlaid on the original documents to make it easier for readers to sift through a large amount of text. The method allows summaries to be understood in context to prevent a summarizer from distorting the original meaning […]

Code: https://github.com/ucfnlp/better-highlighting

Data: DUC-04, TAC-11
Introduction
  • A summary is reliable only if it is true-to-original. Abstractive summarizers are considered less reliable despite their impressive performance on benchmark datasets, because they can hallucinate facts and struggle to keep the original meaning intact (Kryscinski et al., 2019; Lebanoff et al., 2019).
  • The authors seek to generate summary highlights to be overlaid on the original documents, allowing summaries to be understood in context and avoiding misdirecting readers to false conclusions.
  • This is especially important in areas involving legislation, political speeches, public policies, social media, and more (Sadeh et al., 2013; Kornilova and Eidelman, 2019).
  • Highlighting is most commonly used in education to make important information stand out and bring readers' attention to the essential topics (Rello et al., 2014).
Highlights
  • A summary is reliable only if it is true-to-original
  • To best estimate the size of segments, we present a novel method to “overgenerate” a rich set of self-contained, partially-overlapping sub-sentence segments from any sentence based on contextualized representations (Yang et al., 2019; Devlin et al., 2019), and leverage determinantal point processes to identify an essential subset based on saliency and non-redundancy criteria (see the sketch after this list)
  • We propose to generate sub-sentence summary highlights to be overlaid on source documents to enable users to quickly navigate through content
  • We compare our method with strong extractive and abstractive summarization systems for multi-document summarization; results are shown in Tables 3 and 5
  • We describe a novel methodology to generate a rich set of self-contained segments from the documents and use determinantal point processes to identify summary highlights
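
For concreteness, here is a minimal sketch of the selection step: greedy MAP inference for a determinantal point process whose L-ensemble kernel combines per-segment saliency with pairwise similarity, in the spirit of Kulesza and Taskar (2012). The quality and similarity inputs are placeholders, not the paper's actual feature pipeline.

```python
import numpy as np

def greedy_dpp(quality, similarity, k):
    """Greedily select up to k segments under a DPP L-ensemble.

    L[i, j] = quality[i] * similarity[i, j] * quality[j]; the determinant of
    a principal minor grows with segment saliency and shrinks when the chosen
    segments are mutually similar, so the greedy argmax trades off the two.
    quality: (n,) positive saliency scores (placeholder).
    similarity: (n, n) matrix in [0, 1] with ones on the diagonal (placeholder).
    """
    L = quality[:, None] * similarity * quality[None, :]
    selected, remaining = [], list(range(len(quality)))
    for _ in range(k):
        best, best_det = None, 0.0
        for i in remaining:
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])  # det of principal minor
            if det > best_det:
                best, best_det = i, det
        if best is None:  # every remaining segment is fully redundant
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the paper's setting, the saliency scores and the similarity matrix would be derived from contextualized segment representations; any positive semi-definite similarity works with this routine.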
Methods
  • The authors present a new method to identify self-contained segments and to select important, non-redundant segments to form a summary, since text fragments containing incomplete and disorganized information rarely make successful summary highlights.

    3.1 Self-Contained Segments

    A self-contained segment is, in a sense, a miniature sentence.
  • Table 2 presents examples of self-contained and non-self-contained segments.
  • The automatic identification of self-contained segments requires more than segmentation or parsing sentences into tree structures (Dozat and Manning, 2018).
  • The authors perform an exhaustive search, analyzing every segment of a given sentence to determine whether it is self-contained (a sketch follows this list).
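
To make the exhaustive search concrete, the sketch below enumerates every contiguous token span of a sentence (the polynomial candidate space) and keeps the spans a scorer deems self-contained. Here score_self_contained is a hypothetical stand-in for the paper's XLNet-based scoring, and min_len and threshold are illustrative parameters; the enumeration itself is the point.

```python
def overgenerate_segments(tokens, score_self_contained, min_len=3, threshold=0.5):
    """Enumerate all contiguous sub-sentence spans; keep self-contained ones.

    A sentence of n tokens yields O(n^2) candidate spans, so exhaustive
    search is cheap per sentence. Partially-overlapping segments are kept
    on purpose ("overgeneration"); a later selection step prunes redundancy.
    """
    n = len(tokens)
    candidates = []
    for start in range(n):
        for end in range(start + min_len, n + 1):
            segment = tokens[start:end]
            score = score_self_contained(segment)  # hypothetical scorer
            if score >= threshold:
                candidates.append((" ".join(segment), score))
    return sorted(candidates, key=lambda c: -c[1])
```

With the Table 2 sentence as input, the spans “Some interstates are closed” and “Some interstates are” are both enumerated; only the scorer separates the self-contained one from the fragment.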
Results
  • The authors compare the method with strong extractive and abstractive summarization systems for multi-document summarization; results are shown in Tables 3 and 5.
  • SumBasic (Vanderwende et al., 2007) is an extractive approach built on the fact that frequently occurring words are more likely to be included in the summary.
  • LexRank (Erkan and Radev, 2004) is a graph-based approach estimating sentence importance based on eigenvector centrality.
  • All of these methods extract whole sentences rather than segments from a set of documents (both baselines are sketched below).
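
For orientation, here are rough, textbook-style sketches of the two baselines named above; they are generic renderings, not the original implementations (LexRank, in particular, additionally thresholds TF-IDF cosine similarities when building the graph).

```python
import numpy as np
from collections import Counter

def sumbasic_select(sentences, budget=100):
    """SumBasic-style greedy selection over tokenized sentences: prefer
    sentences whose words are frequent in the input, then square (damp)
    the probabilities of used words to discourage redundancy."""
    counts = Counter(w for s in sentences for w in s)
    total = sum(counts.values())
    prob = {w: c / total for w, c in counts.items()}
    chosen, length = [], 0
    remaining = list(range(len(sentences)))
    while remaining and length < budget:
        best = max(remaining,
                   key=lambda i: sum(prob[w] for w in sentences[i]) / len(sentences[i]))
        chosen.append(best)
        length += len(sentences[best])
        for w in sentences[best]:
            prob[w] **= 2  # damp words already covered by the summary
        remaining.remove(best)
    return chosen

def lexrank_scores(sim, damping=0.85, tol=1e-6, max_iter=100):
    """Eigenvector centrality on a sentence-similarity graph, PageRank-style:
    the stationary distribution of a damped random walk over sentences."""
    n = sim.shape[0]
    P = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * (P.T @ scores)
        if np.abs(new - scores).sum() < tol:
            return new
        scores = new
    return scores
```

Both baselines operate on whole sentences, which is precisely the contrast the paper draws with its sub-sentence highlights.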
Conclusion
  • The authors make a first attempt to create sub-sentence summary highlights that are understandable and require minimal information from the surrounding context.
  • Highlighting is important to help readers sift through a large amount of text and quickly grasp the main points.
  • The authors describe a novel methodology to generate a rich set of self-contained segments from the documents and use determinantal point processes to identify summary highlights.
  • The method can be extended to other text genres, such as public policies, to aid reader comprehension; this is left to future work
Tables
  • Table 1: An example of sub-sentence highlights overlaid on the original document; the highlights are self-contained
  • Table 2: Examples of self-contained and non-self-contained segments extracted from a document sentence
  • Table 3: Results on the DUC-04 dataset evaluated by ROUGE (a minimal ROUGE sketch follows this list)
  • Table 4: Example system outputs for a topic in DUC-04. Our highlighting method is superior to sentence extraction as it allows readers to quickly skim through a large amount of text to grasp the main points. XLNet segments are better than tree segments. Not only can they aid reader comprehension but they are also self-contained and more concise
  • Table 5: ROUGE results on the TAC-11 dataset
  • Table 6: Examples of segments generated by XLNet and their scores of self-containedness
  • Table 7: Statistics of text segments generated by XLNet and the constituent parse tree method on the DUC/TAC datasets
  • Table 8: Human evaluation of the self-containedness of text segments. The top-3 segments of XLNet exhibit a high degree of self-containedness: 61% of them have an average score of 3 or above, 34% have a score of 4 or above, and 12% receive the full score
  • Table 9: Example text segments produced by the XLNet algorithm. Each segment is judged by five human evaluators on a scale of 1 (worst) to 5 (best) and we report their average scores. Human evaluation suggests that text segments generated by our model demonstrate a high degree of self-containedness
  • Table 10: Example system outputs for a topic in DUC-04. Highlighting allows readers to quickly sift through a large amount of text to grasp the main points. XLNet segments perform better than tree segments. Not only can they aid reader comprehension but they are also self-contained and more concise
  • Table 11: Example system outputs for a topic in DUC-04. Highlighting allows readers to quickly sift through a large amount of text to grasp the main points. XLNet segments perform better than tree segments. Not only can they aid reader comprehension but they are also self-contained and more concise. Our method further allows multiple segments to be selected from the same sentence
  • Table 13: Example system outputs for a topic in TAC-11. Highlighting allows readers to quickly sift through a large amount of text to grasp the main points. XLNet segments perform better than tree segments. Not only can they aid reader comprehension but they are also self-contained and more concise
  • Table 15: Example text segments produced by the XLNet model. The scores of self-containedness are shown in parentheses. Each segment is judged by five human evaluators on a scale of 1 (worst) to 5 (best) and we report their average scores. Human evaluation suggests that text segments generated by our model demonstrate a high degree of self-containedness
  • Table 20: Example text segments produced by the XLNet model. The scores of self-containedness are shown in parentheses. Each segment is judged by five human evaluators on a scale of 1 (worst) to 5 (best) and we report their average scores. This example is among the worst cases; we use it to illustrate the difficulty of finding self-contained segments in a polynomial space
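
Since Tables 3 and 5 report ROUGE (Lin, 2004), a minimal sketch of ROUGE-n recall against multiple references is shown below. The official toolkit adds stemming, stopword handling, and bootstrap confidence intervals, all omitted here; taking the best match over references is one common convention, not necessarily the paper's exact configuration.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, references, n=1):
    """Simplified ROUGE-n recall: clipped n-gram overlap with a reference,
    keeping the best score over the available reference summaries."""
    cand = ngrams(candidate, n)
    best = 0.0
    for ref in references:
        ref_grams = ngrams(ref, n)
        overlap = sum(min(c, cand[g]) for g, c in ref_grams.items())
        total = sum(ref_grams.values())
        if total:
            best = max(best, overlap / total)
    return best
```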
Related work
  • An abstract failing to retain the original meaning poses a substantial risk of harm to applications. Abstractive summarizers can copy words from source documents or generate new words (See et al., 2017; Tan et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018; Gehrmann et al., 2018; Liu and Lapata, 2019; Laban et al., 2020). With greater flexibility comes increased risk. Failing to accurately convey the original meaning can hinder the deployment of summarization techniques in real-world scenarios, as inaccurate and untruthful summaries can lead readers to false conclusions (Cao et al., 2018; Falke et al., 2019; Lebanoff et al., 2019). We aim to produce summary highlights in this paper, which will be overlaid on source documents to allow summaries to be interpreted in context.

    Example of self-contained vs. non-self-contained segments (cf. Table 2):

    Original Sentence

    • Some interstates are closed and hundreds of flights have been canceled as winter storms hit during one of the year’s busiest travel weeks.

    Self-Contained Segments

    • Some interstates are closed
    • hundreds of flights have been canceled as winter storms hit
    • flights have been canceled as winter storms hit
    • winter storms hit during one of the year’s busiest travel weeks

    Non-Self-Contained Segments

    • Some interstates are
    • closed and hundreds of flights have been
    • been canceled as winter storms hit during one of
    • hit during one of the year’s
Funding
  • This research was supported in part by the National Science Foundation grant IIS-1909603
Study subjects and analysis
5 times as many people:
Original document and summary highlights: Afghan opium kills 100,000 people every year worldwide – more than any other drug – and the opiate heroin kills five times as many people in NATO countries each year as the eight-year total of NATO troops killed in Afghan combat, the United Nations said Wednesday. About 15 million people around the world use heroin, opium or morphine, fueling a $65 billion market for the drug and also fueling terrorism and insurgencies.

10 news documents:
These datasets were previously used as benchmarks in multi-document summarization competitions. Our task is to generate a summary of fewer than 100 words from a set of 10 news documents, where a summary contains a set of selected text segments. There are four human reference summaries for each document set, created by NIST evaluators.

122,700 people:
• Although the companies only confirmed that they were discussing the possibility of a merger, a person close to the discussions said the boards of both Exxon and Mobil were expected to meet Tuesday to consider an agreement. • Analysts predicted that there would be huge cuts in duplicate staff from both companies, which employ 122,700 people. (Rest omitted.) TAC-11 Test Set

References
  • Reinald Kim Amplayo and Mirella Lapata. 2020. Unsupervised opinion summarization with noising and denoising. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1934–1945, Online. Association for Computational Linguistics.
  • Kristjan Arumae, Parminder Bhatia, and Fei Liu. 2019. Towards annotating and creating summary highlights at sub-sentence level. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 64–69, Hong Kong, China. Association for Computational Linguistics.
  • Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07, pages 2670–2676, San Francisco, CA, USA.
  • Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
  • Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics.
  • Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics.
  • Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. 2019a. Improving the similarity measure of determinantal point processes for extractive multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1027–1038, Florence, Italy. Association for Computational Linguistics.
  • Sangwoo Cho, Chen Li, Dong Yu, Hassan Foroosh, and Fei Liu. 2019b. Multi-document summarization with determinantal point processes and contextualized representations. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 98–103, Hong Kong, China. Association for Computational Linguistics.
  • Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics.
  • Hoa Trang Dang and Karolina Owczarzak. 2008. Overview of the TAC 2008 update summarization task. In Proceedings of the Text Analysis Conference.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484–490, Melbourne, Australia. Association for Computational Linguistics.
  • Noemie Elhadad. 2006. User-Sensitive Text Summarization: Application to the Medical Domain. Ph.D. thesis, USA.
  • Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research.
  • Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics.
  • Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220, Florence, Italy. Association for Computational Linguistics.
  • Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph-based approach to abstractive summarization of highly redundant opinions. In Proceedings of the International Conference on Computational Linguistics (COLING).
  • Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.
  • Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the NAACL Workshop on Integer Linear Programming for Natural Language Processing.
  • Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR '14, pages 580–587, Washington, DC, USA. IEEE Computer Society.
  • Boqing Gong, Wei-Lun Chao, Kristen Grauman, and Fei Sha. 2014. Diverse sequential subset selection for supervised video summarization. In Proceedings of Neural Information Processing Systems (NIPS).
  • Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).
  • Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1262–1273, Baltimore, Maryland. Association for Computational Linguistics.
  • John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Kai Hong, John M. Conroy, Benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repository of state of the art and competitive baseline summaries for generic news summarization. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC).
  • Rebecca Hwa. 1999. Supervised grammar induction using training data with limited constituent information. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 73–79, College Park, Maryland, USA. Association for Computational Linguistics.
  • Michael Kaisser, Marti A. Hearst, and John B. Lowe. 2008. Improving search results quality by customizing summary lengths. In Proceedings of ACL-08: HLT, pages 701–709, Columbus, Ohio. Association for Computational Linguistics.
  • Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, and Masaaki Nagata. 2018. Higher-order syntactic attention network for longer sentence compression. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1716–1726, New Orleans, Louisiana. Association for Computational Linguistics.
  • Anastassia Kornilova and Vladimir Eidelman. 2019. BillSum: A corpus for automatic summarization of US legislation. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 48–56, Hong Kong, China. Association for Computational Linguistics.
  • Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540–551, Hong Kong, China. Association for Computational Linguistics.
  • Alex Kulesza and Ben Taskar. 2011. Learning determinantal point processes. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI).
  • Alex Kulesza and Ben Taskar. 2012. Determinantal Point Processes for Machine Learning. Now Publishers Inc.
  • Philippe Laban, Andrew Hsi, John Canny, and Marti A. Hearst. 2020. The summary loop: Learning to write abstractive summaries without examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5135–5150, Online. Association for Computational Linguistics.
  • Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Analyzing sentence fusion in abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 104–110, Hong Kong, China. Association for Computational Linguistics.
  • Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 137–147, Los Angeles. Association for Computational Linguistics.
  • Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL Workshop on Text Summarization Branches Out.
  • Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070–5081, Florence, Italy. Association for Computational Linguistics.
  • Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the Association for Computational Linguistics (ACL) System Demonstrations.
  • Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
  • Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval.
  • Paul Over and James Yen. 2004. An introduction to DUC-2004. National Institute of Standards and Technology.
  • Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 66–76, Cambridge, MA. Association for Computational Linguistics.
  • Luz Rello, Horacio Saggion, and Ricardo Baeza-Yates. 2014. Keyword highlighting improves comprehension for people with dyslexia. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 30–37, Gothenburg, Sweden. Association for Computational Linguistics.
  • Norman Sadeh, Alessandro Acquisti, Travis D. Breaux, Lorrie Faith Cranor, Aleecia M. McDonald, Joel R. Reidenberg, Noah A. Smith, Fei Liu, N. Cameron Russell, Florian Schaub, and Shomir Wilson. 2013. The usable privacy policy project. Technical Report CMU-ISR-13-119, Carnegie Mellon University.
  • Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
  • Aidean Sharghi, Ali Borji, Chengtao Li, Tianbao Yang, and Boqing Gong. 2018. Improving sequential determinantal point processes for supervised video summarization. In Proceedings of the European Conference on Computer Vision (ECCV).
  • Sasha Spala, Franck Dernoncourt, Walter Chang, and Carl Dockhorn. 2018. A web-based framework for collecting and assessing highlighted sentences in a document. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 78–81, Santa Fe, New Mexico. Association for Computational Linguistics.
  • Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with PropS.
  • Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181, Vancouver, Canada. Association for Computational Linguistics.
  • Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion. Information Processing and Management, 43(6):1606–1618.
  • G. Vladutz. 1983. Natural language text segmentation techniques applied to the automatic compilation of printed subject indexes and for online database access. In First Conference on Applied Natural Language Processing, pages 136–142, Santa Monica, California, USA. Association for Computational Linguistics.
  • Kristian Woodsend and Mirella Lapata. 2010. Automatic generation of story highlights. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 565–574, Uppsala, Sweden. Association for Computational Linguistics.
  • Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32, pages 5753–5763. Curran Associates, Inc.