
Generationary or “How We Went beyond Word Sense Inventories and Learned to Gloss”

Empirical Methods in Natural Language Processing (EMNLP), pp. 7207–7221, 2020


Abstract

Mainstream computational lexical semantics embraces the assumption that word senses can be represented as discrete items of a predefined inventory. In this paper we show this needs not be the case, and propose a unified model that is able to produce contextually appropriate definitions. In our model, Generationary, we employ a novel span-…

Introduction
  • All modern approaches to Word Sense Disambiguation (WSD), i.e. the task of automatically mapping a word in context to its meaning (Navigli, 2009), use predetermined word senses from a machine lexicon, both in supervised (Huang et al, 2019; Bevilacqua and Navigli, 2020; Scarlini et al, 2020b) and in knowledge-based settings (Tripodi and Navigli, 2019; Scarlini et al, 2020a; Scozzafava et al, 2020).
  • As Kilgarriff (2007) argued, different language users have different understandings of words
  • This fact explains why inter-annotator agreement (ITA) estimates on WSD annotation tasks have never exceeded the figure of 80% (Edmonds and Kilgarriff, 2002; Navigli et al, 2007; Palmer et al, 2007).
  • While English inventories of senses and corpora are widely available, the same cannot be said for other languages (Scarlini et al, 2019; Barba et al, 2020; Pasini, 2020), and this limits the scalability of Natural Language Understanding tasks to multiple languages (Navigli, 2018)
Highlights
  • All modern approaches to Word Sense Disambiguation (WSD), i.e. the task of automatically mapping a word in context to its meaning (Navigli, 2009), use predetermined word senses from a machine lexicon, both in supervised (Huang et al, 2019; Bevilacqua and Navigli, 2020; Scarlini et al, 2020b) and in knowledge-based settings (Tripodi and Navigli, 2019; Scarlini et al, 2020a; Scozzafava et al, 2020)
  • Having no indisputable way of determining where one sense of a word ends and another begins, together with the fact that little consensus about how to represent word meaning has hitherto existed (Pustejovsky, 1991; Hanks, 2000; Nosofsky, 2011), are issues lying at the core of what makes WSD hard (Jackson, 2019)
  • To choose a possible sense from WordNet and perform WSD, we evaluate the techniques presented in Section 3.2, i.e. probability scoring (Prob.), simple similarity scoring (Sim.), and similarity scoring with MBRR
  • We report the results of the WSD evaluation in Table 4
  • We showed that generating a definition can be a viable, suitable alternative to the traditional use of sense inventories in computational lexical semantics, and one that better reflects the non-discrete nature of word meaning
  • We introduced Generationary, an approach to automatic definition generation which, thanks to a flexible encoding scheme, can (i) encode targets of arbitrary length, and (ii) exploit the vast amount of knowledge encoded in the BART pre-trained Encoder-Decoder, through fine-tuning
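The discriminative use of a gloss generator described above — choosing a sense from a finite inventory by comparing a generated definition against each candidate's gloss — can be sketched with a toy similarity scorer. The bag-of-words cosine below is only a stand-in for the sentence embeddings the paper actually uses (Sentence-BERT); the example senses and all function names are illustrative, not the paper's implementation.

```python
from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    """Bag-of-words vector (toy stand-in for a sentence embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(generated_gloss: str, candidate_glosses: dict) -> str:
    """Similarity scoring: pick the inventory sense whose gloss is
    closest to the generated definition."""
    return max(candidate_glosses,
               key=lambda s: cosine(bow(generated_gloss),
                                    bow(candidate_glosses[s])))

# Toy WordNet-style candidates for "bank" (glosses paraphrased):
senses = {
    "bank.n.01": "sloping land beside a body of water",
    "bank.n.02": "a financial institution that accepts deposits",
}
print(disambiguate("an institution that holds money deposits", senses))
# bank.n.02
```

The point of the design is that the generator never has to commit to an inventory at training time; mapping to WordNet senses is a cheap post-hoc comparison.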
Results
  • As shown in Table 2, Generationary models outperform competitors in every setting. On CHAS, the specialized model (Gen-CHAS) shows much better results than Gen-UNI, because NLG measures give high scores to glosses which are lexically similar to the gold ones, while multi-inventory training will, instead, expose the model to many other, differently formulated, but valid definitions.
  • Compared to Gen-SEM (MBRR), Gen-UNI (MBRR) sacrifices 0.4 and 0.2 points on, respectively, ALL and ALL−, but obtains 8 points more on the zero-shot set, improving over GlossBERT by 4.3 points
  • This demonstrates that, when using Generationary with data from multiple inventories, (i) performances remain in the same ballpark as those of a state-of-the-art system, and (ii) much improved generalizability is achieved, as shown by the state-of-the-art results on the zero-shot setting.
  • c7: The mind is haunted by the ghosts of the past. g7: People’s memories of the past are still present in their mind, even after they have ceased to exist.
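The MBRR decoding compared above can be illustrated with a minimal Minimum Bayes Risk reranker: instead of keeping the single highest-probability beam output, pick the candidate gloss with the highest average similarity to all the other candidates (the "consensus" gloss). The word-overlap similarity below is a made-up stand-in for the learned similarity the paper uses; the beams are invented.

```python
def mbr_rerank(candidates, similarity):
    """Minimum Bayes Risk reranking: return the candidate with the
    highest expected similarity to the other candidates."""
    def expected_gain(c):
        others = [o for o in candidates if o is not c]
        return sum(similarity(c, o) for o in others) / len(others)
    return max(candidates, key=expected_gain)

def overlap(a, b):
    """Toy similarity: Jaccard word overlap (stand-in for a neural metric)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

beams = [
    "a place where money is kept",
    "a place where money is kept safe",
    "the side of a river",
]
print(mbr_rerank(beams, overlap))
# a place where money is kept
```

The outlier beam ("the side of a river") scores poorly against the consensus and is discarded, which is exactly the behavior that makes MBR reranking robust to occasional off-sense beam outputs.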
Conclusion
  • The authors showed that generating a definition can be a viable, suitable alternative to the traditional use of sense inventories in computational lexical semantics, and one that better reflects the non-discrete nature of word meaning.
  • From two points of view, Generationary represents a unified approach: first, it exploits multiple inventories simultaneously, going beyond the quirks of each one; second, it is able to tackle both generative (Definition Modeling) and discriminative tasks (Word Sense Disambiguation and Word-in-Context), obtaining competitive to state-of-the-art results, with strong performances on zero-shot settings.
  • The authors make the software and reproduction materials, along with a new evaluation dataset of definitions for adjective-noun phrases (Hei++), available at http://generationary.org
Tables
  • Table1: Training, dev and test instances and number of unique glosses in the datasets used
  • Table2: DM evaluation results. Columns: perplexity, BLEU, Rouge-L, METEOR, BERTScore (ppl/BL/RL/MT/BS). Row groups are mutually comparable (bold = best). ↑/↓: higher/lower is better. *: re-trained
  • Table3: Macro precision@k (lemmas and senses) on the retrieval task of <a class="ref-link" id="cChang_2019_a" href="#rChang_2019_a">Chang and Chen (2019</a>). Row groups are mutually comparable (bold = best)
  • Table4: Results on the WSD evaluation. Row groups: (1) previous approaches; (2) Generationary. Columns: datasets in the evaluation framework (S2 to S15), ALL w/ and w/o the dev set (ALL/ALL−), zero-shot set (0-shot), and results by PoS on ALL (N/V/A/R). F1 is reported. Bold: best. *: re-computed with the original code
  • Table5: Qualitative evaluation results. Columns: dataset, average Likert for gold and Generationary, % of Generationary scores equal or better than gold (≥)
  • Table6: Sample of Generationary definitions (g) for several targets in context (c). g: gold definition
  • Table7: Random sample of Generationary definitions (g) for Hei++ contexts (c). g: gold definition
  • Table8: Generationary definitions (g) for random targets and contexts (c) excerpted from webtext
  • Table9: Table 9
  • Table10: Annotation guidelines excerpt. Rows: Likert score, explanation and example definition for target
Related work
  • Recent years have witnessed the blossoming of research in Definition Modeling (DM), whose original aim was to make static word embeddings interpretable by producing a natural language definition (Noraset et al, 2017). While subsequently released datasets have included usage examples to account for polysemy (Gadetsky et al, 2018; Chang et al, 2018), many of the approaches to “contextual” DM have nevertheless exploited the context merely in order to select a static sense embedding from which to generate the definition (Gadetsky et al, 2018; Chang et al, 2018; Zhu et al, 2019). Such embeddings, however, are non-contextual.

    Other works have made a fuller use of the sentence surrounding the target, with the goal of explaining the meaning of a word or phrase as embedded in its local context (Ni and Wang, 2017; Mickus et al, 2019; Ishiwatari et al, 2019). However, these approaches have never explicitly dealt with WSD, and have shown limits regarding the marking of the target in the context encoder, preventing an effective exploitation of the context and making DM overly reliant on static embeddings or surface form information. For example, in the model of Ni and Wang (2017), the encoder is unaware of the contextual target, whereas Mickus et al (2019) use a marker embedding to represent targets limited to single tokens. Finally, Ishiwatari et al (2019) replace the target with a placeholder, and the burden of representing it is left to a character-level encoder and to static embeddings. This latter approach is interesting, in that it is the only one that can handle multi-word targets; however, it combines token embeddings via order-invariant sum, and thus it is suboptimal for differentiating instances such as pet house and house pet.
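The order-invariance problem just mentioned is easy to see concretely: summing token embeddings yields the identical vector for pet house and house pet, so the decoder receives no signal about which noun is the head. The 3-dimensional vectors below are invented purely for illustration, and the position-weighted sum is only one minimal order-aware alternative, not the method of any cited system.

```python
# Hypothetical 3-dimensional token embeddings, invented for illustration.
emb = {"pet": (1.0, 0.0, 2.0), "house": (0.0, 3.0, 1.0)}

def sum_encode(tokens):
    """Order-invariant composition: vectors are simply added."""
    return tuple(sum(emb[t][i] for t in tokens) for i in range(3))

def positional_encode(tokens):
    """A minimal order-aware alternative: scale each token embedding by
    its (1-based) position before summing, so word order matters."""
    return tuple(sum((p + 1) * emb[t][i] for p, t in enumerate(tokens))
                 for i in range(3))

# The sum cannot tell "pet house" from "house pet"...
assert sum_encode(["pet", "house"]) == sum_encode(["house", "pet"])
# ...while even a crude positional weighting can.
assert positional_encode(["pet", "house"]) != positional_encode(["house", "pet"])
```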
Funding
  • The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No 726487 under the European Union’s Horizon 2020 research and innovation programme. This work was supported in part by the MIUR under the grant “Dipartimenti di eccellenza 2018–2022” of the Department of Computer Science of Sapienza University
Study subjects and analysis
datasets: 5
we want to show that this degree of freedom does not come at the expense of performance when presented with the task of choosing a sense from a finite predefined list. We test on the five datasets collected in the evaluation framework of Raganato et al (2017), namely: Senseval-2 (Edmonds and Cotton, 2001), Senseval-3 (Snyder and Palmer, 2004), SemEval-2007 (Pradhan et al, 2007), SemEval-2013 (Navigli et al, 2013), and SemEval-2015 (Moro and Navigli, 2015), which are all annotated with WordNet 3.0 senses (or converted to its inventory). We denote with ALL and ALL− the concatenation of all evaluation datasets, including or excluding, respectively, SemEval-2007, which is our development set for this experiment.
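The ALL / ALL− convention above is a plain concatenation of the per-dataset test sets before scoring. A minimal sketch (dataset names from the text; gold and predicted annotations are invented; since every instance receives exactly one prediction, precision = recall = F1):

```python
def f1_over(datasets, predictions, gold, exclude=()):
    """Score a concatenation of datasets. With one prediction per
    instance, F1 reduces to the fraction of correct predictions."""
    correct = total = 0
    for name in datasets:
        if name in exclude:
            continue
        for inst, sense in gold[name].items():
            total += 1
            correct += predictions[name][inst] == sense
    return correct / total

datasets = ["senseval2", "senseval3", "semeval2007",
            "semeval2013", "semeval2015"]
# Toy gold and predicted sense labels, invented for illustration:
gold = {d: {"i1": "s1", "i2": "s2"} for d in datasets}
pred = {d: {"i1": "s1", "i2": "s1"} for d in datasets}
print(f1_over(datasets, pred, gold))                           # ALL
print(f1_over(datasets, pred, gold, exclude={"semeval2007"}))  # ALL− (dev excluded)
```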

WSD datasets: 5
While our previous experiments shed light upon the quality of Generationary in comparison with other automatic systems, here we employ human annotators to compare definitions produced with our approach against glosses written by human lexicographers. The datasets that we use in this experiment are (i) our Hei++ dataset of definitions for adjective-noun phrases (Section 4.2) and (ii) SamplEval, i.e. a sample of 1,000 random instances made up of 200 items for each of the five WSD datasets included in ALL (see Section 5.2), with at most one sense annotation per instance (we do not sample instances annotated with many senses).

Reference
  • Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, MI, USA.
  • Edoardo Barba, Luigi Procopio, Niccolo Campolungo, Tommaso Pasini, and Roberto Navigli. 2020. MuLaN: Multilingual Label propagatioN for word sense disambiguation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3837–3844.
  • Michele Bevilacqua and Roberto Navigli. 2020. Breaking through the 80% glass ceiling: Raising the state of the art in word sense disambiguation by incorporating knowledge graph information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2854–2864, Online.
  • Ting-Yun Chang and Yun-Nung Chen. 2019. What does this word mean? Explaining contextualized embeddings with natural language definition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6064–6070, Hong Kong, China.
  • Ting-Yun Chang, Ta-Chung Chi, Shang-Chi Tsai, and Yun-Nung Chen. 2018. xSense: Learning sense-separated sparse representations and textual definitions for explainable word sense networks. arXiv preprint arXiv:1809.03348.
  • Philip Edmonds and Scott Cotton. 2001. SENSEVAL-2: Overview. In Proceedings of SENSEVAL-2: Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 1–5, Toulouse, France.
  • Philip Edmonds and Adam Kilgarriff. 2002. Introduction to the special issue on evaluating word sense disambiguation systems. Natural Language Engineering, 8(4):279–291.
  • Katrin Erk and Diana McCarthy. 2009. Graded word sense assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 440–449, Singapore.
  • Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA, USA.
  • Artyom Gadetsky, Ilya Yakubovskiy, and Dmitry Vetrov. 2018. Conditional generators of words definitions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 266–271, Melbourne, Australia.
  • Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65–170.
  • Patrick Hanks. 2000. Do word meanings exist? Computers and the Humanities, 34(1–2):205–215.
  • Matthias Hartung. 2016. Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns. Ph.D. thesis, Institut für Computerlinguistik, Ruprecht-Karls-Universität Heidelberg.
  • Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3509–3514, Hong Kong, China.
  • Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, and Masaru Kitsuregawa. 2019. Learning to describe unknown phrases with local and global contexts. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3467–3476, Minneapolis, MN, USA.
  • Philip C. Jackson Jr. 2019. I do believe in word senses. Proceedings ACS, 321:340.
  • Adam Kilgarriff. 1997. I don’t believe in word senses. Computers and the Humanities, 31(2):91–113.
  • Adam Kilgarriff. 2007. Word senses. In Eneko Agirre and Phillip Edmonds, editors, Word Sense Disambiguation, pages 29–46.
  • Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 169–176, Boston, MA, USA.
  • Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
  • Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain.
  • Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1, ETMTNLP ’02, pages 63–70, Stroudsburg, PA, USA.
  • Daniel Loureiro and Alípio Jorge. 2019. Language modelling makes sense: Propagating representations through WordNet for full-coverage word sense disambiguation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5682–5691, Florence, Italy.
  • Timothee Mickus, Denis Paperno, and Matthieu Constant. 2019. Mark my word: A sequence-to-sequence approach to definition modeling. In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 1–11, Turku, Finland.
  • George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. In Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993.
  • Andrea Moro and Roberto Navigli. 2015. SemEval-2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 288–297, Denver, CO, USA.
  • Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):1–69.
  • Roberto Navigli. 2018. Natural language understanding: Instructions for (present and future) use. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 5697–5702.
  • Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 task 12: Multilingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 222–231, Atlanta, GA, USA.
  • Roberto Navigli, Kenneth C. Litkowski, and Orin Hargraves. 2007. SemEval-2007 task 07: Coarse-grained English all-words task. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 30–35, Prague, Czech Republic.
  • Ke Ni and William Yang Wang. 2017. Learning to explain non-standard English words and phrases. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP), pages 413–417, Taipei, Taiwan.
  • Thanapon Noraset, Chen Liang, Larry Birnbaum, and Doug Downey. 2017. Definition modeling: Learning to define word embeddings in natural language. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3259–3266, San Francisco, CA, USA.
  • Robert M. Nosofsky. 2011. The generalized context model: An exemplar model of classification. In Emmanuel M. Pothos and Andy J. Wills, editors, Formal Approaches in Categorization, pages 18–39. Cambridge University Press, Cambridge.
  • Martha Palmer, Hoa Trang Dang, and Christiane Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(2):137–163.
  • Tommaso Pasini. 2020. The Knowledge Acquisition Bottleneck Problem in Multilingual Word Sense Disambiguation. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 4936–4942.
  • Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the Word-in-Context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1267–1273, Minneapolis, MN, USA.
  • Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186–191, Brussels, Belgium.
  • Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic.
  • James Pustejovsky. 1991. The generative lexicon. Computational Linguistics, 17(4).
  • Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical comparison. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 99–110, Valencia, Spain.
  • Sunny Rai and Shampa Chakraverty. 2020. A survey on computational metaphor processing. ACM Computing Surveys (CSUR), 53(2):1–37.
  • Rachel Ramsey. 2017. An Exemplar-Theoretic Account of Word Senses. Ph.D. thesis, Northumbria University.
  • Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China.
  • Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902–4912, Online.
  • Eleanor Rosch and Carolyn B. Mervis. 1975. Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7(4):573–605.
  • Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2019. Just “OneSeC” for producing multilingual sense-annotated data. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 699–709, Florence, Italy.
  • Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020a. SensEmBERT: Context-enhanced sense embeddings for multilingual word sense disambiguation. In Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA.
  • Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020b. With more contexts comes better performance: Contextualized sense embeddings for all-round Word Sense Disambiguation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online.
  • Federico Scozzafava, Marco Maru, Fabrizio Brignone, Giovanni Torrisi, and Roberto Navigli. 2020. Personalized PageRank with syntagmatic information for multilingual word sense disambiguation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–46, Online.
  • Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43, Barcelona, Spain.
  • Rocco Tripodi and Roberto Navigli. 2019. Game theory meets embeddings: A unified framework for word sense disambiguation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 88–99, Hong Kong, China.
  • Andrea Tyler and Vyvyan Evans. 2001. Reconsidering prepositional polysemy networks: The case of over. Language, 77(4):724–765.
  • Liner Yang, Cunliang Kong, Yun Chen, Yang Liu, Qinan Fan, and Erhong Yang. 2020. Incorporating sememes into Chinese definition modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
  • Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
  • Ruimin Zhu, Thanapon Noraset, Alisa Liu, Wenxin Jiang, and Doug Downey. 2019. Multi-sense definition modeling using word sense decompositions. arXiv preprint arXiv:1909.09483.