
Affective Event Classification with Discourse-enhanced Self-training

EMNLP 2020, pp. 5608–5617 (2020)


Abstract

Prior research has recognized the need to associate affective polarities with events and has produced several techniques and lexical resources for identifying affective events. Our research introduces new classification models to assign affective polarity to event phrases. First, we present a BERT-based model for affective event classific…

Introduction
  • Researchers have been tackling the problem of identifying affective events, which are events that have a positive or negative effect on people who experience the event.
  • Events that are typically negative include being fired from a job, breaking an arm, or having your house burn down.
  • The authors will refer to these events as having positive or negative polarity with respect to an implicit affective state.
  • Research has shown that recognizing affective events is important for a variety of natural language processing tasks, including narrative text comprehension and summarization (Lehnert, 1981; Goyal et al., 2013), dialogue systems (André et al., 2004), response generation (Ritter et al., 2011), and sarcasm detection (Riloff et al., 2013).
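The task framing above, labeling an event with positive or negative polarity, can be illustrated with a minimal data sketch. The field names below (`agent`, `predicate`, `theme`) are illustrative assumptions, not the paper's exact event-tuple schema:

```python
from dataclasses import dataclass

# Hypothetical schema: an event tuple paired with its affective polarity.
# Field names are assumptions for illustration; the paper's exact tuple
# structure may differ.
@dataclass(frozen=True)
class AffectiveEvent:
    agent: str       # who experiences the event
    predicate: str   # the event's main verb
    theme: str       # object or complement of the event
    polarity: str    # "+" (positive) or "-" (negative)

examples = [
    AffectiveEvent("I", "broke", "my arm", "-"),     # typically negative
    AffectiveEvent("I", "saw", "a rainbow", "+"),    # typically positive
]
```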
Highlights
  • In recent years, researchers have been tackling the problem of identifying affective events, which are events that have a positive or negative effect on people who experience the event
  • Aff-BERT (Gold) produced only a small improvement, but we developed a new discourse-enhanced self-training algorithm that achieved larger performance gains
  • For the Discourse-enhanced Self-training (DEST) model, to ensure a rich set of discourse contexts, we only used unlabeled events that (a) had at least 10 distinct coreferent sentiment expressions and (b) did not include “this”, “that”, or “it” as the subject or object of the event phrase, because an event is often vague without knowing what the pronoun refers to
  • We proposed a BERT-based supervised classifier for affective event recognition and showed that it substantially outperforms a large affective event knowledge base
  • We designed a novel discourse-enhanced self-training algorithm to leverage unlabeled data iteratively. By combining both the affective event classifier’s prediction and the polarities of coreferent sentiment expressions, our algorithm substantially improved upon the supervised learning results
  • We believe that the general idea behind our discourse-enhanced self-training approach could be useful for many other types of problems where additional information can be extracted from larger contexts to serve as a secondary signal to help confirm or disconfirm a classifier’s predictions
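The core idea above, combining the classifier's prediction with the polarity of coreferent sentiment expressions as a secondary signal, can be sketched as follows. This is a minimal illustration under assumed weights and thresholds, not the paper's exact scoring function or training setup:

```python
# Sketch of discourse-enhanced self-training (DEST). All helper names,
# the interpolation weight `alpha`, and the confidence threshold are
# illustrative assumptions, not the paper's exact formulation.

def discourse_signal(sentiments):
    """Fraction of coreferent sentiment expressions that are positive."""
    if not sentiments:
        return 0.5  # no discourse evidence: stay neutral
    return sum(1 for s in sentiments if s == "+") / len(sentiments)

def joint_score(clf_prob_pos, sentiments, alpha=0.5):
    """Interpolate the classifier's P(positive) with the discourse signal."""
    return alpha * clf_prob_pos + (1 - alpha) * discourse_signal(sentiments)

def self_train(labeled, unlabeled, train_fn, n_iters=10, confidence=0.8):
    """Iteratively move confidently labeled events into the training set.

    labeled   : list of (event, polarity) pairs, polarity in {"+", "-"}
    unlabeled : list of (event, coreferent_sentiments) pairs
    train_fn  : trains on `labeled`, returns a function event -> P(positive)
    """
    for _ in range(n_iters):
        clf = train_fn(labeled)
        newly_labeled = []
        for event, sentiments in unlabeled:
            score = joint_score(clf(event), sentiments)
            if score >= confidence:             # both signals lean positive
                newly_labeled.append((event, sentiments, "+"))
            elif score <= 1 - confidence:       # both signals lean negative
                newly_labeled.append((event, sentiments, "-"))
        if not newly_labeled:
            break  # no new examples labeled; stop early
        for event, sentiments, label in newly_labeled:
            labeled.append((event, label))
            unlabeled.remove((event, sentiments))
    return labeled
```

With a toy classifier, an unlabeled event whose classifier score and coreferent sentiment expressions agree gets labeled, while conflicting signals leave it in the unlabeled pool; this mirrors the confirm/disconfirm role of the discourse signal described above.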
Methods
  • The strength of the method is that this signal can serve alongside the main classifier to produce a diverse new set of high-quality labels
Results
  • Experimental Results for Blogs Data

    Baselines: The authors developed two baselines to compare with Aff-BERT.
  • Table 4 reports the performance of the models after 10 iterations of learning with unlabeled data, where the first row shows the results for Aff-BERT trained only with gold labeled data for comparison.
  • For both self-training models, no new examples were labeled after 10 iterations.
  • The authors' Discourse-enhanced Self-training algorithm achieved larger gains, improving precision over the supervised model from 76.5% to 79.6% and recall from 75.2% to 78.7%.
Conclusion
  • The authors proposed a BERT-based supervised classifier for affective event recognition and showed that it substantially outperforms a large affective event knowledge base.
  • The authors designed a novel discourse-enhanced self-training algorithm to leverage unlabeled data iteratively.
  • By combining both the affective event classifier’s prediction and the polarities of coreferent sentiment expressions, the algorithm substantially improved upon the supervised learning results.
  • The authors believe that the general idea behind the discourse-enhanced self-training approach could be useful for many other types of problems where additional information can be extracted from larger contexts to serve as a secondary signal to help confirm or disconfirm a classifier’s predictions
Summary
  • Objectives:

    The authors' goal is to design a classifier that can label an event tuple with affective polarity.
Tables
  • Table 1: Performance of AEKB across data sets
  • Table 2: Performance on the blogs test set
  • Table 3: Examples of harvested tweets and extracted events
  • Table 4: Results for learning from unlabeled data
  • Table 5: Recall and precision across polarities
  • Table 6: Examples of labels that are changed by the joint scoring function
Related work
  • Several lines of research have focused on the problem of recognizing events that have implicit affective states. Research on narrative understanding used bootstrapped learning to identify patient polarity verbs, which impart affective polarity to their patients (Goyal et al., 2010, 2013). Vu et al. (2014) extracted “emotion-provoking events” using the seed pattern “I am <EMOTION> that <EVENT>”, pattern expansion, and clustering. Reed et al. (2017) learned lexico-syntactic patterns associated with first-person affect to improve affective sentence classification alongside supervised learners. Li et al. (2014) extracted “major life events” from Twitter by clustering tweets that occurred with speech act words, such as “congratulations” or “condolences”. However, their work did not assign affective polarity to events and focused only on major life events that prompt expressive speech acts. Our work has a broader scope, aiming to recognize everyday events as well (e.g., being hungry is negative, and seeing a rainbow is positive).
Funding
  • Our Discourse-enhanced Self-training algorithm achieved larger gains, improving precision over the supervised model from 76.5% to 79.6% and recall from 75.2% to 78.7%
  • Discourse-enhanced Self-training showed even greater relative improvement over the supervised learner when only 50% of the gold data was used for training
Study subjects and analysis
tweets: 3
If the context around the sentiment expression satisfies the constraints mentioned earlier, then we extract the events in the previous sentence as affective event candidates. Table 3 shows three tweets that were retrieved with queries for the sentiment expressions in italics, along with the events extracted from each tweet in boldface.

tweets: 5000
In each iteration, we form queries for sentiment or event phrases that have frequency ≥ 5 and have not been used as queries previously. We download 5,000 tweets for each event query and 1,000 tweets for each sentiment expression query. Finally, we discard retweets and duplicated tweets.
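The harvesting filters described above (frequency threshold for new queries, discarding retweets and duplicates) can be sketched as simple functions. The retweet heuristic and function names below are assumptions about the pipeline, not the authors' exact implementation:

```python
# Illustrative sketch of the tweet-harvesting filters. The "RT @" prefix
# check is a crude retweet heuristic assumed for illustration.

def select_queries(phrase_counts, used, min_freq=5):
    """Phrases frequent enough to query that have not been queried before."""
    return [p for p, n in phrase_counts.items()
            if n >= min_freq and p not in used]

def clean_tweets(tweets):
    """Drop retweets and exact-duplicate texts, keeping first occurrences."""
    seen, kept = set(), []
    for t in tweets:
        text = t.strip()
        if text.startswith("RT @"):  # discard retweets (heuristic)
            continue
        if text in seen:             # discard duplicated tweets
            continue
        seen.add(text)
        kept.append(text)
    return kept
```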

Reference
  • Elisabeth André, Laila Dybkjær, Wolfgang Minker, and Paul Heisterkamp. 2004. Affective Dialogue Systems: Tutorial and Research Workshop. In Lecture Notes in Computer Science, volume 3068. Springer.
  • A. Blum and T. Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-Training.
  • Ying Chen, Wenjun Hou, Xiyao Cheng, and Shoushan Li. 2018. Joint Learning for Emotion Classification and Emotion Cause Detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018).
  • Yoonjung Choi and Janyce Wiebe. 2014. +/−EffectWordNet: Sense-level Lexicon Acquisition for Opinion Inference. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
  • Lingjia Deng and Janyce Wiebe. 2014. Sentiment Propagation via Implicature Constraints. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014).
  • Lingjia Deng and Janyce Wiebe. 2015. Joint Prediction for Entity/Event-Level Sentiment Analysis using Probabilistic Soft Logic Models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015).
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT/NAACL 2019).
  • Haibo Ding and Ellen Riloff. 2016. Acquiring Knowledge of Affective Events from Blogs using Label Propagation. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI 2016).
  • Haibo Ding and Ellen Riloff. 2018. Weakly Supervised Induction of Affective Events by Optimizing Semantic Consistency. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018).
  • A. Goyal, E. Riloff, and H. Daume III. 2010. Automatically Producing Plot Unit Representations for Narrative Text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010).
  • Amit Goyal, Ellen Riloff, and Hal Daume III. 2013. A Computational Model for Plot Units. Computational Intelligence, 29(3):466–488.
  • Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A Question Answering Approach for Emotion Cause Extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017).
  • Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-Driven Emotion Cause Extraction with Corpus Construction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016).
  • Jun Seok Kang, Song Feng, Leman Akoglu, and Yejin Choi. 2014. ConnotationWordNet: Learning Connotation over the Word+Sense Network. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014).
  • A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. Competitive Self-Trained Pronoun Interpretation. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT/NAACL 2004).
  • Wendy G. Lehnert. 1981. Plot Units and Narrative Summarization. Cognitive Science, 5(4):293–331.
  • Jiwei Li, Alan Ritter, Claire Cardie, and Eduard Hovy. 2014. Major Life Event Extraction from Twitter based on Congratulations/Condolences Speech Acts. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
  • Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A Co-Attention Neural Network Model for Emotion Cause Analysis with Emotional Context Awareness. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018).
  • D. McClosky, E. Charniak, and M. Johnson. 2006. Effective Self-Training for Parsing. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT/NAACL 2006).
  • R. Mihalcea. 2004. Co-training and Self-training for Word Sense Disambiguation. In Proceedings of the Eighth Conference on Natural Language Learning (CoNLL 2004).
  • Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT/NAACL 2018).
  • Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation Frames: A Data-Driven Investigation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016).
  • Lena Reed, JiaQi Wu, Shereen Oraby, Pranav Anand, and Marilyn A. Walker. 2017. Learning Lexico-functional Patterns for First-person Affect. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017).
  • E. Riloff, A. Qadir, P. Surve, L. De Silva, N. Gilbert, and R. Huang. 2013. Sarcasm as Contrast between a Positive Sentiment and Negative Situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013).
  • Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven Response Generation in Social Media. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011).
  • Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 Task 4: Sentiment Analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval 2017).
  • Jun Saito, Yugo Murawaki, and Sadao Kurohashi. 2019. Minimally Supervised Learning of Affective Events Using Discourse Relations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP/IJCNLP 2019).
  • Pratik Saraf, R. Sedamkar, and Sheetal Rathi. 2015. PrefixSpan Algorithm for Finding Sequential Pattern with Various Constraints. International Journal of Applied Information Systems, 9:37–41.
  • Hoa Trong Vu, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Acquiring a Dictionary of Emotion-Provoking Events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014).
  • Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005).
  • Rui Xia and Zixiang Ding. 2019. Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).
Author
Yuan Zhuang
Tianyu Jiang