
Emotion Cause Pair Extraction as Sequence Labeling Based on A Novel Tagging Scheme

EMNLP 2020, pp. 3568–3573


Abstract

The task of emotion-cause pair extraction deals with finding all emotions and the corresponding causes in unannotated emotion texts. Most recent studies are based on the likelihood of the Cartesian product among all clause candidates, resulting in a high computational cost. Targeting this issue, we regard the task as a sequence labeling problem. […]

Introduction
  • Emotion-cause pair extraction (ECPE) aims to extract all potential pairs of emotions and the corresponding causes from unannotated emotion texts, such as (c3, c1) and (c3, c2) in: Ex.1 A policeman visited the old man with the lost money, (c1)| and told him that the thief was caught. (c2)| The old man was very happy, (c3)| and deposited the money in the bank. (c4)

    This task for pair extraction closely relates to the traditional emotion cause extraction task, which aims at identifying the causes for a given emotion expression.
  • Many works (Gui et al., 2017; Li et al., 2018, 2019; Xu et al., 2019; Fan et al., 2019; Xia et al., 2019; Ding et al., 2019) on emotion cause extraction have been published recently, and all of them are evaluated on the dataset released by Gui et al. (2016).
  • However, that task requires emotions to be annotated before the causes can be extracted, which is labor-intensive and limits real-world applications.
  • Recent studies (Song et al., 2020; Tang et al., 2020) have focused on solving this task using a multi-task learning framework (Caruana, 1993) with well-designed attention mechanisms (Bahdanau et al., 2015), but they extract emotion-cause pairs by computing a pair matrix based on the Cartesian product of all clauses in a text, which makes the computational cost expensive: the time complexity is O(n²).
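The quadratic cost of the pair-matrix approach can be made concrete with a small sketch (illustrative only, not the cited models' code): every (emotion clause, cause clause) combination, i.e. the Cartesian product of the clauses, must be scored, so the number of candidates grows as n² in the clause count n.

```python
# Illustrative sketch of the pair-matrix candidate space criticized above:
# pair-matrix methods score every (emotion, cause) clause combination,
# i.e. the Cartesian product, giving n * n candidates for n clauses.
from itertools import product

def candidate_pairs(clauses):
    """Enumerate all (emotion_index, cause_index) candidates for scoring."""
    n = len(clauses)
    return list(product(range(n), range(n)))  # O(n^2) candidates

# The four clauses of Ex.1 above yield 16 candidate pairs.
clauses = ["policeman visited the old man", "the thief was caught",
           "the old man was very happy", "deposited the money"]
print(len(candidate_pairs(clauses)))  # 16
```

A sequence-labeling formulation avoids enumerating this product, which is the paper's central efficiency argument.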
Highlights
  • LML outperforms E2EECPE by capturing the mutual interdependence between emotions and causes with a multi-level attention mechanism, and LMB further improves performance with BERT embeddings.
  • Our model outperforms the previous methods, significantly improving cause extraction by 5.15% and emotion-cause pair extraction by 2.26% in F1-measure (p < 0.001).
  • The reason may be that our model always processes texts in linear time rather than over the Cartesian product, whose time complexity is O(n²), thereby greatly reducing the search space.
Methods
  • The joint models (E2EECPE, LML, LMB) outperform the previous pipelined methods by reducing error propagation.
Results
  • The authors choose l = 3 in the final model, since it gives the best performance in their experiments.
  • The authors perform a further experiment to confirm this superiority empirically.
  • Runtime analysis is conducted only against LMB, since LMB is based on BERT and is the current state-of-the-art method.
  • The results show that the model is 36% and 44% faster than LMB in the training and inference stages respectively, indicating the efficiency of the proposed method.
Conclusion
  • The authors treat emotion-cause pair extraction as a sequence labeling problem and propose an end-to-end model based on a novel tagging scheme with multiple labels.
  • The proposed model integrates the emotion-cause structure into a unified framework, so that emotions and their related causes can be extracted simultaneously.
  • The proposed model parses the input text in order from left to right, greatly reducing the search space and leading to a speed-up.
  • The authors will explore extending this approach to achieve full coverage.
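The left-to-right, linear-time idea can be illustrated with a toy tagging scheme. Note this is a hypothetical illustration: the paper's actual multi-label tag set is not reproduced in this summary, so the sketch below invents a minimal scheme in which a cause clause's tag encodes a signed offset to its emotion clause, letting all pairs be decoded in one O(n) pass.

```python
# Hypothetical sketch (NOT the paper's actual tag set): each clause gets
# one tag -- "E" for an emotion clause, "C<signed offset>" for a cause
# clause pointing to its emotion clause, "O" otherwise. Pairs then decode
# in a single left-to-right pass, i.e. linear time in the clause count.

def decode_pairs(tags):
    """Decode (emotion_index, cause_index) pairs from per-clause tags."""
    pairs = []
    for i, tag in enumerate(tags):
        if tag.startswith("C"):       # cause clause
            offset = int(tag[1:])     # signed offset to its emotion clause
            pairs.append((i + offset, i))
    return pairs

# Ex.1 above: clause c3 (index 2) carries the emotion "happy";
# c1 (index 0) and c2 (index 1) are its causes.
tags = ["C+2", "C+1", "E", "O"]
print(decode_pairs(tags))  # [(2, 0), (2, 1)]
```

The decoding loop touches each clause once, which is the sense in which a tagging formulation replaces the O(n²) pair matrix with linear-time processing.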
Tables
  • Table 1: Statistical information about the dataset
  • Table 2: Comparison with baselines. † denotes average scores over 20 runs; the best scores are in bold
  • Table 3: F1 scores with different emotion scope limitations over all the tasks
Funding
  • This work was partially supported by the National Natural Science Foundation of China (61632011, 61876053), Shenzhen Foundational Research Funding (JCYJ20180507183527919, JCYJ20180507183608379), and Guangdong Province COVID-19 Pandemic Control Research Funding (2020KZDZX1224).
Study subjects and analysis
The authors pre-process the whole dataset following Xia and Ding (2019). In detail, there are 1746 samples with one emotion-cause pair, 177 samples with two pairs, and 22 samples with more than two pairs. The quartile information about the number of clauses per sample is shown in Table 1.
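The bucketing described above can be sketched as follows. The helper and its toy input are hypothetical; only the reported bucket counts (1746 / 177 / 22) come from the paper.

```python
# Sketch of the pre-processing count check described above: bucket each
# sample by its number of emotion-cause pairs, matching the three
# categories reported for the dataset (1 pair / 2 pairs / >2 pairs).
from collections import Counter

def bucket(pair_counts):
    """Map per-sample pair counts to the reported bucket labels."""
    c = Counter()
    for k in pair_counts:
        c["1 pair" if k == 1 else "2 pairs" if k == 2 else ">2 pairs"] += 1
    return dict(c)

# Hypothetical toy input; on the real dataset the buckets are 1746/177/22.
print(bucket([1, 1, 2, 3, 1]))
```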

References
  • Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  • Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Machine Learning, Proceedings of the Tenth International Conference, University of Massachusetts, Amherst, MA, USA, June 27-29, 1993, pages 41–48.
  • Ying Chen, Sophia Yat Mei Lee, Shoushan Li, and Chu-Ren Huang. 2010. Emotion cause detection with linguistic constructions. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 179–187. Tsinghua University Press.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186.
  • Zixiang Ding, Huihui He, Mengran Zhang, and Rui Xia. 2019. From independent prediction to reordered prediction: Integrating relative position and global label information to emotion cause identification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6343–6350. AAAI Press.
  • Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 11–22.
  • Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Lidong Bing, Min Yang, Ruifeng Xu, and Ruibin Mao. 2019. A knowledge regularized hierarchical approach for emotion cause analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5613–5623. Association for Computational Linguistics.
  • Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering approach for emotion cause extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1593–1602.
  • Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extraction with corpus construction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1639–1649.
  • Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
  • Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
  • Xiangju Li, Shi Feng, Daling Wang, and Yifei Zhang. 2019. Context-aware emotion cause analysis with multi-attention-based neural network. Knowledge-Based Systems, 174:205–218.
  • Xiangju Li, Kaisong Song, Shi Feng, Daling Wang, and Yifei Zhang. 2018. A co-attention neural network model for emotion cause analysis with emotional context awareness. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Authors
Chaofa Yuan
Chuang Fan
Jianzhu Bao
Ruifeng Xu