
Gated Neural Networks for Targeted Sentiment Analysis.

AAAI, pp. 3087–3093, 2016

Abstract

Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been sh…

Introduction
  • Targeted sentiment analysis investigates the classification of opinion polarities towards certain target entity mentions in given sentences (Jiang et al 2011; Dong et al 2014; Vo and Zhang 2015).
  • Jiang et al (2011) define rich features over POS tags and dependency links of a given target.
  • Example tweets with annotated targets: "She began to love [miley ray cyrus]+ since 2013 :)"; "Some chocolate a tup of ice cream and [taylor swift]+ songs."; "Does Vmware fusion support [Windows 7]0 yet?"; "[nick cannon]− face is annoying!!!!!!"; "the author has no interest in seeing [britney spears]− do anything".
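The task set out above maps a (sentence, target mention) pair to a polarity label, so the same sentence can receive different labels for different targets. A minimal illustrative sketch of that input/output contract (not the authors' code; the data layout is an assumption), reusing the example tweets:

```python
# Targeted sentiment analysis: each (sentence, target span) pair gets its
# own polarity label (+, 0, or -). Hypothetical data layout for the task,
# with examples taken from the introduction above.
examples = [
    ("She began to love miley ray cyrus since 2013 :)", "miley ray cyrus", "+"),
    ("Does Vmware fusion support Windows 7 yet?", "Windows 7", "0"),
    ("nick cannon face is annoying!!!!!!", "nick cannon", "-"),
]

for sentence, target, polarity in examples:
    # The target must occur in the sentence; the label attaches to the
    # target mention, not to the sentence as a whole.
    assert target in sentence
    print(f"[{target}]{polarity} <- {sentence}")
```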
Highlights
  • Targeted sentiment analysis investigates the classification of opinion polarities towards certain target entity mentions in given sentences (Jiang et al 2011; Dong et al 2014; Vo and Zhang 2015)
  • The final results on the test dataset are given in Table 4, which shows the performance of the baseline, the baselines with the separate gated recurrent neural network (GRNN) and G3 models, and the final model, respectively
  • We can see that both GRNN and G3 bring significant improvements, and the combination of the two leads to the best results, which are consistent with the development experiments
  • We proposed two gated neural networks for targeted sentiment analysis, one being used to capture tweet-level syntactic and semantic information, and the other being used to model the interactions between the left context and the right context of a given target
  • Gates are used in both neural networks, so that the target influences the selection of sentiment signals over the context
  • Experiments demonstrated that the two gated neural networks are effective in targeted sentiment analysis, bringing significant improvements
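The gating idea described in the highlights can be sketched roughly as follows. This is a hypothetical numpy illustration, not the authors' model: all names, dimensions, and the use of pooled context vectors are assumptions. A sigmoid gate conditioned on the target weights the left- and right-context representations before they are combined, so the target influences which sentiment signals pass through:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # embedding/hidden size (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical pooled representations of the left context, the right
# context, and the target mention itself.
h_left = rng.standard_normal(d)
h_right = rng.standard_normal(d)
h_target = rng.standard_normal(d)

# Gate parameters, one gate per context side.
W_l, W_r = rng.standard_normal((d, d)), rng.standard_normal((d, d))
U_l, U_r = rng.standard_normal((d, d)), rng.standard_normal((d, d))

# Each gate sees its own context AND the target, so the target decides
# how much sentiment signal to let through from each side.
z_l = sigmoid(W_l @ h_left + U_l @ h_target)
z_r = sigmoid(W_r @ h_right + U_r @ h_target)

# Gated combination of the two context sides, to be fed to a classifier.
h = z_l * h_left + z_r * h_right
print(h.shape)
```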
Methods
  • Given a set of annotated training examples, the models are trained to minimize a cross-entropy loss objective with an l2 regularization term, defined by L(θ) = −Σᵢ log p_{t_i} + (λ/2)‖θ‖², where θ is the set of model parameters, p_{t_i} is the probability of the ith training example as given by the model, and λ is the regularization hyper-parameter.
  • The authors apply online training, where model parameters are optimized by using Adagrad (Duchi, Hazan, and Singer 2011).
  • In order to avoid overfitting, the authors use the dropout technique (Hinton et al 2012), randomly dropping some dimensions of the input word embedding with a fixed probability p_drop.
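The training setup above — cross-entropy with an l2 term, online updates with Adagrad, and dropout on the input — can be sketched as follows. This is a hypothetical numpy illustration with a plain softmax classifier standing in for the full model; the shapes, learning rate, and all names are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 8, 20, 3        # examples, feature size, polarity classes (assumed)
lam, lr, p_drop = 1e-4, 0.1, 0.25

X = rng.standard_normal((n, d))          # stand-in input representations
y = rng.integers(0, k, size=n)           # gold polarity labels
W = rng.standard_normal((k, d)) * 0.01   # softmax parameters (the "theta")
G = np.zeros_like(W)                     # Adagrad gradient accumulator

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for x_i, t_i in zip(X, y):
    # Dropout: randomly zero input dimensions with probability p_drop.
    x_i = x_i * (rng.random(d) >= p_drop)

    p = softmax(W @ x_i)
    # Per-example loss: -log p_{t_i} + (lambda/2) * ||theta||^2.
    loss = -np.log(p[t_i]) + 0.5 * lam * np.sum(W ** 2)

    # Gradient of cross-entropy + l2 for the softmax layer.
    delta = p.copy()
    delta[t_i] -= 1.0
    grad = np.outer(delta, x_i) + lam * W

    # Online Adagrad update (Duchi, Hazan, and Singer 2011): per-parameter
    # learning rates scaled by the root of accumulated squared gradients.
    G += grad ** 2
    W -= lr * grad / (np.sqrt(G) + 1e-8)

print(W.shape)
```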
Results
  • The final results on the test dataset are given in Table 4, which shows the performances of the baseline, baselines with separate GRNN and G3, and the final model, respectively.
  • There has been other previous work on targeted sentiment analysis (Jiang et al 2011; Tang et al 2014; Dong et al 2014), which the authors did not include in Table 4
  • This is mainly because their results are reported on a different dataset, on which Vo and Zhang (2015) have given by far the best reported accuracies
Conclusion
  • Example outputs: the baseline predicts "#nowplaying [lady gaga]+ let love down" and "[Michael]0 dislikes to work with him", whereas baseline + GRNN predicts "#nowplaying [lady gaga]0 let love down" and "[Michael]+ dislikes to work with him".
  • Further example tweets: "The author cannot get a fresh install installed, stupid drivers errors"; "Lay on the author's sofa and listen to by [Britney Spears]0"; "Another boring day."
  • Experiments demonstrated that the two gated neural networks are effective in targeted sentiment analysis, bringing significant improvements
Tables
  • Table1: Experimental corpus statistics
  • Table2: Hyper-parameter values in our model
  • Table3: Development results
  • Table4: Final results on the test dataset, where ‡ denotes a p-value below 10−5 by pairwise t-test, compared with the baseline system
  • Table5: Example outputs of baseline and GRNN
  • Table6: Example outputs of GRNN and final model
Related work
  • Targeted sentiment analysis is related to fine-grained sentiment analysis (Wiebe, Wilson, and Cardie 2005; Jin, Ho, and Srihari 2009; Li et al 2010; Yang and Cardie 2013; Nakov et al 2013), which extracts opinion expressions, holders and targets jointly from given sentences. Compared with fine-grained sentiment, targeted sentiment offers less operational details, but on the other hand requires less manual annotation. There has also been work on open domain targeted sentiment (Mitchell et al 2013; Zhang, Zhang, and Vo 2015), which identifies both the opinion targets and their sentiments. The task can be regarded as a joint problem of entity recognition and targeted sentiment classification.

    Other related tasks include aspect-oriented sentiment analysis (Hu and Liu 2004; Popescu and Etzioni 2007), which extracts product features and opinions towards them from user reviews, and topic-oriented sentiment analysis (Yi et al 2003; Wang et al 2011), which extracts features and/or sentiments towards certain topics or subjects. These tasks are close in spirit to targeted sentiment analysis, with subtle variations on the domain and task formulation.
Funding
  • This work is supported by the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301, SRG ISTD 2012 038 from Singapore University of Technology and Design, and the National Natural Science Foundation of China (NSFC) under grant 61170148.
References
  • Bengio, Y.; Simard, P.; and Frasconi, P. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2):157–166.
  • Cho, K.; van Merrienboer, B.; Bahdanau, D.; and Bengio, Y. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Syntax, Semantics and Structure in Statistical Translation, 103.
  • Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014b. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, 1724–1734.
  • Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
  • Dong, L.; Wei, F.; Tan, C.; Tang, D.; Zhou, M.; and Xu, K. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In ACL, 49–54.
  • dos Santos, C., and Gatti, M. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, 69–78.
  • Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR 12:2121–2159.
  • Go, A.; Bhayani, R.; and Huang, L. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford 1:12.
  • Graves, A. 2012. Supervised Sequence Labelling with Recurrent Neural Networks, volume 385. Springer.
  • Hinton, G. E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. R. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
  • Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation 9(8):1735–1780.
  • Hu, M., and Liu, B. 2004. Mining and summarizing customer reviews. In SIGKDD, 168–177.
  • Irsoy, O., and Cardie, C. 2013. Bidirectional recursive neural networks for token-level labeling with structure. CoRR abs/1312.0493.
  • Jiang, L.; Yu, M.; Zhou, M.; Liu, X.; and Zhao, T. 2011. Target-dependent twitter sentiment classification. In Proceedings of the 49th ACL, 151–160.
  • Jin, W.; Ho, H. H.; and Srihari, R. K. 2009. A novel lexicalized HMM-based learning framework for web opinion mining. In Proceedings of the 26th ICML, 465–472.
  • Kalchbrenner, N.; Grefenstette, E.; and Blunsom, P. 2014. A convolutional neural network for modelling sentences. In ACL, 655–665.
  • Li, F.; Han, C.; Huang, M.; Zhu, X.; Xia, Y.-J.; Zhang, S.; and Yu, H. 2010. Structure-aware review mining and summarization. In Proceedings of the 23rd COLING, 653–661.
  • Mikolov, T.; Karafiat, M.; Burget, L.; Cernocky, J.; and Khudanpur, S. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 1045–1048.
  • Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Mitchell, M.; Aguilar, J.; Wilson, T.; and Van Durme, B. 2013. Open domain targeted sentiment. In EMNLP 2013, 1643–1654.
  • Mohammad, S. M.; Kiritchenko, S.; and Zhu, X. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In SemEval 2013, volume 2, 321–327.
  • Nakov, P.; Rosenthal, S.; Kozareva, Z.; Stoyanov, V.; Ritter, A.; and Wilson, T. 2013. SemEval-2013 task 2: Sentiment analysis in twitter. In SemEval 2013, 312–320.
  • Pang, B.; Lee, L.; and Vaithyanathan, S. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In EMNLP, 79–86.
  • Paulus, R.; Socher, R.; and Manning, C. D. 2014. Global belief recursive neural networks. In NIPS, 2888–2896.
  • Popescu, A.-M., and Etzioni, O. 2007. Extracting product features and opinions from reviews. In Natural Language Processing and Text Mining. Springer. 9–28.
  • Socher, R.; Perelygin, A.; Wu, J. Y.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 1631–1642.
  • Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 3104–3112.
  • Tang, D.; Wei, F.; Yang, N.; Zhou, M.; Liu, T.; and Qin, B. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In ACL, 1555–1565.
  • Vo, D.-T., and Zhang, Y. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of the IJCAI, 1347–1353.
  • Wang, X.; Wei, F.; Liu, X.; Zhou, M.; and Zhang, M. 2011. Topic sentiment analysis in twitter: A graph-based hashtag sentiment classification approach. In CIKM, 1031–1040.
  • Wiebe, J.; Wilson, T.; and Cardie, C. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation 39(2–3):165–210.
  • Yang, B., and Cardie, C. 2013. Joint inference for fine-grained opinion extraction. In ACL (1), 1640–1649.
  • Yi, J.; Nasukawa, T.; Bunescu, R.; and Niblack, W. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. In Proceedings of the ICDM, 427–434.
  • Zhang, M.; Zhang, Y.; and Vo, D.-T. 2015. Neural networks for open domain targeted sentiment. In Proceedings of the 2015 Conference on EMNLP.
  • Zhou, S.; Chen, Q.; Wang, X.; and Li, X. 2014. Hybrid deep belief networks for semi-supervised sentiment classification. In Proceedings of COLING 2014, 1341–1349.