
Big Data: Deep Learning for financial sentiment analysis.

J. Big Data, no. 1 (2018)


Abstract

Deep Learning and Big Data analytics are two focal points of data science. Deep Learning models have achieved remarkable results in speech recognition and computer vision in recent years. Big Data is important for organizations that need to collect a huge amount of data, like a social network, and one of the greatest assets to use Deep Learning […]

Introduction
  • The Internet, as a global system of interconnection, provides a link between billions of devices and people around the world.
  • The rapid development of social networks has caused tremendous growth in users and digital content [1].
  • It opens opportunities for people with various skills and knowledge to share their experiences and wisdom with each other.
  • There are many websites like Yelp, Wikipedia, Flickr, etc.
  • There are websites that give users the ability to consult with professionals, and one topic that is always popular is investment.
Highlights
  • The Internet, as a global system of interconnection, provides a link between billions of devices and people around the world
  • In the “Deep Learning in Big Data analytics” section we explore how Deep Learning can be used for Big Data analysis and discuss some challenges that Deep Learning needs to overcome to do analysis in the Big Data domain; the “Results and discussion” section explains our experiments and goes into depth about how we can apply Deep Learning to financial sentiment analysis
  • Sentiment analysis: Following the early work in sentiment analysis done in [63, 64], we examine source materials and apply natural language processing techniques to determine the attitude of the writer towards a subject
  • With the result of logistic regression based on the bag-of-words model used as a baseline, we investigate whether Deep Learning methods can improve the accuracy of this logistic regression in Big Data
  • Our paper mainly focuses on information retrieval, so we summarize Deep Learning in sentiment analysis
  • We tried to see if Deep Learning models could improve the accuracy of sentiment analysis of StockTwits messages
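The bag-of-words logistic regression used as the baseline above can be sketched in a few lines. The following is an illustration only, not the authors' code: a toy corpus of invented Bullish (1) / Bearish (0) messages stands in for the proprietary StockTwits data, and the regression is trained by plain batch gradient descent rather than a library solver.

```python
import numpy as np

def build_vocab(docs):
    """Map each distinct token to a column index."""
    vocab = {}
    for doc in docs:
        for tok in doc.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def bow_matrix(docs, vocab):
    """Term-count bag-of-words matrix, one row per document."""
    X = np.zeros((len(docs), len(vocab)))
    for i, doc in enumerate(docs):
        for tok in doc.lower().split():
            if tok in vocab:
                X[i, vocab[tok]] += 1
    return X

def train_logreg(X, y, lr=0.5, epochs=500):
    """Logistic regression fit by batch gradient descent on log loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        grad = p - y                        # dL/dz for the log loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy corpus: hypothetical Bullish (1) / Bearish (0) messages.
docs = ["buy the dip strong earnings", "great quarter going up",
        "sell now weak guidance", "going down bad earnings"]
y = np.array([1, 1, 0, 0])
vocab = build_vocab(docs)
X = bow_matrix(docs, vocab)
w, b = train_logreg(X, y)
preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())
```

The Deep Learning models and feature selection methods in the study are then compared against the accuracy of this kind of baseline.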
Methods
  • Dataset: The authors were fortunate to receive permission from StockTwits Inc. to have access to their datasets.
  • As a social network, it provides the opportunity for sharing experience among traders in the stock market.
  • Through the StockTwits website, investors, analysts, and others interested in the market can contribute a short message limited to 140 characters about the stock market.
  • This message will be posted to a public stream visible to all site visitors.
  • Messages can be labeled Bullish or Bearish by the authors to specify their sentiment about various stocks
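To make concrete what such messages look like as classifier input, here is a small stdlib-only sketch of turning a StockTwits-style message into a (tokens, label) pair. The dict fields (`body`, `sentiment`) are assumptions made for illustration, not the actual StockTwits data format.

```python
import re

LABELS = {"Bullish": 1, "Bearish": 0}  # sentiment tags authors attach to messages

def preprocess(message):
    """Lowercase the text, keep cashtags like $AAPL as single tokens,
    and drop all other punctuation."""
    text = message["body"][:140]  # messages are capped at 140 characters
    return re.findall(r"\$[a-z]+|[a-z]+", text.lower())

def to_example(message):
    """Return (tokens, 0/1 label), or None for messages the author left unlabeled."""
    label = LABELS.get(message.get("sentiment"))
    if label is None:
        return None
    return preprocess(message), label

msg = {"body": "$AAPL looking strong into earnings, buying more",
       "sentiment": "Bullish"}
tokens, label = to_example(msg)
print(tokens, label)
```

Unlabeled messages (the majority on a public stream) are simply skipped when building a supervised training set.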
Results
  • Results and discussion: the authors explain the experiments in applying Deep Learning methods on the StockTwits dataset.
  • Doc2vec: As the first step, the authors apply the doc2vec model to the StockTwits dataset to see if it can increase the accuracy of sentiment prediction for stock market writers.
  • This was chosen as the first model because it uses the paragraph as a memory to keep the order of the words in a sentence, and maps paragraphs, as well as words, to a vector
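The "paragraph as memory" idea can be made concrete with a stripped-down sketch of the PV-DM variant of doc2vec [73]: each document gets its own trainable vector, which is combined with the context word vectors to predict a target word, so the paragraph vector learns to carry document-level information. This is a toy NumPy re-implementation for illustration only; in practice an off-the-shelf doc2vec implementation would be used.

```python
import numpy as np

rng = np.random.default_rng(0)

docs = [["price", "going", "up", "strong"],
        ["price", "going", "down", "weak"]]
vocab = sorted({w for d in docs for w in d})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8          # vocabulary size, embedding dimension

W = rng.normal(0, 0.1, (V, D))          # word vectors
P = rng.normal(0, 0.1, (len(docs), D))  # one paragraph vector per document
O = rng.normal(0, 0.1, (D, V))          # softmax output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(200):
    for d, doc in enumerate(docs):
        for t, target in enumerate(doc):
            context = [idx[w] for i, w in enumerate(doc) if i != t]
            # PV-DM: average the paragraph vector with the context word vectors
            h = (P[d] + W[context].sum(0)) / (1 + len(context))
            p = softmax(h @ O)
            p[idx[target]] -= 1.0        # gradient of cross-entropy wrt logits
            gh = O @ p                   # gradient wrt the hidden layer
            O -= lr * np.outer(h, p)
            P[d] -= lr * gh / (1 + len(context))
            for c in context:
                W[c] -= lr * gh / (1 + len(context))

print(P.shape)  # each document now has a learned fixed-length vector
```

The learned paragraph vectors `P` are then fed to a downstream sentiment classifier in place of bag-of-words counts.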
Conclusion
  • Deep Learning has good performance and promise in many areas, such as natural language processing.
  • Hidden layers in Deep Learning are generally used to extract features or data representations.
  • This hierarchical learning process in Deep Learning provides the opportunity to find word semantics and relations.
  • These attributes make Deep Learning one of the most desirable models for sentiment analysis
Tables
  • Table1: Performance of the logistic regression on the StockTwits dataset
  • Table2: Performance of the Chi-squared feature selection on the StockTwits dataset
  • Table3: Performance of the ANOVA F-test feature selection on the StockTwits dataset
  • Table4: Performance of the mutual information feature selection on the StockTwits dataset
  • Table5: Performance of doc2vec on the StockTwits dataset
  • Table6: Performance of the LSTM on the StockTwits dataset
  • Table7: Performance of the convolutional neural network on the StockTwits dataset
  • Table8: Comparison of Deep Learning models in financial sentiment analysis
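Tables 2–4 report filter-style feature selection over the bag-of-words features. As a concrete example of the first of these, here is a small sketch of the chi-squared score for a single binary term feature against a binary Bullish/Bearish label; the data below is a toy example, not the StockTwits counts.

```python
import numpy as np

def chi2_score(term_present, labels):
    """Chi-squared statistic for one binary term feature vs. a binary label.

    Builds the 2x2 contingency table of (term absent/present) x (label 0/1)
    and compares observed counts with the counts expected under independence;
    higher scores mean the term is more informative about the label.
    """
    obs = np.zeros((2, 2))
    for t, y in zip(term_present, labels):
        obs[int(t), int(y)] += 1
    row = obs.sum(1, keepdims=True)
    col = obs.sum(0, keepdims=True)
    exp = row @ col / obs.sum()          # expected counts under independence
    return float(((obs - exp) ** 2 / exp).sum())

# Toy data: term A tracks the label perfectly, term B is independent of it.
labels = [1, 1, 1, 0, 0, 0]
term_a = [1, 1, 1, 0, 0, 0]
term_b = [1, 0, 1, 0, 1, 0]
print(chi2_score(term_a, labels), chi2_score(term_b, labels))
```

Ranking all terms by this score and keeping the top-k gives the reduced feature set whose performance the tables report; the ANOVA F-test and mutual information columns are analogous rankings with different statistics.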
Related work
  • Specific Big Data domains, including computer vision [29] and speech recognition [30], have seen the advantages of using Deep Learning to improve classification results, but there are few works on Deep Learning architectures for sentiment analysis. In 2006 Alexandrescu et al. [31] presented a model where each word is represented as a vector of features, with a single embedding matrix used to look up all of these features. Luong et al. [32] use a recursive neural network (RNN) to model the morphological structure of words and learn morphologically aware embeddings. In 2013 Lazaridou et al. [33] tried to learn the meanings of phrases using compositional distributional semantic models, and Chrupała used a simple recurrent network (SRN) to learn continuous vector representations for sequences of characters, applying the model to a character-level text segmentation and labeling task. A meaningful search space can be constructed via Deep Learning by using a recurrent neural network [34]. Socher et al. in 2011 [35] used recursive autoencoders [36,37,38,39] for predicting sentiment distributions and proposed a semi-supervised approach. In 2012 Socher et al. [40] proposed a matrix-vector recursive neural network model for semantic compositionality, able to learn compositional vector representations for sentences of arbitrary length. The Recursive Neural Tensor Network (RNTN) architecture was proposed in [41]; RNTN uses word vectors and a parse tree to represent a phrase and then applies a tensor-based composition function to compute vectors for higher nodes [42].
Funding
  • Not applicable. Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Received: 20 April 2017; Accepted: 28 December 2017.
References
  • 1. Ellison NB, et al. Social network sites: definition, history, and scholarship. J Comput Mediat Commun. 2007;13(1):210–30.
  • 2. Wang G, Wang T, Wang B, Sambasivan D, Zhang Z, Zheng H, Zhao BY. Crowds on wall street: extracting value from social investing platforms, foundations and trends in information retrieval. New York: ACM; 2014.
  • 3. Freedman DA. Statistical models: theory and practice. Cambridge: Cambridge University Press; 2009.
  • 4. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems; 2012. p. 1097–105.
  • 5. Socher R, Huang EH, Pennington J, Manning CD, Ng AY. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. Adv Neural Inf Process Syst. 2011;24:801–9.
  • 6. Pearson K. Notes on regression and inheritance in the case of two parents. Proc R Soc Lond. 1895;58:240–2.
  • 7. Graves A, Mohamed AR, Hinton G. Speech recognition with deep recurrent neural networks. In: 2013 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2013.
  • 8. Dahl G, Mohamed AR, Hinton GE. Phone recognition with the mean–covariance restricted Boltzmann machine. Adv Neural Inf Process Syst. 2010;23:469–77.
  • 9. Dahl GE, Yu D, Deng L, Acero A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans Audio Speech Lang Process. 2012;20(1):30–42.
  • 10. Seide F, Li G, Yu D. Conversational speech transcription using context-dependent deep neural networks. In: Twelfth annual conference of the international speech communication association; 2011.
  • 11. Mohamed A, Dahl GE, Hinton G. Acoustic modeling using deep belief networks. IEEE Trans Audio Speech Lang Process. 2012;20(1):14–22.
  • 12. Arel I, Rose DC, Karnowski TP. Deep machine learning—a new frontier in artificial intelligence research [research frontier]. IEEE Comput Intell Mag. 2010;5(4):13–8.
  • 13. Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E. Deep learning applications and challenges in big data analytics. J Big Data. 2015;2:1.
  • 14. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324.
  • 15. Collobert R, Weston J, Bottou L, Karlen M, Kavukcuoglu K, Kuksa P. Natural language processing (almost) from scratch. J Mach Learn Res. 2011;12:2493–537.
  • 16. Collobert R, Weston J. A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th international conference on machine learning. New York: ACM; 2008. p. 160–7.
  • 17. Gao J, Deng L, Gamon M, He X, Pantel P. Modeling interestingness with deep neural networks. US Patent App. 14/304,863; 2014.
  • 18. Kalchbrenner N, Grefenstette E, Blunsom P. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188; 2014.
  • 19. Kim Y. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882; 2014.
  • 20. Shen Y, He X, Gao J, Deng L, Mesnil G. A latent semantic model with convolutional-pooling structure for information retrieval. In: Proceedings of the 23rd ACM international conference on information and knowledge management. New York: ACM; 2014. p. 101–10.
  • 21. Xu L, Liu K, Lai S, Zhao J, et al. Product feature mining: semantic clues versus syntactic constituents. ACL. 2014;1:336–46.
  • 22. Tang D, Wei F, Yang N, Zhou M, Liu T, Qin B. Learning sentiment-specific word embedding for twitter sentiment classification. ACL. 2014;1:1555–65.
  • 23. Weston J, Chopra S, Adams K. #TagSpace: semantic embeddings from hashtags. 2014.
  • 24. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18(7):1527–54.
  • 25. Bengio Y, Lamblin P, Popovici D, Larochelle H. Greedy layer-wise training of deep networks. Adv Neural Inf Process Syst. 2007;19:153–60.
  • 26. Li G, Zhu H, Cheng G, Thambiratnam K, Chitsaz B, Yu D, Seide F. Context-dependent deep neural networks for audio indexing of real-life data. In: IEEE spoken language technology workshop (SLT); 2012. p. 143–8.
  • 27. Brin S, Page L. Reprint of: the anatomy of a large-scale hypertextual web search engine. Comput Netw. 2012;56(18):3825–33.
  • 28. Mortensen EN, Barrett WA. Interactive segmentation with intelligent scissors. Graph Models Image Process. 1998;60(5):349–84.
  • 29. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012:1097–105.
  • 30. Hinton G, Deng L, Yu D, Dahl GE, Mohamed A, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN, et al. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag. 2012;29(6):82–97.
  • 31. Alexandrescu A, Kirchhoff K. Factored neural language models. In: Proceedings of the human language technology conference of the NAACL, companion volume: short papers. Association for Computational Linguistics; 2006. p. 1–4.
  • 32. Luong T, Socher R, Manning CD. Better word representations with recursive neural networks for morphology. In: CoNLL; 2013. p. 104–13.
  • 33. Lazaridou A, Marelli M, Zamparelli R, Baroni M. Compositionally derived representations of morphologically complex words in distributional semantics. ACL. 2013;1:1517–26.
  • 34. Kilgarriff A, Grefenstette G. Introduction to the special issue on the web as corpus. Comput Linguist. 2003;29(3):333–47.
  • 35. Socher R, Pennington J, Huang EH, Ng AY, Manning CD. Semi-supervised recursive autoencoders for predicting sentiment distributions. In: Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics; 2011. p. 151–61.
  • 36. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504–7.
  • 37. Hinton GE, Zemel RS. Autoencoders, minimum description length and Helmholtz free energy. Adv Neural Inf Process Syst. 1994:3–10.
  • 38. Smolensky P. Information processing in dynamical systems: foundations of harmony theory. Technical report, Colorado Univ at Boulder Dept of Computer Science; 1986.
  • 39. Hinton GE. Training products of experts by minimizing contrastive divergence. Neural Comput. 2002;14(8):1771–800.
  • 40. Socher R, Huval B, Manning CD, Ng AY. Semantic compositionality through recursive matrix-vector spaces. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning. Association for Computational Linguistics; 2012. p. 1201–11.
  • 41. Socher R, Bauer J, Manning CD, Ng AY. Parsing with compositional vector grammars. ACL. 2013;1:455–65.
  • 42. Socher R, Lin CC, Manning C, Ng AY. Parsing natural scenes and natural language with recursive neural networks. In: Proceedings of the 28th international conference on machine learning (ICML-11); 2011. p. 129–36.
  • 43. Collobert R. Deep learning for efficient discriminative parsing. In: Proceedings of the fourteenth international conference on artificial intelligence and statistics; 2011. p. 224–32.
  • 44. Market Sentiment. http://www.investopedia.com/.
  • 45. Pang B, Lee L. Opinion mining and sentiment analysis. Found Trends Inf Retrieval. 2008;2:1–35.
  • 46. Loughran T, McDonald B. When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. J Finance. 2011;66:35–65.
  • 47. Mao H, Gao P, Wang Y, Bollen J. Automatic construction of financial semantic orientation lexicon from large scale Chinese news corpus. In: 7th financial risks international forum; 2014.
  • 48. Steinwart I, Christmann A. Support vector machines. Berlin: Springer; 2008.
  • 49. Saif H, He Y, Alani H. Semantic sentiment analysis of Twitter. In: The semantic web—ISWC 2012. Berlin: Springer; 2012. p. 508–24.
  • 50. Silva N, Hruschka E, Hruschka E. Tweet sentiment analysis with classifier ensembles. Decis Support Syst. 2014;66:170–9.
  • 51. Fersini E, Messina E, Pozzi FA. Sentiment analysis: Bayesian ensemble learning. Decis Support Syst. 2014;68:26–38.
  • 52. Turney PD, Pantel P. From frequency to meaning: vector space models of semantics. J Artif Intell Res. 2010;37:141–88.
  • 53. Salton G, Buckley C. Term-weighting approaches in automatic text retrieval. Inf Process Manag. 1988;24(5):513–23.
  • 54. Robertson SE, Walker S. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In: Proceedings of the 17th annual international ACM SIGIR conference on research and development in information retrieval. New York: Springer; 1994. p. 232–41.
  • 55. Bengio Y, et al. Learning deep architectures for AI. Found Trends Mach Learn. 2009;2(1):1–127.
  • 56. Chen H, Chiang RHL, Storey VC. Business intelligence and analytics: from big data to big impact. MIS Quart. 2012;36:4.
  • 57. Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: IEEE computer society conference on computer vision and pattern recognition (CVPR 2005), vol. 1; 2005. p. 886–93.
  • 58. Lowe DG. Object recognition from local scale-invariant features. In: Proceedings of the seventh IEEE international conference on computer vision, vol. 2; 1999. p. 1150–7.
  • 59. Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473; 2014.
  • 60. Coates A, Ng AY. The importance of encoding versus training with sparse coding and vector quantization. In: Proceedings of the 28th international conference on machine learning (ICML-11); 2011. p. 921–8.
  • 61. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580; 2012.
  • 62. Fan J, Han F, Liu H. Challenges of big data analysis. Natl Sci Rev. 2014;1(2):293–314.
  • 63. Turney PD. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In: Proceedings of the 40th annual meeting of the Association for Computational Linguistics; 2002. p. 417–24.
  • 64. Pang B, Lee L, Vaithyanathan S. Thumbs up? Sentiment classification using machine learning techniques. In: Proceedings of the conference on empirical methods in natural language processing; 2002. p. 79–86.
  • 65. Kiritchenko S, Zhu X, Mohammad S. Sentiment analysis of short informal texts. J Artif Intell Res. 2014;50:723–62.
  • 66. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143:29–36.
  • 67. Bermingham ML, Pong-Wong R, Spiliopoulou A, Hayward C, Rudan I, Campbell H, Wright AF, Wilson JF, Agakov F, Navarro P, Haley CS. Application of high-dimensional feature selection: evaluation for genomic prediction in man. Sci Rep. 2015;5:10312.
  • 68. Torgo L. Data mining with R. Boca Raton: CRC Press; 2010.
  • 69. Saeys Y, Inza I, Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19):2507–17.
  • 70. Pearson K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Lond Edinburgh Dublin Philos Mag J Sci. 1900;50:157–75.
  • 71. Fisher R. Dispersion on a sphere. Proc R Soc Lond. 1953;217:295–305.
  • 72. Grünauer A, Vincze M. Using dimension reduction to improve the classification of high-dimensional data. arXiv preprint arXiv:1505.06907; 2015.
  • 73. Le Q, Mikolov T. Distributed representations of sentences and documents. In: International conference on machine learning, vol. 31; 2014.
  • 74. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. In: Workshop at ICLR; 2013.
  • 75. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In: Workshop at ICLR; 2013.
  • 76. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9:1735–80.
  • 77. Chellapilla K, Puri S, Simard P. High performance convolutional neural networks for document processing. In: Tenth international workshop on frontiers in handwriting recognition. Seattle: Suvisoft; 2006.
  • 78. Goodfellow I, Bengio Y, Courville A. Deep learning. MIT Press; 2016. http://www.deeplearningbook.org.
  • 79. Bengio Y, LeCun Y, et al. Scaling learning algorithms towards AI. Large Scale Kernel Mach. 2007;34(5):1–41.
  • 80. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828.
  • 81. Bengio Y. Deep learning of representations: looking forward. In: International conference on statistical language and speech processing. Berlin: Springer; 2013. p. 1–37.
  • 82. Calandra R, Raiko T, Deisenroth MP, Pouzols FM. Learning deep belief networks from non-stationary streams. In: International conference on artificial neural networks. Berlin: Springer; 2012. p. 379–86.
  • 83. Zhou G, Sohn K, Lee H. Online incremental feature learning with denoising autoencoders. In: Artificial intelligence and statistics; 2012. p. 1453–61.
  • 84. Dean J, Corrado G, Monga R, Chen K, Devin M, Mao M, Senior A, Tucker P, Yang K, Le QV, et al. Large scale distributed deep networks. In: Advances in neural information processing systems; 2012. p. 1223–31.
  • 85. Coates A, Huval B, Wang T, Wu D, Catanzaro B, Ng A. Deep learning with COTS HPC systems. In: International conference on machine learning; 2013. p. 1337–45.
  • 86. Glorot X, Bordes A, Bengio Y. Domain adaptation for large-scale sentiment classification: a deep learning approach. In: Proceedings of the 28th international conference on machine learning (ICML-11); 2011. p. 513–20.
  • 87. Chopra S, Balakrishnan S, Gopalan R. DLID: deep learning for domain adaptation by interpolating between domains. In: ICML workshop on challenges in representation learning, vol. 2; 2013.
  • 88. Larochelle H, Bengio Y, Louradour J, Lamblin P. Exploring strategies for training deep neural networks. J Mach Learn Res. 2009;10:1–40.
  • 89. Olshausen BA, Field DJ. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Res. 1997;37(23):3311–25.
  • 90. Hinton G, Salakhutdinov R. Discovering binary codes for documents by learning deep generative models. Topics Cogn Sci. 2011;3(1):74–91.
  • 91. Salakhutdinov R, Hinton G. Semantic hashing. Int J Approx Reason. 2009;50(7):969–78.
  • 92. Le QV. Building high-level features using large scale unsupervised learning. In: 2013 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2013. p. 8595–8.
  • 93. Vincent P, Larochelle H, Bengio Y, Manzagol PA. Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning. New York: ACM; 2008. p. 1096–103.
  • 94. Ranzato M, Szummer M. Semi-supervised learning of compact document representations with deep networks. In: Proceedings of the 25th international conference on machine learning. New York: ACM; 2008. p. 792–9.
  • 95. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781; 2013.
  • 96. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781; 2013.
  • 97. Kim Y. Convolutional neural networks for sentence classification. In: Proceedings of EMNLP; 2014.
  • 98. Le QV, Zou WY, Yeung SY, Ng AY. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In: 2011 IEEE conference on computer vision and pattern recognition (CVPR); 2011. p. 3361–8.
  • 99. Gers F, Schmidhuber J, Cummins F. Learning to forget: continual prediction with LSTM. Neural Comput. 2000;12:2451–71.
  • 100. Graves A. Supervised sequence labelling with recurrent neural networks. Heidelberg: Springer; 2012.
  • 101. Bastien F, Lamblin P, Pascanu R, Bengio Y. Theano: new features and speed improvements. In: NIPS workshop on deep learning and unsupervised feature learning; 2012.
  • 102. Bergstra J, Breuleux O, Bastien F, Bengio Y. Theano: a CPU and GPU math expression compiler. In: Python for scientific computing conference; 2012.
  • 103. Bergstra J, Breuleux O, Bastien F, Lamblin P. Thumbs up? Sentiment classification using machine learning techniques, Python in science, vol. 9; 2015.
  • 104. Abadi M, Agarwal A, Barham P, Brevdo E, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. Preliminary white paper; 2015.
  • 105. Blitzer J, Dredze M, Pereira F, et al. Biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. ACL. 2007;7:440–7.
  • 106. Wang S, Manning CD. Baselines and bigrams: simple, good sentiment and topic classification. In: Proceedings of the 50th annual meeting of the Association for Computational Linguistics: short papers, vol. 2; 2012. p. 90–4.
  • 107. Tan C-M, Wang Y-F, Lee C-D. The use of bigrams to enhance text categorization. Inform Process Manag. 2002;38(4):529–46.