An Embedding Model for Estimating Legislative Preferences from the Frequency and Sentiment of Tweets

EMNLP 2020, pp. 627–641.

Abstract

Legislator preferences are typically represented as measures of general ideology estimated from roll call votes on legislation, potentially masking important nuances in legislators’ political attitudes. In this paper we introduce a method of measuring more specific legislator attitudes using an alternative expression of preferences: tweet…

Introduction
  • Legislator preferences are typically estimated as general measures of ideology using roll-call votes on legislation.
  • Such measures fail to capture aspects of preferences not reflected in legislation, such as attitudes towards a sitting president.
  • Understanding legislators’ attitudes toward the president enables greater understanding and measurement of their strategic communications.
  • These attitudes matter for understanding the president’s ability to pass his legislative agenda.
Highlights
  • Legislator preferences are typically estimated as general measures of ideology using roll-call votes on legislation.
  • We develop an embedding model that jointly predicts the frequency and sentiment of legislator tweets about Donald Trump.
  • To demonstrate the efficacy of our model for legislator tweeting behavior with respect to President Donald Trump, we first show that constructing Trump embeddings from the language of his own tweets provides an informational signal for legislators to react to.
  • We present the final negative log-likelihood of the count model for both the Poisson and Negative Binomial models described in Section 4.1, and for both the case in which the text of Donald Trump’s tweets is used to construct his daily embedding representation and the case in which the Trump embeddings are free parameters of the model (see the count-likelihood sketch after this list).
  • We modeled legislator tweeting behavior towards Donald Trump, predicting the frequency and sentiment of their tweets.
  • Our application suggests that ideal points estimated from roll call votes can miss this critical aspect of political preferences for members of Congress.
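
To make the two count likelihoods concrete, here is a minimal sketch (not the authors' implementation) of scoring daily tweet counts under a Poisson and a Negative Binomial model. In the paper the rates would come from legislator/Trump embedding interactions; here they are placeholder values.

```python
# Sketch: the two count likelihoods compared in the paper.
import numpy as np
from scipy import stats

def poisson_nll(counts, rates):
    """Negative log-likelihood of daily tweet counts under a Poisson model."""
    return -stats.poisson.logpmf(counts, mu=rates).sum()

def negbin_nll(counts, rates, dispersion):
    """NLL under a Negative Binomial with mean `rates`; SciPy uses an
    (n, p) parameterization, so convert from (mean, dispersion)."""
    p = dispersion / (dispersion + rates)
    return -stats.nbinom.logpmf(counts, dispersion, p).sum()

# Hypothetical example: three days of tweet counts and fitted rates.
counts = np.array([0, 2, 5])
rates = np.array([0.5, 1.8, 4.2])
print(poisson_nll(counts, rates))      # count NLL, Poisson
print(negbin_nll(counts, rates, 2.0))  # count NLL, Negative Binomial
```
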
Results
  • To demonstrate the efficacy of the model for legislator tweeting behavior with respect to President Donald Trump, the authors first show that constructing Trump embeddings from the language of his own tweets provides an informational signal for legislators to react to.
  • Since the model seeks to capture two aspects of legislator tweeting behavior, the authors evaluate it using two metrics: the negative log-likelihood of the count model and the mean absolute error (MAE) of the sentiment model.
  • The authors present results for three settings of γ, which allows them to analyze the two components of the model separately before analyzing the joint model (see the weighting sketch after this list).
  • For all results presented here, the authors set K = 2 and use a linear text map.
  • Code can be found on the authors’ GitHub at github.com/gspell/CongressionalTweets.
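
The role of γ can be illustrated with a short sketch. The convex-combination form of the joint loss is an assumption; the paper states only that γ = 1 optimizes the count loss alone, γ = 0 the sentiment loss alone, and γ = 0.03 trains the joint model.

```python
# Sketch: the gamma trade-off and the two evaluation metrics named above.
def joint_loss(count_nll, sentiment_loss, gamma):
    # gamma = 1: count loss only; gamma = 0: sentiment loss only.
    return gamma * count_nll + (1.0 - gamma) * sentiment_loss

def mean_absolute_error(y_true, y_pred):
    """MAE of predicted tweet sentiment against labels."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(joint_loss(120.0, 0.15, gamma=0.03))
print(mean_absolute_error([1.0, 0.0, -1.0], [0.8, 0.1, -0.6]))  # ~0.233
```
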
Conclusion
  • The authors modeled legislator tweeting behavior towards Donald Trump, predicting the frequency and sentiment of their tweets.
  • Whereas legislative voting might recover ideological similarities and differences with the president, it is not well suited to measuring attitudes toward the president that are orthogonal to policy preferences, such as criticism of his rhetoric and tone.
  • To address this shortcoming and obtain representations of legislators’ attitudes toward Trump, the authors propose a model that assigns a vector to each legislator based on the content of their tweets about Trump.
  • From this model the authors obtain representations of legislators that capture their attitudes toward the president.
Tables
  • Table 1: Split of training, validation, and test sets
  • Table 2: Predictive evaluation metrics on the test set for our model with γ = 1. Note that because only the count loss is optimized, MAE does not reflect model performance here. The best model result is bolded
  • Table 3: Predictive evaluation metrics on the test set for our model with γ = 0. The best model result with respect to MAE is bolded. Comparison of the sentiment model with/without the legislator bias term and with/without Trump tweet text
  • Table 4: Predictive evaluation metrics on the test set for our model with γ = 0.03. The best model result is bolded
  • Table 5: Breakdown of labeled tweet sentiment classes by party
Funding
  • Bob Corker (R-TN) famously referred to the Trump White House as an “adult day-care center,” John McCain (R-AZ) said Trump “is often poorly informed,” and Jeff Flake (R-AZ) called him a “danger to a democracy,” yet all of these Republican senators cast more than 80% of their legislative votes in line with the president (Silver and Bycoffe, 2019).
  • Our model’s predictive performance is robust to a variety of settings, achieving a sentiment mean absolute error of 0.127 and 89.3% accuracy, demonstrating its capability to predict legislator tweeting behavior.
Study Subjects and Analysis
tweets: 29696
“potus.” Of these, we further restricted the tweets to those posted between November 2016 and February 2018, when the data were collected. This culling process yielded 29,696 tweets from 451 legislators. The model also incorporates tweets from Trump, which we obtained from the website www.trumptwitterarchive.com.
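
A sketch of this culling step, assuming a hypothetical DataFrame with `text` and `date` columns; the keyword list is partly guessed (the excerpt above shows only “potus”) and this is not the authors' pipeline.

```python
# Sketch: filter legislator tweets to Trump-related ones in the study window.
import pandas as pd

def cull_trump_tweets(df: pd.DataFrame) -> pd.DataFrame:
    keywords = ["trump", "potus"]  # assumed; the excerpt shows "potus"
    has_keyword = df["text"].str.lower().str.contains("|".join(keywords))
    in_window = pd.to_datetime(df["date"]).between(
        pd.Timestamp("2016-11-01"), pd.Timestamp("2018-02-28")
    )
    return df[has_keyword & in_window]
```
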

Trump-related legislator tweets: 29696
The variation in tweets across time highlights one of the key features of the model: the incorporation of not only the sentiment of tweets about Trump but also the number of daily tweets. Of the 29,696 Trump-related legislator tweets, a subset of 4,661 tweets were randomly selected to be manually labeled with respect to their sentiment about Trump, using a three-point “positive,” “negative,” “neutral” scale based on the text of the tweet.
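
A small sketch of encoding the three-point sentiment labels as numbers so a sentiment MAE can be computed; the specific {-1, 0, +1} mapping is an assumption, not taken from the paper.

```python
# Sketch: map three-point sentiment labels to numeric targets.
LABEL_TO_SCORE = {"negative": -1.0, "neutral": 0.0, "positive": 1.0}

def encode_labels(labels):
    return [LABEL_TO_SCORE[label] for label in labels]

print(encode_labels(["positive", "neutral", "negative"]))  # [1.0, 0.0, -1.0]
```
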

tweets: 128
For all results presented here, we set K = 2 and use a linear text map. The number of epochs for which the model was trained varies with the model setting, but in all cases each training batch comprises 128 tweets. The model was implemented in TensorFlow (Abadi et al., 2015) and trained on a single NVIDIA Titan X GPU.
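
A minimal TensorFlow 2 sketch in the spirit of this setup (K = 2 embeddings, a linear text map, Adam, batches of 128). It is illustrative only: the variable names, the softplus link for the Poisson rate, and the tanh sentiment head are assumptions, not the authors' code.

```python
# Sketch: legislator embeddings + linear text map for Trump's daily tweets.
import tensorflow as tf

K, VOCAB, N_LEG = 2, 5000, 451  # embedding dim, vocab size, legislators

leg_emb = tf.Variable(tf.random.normal([N_LEG, K], stddev=0.1))
text_map = tf.Variable(tf.random.normal([VOCAB, K], stddev=0.01))  # linear text map
leg_bias = tf.Variable(tf.zeros([N_LEG]))
opt = tf.keras.optimizers.Adam(1e-3)

def forward(leg_ids, trump_bow):
    """leg_ids: [B] int ids; trump_bow: [B, VOCAB] bag-of-words of Trump tweets."""
    z_leg = tf.gather(leg_emb, leg_ids)              # [B, K]
    z_trump = tf.matmul(trump_bow, text_map)         # [B, K] Trump daily embedding
    score = tf.reduce_sum(z_leg * z_trump, axis=-1)  # [B] interaction
    rate = tf.nn.softplus(score)                     # positive Poisson rate
    sentiment = tf.tanh(score + tf.gather(leg_bias, leg_ids))
    return rate, sentiment

def train_step(leg_ids, trump_bow, counts, sent_labels, gamma=0.03):
    # Each batch would hold 128 tweets, per the training setup above.
    with tf.GradientTape() as tape:
        rate, sent = forward(leg_ids, trump_bow)
        # Poisson NLL up to the constant log(counts!) term.
        count_nll = tf.reduce_mean(rate - counts * tf.math.log(rate + 1e-8))
        sent_loss = tf.reduce_mean(tf.abs(sent - sent_labels))
        loss = gamma * count_nll + (1.0 - gamma) * sent_loss
    variables = [leg_emb, text_map, leg_bias]
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```
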

tweets: 100
In the figure we also see that embeddings are not simply an artifact of the number of tweets about Trump authored by the legislator, nor of whether the legislator is a member of the House or Senate. Legislators with more extreme values of Twitter sentiment relative to other members of their party can be found in both chambers of Congress and range from having authored fewer than 100 tweets about Trump to over 500. Another initial validating characteristic of the embeddings is the clustering of prominent Republican senators who have been publicly critical of Trump.

tweets: 5
For visualization, legislators with fewer than 5 tweets are omitted from Figure 2.
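
A sketch of the kind of 2-D embedding plot described here: legislator embeddings colored by party, with legislators having fewer than 5 Trump-related tweets omitted, as in Figure 2. The data below is synthetic, standing in for learned embeddings.

```python
# Sketch: scatter plot of K = 2 legislator embeddings, colored by party.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
emb = rng.normal(size=(451, 2))           # stand-in for learned embeddings
party = rng.choice(["D", "R"], size=451)  # stand-in party labels
n_tweets = rng.integers(0, 600, size=451)

keep = n_tweets >= 5                      # omit sparse tweeters, as in Figure 2
colors = np.where(party[keep] == "D", "blue", "red")
plt.scatter(emb[keep, 0], emb[keep, 1], c=colors, s=10)
plt.xlabel("embedding dim 1")
plt.ylabel("embedding dim 2")
plt.show()
```
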

References
  • Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.
  • Pablo Barberá. 2015. Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Political Analysis.
  • Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research.
  • Adam Bonica. 2018. Inferring roll-call scores from campaign contributions using supervised machine learning. American Journal of Political Science.
  • Joshua Clinton, Simon Jackman, and Douglas Rivers. 2004. The statistical analysis of roll call data. The American Political Science Review.
  • Skyler J. Cranmer and Bruce A. Desmarais. 2017. What can we learn from predictive modeling? Political Analysis, 25(2):145–166.
  • Sean Gerrish and David M. Blei. 2011. Predicting legislative roll calls from text. International Conference on Machine Learning 28.
  • Sean Gerrish and David M. Blei. 2012. How they vote: Issue-adjusted models of legislative behavior. Advances in Neural Information Processing Systems 25.
  • Yupeng Gu, Yizhou Sun, Ning Jiang, Bingyu Wang, and Ting Chen. 2014. Topic-factorized ideal point estimation model for legislative voting network. KDD 2014.
  • P. A. Gutiérrez, M. Pérez-Ortiz, J. Sánchez-Monedero, F. Fernández-Navarro, and C. Hervás-Martínez. 2016. Ordinal regression methods: Survey and experimental study. IEEE Transactions on Knowledge and Data Engineering, 28(1):127–146.
  • Kosuke Imai, James Lo, and Jonathan Olmsted. 2016. Fast estimation of ideal points with massive data. American Political Science Review, 110(4):631–656.
  • Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
  • Yoon Kim. 2014. Convolutional neural networks for sentence classification. Conference on Empirical Methods in Natural Language Processing.
  • Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations.
  • Anastassia Kornilova, Daniel Argyle, and Vlad Eidelman. 2018. Party matters: Enhancing legislative embeddings with author attributes for vote prediction. Proceedings of the Conference of the Association for Computational Linguistics.
  • Peter Kraft, Hirsh Jain, and Alexander M. Rush. 2016. An embedding model for predicting roll-call votes. Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Nicole Lewis, Amber Phillips, Kevin Schaul, and Leslie Shapiro. 2017. … Trump. Available online at https://www.
  • Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems.
  • Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. 2015. Tea Party in the House: A hierarchical ideal point topic model and its applications to Republican legislators in the 112th Congress. Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
  • Keith T. Poole and Howard Rosenthal. 1997. Congress: A Political-Economic History of Roll Call Voting. Oxford University Press.
  • Daniel Preoţiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. 2017. Beyond binary labels: Political ideology prediction of Twitter users. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 729–740.
  • Ludovic Rheault and Christopher Cochrane. 2020. Word embeddings for the analysis of ideological placement in parliamentary corpora. Political Analysis, 28(1):112–133.
  • Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
  • Nate Silver and Aaron Bycoffe. 2019. Tracking Congress in the Age of Trump. Available online at https://projects.fivethirtyeight.com/congress-trump-score/.
  • Yanchuan Sim, Bryan Routledge, and Noah A. Smith. 2016. Friends with motives: Using text to infer influence on SCOTUS. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1724–1733, Austin, Texas. Association for Computational Linguistics.
  • Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. Proceedings of the 28th International Conference on Machine Learning.
  • Chris Tausanovitch and Christopher Warshaw. 2018. Does the ideological proximity between candidates and voters affect voting in U.S. House elections? Political Behavior, 40(1):223–245.
  • Eric Wang, Dehong Liu, Jorge Silva, David Dunson, and Lawrence Carin. 2010. Joint analysis of time-evolving binary matrices and associated documents. Advances in Neural Information Processing Systems 23.
  • Yu Wang, Richard Niemi, and Jiebo Luo. 2016. Tactics and tallies: A study of the 2016 U.S. presidential campaign using Twitter ‘likes’. KDD.
  • Zhengming Xing, Sunshine Hillygus, and Lawrence Carin. 2017. Evaluating U.S. electoral representation with a joint statistical model of congressional roll-calls, legislative text, and voter registration data. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages 1205–1214.
  • The 10 Republicans with the most tweets about Trump were, in order: Paul Ryan, Bradley Byrne, Sean Duffy, Paul Gosar, Bill Flores, Orrin Hatch, Mitch McConnell, Roger Wicker, Steve Scalise, and Kevin McCarthy. The 10 Democrats with the most tweets about Trump were: Donald Beyer, Betty McCollum, Yvette Clarke, Jerrold Nadler, Edward Markey, James McGovern, Nancy Pelosi, Tom Udall, Robert Casey, and Joseph Crowley. In both parties, members of leadership are among the most frequent authors of tweets about the president. Perhaps unsurprisingly, the days with both the most positive and the most negative tweets about Trump were those on which Trump addressed Congress: his joint address on February 28, 2017 (597 positive, 288 negative) and the 2018 State of the Union (512 positive and 340 negative).
Authors
Gregory Spell
Brian Guay
Sunshine Hillygus