Modeling Protagonist Emotions for Emotion Aware Storytelling

EMNLP 2020, pp. 5277–5294 (2020)


Abstract

Emotions and their evolution play a central role in creating a captivating story. In this paper, we present the first study on modeling the emotional trajectory of the protagonist in neural storytelling. We design methods that generate stories that adhere to given story titles and desired emotion arcs for the protagonist. Our models include an Emotion Supervision model and two Emotion-Reinforced models based on reinforcement learning.

Code: https://github.com/fabrahman/Emo-Aware-Storytelling

Data: ROCStories corpus (Mostafazadeh et al., 2016)
Introduction
  • Stories are an integral part of human culture. They allow people to express emotions, share knowledge, and shape their perspective of the world (McKee, 2003).
  • Automatic storytelling systems based on symbolic planning showed that addressing character emotions during plot construction resulted in more diverse and interesting stories (Theune et al., 2004; Pérez y Pérez, 2007; Méndez et al., 2016).
  • These studies, however, were rule-based and limited to small-scale data.
  • Despite the broad recognition of its importance, neural story generation methods have not explored modeling the emotional trajectory of characters.
Highlights
  • Stories are an integral part of human culture
  • We present the first study to take into account the emotional trajectory of the protagonist in neural story generation
  • We present three models based on GPT-2 (Radford et al., 2019) that incorporate the protagonist’s emotion arc as a controllable attribute while preserving content quality: an Emotion Supervision (EmoSup) model and two Emotion-Reinforced (EmoRL) models based on reinforcement learning (a minimal conditioning sketch follows this list)
  • We propose the emotion-aware storytelling task for modeling the emotion arc of the protagonist
  • This paper is a step towards future research directions on planning emotional trajectory while generating stories
  • Our automatic and manual evaluations demonstrate that these models are significantly better at generating stories that follow the desired emotion arcs compared to baseline methods, without sacrificing story quality
  • We focused only on the protagonist, but future works can explore modeling motivations, goals, achievements, and emotional trajectory of all characters
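To make the emotion-arc conditioning concrete, here is a minimal sketch of one plausible setup: the desired arc is prepended to the story title as control tokens, and the model is fine-tuned to learn p(story | arc, title). The token names, the <sep> delimiter, and the prompt format are illustrative assumptions, not the paper's exact encoding.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical control tokens for a three-segment arc (e.g., sadness -> neutral -> joy).
arc_tokens = ["<sadness>", "<neutral>", "<joy>"]
tokenizer.add_tokens(arc_tokens + ["<sep>"])
model.resize_token_embeddings(len(tokenizer))

# After fine-tuning on (arc, title, story) examples in this format, generation
# can be steered simply by changing the arc tokens in the prompt.
prompt = " ".join(arc_tokens) + " <sep> The Lost Dog <sep>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, do_sample=True, max_length=120,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```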
Methods
  • 4.1 Dataset and Annotation Pipeline

    The authors use the ROCStories corpus (Mostafazadeh et al, 2016) for the experiments.
  • In step 2, the authors identify the character with the most mentions as the protagonist (e.g., ‘Iris’, who is mentioned in 4 sentences).
  • In step 3, in each sentence of the story, the authors identify the protagonist’s role as Agent or Other using the sentence’s dependency parse.
  • In step 4, the authors obtain the emotional reaction of the protagonist in each sentence using COMET.
  • Depending on the protagonist’s role in the sentence, the authors use the appropriate relation to get their emotional reaction, g, and COMET’s confidence in the prediction, φg.
  • In sentences without an explicit mention of the protagonist, their role is assigned as Other, and the authors use oReact, since the event in that sentence will affect all characters of the story, including the protagonist (a simplified sketch of this pipeline follows this list)
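The sketch below illustrates steps 2–4 under stated assumptions: spaCy stands in for mention detection and dependency parsing, protagonist matching is simplified to an exact token match (the paper tracks characters with predefined terms, see Table 5), and comet_predict is a hypothetical wrapper around COMET (Bosselut et al., 2019); using xReact for the Agent role is our reading of "the appropriate relation".

```python
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def comet_predict(sentence: str, relation: str):
    """Hypothetical COMET wrapper: returns (emotion_text, confidence)."""
    raise NotImplementedError("plug in a real COMET model here")

def annotate_story(sentences):
    docs = [nlp(s) for s in sentences]
    # Step 2: the character with the most mentions is the protagonist
    # (assumes the story has at least one named character).
    mentions = Counter(ent.text for doc in docs
                       for ent in doc.ents if ent.label_ == "PERSON")
    protagonist = mentions.most_common(1)[0][0]

    annotations = []
    for doc in docs:
        # Step 3: role is Agent if the protagonist appears as a syntactic
        # subject in the sentence's dependency parse, otherwise Other.
        role = "Agent" if any(tok.text == protagonist and
                              tok.dep_ in ("nsubj", "nsubjpass")
                              for tok in doc) else "Other"
        # Step 4: query COMET with the role-appropriate relation; sentences
        # without an explicit mention fall through to Other / oReact.
        relation = "xReact" if role == "Agent" else "oReact"
        emotion, confidence = comet_predict(doc.text, relation)
        annotations.append((role, emotion, confidence))
    return protagonist, annotations
```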
Results
  • Results and Discussion

    The authors first describe the experiments on choosing the base storytelling model (§5.1) followed by evaluation of the proposed models (§5.2).

    5.1 Base Storytelling Model Results

    As noted before, the models build upon a base storytelling model (GPT-2).
  • Using the various evaluation measures described earlier, the experiments showed that fine-tuned GPT-2 outperforms all baselines on all measures, in line with the observations made in Guan et al. (2020).
  • This demonstrates that it can serve as a good base storytelling model upon which the proposed models are built (an illustrative diversity metric is sketched below).
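As one illustration of the diversity/repetition measures used in this kind of evaluation, here is a minimal distinct-n implementation in the spirit of Li et al. (2016); the paper's exact metric definitions may differ in detail.

```python
def distinct_n(stories, n=2):
    """Ratio of unique n-grams to total n-grams across generated stories."""
    total, unique = 0, set()
    for story in stories:
        tokens = story.split()
        ngrams = list(zip(*(tokens[i:] for i in range(n))))
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Higher distinct-2 indicates less repetitive generations.
print(distinct_n(["the cat sat on the mat", "the dog ran to the mat"], n=2))
```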
Conclusion
  • The authors proposed the emotion-aware storytelling task for modeling the emotion arc of the protagonist.
  • To this end, the authors designed two emotion-consistency rewards using a commonsense transformer and an emotion classifier; a minimal sketch of this reinforcement-learning idea follows this list.
  • The authors presented two case studies, which show interesting use cases of the model.
  • Such models can have educational applications by enabling children to explore creative writing at an early age and addressing the literary learning needs of learners with disabilities.
  • The authors' approach is general and provides a blueprint for similar works going forward and can be used outside emotion-aware storytelling, e.g., for generating other emotional content or text with other attributes or properties
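A minimal sketch of the reinforcement-learning idea behind the Emotion-Reinforced models, assuming a plain REINFORCE-style update in which an emotion-consistency reward scales the log-likelihood of a sampled story. The actual training is more involved (e.g., baselines as in self-critical sequence training and mixing with the MLE objective), and emotion_reward is a hypothetical stand-in for the classifier- and COMET-based rewards.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def emotion_reward(story: str, desired_arc) -> float:
    """Hypothetical reward: e.g., classifier confidence that the story follows the arc."""
    raise NotImplementedError

def reinforce_step(prompt_ids, desired_arc):
    # Sample a story from the current policy (the fine-tuned LM).
    sampled = model.generate(prompt_ids, do_sample=True, max_length=120,
                             pad_token_id=tokenizer.eos_token_id)
    story = tokenizer.decode(sampled[0], skip_special_tokens=True)
    reward = emotion_reward(story, desired_arc)

    # REINFORCE without a baseline, for brevity: weighting the sample's mean
    # negative log-likelihood by the reward makes high-reward stories more likely.
    loss = reward * model(sampled, labels=sampled).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```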
Tables
  • Table 1: Automatic evaluation of content quality (top) and emotion faithfulness (bottom). For content quality, RL-CLF and RL-EM outperform all baselines for BLEU and diversity/repetition scores, respectively (p < 0.05). For emotion faithfulness, RL-CLF outperforms all baselines (p < 0.05). * indicates absence of emotion arc as input.
  • Table 2: Manual evaluation results. For each criterion, we report the average improvements as well as the absolute scores for the two models, separated by a comma. RL-CLF is preferred over other methods (p < 0.05).
  • Table 3: For a given title, our model can generate different stories for different emotion arcs. Story segments with corresponding emotions are highlighted.
  • Table 4: Given a story, our model can generate another story with a similar emotion arc.
  • Table 5: Predefined terms used for tracking the protagonist.
  • Table 6: Emotion classification results on the tweets dataset (upper block) and the automatically annotated story corpus (lower block).
  • Table 7: Base storytelling model: automatic evaluation. Scores marked with † indicate models that have access to extra ground-truth information besides the title (keywords and event tuples).
Funding
  • We choose GPT-2 (medium) (Radford et al., 2019) because our initial experiments demonstrated that it generally outperforms other state-of-the-art story generation models (§5.1)
Study subjects and analysis
tweets: 6,857
Code at: https://github.com/fabrahman/Emo-Aware-Storytelling. First, we train this classifier on a human-annotated dataset for emotion identification in tweets (Mohammad et al., 2018), consisting of 6,857 tweets with binary labels for 11 emotions, among which we focus only on our basic emotions. On this dataset, the classifier achieves better or comparable performance to state-of-the-art results (Kant et al., 2019) (see Appendix B.1 for detailed results). An illustrative fine-tuning setup for such a classifier is sketched below.
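A sketch of one plausible fine-tuning setup for this kind of multi-label classifier, assuming a BERT encoder with a per-emotion sigmoid head trained with binary cross-entropy; the encoder choice, hyperparameters, and label indices are assumptions, not the paper's reported configuration.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

NUM_EMOTIONS = 11  # SemEval-2018 Task 1 provides binary labels for 11 emotions
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_EMOTIONS,
    problem_type="multi_label_classification",  # selects BCEWithLogits loss
)

batch = tokenizer(["I finally got the job!"], return_tensors="pt", padding=True)
labels = torch.zeros(1, NUM_EMOTIONS)
labels[0, 4] = 1.0  # mark one active emotion; the index is illustrative
loss = model(**batch, labels=labels).loss
loss.backward()  # an optimizer step would follow in a full training loop
```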

Reference
  • Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations.
  • David Bamman, Brendan O’Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361, Sofia, Bulgaria. Association for Computational Linguistics.
  • David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370–379, Baltimore, Maryland. Association for Computational Linguistics.
  • Christos Baziotis, Athanasiou Nikolaos, Alexandra Chronopoulou, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, Shrikanth S. Narayanan, and Alexandros Potamianos. 2018. NTUA-SLP at semeval-2018 task 1: Predicting affective content in tweets with deep attentive RNNs and transfer learning. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 245–255.
  • Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779.
  • Paweł Budzianowski and Ivan Vulić. 2019. Hello, it’s GPT-2 - how can I help you? Towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15–22.
  • Snigdha Chaturvedi, Haoruo Peng, and Dan Roth. 2017. Story comprehension for predicting what happens next. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1603–1614.
  • Jiaao Chen, Jianshu Chen, and Zhou Yu. 2019. Incorporating structured commonsense knowledge in story completion. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 6244–6251.
  • Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2250–2260.
  • Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In 8th International Conference on Learning Representations.
  • Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186.
  • Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169–200.
  • Angela Fan, David Grangier, and Michael Auli. 2018a. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45–54.
  • Angela Fan, Mike Lewis, and Yann Dauphin. 2018b. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898.
  • Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2650– 2660.
  • Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6.
  • Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, pages 1243–1252.
  • Morton Gernsbacher, Harold Goldsmith, and Rachel Robertson. 1992. Do readers mentally represent characters’ emotional states? Cognition & Emotion, 6(2):89–111.
  • Pablo Gervás, Belén Díaz-Agudo, Federico Peinado, and Raquel Hervás. 2005. Story plot generation based on CBR. In Applications and Innovations in Intelligent Systems XII, 28(1):33–46.
  • Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, System Demonstrations, pages 43–48.
  • Simeng Gu, Wei Wang, Fushun Wang, and Jason H Huang. 2016. Neuromodulator and emotion biomarker for stress induced mental disorders. Neural plasticity.
  • Jian Guan, Fei Huang, Minlie Huang, Zhihao Zhao, and Xiaoyan Zhu. 2020. A knowledge-enhanced pretraining model for commonsense story generation. Transactions of the Association for Computational Linguistics, pages 93–108.
  • P.C. Hogan. 2011. What Literature Teaches Us about Emotion. Studies in Emotion and Social Interaction. Cambridge University Press.
  • Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wanrong Zhu, Devendra Singh Sachan, and Eric P. Xing. 2019. Texar: A modularized, versatile, and extensible toolkit for text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 159–164.
  • Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, pages 1587–1596.
  • Chenyang Huang, Osmar Zaïane, Amine Trabelsi, and Nouha Dziri. 2018. Automatic dialogue generation with expressed emotions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Short Papers), pages 49–54.
  • Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1534–1544.
  • Rachael E Jack, Oliver GB Garrod, and Philippe G Schyns. 2014. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Current biology, 24(2):187–192.
  • Parag Jain, Priyanka Agrawal, Abhijit Mishra, Mohak Sukhwani, Anirban Laha, and Karthik Sankaranarayanan. 2017. Story generation from sequence of independent short descriptions. Workshop on Machine Learning for Creativity.
  • Neel Kant, Raul Puri, Nikolai Yakovenko, and Bryan Catanzaro. 2019. Practical text classification with large pre-trained language models. CoRR.
  • Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1328–1338.
  • Evgeny Kim and Roman Klinger. 2019. Frowning Frodo, wincing Leia, and a seriously great friendship: Learning to classify emotional relationships of fictional characters. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 647–653.
  • Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations.
  • Catherine Kobus, Josep Crego, and Jean Senellart. 2017. Domain control for neural machine translation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 372–378.
  • Hidetsugu Komeda and Takashi Kusumi. 2006. The effect of a protagonist’s emotional shift on situation model construction. Memory & Cognition, 34:1548–1556.
  • Vinodh Krishnan and Jacob Eisenstein. 2015. “You’re Mr. Lebowski, I’m the Dude”: Inducing address term formality in signed social networks. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1616–1626.
  • Michael Lebowitz. 1987. Planning stories. In Proceedings of the Cognitive Science Society, pages 234–242.
  • V. I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10(8):707.
  • Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119.
  • 2020. A character-centric neural model for automated story generation. In Thirty-Fourth AAAI Conference on Artificial Intelligence.
  • Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. Learning to control the fine-grained sentiment for story ending generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6020–6026.
  • Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl. 2018. Event representations for automated story generation with deep neural nets. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 868–875.
  • Robert McKee. 2003. Storytelling that moves people: A conversation with screenwriting coach Robert McKee. Harvard Business Review, 81:51–5, 136.
  • Hardik Meisheri and Lipika Dey. 2018. TCS research at SemEval-2018 task 1: Learning robust representations using multi-attention architecture. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 291–299.
  • Gonzalo Méndez, Pablo Gervás, and Carlos León. 2016. On the use of character affinities for story plot generation. In Knowledge, Information and Creativity Support Systems, pages 211–225.
  • Saif Mohammad. 2018. Word affect intensities. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, pages 173– 183.
  • Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1–17.
  • Daniel G. Morrow. 1985. Prominent characters and events organize narrative understanding. Journal of Memory and Language, 24(3):304–319.
  • Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849.
  • Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley New York.
  • Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
  • Brian Parkinson and Antony Manstead. 1993. Making sense of emotion in stories and social life. Cognition and Emotion, 7(3-4):295–323.
  • Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations.
  • Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pages 43–49.
  • Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543.
  • Rafael Pérez y Pérez and Mike Sharples. 2001. Mexica: A computer model of a cognitive account of creative writing. Journal of Experimental and Theoretical Artificial Intelligence, 13(2):119–139.
  • Robert Plutchik. 1982. A psychoevolutionary theory of emotions. Social Science Information, 21(4-5):529– 553.
  • Jullie Porteous and Mike Cavazza. 2009. Controlling narrative generation with planning trajectories: the role of constraints. In ICIDS, pages 234–245.
  • Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, second edition. Pearson Prentice Hall.
  • Rafael Pérez y Pérez. 2007. Employing emotions to drive plot generation in a computer-based storyteller. Cognitive Systems Research, 8:89–109.
  • Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5043–5053.
  • Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8.
  • Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 1179–1195.
  • Mark O. Riedl and R. Michael Young. 2010. Narrative planning: Balancing plot and character. Journal of Artificial Intelligence Research, 39:217–268.
  • Melissa Roemmele. 2016. Writing stories with help from recurrent neural networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, page 4311–4312.
  • Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: an atlas of machine commonsense for ifthen reasoning. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 3027–3035.
  • Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, and Xiaoyan Zhu. 2019. Long and diverse text generation with planning-based hierarchical variational model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 3257– 3268.
  • Zhenqiao Song, Xiaoqing Zheng, Lu Liu, Mu Xu, and Xuanjing Huang. 2019. Generating responses with a specific emotion in dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3685–3695.
  • Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, and Mark O. Riedl. 2019. Controllable neural story plot generation via reward shaping. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,, pages 5982–5988.
  • Mariët Theune, Sander Rensen, Rieks op den Akker, Dirk Heylen, and Anton Nijholt. 2004. Emotional characters for automatic plot creation. In Technologies for Interactive Digital Storytelling and Entertainment, pages 95–100.
  • Hardik Vala, David Jurgens, Andrew Piper, and Derek Ruths. 2015. Mr. bennet, his coachman, and the archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 769–774.
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems, pages 5998–6008.
  • Kurt Vonnegut. 1981. Palm Sunday. RosettaBooks, LLC, New York.
  • Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering output style and topic in neural response generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2140–2150.
  • Noah Weber, Leena Shekhar, Heeyoung Kwon, Niranjan Balasubramanian, and Nathanael Chambers. 2020. Generating narrative text in a switching dynamical system. CoRR, abs/2004.03762.
  • Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256.
  • Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. CoRR, abs/1901.08149.
  • Jingjing Xu, Xuancheng Ren, Yi Zhang, Qi Zeng, Xiaoyan Cai, and Xu Sun. 2018. A skeleton-based model for promoting coherence among sentences in narrative story generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4306–4315.
  • Lili Yao, Nanyun Peng, Ralph M. Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Planand-write: Towards better automatic storytelling. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, pages 7378–7385.
  • Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 730–739.
  • Xianda Zhou and William Yang Wang. 2018. MojiTalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), pages 1128–1137.
Author
Faeze Brahman