Let's Talk About Race: Identity, Chatbots, and AI

Ari Schlesinger

CHI, 2018.

Keywords:
improved ways; artificial intelligence; natural language; machine learning; natural language processing (7+ more)
Weibo:
By drawing together technosocial interactions involved in race-talk and hate speech relative to databases, natural language processing, and machine learning, we strive to support the development of generative technosocial solutions, like a multiplicity of chatbots that upend the all-knowing agent.

Abstract:

Why is it so hard for chatbots to talk about race? This work explores how the biased contents of databases, the syntactic focus of natural language processing, and the opaque nature of deep learning algorithms cause chatbots difficulty in handling race-talk. In each of these areas, the tensions between race and chatbots create new opportunities…

Introduction
  • THE BLACKLIST: HOW DO CHATBOTS CURRENTLY HANDLE RACE-TALK?
  • In 2017, the blacklist reigns supreme as a technical solution for handling undesirable speech, such as racist language, in online chat.
  • The blacklist was, and continues to be, seen as the default fail-safe for mitigating racist talk.
  • Why would the blacklist be seen as the universal solution for how chatbots handle race-talk?
  • In its basic form, a blacklist or wordfilter employs a list of undesirable strings to filter out; a minimal sketch of the mechanism follows this list.
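To make the mechanism concrete, here is a minimal sketch of a blacklist/wordfilter in Python, loosely in the spirit of Kazemi's wordfilter npm package. The BLOCKED entries and function names are illustrative placeholders, not any production chatbot's actual configuration.

```python
# A minimal sketch of the basic blacklist/wordfilter mechanism described
# above. BLOCKED entries and names are illustrative placeholders only.

import re

BLOCKED = ["badword1", "badword2"]  # placeholders; real lists are curated

def is_blocked(message: str) -> bool:
    """Return True if the message contains any blacklisted substring."""
    text = message.lower()
    return any(term in text for term in BLOCKED)

def filter_message(message: str, replacement: str = "[filtered]") -> str:
    """Replace blacklisted substrings instead of dropping the message."""
    pattern = re.compile("|".join(map(re.escape, BLOCKED)), re.IGNORECASE)
    return pattern.sub(replacement, message)

if __name__ == "__main__":
    print(is_blocked("this contains badword1"))      # True
    print(filter_message("this contains BADWORD2"))  # this contains [filtered]
```

Plain substring matching cuts both ways: it misses misspellings and coded variants, and it can suppress legitimate race-talk wholesale, the kind of tension the paper examines.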
Highlights
  • THE BLACKLIST: HOW DO CHATBOTS CURRENTLY HANDLE RACE-TALK? In 2017, the blacklist reigns supreme as a technical solution for handling undesirable speech like racist language in online chat
  • With Tay and the blacklist as our foundation, we examine the networked relationships of three technical artificial intelligence chatbot domains: databases, natural language processing (NLP), and machine learning (ML)
  • We examine the data that chatbot algorithms are trained on, exposing ways that race and racism become embedded in datasets (a toy illustration follows this list)
  • HOW DO WE EMBRACE THE TROUBLE? In writing this paper, we set two essential questions to guide this work: 1) How can chatbots handle race in new and improved ways? and 2) Why is race-talk so difficult for chatbots? These questions have taken us down many paths to understand how race-talk is interwoven with the technical configurations supporting chatbots
  • An important contribution of this research is helping HCI practitioners understand how specific technosocial configurations are fundamentally entangled with their work
  • By drawing together technosocial interactions involved in race-talk and hate speech relative to databases, natural language processing, and machine learning, we strive to support the development of generative technosocial solutions, like a multiplicity of chatbots that upend the all-knowing agent
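The dataset-bias point can be made concrete with a toy, WEAT-style association probe over word embeddings. Everything below is fabricated for illustration: the 3-d vectors are hand-made stand-ins for embeddings that would, in reality, be learned from large corpora.

```python
# A toy, self-contained sketch of how bias can be surfaced in learned word
# embeddings, WEAT-style. All vectors are fabricated 3-d stand-ins; real
# association tests run over embeddings trained on large corpora, where
# such associations are absorbed from the data itself.

import numpy as np

emb = {  # hypothetical embedding vectors, invented for this example
    "name_a": np.array([0.9, 0.1, 0.0]),
    "name_b": np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word: str, attr_pos: str, attr_neg: str) -> float:
    """Differential association of `word` with two attribute words."""
    return cosine(emb[word], emb[attr_pos]) - cosine(emb[word], emb[attr_neg])

# With real embeddings, systematically different scores for names that are
# demographically marked would be evidence of bias learned from the corpus.
print(association("name_a", "pleasant", "unpleasant"))  # positive here
print(association("name_b", "pleasant", "unpleasant"))  # negative here
```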
Conclusion
  • HOW DO WE EMBRACE THE TROUBLE? In writing this paper, the authors set two essential questions to guide this work: 1) How can chatbots handle race in new and improved ways? and 2) Why is race-talk so difficult for chatbots? These questions have taken them down many paths to understand how race-talk is interwoven with technical configurations supporting chatbots.

    An important contribution of this research is helping HCI practitioners understand how specific technosocial configurations are fundamentally entangled with their work.
  • By drawing together technosocial interactions involved in race-talk and hate speech relative to databases, NLP, and ML, the authors strive to support the development of generative technosocial solutions, like a multiplicity of chatbots that upend the all-knowing agent (one illustrative sketch follows this list).
  • Clarifying a context, like race, and its manifestations can help guide these efforts.
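Purely as an illustration of the "multiplicity of chatbots" idea, the hypothetical sketch below routes one message to several situated agents and returns their differing responses, rather than a single authoritative answer. Every name and canned reply is invented; the paper proposes the design direction, not this code.

```python
# A purely illustrative reading of "a multiplicity of chatbots": collect
# responses from multiple situated agents instead of one all-knowing oracle.
# All agent names and replies are hypothetical.

from typing import Callable, Dict

Agent = Callable[[str], str]

def cautious_agent(msg: str) -> str:
    # Declines to speak authoritatively and invites elaboration.
    return "I'm not confident I can speak to that well. Can you say more?"

def referral_agent(msg: str) -> str:
    # Points toward human perspectives instead of answering itself.
    return "You may want perspectives from people with lived experience."

def multiplicity_respond(msg: str, agents: Dict[str, Agent]) -> Dict[str, str]:
    """Gather (possibly conflicting) responses from several agents."""
    return {name: agent(msg) for name, agent in agents.items()}

if __name__ == "__main__":
    replies = multiplicity_respond(
        "let's talk about race",
        {"cautious": cautious_agent, "referral": referral_agent},
    )
    for name, reply in replies.items():
        print(f"{name}: {reply}")
```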
Funding
  • Thanks to our friends, reviewers, and colleagues who supported this work with their invaluable time, feedback, and challenges.
  • This research was partially supported by the NSF under Grant No. DGE-1148903.
Best Paper
Best Paper of CHI, 2018