[1] Theo Araujo. 2018. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent
and company perceptions. Computers in Human Behavior 85 (2018), 183–189.
https://doi.org/10.1016/j.chb.2018.03.051
[2] Theo Araujo. 2020. Conversational Agent Research Toolkit: An alternative
for creating and managing chatbots for experimental research. Computational
Communication Research 2, 1 (2020), 35–51.
https://doi.org/10.5117/CCR2020.1.002.ARAU
[3] Neeraj Arora, Xavier Dreze, Anindya Ghose, James D. Hess, Raghuram Iyengar,
Bing Jing, Yogesh Joshi, V. Kumar, Nicholas Lurie, Scott Neslin, S. Sajeesh, Meng
Su, Niladri Syam, Jacquelyn Thomas, and Z. J. Zhang. 2008. Putting one-to-one
marketing to work: Personalization, customization, and choice. Marketing Letters
19, 3 (2008), 305. https://doi.org/10.1007/s11002-008-9056-z
[4] Christoph Bartneck, Dana Kulić, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social
Robotics 1, 1 (2009), 71–81. https://doi.org/10.1007/s12369-008-0001-3
[5] Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards
Conversational Recommender Systems. In Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining (San Francisco,
California, USA) (KDD ’16). Association for Computing Machinery, New York,
NY, USA, 815–824. https://doi.org/10.1145/2939672.2939746
[6] P. Cofta. 2007. Confidence, trust and identity. BT Technology Journal 25, 2 (2007),
173–178. https://doi.org/10.1007/s10550-007-0042-4
[7] Emily S. Cross, Richard Ramsey, Roman Liepelt, Wolfgang Prinz, and
Antonia F. de C. Hamilton. 2016. The shaping of social perception by stimulus
and knowledge cues to human animacy. Philosophical Transactions of the
Royal Society B: Biological Sciences 371, 1686 (2016), 20150075.
[8] Tamara Dinev, Heng Xu, Jeff H. Smith, and Paul Hart. 2013. Information privacy
and correlates: an empirical attempt to bridge and distinguish privacy-related
concepts. European Journal of Information Systems 22, 3 (2013), 295–316.
https://doi.org/10.1057/ejis.2012.23
[9] Grahame R. Dowling and Richard Staelin. 1994. A Model of Perceived Risk and
Intended Risk-Handling Activity. Journal of Consumer Research 21, 1 (1994),
119–134. https://doi.org/10.1086/209386
[10] Nicholas Epley and Adam Waytz. 2010. Mind Perception. John Wiley and Sons
Ltd. https://doi.org/10.1002/9780470561119.socpsy001014
[11] Mauricio S. Featherman and Paul A. Pavlou. 2003. Predicting e-services adoption:
a perceived risk facets perspective. International Journal of Human-Computer
Studies 59, 4 (2003), 451–474. https://doi.org/10.1016/S1071-5819(03)00111-3
[12] Ed Gerck. 2002. Trust as Qualified Reliance on Information, Part I. The COOK
Report on Internet (ISSN 1071-6327) Vol. X (2002), 19–24.
https://doi.org/10.13140/RG.2.2.22646.04165
[13] Guy Laban and Theo Araujo. 2020. Don’t Take it Personally: Resistance to
Individually Targeted Recommendations with Anthropomorphic Recommender
Systems. https://doi.org/10.31234/osf.io/w4mkv
[14] Guy Laban and Theo Araujo. 2020. Working Together with Conversational
Agents: The Relationship of Perceived Cooperation with Service Performance
Evaluations. In Chatbot Research and Design, Asbjørn Følstad, Theo Araujo,
Symeon Papadopoulos, Effie Lai-Chong Law, Ole-Christoffer Granmo, Ewa Luger,
and Petter Bae Brandtzaeg (Eds.). Springer International Publishing, Cham, 215–
228. https://doi.org/10.1007/978-3-030-39540-7_15
[15] Guy Laban, Jean-Noël George, Val Morrison, and Emily S. Cross. 2020. Tell
Me More! Assessing Interactions with Social Robots From Speech.
https://doi.org/10.31234/osf.io/jkht2
[16] Guy Laban, Val Morrison, and Emily S. Cross. 2020. Let’s Talk About It! Subjective
and Objective Disclosures to Social Robots. In Companion of the 2020 ACM/IEEE
International Conference on Human-Robot Interaction. Association for Computing
Machinery, Cambridge, UK, 328–330. https://doi.org/10.1145/3371382.3378252
[17] Lingyun Qiu and Izak Benbasat. 2009. Evaluating Anthropomorphic Product
Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. Journal of Management Information Systems 25, 4 (2009),
145–182. https://doi.org/10.2753/MIS0742-1222250405
[18] Dan Russell. 1982. The Causal Dimension Scale: A measure of how individuals
perceive causes. Journal of Personality and Social Psychology 42, 6 (1982), 1137–
1145. https://doi.org/10.1037/0022-3514.42.6.1137
[19] S. Shyam Sundar. 2020. Rise of Machine Agency: A Framework for Studying
the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated
Communication 25, 1 (2020), 74–88. https://doi.org/10.1093/jcmc/zmz026
[20] S. Shyam Sundar and Sampada S. Marathe. 2010. Personalization versus Customization:
The Importance of Agency, Privacy, and Power Usage. Human Communication
Research 36, 3 (2010), 298–322. https://doi.org/10.1111/j.1468-2958.2010.01377.x
[21] Cong Wang, Yifeng Zheng, Jinghua Jiang, and Kui Ren. 2018. Toward Privacy-
Preserving Personalized Recommendation Services. Engineering 4, 1 (2018),
21–28. https://doi.org/10.1016/j.eng.2018.02.005
[22] Bo Zhang and S. Shyam Sundar. 2019. Proactive vs. reactive personalization:
Can customization of privacy enhance user experience? International Journal of
Human-Computer Studies 128 (2019), 86–99.
https://doi.org/10.1016/j.ijhcs.2019.03.002