
EASP Seedcorn Grant Report by Alessandro Gabbiadini¹, Federica Durante¹, Cristina Baldissarri¹ & Luca Andrighetto²

14.12.2020, by Tina Keil in grant report

¹University of Milano-Bicocca, Italy;
²University of Genova, Italy

Understanding Artificial Intelligence: What are the perceived risks for humans?

Full artificial intelligence could spell the end of the human race
Stephen Hawking

The hype around Machine Learning (ML) and Artificial Intelligence (AI) is now ubiquitous, and technological advances are rapid, providing computer systems with the ability to perform activities that are typically considered to require human intelligence (LeCun, Bengio, & Hinton, 2015).

In the past, machines competed with humans mainly in raw physical abilities, while humans retained an edge in cognition. AI technology is now beginning to outperform humans in abilities such as analysing, communicating and, importantly, understanding human emotions, and in replicating human cognitive skills (LeCun et al., 2015). The fear that intelligent software systems could soon replace human thinking has recently been raised both by philosophers (e.g., Harari, 2018) and in public opinion (Ipsos MORI, 2017). AI is now a reality rather than science fiction, and public concerns about this technology relate mainly to its impact on society (Harari, 2018). AI algorithms are already present in many of the technologies we use every day (e.g., social media, smartwatches, healthcare, online ads).

However, our knowledge of how people perceive AI technology is currently limited. Although AI brings remarkable benefits, noteworthy questions about its individual, social and ethical facets remain unanswered. In this regard, a recent report (Ipsos MORI, 2017) analysed the views, motivations and doubts that people hold about these technologies. Participants in that study reported an ambivalent attitude toward AI: it was generally described as an important and useful emerging technology, and respondents were more willing to accept AI when it has social value, for example when it is perceived as helpful for society and as benefiting individuals and groups of people (Ipsos MORI, 2017). Nevertheless, concerns about social risk also emerged, such as depersonalization, risk of personal and societal harm, restriction of choice, fear of being replaced, and the use of personal data and privacy (Ipsos MORI, 2017).

Bringing these aspects together, we hypothesise that social value and social risk are two key factors that can help identify whether, and which, AI applications are perceived as a threat to individuals and societies. In this regard, socio-psychological research (Tajfel & Turner, 1986) has shown that people are motivated to perceive their own group (e.g., humankind) as positively distinct from others (e.g., computer agents). The literature also distinguishes two types of threat elicited by an outgroup: realistic threat and identity threat. Perceived realistic threat concerns the actual wellbeing of a group (e.g., security, safety, health, employment; Riek, Mania, & Gaertner, 2006; Stephan, Ybarra, & Morrison, 2009), while perceived identity threat concerns the group's uniqueness (e.g., values, traditions, ideology, morals; Zárate, Garcia, Garza, & Hitlan, 2004). From this perspective, the rise of AI technology may be perceived as threatening people's jobs, safety and wellbeing, thereby posing a realistic threat to humans. Further, AI may also elicit identity threat by blurring the boundary between what is perceived as human and what is perceived as an artificial intelligence agent.

To test this hypothesis, a first correlational study (Study 1; N = 280) was conducted. Statistical analyses showed a significant positive association between the perceived social risk of AI applications and both perceived realistic threat and human identity threat. We also found a significant negative association between the perceived social value of AI applications and both kinds of threat.
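
As an illustration only, associations of this kind can be expressed as zero-order correlations between scale means. The sketch below assumes hypothetical column names (social_risk, social_value, realistic_threat, identity_threat) and a hypothetical data file; it is not the actual study material or analysis script.

```python
# Illustrative sketch only: zero-order correlations of the kind reported for Study 1.
# File and column names are hypothetical, not the actual study data.
import pandas as pd
from scipy import stats

df = pd.read_csv("study1_data.csv")  # hypothetical file: one row per participant

for predictor in ["social_risk", "social_value"]:
    for outcome in ["realistic_threat", "identity_threat"]:
        r, p = stats.pearsonr(df[predictor], df[outcome])
        print(f"{predictor} vs. {outcome}: r = {r:.2f}, p = {p:.3f}")
```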

Building on this evidence, a pre-test considered 95 existing software applications based on AI technologies. To identify the most salient ones, participants (N = 131) reported whether they knew of each application and to what extent they thought it was based on artificial intelligence. The twenty best-known applications were carried forward into the following study (Study 2). In this study (N = 399; 58.1% female, 39.8% male, 1.8% other, 0.3% preferred not to answer; Mage = 33.51, SDage = 11.79), six ad-hoc items assessed how socially valuable and how risky participants judged the selected AI applications to be. For exploratory purposes, a further five applications were added to those that emerged from the pre-test. Cluster analysis techniques were then used to group the applications according to their similarity in terms of social risk and social value. This two-dimensional representation allowed us to identify the applications perceived as neither particularly risky nor particularly socially valuable (i.e., those placed at the centre of the two-dimensional space). In particular, two applications emerged as being perceived as moderately risky and moderately valuable.
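
For readers interested in the analytic step, the following is a minimal sketch of how applications could be clustered on the two rating dimensions and how those closest to the centre of the standardized risk/value space could be flagged. The file name, column names and number of clusters are assumptions for illustration, not the procedure actually used in the study.

```python
# Illustrative sketch only: clustering applications on perceived social risk and
# social value, then flagging those nearest the centre of the standardized space.
# File name, column names and number of clusters are assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

apps = pd.read_csv("study2_app_ratings.csv")  # hypothetical: one row per application
X = StandardScaler().fit_transform(apps[["social_risk", "social_value"]])

# Group applications by their position in the two-dimensional risk/value space.
apps["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

# Applications closest to the origin are rated as neither particularly risky
# nor particularly valuable (i.e., "average" on both dimensions).
apps["dist_from_centre"] = (X ** 2).sum(axis=1) ** 0.5
print(apps.sort_values("dist_from_centre").head(2))
```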

Starting from these results, in Study 3 (N = 409; 52.3% male, 47.4% female; Mage = 27.43, SDage = 9.33) the descriptions of the two AI apps selected in Study 2 were manipulated through different scenarios. The dependent variables were perceived realistic threat and human identity threat (Yogeeswaran, Złotowski, Livingstone, Bartneck, Sumioka, & Ishiguro, 2016). Results suggest that the level of social value used to describe the apps worked differently depending on which app was presented in the manipulation.
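
One way such an app-by-framing pattern could be tested is with a two-way between-subjects ANOVA on one of the threat measures, where a significant interaction captures the "worked differently depending on the app" result. The sketch below is hypothetical: the report does not specify the exact model, and the variable names and data file are assumptions.

```python
# Illustrative sketch only: a 2 (app) x 2 (social-value framing) between-subjects
# ANOVA on perceived realistic threat. Variable names and file are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("study3_data.csv")  # hypothetical file

# A significant app x framing interaction would indicate that the social-value
# description worked differently depending on which application was presented.
model = ols("realistic_threat ~ C(app) * C(value_framing)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```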

Finally, a fourth study explored the effects of AI on participants in a real-life situation. Previous literature has highlighted an emerging trend of adopting AI technologies in the business environment: recruiters already use AI algorithms to optimize talent acquisition by taking over time-consuming, repetitive tasks such as sourcing and screening applicants, with the aim of improving the quality of the hiring process and neutralizing human biases. In Study 4 (N = 236; 50.8% male, 47.5% female, 1.7% other; Mage = 27.58, SDage = 8.12) we therefore focused on the psychological consequences for candidates of being evaluated by AI agents. Different scenarios manipulated whether a fictitious job interview was carried out by a human recruiter or by an AI agent. Participants' self-objectification, perceived control during the job interview, belief in free will and perceptions of threat were then assessed. Preliminary results highlighted a relationship between the experimental condition and self-objectification (Gray, Gray, & Wegner, 2007), belief in free will (Rakos, Laurene, Skala, & Slane, 2008) and perceptions of threat via perceived control.
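
The phrase "via perceived control" points to a mediation-type analysis. The sketch below shows one common way such a model could be estimated with bootstrapped indirect effects, under assumed variable names and coding; it is not necessarily the analysis run in the study.

```python
# Illustrative sketch only: condition -> perceived control -> perceived threat,
# estimated as a simple mediation model with bootstrapped indirect effects.
# Column names and coding (0 = human recruiter, 1 = AI agent) are assumptions.
import pandas as pd
import pingouin as pg

df = pd.read_csv("study4_data.csv")  # hypothetical file

results = pg.mediation_analysis(
    data=df,
    x="condition_ai",        # interviewer condition (0/1)
    m="perceived_control",   # perceived control during the interview
    y="threat",              # perceived threat
    n_boot=5000,
    seed=42,
)
print(results)
```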

Considering that AI development is a hot topic in the mass media and in policymaking, it is crucial to conduct empirical research that helps social psychologists understand the processes and cognitive biases behind people's perceptions of AI, so as to facilitate the development of socially acceptable technologies.

References

  • Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619-619.
  • Harari, Y. N. (2018). 21 Lessons for the 21st Century. New York: Spiegel & Grau.
  • Ipsos MORI (2017). Public views of machine learning. Retrieved from https://royalsociety.org/~/media/policy/projects/machine-learning/publications/public-views-of-machine-learning-ipsos-mori.pdf, accessed 20 June 2019.
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444.
  • Rakos, R. F., Laurene, K. R., Skala, S., & Slane, S. (2008). Belief in free will: Measurement and conceptualization innovations. Behavior and Social Issues, 17(1), 20-40.
  • Riek, B. M., Mania, E. W., & Gaertner, S. L. (2006). Intergroup threat and outgroup attitudes: A meta-analytic review. Personality and Social Psychology Review, 10, 336-353.
  • Stephan, W. G., Ybarra, O., & Morrison, K. R. (2009). Intergroup threat theory. In T. D. Nelson (Ed.), Handbook of prejudice, stereotyping, and discrimination (pp. 43-59). New York, NY: Psychology Press.
  • Tajfel, H., & Turner, J. C. (1986). The Social Identity Theory of Intergroup Behavior. Psychology of Intergroup Relations, 5, 7-24.
  • Zárate, M. A., Garcia, B., Garza, A. A., & Hitlan, R. T. (2004). Cultural threat and perceived realistic group conflict as dual predictors of prejudice. Journal of Experimental Social Psychology, 40(1), 99-105.
  • Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100, 48-54.