EASP – European Association of Social Psychology

EASP Grant Report

29.05.2024, in grant report

By Eva Moreno-Bella and Marcos Dono


The goal of this report is to present the progress of the project ‘Choosing the Blue Pill: Examining the Psychosocial Determinants of Support for a Technocracy of Artificial Intelligence’, which received a Seedcorn Grant from EASP last year. The work discussed here has been carried out by Marcos Dono from the University of Santiago de Compostela and Eva Moreno-Bella from the National University of Distance Education (UNED, Spain).

Theoretical background

Artificial Intelligence (AI) is one of the latest technological revolutions to impact human life at all levels (Makridakis, 2017). The capability of AI to learn and to build complex models from vast amounts of data allows it to efficiently execute many tasks that humans cannot tackle (Sjödin et al., 2021). Even if beliefs about technology coming alive can be dismissed, AI may still become a threat to modern participative democracies. Its cold neutrality, its ability to manage immense amounts of data, and its superiority over humans in building complex predictive models have even led some scholars to defend a technocracy of AI (Sætra, 2020). Outside academia, people may also be enticed by this idea, especially as trust in political leaders and institutions wanes (de Zúñiga et al., 2019) and conspiracy theories about evil elites grow in scope and popularity (Douglas et al., 2019). Whereas a traditional technocracy would entrust all facets of government to a committee of experts, a technocracy of AI would transfer those governing responsibilities to an AI in charge of governmental decisions.

The main goal of this project is to study the psychosocial determinants of increased support for a technocracy of AI. We argue that one of the main drivers of this tendency should be anomie, defined as the perception that society’s values and leadership are deteriorating (Teymoori et al., 2017). Anomie combines two key motives that have been linked to support for extreme political choices: catastrophist forecasting of society’s future (Duckitt & Fisher, 2003) and a perceived attack on one’s moral values (Skitka, 2002). Indeed, anomie has already been shown to explain people’s preference for strong, authoritarian leaders (Heydari et al., 2012; Sprong et al., 2019) and has been associated with political extremism (Ionescu et al., 2021). We build upon this evidence to argue that anomic perceptions can enhance preferences for an AI-ruled technocracy as a similarly extreme political choice: under anomie, people may approve of being ruled by an exceptionally efficient and amoral machine. Unlike politicians, AI is incorruptible, and its prowess and moral neutrality would offer a path to a better-organized society while keeping morality out of governance and confining it to the private sphere.

We also posit that the impact of anomie on preferences for a technocracy of AI may be moderated by the Need for Cognitive Closure (NCC; Kruglanski & Fishman, 2009) and by social dominance orientation (SDO). NCC is defined as a need for clear-cut answers to ambiguous situations, and it has been deemed an antecedent of the radicalization process, as people who are intolerant of ambiguity find refuge in the black-and-white worldviews of extreme groups (Hogg, 2014; Webber et al., 2018). Similarly, we argue that an AI-ruled technocracy should satisfy the need of people high in NCC for clear norms and instructions to follow, especially in an anomic society. Thus, NCC should exacerbate the effect of anomie on preferences for an AI-ruled technocracy. SDO, in turn, is defined as a general ideological attitude expressed in a preference for strict social hierarchies (Duckitt & Sibley, 2009; Pratto et al., 1994). Research shows that people high in SDO prefer more autocratic systems. We argue that, beyond its autocratic character, an AI technocracy may appeal to people high in SDO because of its beyond-human proficiency, which should cast AI as an unquestionable authority. Adding this variable provides a fuller scope to our analysis, as we complement a personal need with an ideological attitude as moderators.

Research outcomes

Through the EASP Seedcorn Grant, we were able to collect data on Prolific Academic. With these resources guaranteed, we opted to submit our work as a registered report to the journal Political Psychology, in response to its call for NextGen Ideas. We are very pleased to report that the article recently received full acceptance and will soon be available through the journal’s Early View system. Our research examined whether anomie (the perception that society is disintegrating at both the social and the political level), moderated by Need for Cognitive Closure (NCC) and Social Dominance Orientation (SDO), predicts support for a technocracy of AI (TecAI). We conducted three studies, two of them experimental, with adult participants from Spain (Ntotal = 754). The findings confirm anomie’s power to predict support for TecAI, albeit without establishing causality, as the anomie manipulation failed to alter support for TecAI in Experiments 2 and 3. However, an exploratory analysis showed a small effect of the ‘breakdown of leadership’ subdimension on support for TecAI in a directional test. Moreover, NCC and SDO did not significantly moderate the anomie-TecAI link. Ideology also emerged as a predictor, with conservatives showing more support, an intriguing avenue for future research. This study marks an initial empirical exploration of public perceptions of AI in politics, jumpstarting investigation into this critical question.
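For readers unfamiliar with moderation designs, the hypothesized model can be sketched as a regression with interaction terms. The snippet below is an illustration only, not the authors’ actual analysis script: variable names are hypothetical and the data are simulated to mirror the reported pattern (a main effect of anomie, no significant interactions).

```python
# Illustrative moderation model: anomie predicting support for an AI
# technocracy (TecAI), with NCC and SDO as moderators. All variable
# names and the simulated data are assumptions for demonstration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 754  # total sample size reported across the three studies

# Simulated standardized predictors (illustration only)
df = pd.DataFrame({
    "anomie": rng.normal(size=n),
    "ncc": rng.normal(size=n),
    "sdo": rng.normal(size=n),
})
# Outcome built with a main effect of anomie only, mirroring the
# reported pattern (no significant moderation by NCC or SDO)
df["tecai"] = 0.3 * df["anomie"] + rng.normal(size=n)

# Main effects plus anomie x NCC and anomie x SDO interaction terms
model = smf.ols("tecai ~ anomie * ncc + anomie * sdo", data=df).fit()
print(model.summary())
```

In this specification, a significant `anomie:ncc` or `anomie:sdo` coefficient would indicate moderation; with the simulated data above, only the `anomie` main effect should emerge reliably.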


These studies constitute, to the best of our knowledge, the first examination of public support for a technocracy of AI. The introduction of AI into multiple dimensions of life, paired with its unprecedented capacity to replicate human thought, opens up an exciting realm of research for social psychologists. From a more applied angle, we hope that this project can spark research on the perceptions and implications of using AI in governing tasks. For our part, we intend to continue working on this exciting topic. We would like to thank EASP, as the award of this grant has allowed us to embark on the investigation of a topic that would otherwise have remained an unrealized curiosity.


References

de Zúñiga, H. G., Ardèvol-Abreu, A., Diehl, T., Patiño, M. G., & Liu, J. H. (2019). Trust in institutional actors across 22 countries. Examining political, science, and media trust around the world. Revista Latina de Comunicación Social, 74.
Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40, 3–35.
Duckitt, J., & Fisher, K. (2003). The impact of social threat on worldview and ideological attitudes. Political Psychology, 24(1).
Duckitt, J., & Sibley, C. G. (2009). A dual process motivational model of ideological attitudes and system justification. Social and Psychological Bases of Ideology and System Justification, 292–313.
Heydari, A., Teymoori, A., Nasiri, H., & Fardzadeh, H. E. (2012). Relationship between socioeconomic status, anomie, and authoritarianism. E-BANGI, 7(1).
Hogg, M. A. (2014). From uncertainty to extremism: Social categorization and identity processes. Current Directions in Psychological Science, 23(5).
Ionescu, O., Tavani, J. L., & Collange, J. (2021). Perceived societal anomie, collective memory, and support for collective action: Perceiving that current French society is anomic influences present support for collective action through the reconstructed national past. Asian Journal of Social Psychology, 24(3).
Kruglanski, A. W., & Fishman, S. (2009). The need for cognitive closure. Handbook of Individual Differences in Social Behavior, 343–353.
Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60.
Pratto, F., Sidanius, J., Stallworth, L. M., & Malle, B. F. (1994). Social dominance orientation: A personality variable predicting social and political attitudes. Journal of Personality and Social Psychology, 67(4).
Sætra, H. S. (2020). A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technology in Society, 62, 101283.
Sjödin, D., Parida, V., Palmié, M., & Wincent, J. (2021). How AI capabilities enable business model innovation: Scaling AI through co-evolutionary processes and feedback loops. Journal of Business Research, 134, 574–587.
Skitka, L. J. (2002). Do the means always justify the ends, or do the ends sometimes justify the means? A value protection model of justice reasoning. Personality and Social Psychology Bulletin, 28(5).
Sprong, S., Jetten, J., Wang, Z., Peters, K., Mols, F., Verkuyten, M., Bastian, B., Ariyanto, A., Autin, F., Ayub, N., Badea, C., Besta, T., Butera, F., Costa-Lopes, R., Cui, L., Fantini, C., Finchilescu, G., Gaertner, L., Gollwitzer, M., … Wohl, M. J. A. (2019). “Our country needs a strong leader right now”: Economic inequality enhances the wish for a strong leader. Psychological Science, 30(11).
Teymoori, A., Bastian, B., & Jetten, J. (2017). Towards a psychological analysis of anomie. Political Psychology, 38(6).
Webber, D., Babush, M., Schori-Eyal, N., Vazeou-Nieuwenhuis, A., Hettiarachchi, M., Bélanger, J. J., Moyano, M., Trujillo, H. M., Gunaratna, R., Kruglanski, A. W., & Gelfand, M. J. (2018). The road to extremism: Field and experimental evidence that significance loss-induced need for closure fosters radicalization. Journal of Personality and Social Psychology, 114(2).