
EASP – European Association of Social Psychology

Meta-analysis: Seeking studies/datasets on the value of algorithmic advice

12.12.2023, by Shaul Shalvi

Deadline: 10 January, 2024

We are conducting a meta-analysis assessing how people value advice and input given by algorithms. By advice, we mean a recommendation favoring or discouraging particular option(s). If you have a study (published or unpublished) that fulfills the inclusion criteria below, please inform us via an email to:
algorithm-meta-ASE@uva.nl
We are interested in any study or dataset containing experimental data on algorithmic advice. If you have any questions or doubts, please don’t hesitate to write to us.

Looking forward to reading your work!
Alejandro Hirmas, Margarita Leib, Nils Köbis, Shaul Shalvi
Center for Research in Experimental Economics and political Decision making (CREED)
University of Amsterdam

Inclusion criteria:
1. An experiment where participants are confronted with the (potential) advice from an algorithm/AI;
2. Tasks that are either hypothetical or have real consequences;
3. The advice can be either ostensibly given by an algorithm or provided by a real algorithm;
4. The experiment needs to include one or more of the following outcomes:
a. Choice between Algorithm/AI or a human: Participants choose whether or not they prefer advice from an algorithm (the choice set can be algorithmic vs. human advice or algorithmic vs. no advice)
b. Attitudes towards the algorithm/AI: Participants rate the algorithmic advice on various scales. For instance, they rate:
- Confidence/Trust in advice from an Algorithm
- Likelihood of following advice from Algorithm
- Accuracy of Algorithmic advice
c. Actions before and after the Algorithm/AI advice:
- Participants make initial decisions
- Then they receive advice coming from an AI/algorithm
- Have an opportunity to change their decision
- We aim to estimate the Weight of Advice (Harvey and Fischer, 1997) of the AI/algorithm
5. We will focus on studies that (a) compare these measures between Humans and Algorithmic advice, (b) compare algorithmic advice to a theoretical benchmark, and (c) compare algorithmic advice to no advice.
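The Weight of Advice mentioned in criterion 4c is standardly computed as the fraction of the distance from the initial estimate toward the advisor's estimate that the final estimate covers, i.e. (final − initial) / (advice − initial). A minimal sketch in Python (the function name and example values are illustrative, not from any particular study):

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of Advice (WoA) in the sense of Harvey and Fischer (1997).

    WoA = 0 means the advice was ignored; WoA = 1 means it was fully
    adopted; intermediate values indicate partial adjustment.
    """
    if advice == initial:
        # WoA is undefined when the advice coincides with the
        # initial estimate; conventions for handling this vary.
        return float("nan")
    return (final - initial) / (advice - initial)

# Example: initial estimate 50, algorithmic advice 80, final estimate 65
# -> the participant moved halfway toward the advice (WoA = 0.5).
print(weight_of_advice(50, 80, 65))
```

Note that papers differ in how they handle undefined or out-of-range WoA values (e.g. truncating to [0, 1] or excluding trials), which is one reason we request the underlying data rather than only summary statistics.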

If your study fits our criteria, please send us your paper and/or data so that we can include it.

Selected studies fitting the above criteria:
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809-825.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650.
Reich, T., Kaju, A., & Maglio, S. J. (2023). How to overcome algorithm aversion: Learning from mistakes. Journal of Consumer Psychology, 33(2), 285-302.
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403-414.