The Use of Algorithmic Support: Are Individuals Averse Towards Algorithms or Rather Overconfident Regarding Their Skills?

Last registered on October 31, 2022

Pre-Trial

Trial Information

General Information

Title
The Use of Algorithmic Support: Are Individuals Averse Towards Algorithms or Rather Overconfident Regarding Their Skills?
RCT ID
AEARCTR-0009649
Initial registration date
October 31, 2022

First published
October 31, 2022, 4:45 PM EDT

Locations

Region

Primary Investigator
Janina Hofmann
Affiliation
University of Passau

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-10-31
End date
2023-04-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Algorithms are often more efficient than humans in strategic decision-making, yet humans show an aversion to using algorithmic support. In non-algorithmic contexts, overconfident individuals are less likely to delegate decisions. The aim of this study is to analyze whether individuals are averse to algorithms or rather overconfident about their own capabilities. To disentangle the effects of overconfidence and of control over the algorithm on the use of algorithmic support, I conduct an online experiment. I analyze whether information about one's current performance reduces overconfidence and increases the use of algorithmic support. Moreover, I examine whether the use of, and the willingness-to-pay for, algorithmic support differ significantly when individuals can either delegate their decisions to an algorithm or instead receive algorithmic advice.
External Link(s)

Registration Citation

Citation
Hofmann, Janina. 2022. "The Use of Algorithmic Support: Are Individuals Averse Towards Algorithms or Rather Overconfident Regarding Their Skills?." AEA RCT Registry. October 31. https://doi.org/10.1257/rct.9649-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-10-31
Intervention End Date
2023-01-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcomes comprise (i) the subjects’ (incentivized) monetary valuation of algorithmic support, i.e., their willingness-to-pay for advice from an algorithm or, respectively, for delegation to an algorithm, and (ii) the use rate of algorithmic support, i.e., the share of estimations for which advice is taken vs. the share of estimations delegated to the algorithm.
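As an illustration only, the sketch below shows how the use rate could be computed from the collected data; the data layout and column names ("subject", "treatment", "used_support") are hypothetical assumptions and not part of the registered design.

```python
# Minimal sketch (Python/pandas): use rate of algorithmic support per treatment.
# All column names are illustrative assumptions.
import pandas as pd

# One row per estimation; "used_support" is True if the advice was taken
# (advice treatments) or the estimation was delegated (delegation treatments).
df = pd.DataFrame({
    "subject":      [1, 1, 2, 2],
    "treatment":    ["advice", "advice", "delegation", "delegation"],
    "used_support": [True, False, True, True],
})

use_rate = df.groupby("treatment")["used_support"].mean()
print(use_rate)
```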
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
To disentangle the effects of overconfidence and of control over the algorithm on algorithm aversion, I conduct an online experiment in which participants perform an incentivized forecasting task. The experiment uses a 2x2 full-factorial between-subjects design. First, I systematically vary whether or not individuals receive feedback about their relative performance in the forecasting task. Providing participants with feedback about their actual performance reduces overconfidence and therefore allows me to study the effect of overconfidence on the use of algorithmic support. Second, I vary whether participants can receive advice from an algorithm or can instead delegate the decision to an algorithm. This allows me to investigate how the level of overconfidence, at different levels of control over the algorithm, alters the use of algorithmic support. In all treatments, I measure the willingness-to-pay for algorithmic support, which allows me to analyze whether the willingness-to-pay for advice and for delegation differ significantly, contingent on the performance feedback.

The experimental procedure is as follows: At the beginning of the experiment, participants receive the instructions and have to answer control questions. After answering the control questions correctly, participants execute the forecasting task, in which they estimate the normalized GDP per capita of different countries. They obtain some information about each country (e.g., life expectancy) but not the name of the country. Depending on the treatment, participants receive feedback about their relative performance. After completing the estimations, participants answer a questionnaire. At the end of the experiment, they receive a payment consisting of a participation fee and the earnings from one randomly selected estimation.
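To make the treatment structure concrete, the following is a minimal sketch of subject-level computer randomization into the four cells of the 2x2 design; the treatment labels and the assignment routine are illustrative assumptions, not the actual experimental software.

```python
import random
from itertools import product

# Two between-subjects factors from the design:
# (i) feedback about relative performance: yes / no
# (ii) form of algorithmic support: advice / delegation
FEEDBACK = ("no_feedback", "feedback")
SUPPORT = ("advice", "delegation")

# The four cells of the 2x2 full-factorial design.
TREATMENTS = list(product(FEEDBACK, SUPPORT))

rng = random.Random()  # seed explicitly for a reproducible assignment

def assign_treatment():
    """Independently assign one subject to one of the four cells."""
    return rng.choice(TREATMENTS)

# Example: 200 hypothetical subjects, randomized at the subject level.
assignments = [assign_treatment() for _ in range(200)]
```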
Experimental Design Details
Randomization Method
Randomization by computer online
Randomization Unit
Randomization at the subject level: Randomized assignment of subjects to treatments.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Observations at the subject level are assumed to be independent, since subjects decide without interacting with other participants in the experimental session. The number of clusters therefore equals the number of participants.
Sample size: planned number of observations
A minimum of 50 observations is planned for each treatment. A power analysis will be conducted based on effect sizes estimated after 50 observations have been collected. Based on this power analysis, more observations may be acquired in order to achieve a power level of at least 80%.
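As a rough illustration of the planned sample-size calculation, the sketch below assumes a two-sided two-sample t-test comparison of two treatment arms (statsmodels); the effect size is a placeholder to be replaced by the estimate obtained from the first 50 observations.

```python
# Minimal power-analysis sketch. The effect size is a placeholder, not a result.
from statsmodels.stats.power import TTestIndPower

estimated_d = 0.5      # placeholder: Cohen's d estimated from the first observations
alpha = 0.05           # significance level
target_power = 0.80    # required power level

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=estimated_d,
                                 alpha=alpha,
                                 power=target_power,
                                 ratio=1.0,
                                 alternative="two-sided")
print(f"Required observations per treatment arm: {n_per_arm:.0f}")
```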
Sample size (or number of clusters) by treatment arms
See above: a minimum of 50 observations per treatment arm.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University Research Ethics Committee of the University of Passau
IRB Approval Date
2022-08-10
IRB Approval Number
n/a

Post-Trial

Post Trial Information

Study Withdrawal


Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials