Predictability of humans and AI: a lab study

Last registered on June 27, 2022

Pre-Trial

Trial Information

General Information

Title
Predictability of humans and AI: a lab study
RCT ID
AEARCTR-0009667
Initial registration date
June 27, 2022

First published
June 27, 2022, 5:12 PM EDT

Last updated
June 27, 2022, 5:13 PM EDT

Locations

Region

Primary Investigator

Affiliation
University of Zurich

Other Primary Investigator(s)

PI Affiliation
PI Affiliation

Additional Trial Information

Status
Ongoing
Start date
2022-06-27
End date
2022-08-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We measure the ability of non-experts to predict the performance of humans and AI in an incentivized laboratory experiment. We compare the objective predictability of humans and AI in a given context with subjective beliefs about this predictability. We manipulate the information given to participants about the type of the agent, human or AI, and measure how this information affects participants' prediction performance.
External Link(s)

Registration Citation

Citation
Burri, Thomas, Markus Christen and Serhiy Kandul. 2022. "Predictability of humans and AI: a lab study." AEA RCT Registry. June 27. https://doi.org/10.1257/rct.9667-1.1
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2022-06-27
Intervention End Date
2022-07-10

Primary Outcomes

Primary Outcomes (end points)
prediction success (number of correct predictions); beliefs about performance (a score)
Primary Outcomes (explanation)
Prediction success is measured as the number of correct predictions a participant makes across the pre-recorded landings; beliefs about performance are elicited as an incentivized score.
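
A minimal sketch of how these endpoints could be computed (hypothetical code; the variable names and example values are assumptions, not taken from the study materials):

    # Prediction success: count of correct guesses across the
    # pre-recorded landings a participant judged.
    def prediction_success(guesses, outcomes):
        return sum(g == o for g, o in zip(guesses, outcomes))

    guesses  = [True, True, False, True, False, True]  # participant's guesses
    outcomes = [True, False, False, True, True, True]  # actual landing results
    realized = prediction_success(guesses, outcomes)   # -> 4
    stated   = 5                        # elicited belief about own score
    calibration_gap = stated - realized                # -> 1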

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We measure participants' ability to predict the performance of an AI and of a human in a specific task (within-subject design). We then compare objective prediction success with participants' subjective beliefs about their performance in the prediction task.
Experimental Design Details
We employ a lunar lander game: participants first land the lander themselves and are then shown pre-recorded landings performed by an AI or a human expert (in randomized order). Their main task is to guess whether each pre-recorded landing resulted in a failure or a success. We measure objective performance in the prediction task (for both the AI and the human expert) with and without disclosure of the agent's type (AI or human).
We compare this objective performance with incentivized beliefs about subjects' own performance on the task.
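
As a rough illustration of the design's main comparison, a hypothetical sketch (the per-trial record layout is assumed, not taken from the study materials) that tabulates prediction accuracy in each of the four cells (agent: AI or human expert; agent type disclosed or not):

    from collections import defaultdict

    def accuracy_by_condition(trials):
        # trials: dicts with keys "agent" ("AI" / "human"),
        # "disclosed" (bool), and "correct" (bool) -- assumed layout
        hits, totals = defaultdict(int), defaultdict(int)
        for t in trials:
            key = (t["agent"], t["disclosed"])
            hits[key] += t["correct"]
            totals[key] += 1
        return {key: hits[key] / totals[key] for key in totals}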
Randomization Method
Randomization performed by a Python script.
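
A minimal sketch of what such a Python randomization could look like (illustrative only; the seed, the arm labels, and the assignment rule are assumptions):

    import random

    random.seed(9667)                  # illustrative seed
    participants = list(range(144))    # planned number of observations
    random.shuffle(participants)

    # 72 per arm, as registered; here the randomized element is the
    # order in which the two agent blocks (AI vs. human expert) appear.
    arms = {pid: ("AI-first" if i < 72 else "human-first")
            for i, pid in enumerate(participants)}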
Randomization Unit
36
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
144
Sample size: planned number of observations
144
Sample size (or number of clusters) by treatment arms
72
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials