The Inference Cost of Interventions

Last registered on April 24, 2024


Trial Information

General Information

The Inference Cost of Interventions
Initial registration date
October 05, 2023


First published
October 17, 2023, 10:53 AM EDT


Last updated
April 24, 2024, 4:59 PM EDT




Primary Investigator

Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
The choice of whether to screen is important and prevalent in many situations, such as hiring workers or purchasing insurance. It is also implicit in many policy choices: if a policy encourages uniform behavior, different types of people behave similarly, making it impossible to infer a person's type from their behavior. Choosing whether to screen can be difficult because it requires trading off immediate costs against delayed benefits. We propose that individuals may screen too little because they fail to consider the effects of inference. To test this hypothesis, we conduct an online experiment that simulates a hiring scenario with an initial trial task. Participants make two decisions: selecting a trial task and then choosing which candidate to hire. We hypothesize that the majority of participants will opt for the suboptimal task that does not reveal the candidates' quality, leading to suboptimal hires and lower payoffs because these participants will not know which candidate is better. We further hypothesize that this mistake is driven largely by the failure to anticipate inference, and we test this using a treatment that provides inference automatically. Lastly, we test whether planning ahead reduces the mistake and whether participants value planning enough to adopt it on their own. This result would show that planning is a useful intervention, but one that people may not take up themselves.
External Link(s)

Registration Citation

Arrieta, Gonzalo and Maxim Bakhtin. 2024. "The Inference Cost of Interventions." AEA RCT Registry. April 24.
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Share of mistakes (when separating is optimal and when pooling is optimal).
Primary Outcomes (explanation)
A mistake is defined under the assumption that decision-makers are not risk-loving. It consists of choosing the task that separates types when picking the one that pools maximizes payoffs, or picking the task that pools types when separating maximizes payoffs, conditional on optimal behavior in part 2 of the experiment.
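For concreteness, the mistake indicator amounts to a one-line rule; the sketch below uses illustrative names and assumes the payoff-maximizing task has already been computed under optimal part 2 play.

def is_mistake(chose_separating: bool, separating_is_optimal: bool) -> bool:
    # A sketch of the mistake indicator (names are illustrative). The
    # benchmark assumes non-risk-loving preferences and optimal part 2 play:
    # a choice is a mistake whenever it differs from the payoff-maximizing task.
    return chose_separating != separating_is_optimal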

Secondary Outcomes

Secondary Outcomes (end points)
How much participants value the planning tool.
Secondary Outcomes (explanation)
At the end of the experiment, we ask participants in the Baseline and Planning treatments to estimate the average bonus earned in the Baseline and Planning treatments (over the 10 rounds of the main experiment).

Experimental Design

Experimental Design
In our experiment, there are two computers and two tasks. One computer is Good, the other is Bad, but the participants do not know which computer is Good. One task is Separating; the other is Pooling. Both computers perform equally well on the Pooling task. On the Separating task, the Good computer performs better than the Bad computer, which allows inferring the computers' quality from their output.

Participants face 10 rounds. Each round consists of two parts. Part 1 is a choice between the Pooling and Separating tasks. The participant's bonus from this part is the amount the two computers produce on the chosen task. Part 2 is a choice between hiring one of the computers to work on a Separating task or getting an outside option based on the amount the computers produce in part 1. In rounds 1 and 10, the parameters of the problem are such that it is optimal to choose the Separating task in part 1. In rounds 2-9, it is optimal to choose the Separating task in half of the rounds and the Pooling task in the other half of the rounds, in randomized order.
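To make the tradeoff concrete, here is a minimal numeric sketch of one round's payoffs. The specific outputs and outside option below are our illustration, not the trial's actual parameters: choosing the Separating task sacrifices part 1 output but reveals which computer is Good, which pays off in part 2.

# Hypothetical parameters: outputs of 10/2 on the Separating task,
# 7 each on the Pooling task, and an outside option worth 7.
GOOD, BAD, POOL, OUTSIDE = 10, 2, 7, 7

# Part 1 bonus: the combined output of both computers on the chosen task.
part1 = {"Separating": GOOD + BAD, "Pooling": POOL + POOL}  # 12 vs. 14

# Part 2 bonus: hire one computer for a Separating task or take the outside
# option. The Separating task reveals the Good computer through its output;
# after the Pooling task, a risk-neutral participant compares the outside
# option with the expected value of hiring at random.
part2 = {"Separating": GOOD, "Pooling": max(OUTSIDE, (GOOD + BAD) / 2)}

for task in ("Separating", "Pooling"):
    print(task, part1[task] + part2[task])  # Separating 22, Pooling 21.0

Under these illustrative numbers, Pooling wins part 1 (14 vs. 12) but Separating wins overall (22 vs. 21) because the revealed quality lets the participant hire the Good computer in part 2.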

There are four treatments. In the Baseline treatment, participants make the part 1 choice first; then they observe the output of both computers and make the part 2 choice. In the Automatic Inference treatment, participants face the same interface, except that we tell them which computer is Good and which is Bad if they choose the Separating task and do not tell them if they choose the Pooling task. We add this information to the part 1 question. In the Strategy Method treatment, participants first answer part 2 questions conditional on each possible task choice, with inference done for them in the case of the Separating task. Then they make their part 1 choice, where they see their possible payoffs from parts 1 and 2 for the two task choices, given their part 2 strategy. In the Planning treatment, participants make the part 1 and part 2 choices on the same screen in reverse order: they first see the part 2 question and then the part 1 question.
Experimental Design Details
To assess the external relevance of our findings, we ask participants three unincentivized questions about their attitudes towards the SAT, parenting, and task assignments. We do not necessarily expect participants' answers to correlate with their experimental behavior, since confounding elements in these settings may counteract the behavior we identify.
Randomization Method
Individuals are randomized into treatments through oTree.
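A minimal sketch of how such an assignment might look in oTree's module-level (no-self) format; the treatment labels and variable names are illustrative, not the study's actual code.

import random

TREATMENTS = ["baseline", "automatic_inference", "strategy_method", "planning"]

def creating_session(subsession):
    # Assign each participant to one of the four treatments once, at session
    # creation, and store it on the participant so it persists across apps.
    for player in subsession.get_players():
        participant = player.participant
        if "treatment" not in participant.vars:
            participant.vars["treatment"] = random.choice(TREATMENTS)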
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
800 individuals
Sample size: planned number of observations
800 individuals
Sample size (or number of clusters) by treatment arms
200 individuals per treatment
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Stanford University IRB
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal



Is the intervention completed?
Data Collection Complete
Data Publication

Is public data available?

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials