Reinforcing RCTs with Multiple Priors while Learning about External Validity

Last registered on May 21, 2024

Pre-Trial

Trial Information

General Information

Title
Reinforcing RCTs with Multiple Priors while Learning about External Validity
RCT ID
AEARCTR-0013634
Initial registration date
May 15, 2024


First published
May 21, 2024, 11:07 AM EDT


Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
UC-Berkeley

Other Primary Investigator(s)

PI Affiliation
UC-Berkeley

Additional Trial Information

Status
In development
Start date
2024-05-26
End date
2024-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This paper presents a framework for incorporating prior sources of information into the design of a sequential experiment. These sources can include previous experiments, expert opinions, or the experimenter's own introspection. We formalize this problem using a Bayesian approach that maps each source to a Bayesian model. These models are aggregated according to their associated posterior probabilities. We evaluate a broad class of policy rules according to three criteria: whether the experimenter learns the parameters of the payoff distributions, the probability that the experimenter chooses the wrong treatment when deciding to stop the experiment, and the average rewards. We show that our framework exhibits several desirable finite-sample theoretical guarantees, including robustness to any source that is not externally valid.
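The abstract's aggregation step can be illustrated with a minimal sketch (not the authors' implementation): suppose each prior source supplies a Beta prior over a click rate, each source defines a Beta-Bernoulli model, and sources are reweighted by their posterior probabilities (proportional to marginal likelihoods under equal prior weights). All numbers below are hypothetical.

```python
import math

def log_marginal_likelihood(alpha, beta, successes, failures):
    """Log marginal likelihood of Bernoulli data under a Beta(alpha, beta) prior:
    log B(alpha + s, beta + f) - log B(alpha, beta)."""
    return (math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta)
            + math.lgamma(alpha + successes) + math.lgamma(beta + failures)
            - math.lgamma(alpha + beta + successes + failures))

def posterior_model_weights(priors, successes, failures):
    """Aggregate the models (one per prior source) by posterior probability,
    starting from equal prior weights over sources."""
    logs = [log_marginal_likelihood(a, b, successes, failures) for a, b in priors]
    m = max(logs)  # subtract the max for numerical stability
    unnorm = [math.exp(l - m) for l in logs]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypothetical prior sources: an optimistic one and a pessimistic one.
priors = [(8.0, 2.0), (2.0, 8.0)]   # Beta(alpha, beta) priors on the click rate
weights = posterior_model_weights(priors, successes=30, failures=70)
```

With an observed click rate of 0.30, the source whose prior conflicts with the data (the optimistic Beta(8, 2)) is automatically down-weighted, which is the flavor of robustness to externally invalid sources that the abstract refers to.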
External Link(s)

Registration Citation

Citation
Finan, Frederico and Demian Pouzo. 2024. "Reinforcing RCTs with Multiple Priors while Learning about External Validity." AEA RCT Registry. May 21. https://doi.org/10.1257/rct.13634-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-05-26
Intervention End Date
2024-06-30

Primary Outcomes

Primary Outcomes (end points)
Click rates and potentially refinance rates.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We are working with a large bank in Argentina that is experimenting with different messages to encourage its delinquent clients to refinance their loans. When offering a new refinancing loan, the bank’s first point of contact with its client is via email, and getting the client to click on this email is a critical and necessary first step in the refinancing process. The email informs the client about the basic terms of the contract, including the interest rate, which given the current state of the economy might discourage potential customers. An alternative message would be to instead market the contract based on the implied monthly payments. The advantages of the alternative framing are that it avoids potential “sticker shock” and reduces the computational costs for the client.
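The monthly-payment framing replaces the headline interest rate with a figure the client would otherwise have to compute. Assuming a standard fixed-rate amortized loan (the registration does not describe the contract terms, so this is purely illustrative), the conversion is:

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed monthly payment on a fully amortized loan:
    P * r / (1 - (1 + r)^-n), with r the monthly rate."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

# Hypothetical contract: a 100,000-peso loan at 60% annual interest over 12 months.
payment = monthly_payment(100_000, 0.60, 12)
```

Quoting the resulting payment directly spares the client this calculation and avoids leading with a large nominal rate.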

To test the effects of the different framings, the bank has been conducting a randomized controlled trial in which delinquent clients are randomized into one of two arms: (1) clients receive information about the contract rate (i.e., the status quo messaging), or (2) clients receive information about the monthly payments. Using the same treatments, we have asked the bank to conduct a sequential experiment instead of a standard RCT.
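The registration does not specify which sequential policy rule will be used; as one concrete illustration of the class of rules the abstract evaluates, a Thompson-sampling allocation over the two arms could be sketched as follows. The click rates below are hypothetical.

```python
import random

def thompson_step(counts, rng):
    """counts[arm] = [clicks, non_clicks]; draw from each arm's Beta(1,1)
    posterior and assign the next client to the arm with the highest draw."""
    draws = [rng.betavariate(1 + c, 1 + n) for c, n in counts]
    return max(range(len(draws)), key=lambda a: draws[a])

def run(true_rates, horizon, seed=0):
    """Simulate a sequential experiment: arm 0 = interest-rate framing,
    arm 1 = monthly-payment framing."""
    rng = random.Random(seed)
    counts = [[0, 0] for _ in true_rates]
    for _ in range(horizon):
        arm = thompson_step(counts, rng)
        click = rng.random() < true_rates[arm]
        counts[arm][0 if click else 1] += 1
    return counts

# Hypothetical click rates, with the monthly-payment framing slightly better.
counts = run(true_rates=[0.05, 0.08], horizon=10_000)
```

Unlike a fixed 50/50 split, such a rule shifts allocation toward the better-performing message as evidence accumulates, which is the practical motivation for running the design sequentially.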

Experimental Design Details
Not available
Randomization Method
Randomization done by computer.
Randomization Unit
Individual.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
0 clusters
Sample size: planned number of observations
10,000 individuals
Sample size (or number of clusters) by treatment arms
5,000 per arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number