
Procedural Decision-Making in Response to Complexity -- Fixing Approaches

Last registered on June 16, 2025

Pre-Trial

Trial Information

General Information

Title
Procedural Decision-Making in Response to Complexity -- Fixing Approaches
RCT ID
AEARCTR-0016228
Initial registration date
June 16, 2025

First published
June 16, 2025, 7:47 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2025-06-16
End date
2026-12-01
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
This project follows up on AEARCTR-0010977. We recruit new participants to implement the choice processes collected in that study; our main goal is to test whether implementers are noisier in complex decisions than in simple decisions.
External Link(s)

Registration Citation

Citation
Arrieta, Gonzalo and Kirby Nielsen. 2025. "Procedural Decision-Making in Response to Complexity -- Fixing Approaches." AEA RCT Registry. June 16. https://doi.org/10.1257/rct.16228-1.0
Experimental Details

Interventions

Intervention(s)
We provide implementers with a choice process description and ask them to implement it on simple and complex lotteries, some of which are repeated and some of which are related by first-order stochastic dominance (FOSD). We test whether implementers are noisier (i.e., less likely to choose consistently in repeated menus) in complex decisions than in simple ones. We also test whether implementers violate FOSD more in complex lotteries. Finally, we investigate whether these effects vary with features of the choice process, such as whether it is a "procedure."
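To make the menu structure concrete, here is a minimal sketch in Python. The class and function names are hypothetical, and the number of non-repeated menus per complexity level (seven) is an assumption chosen only so that the totals match the registration (20 menus per implementer, with two repeated menus and one FOSD menu per complexity level); the registration does not specify an implementation.

```python
from dataclasses import dataclass
import random

@dataclass
class Menu:
    menu_id: int
    complexity: str               # "simple" or "complex"
    fosd: bool = False            # lotteries in the menu are ranked by FOSD
    repeat_of: int | None = None  # id of the menu this one repeats, if any

def build_menus(rng: random.Random) -> list[Menu]:
    """Assemble one implementer's 20 menus: per complexity level, seven base
    menus (an assumed count), two repeats of randomly chosen base menus, and
    one FOSD menu."""
    menus: list[Menu] = []
    next_id = 0
    for complexity in ("simple", "complex"):
        base = [Menu(next_id + i, complexity) for i in range(7)]
        next_id += 7
        # Repeat two randomly selected menus from this complexity level.
        for original in rng.sample(base, 2):
            base.append(Menu(next_id, complexity, repeat_of=original.menu_id))
            next_id += 1
        # One FOSD menu per complexity level.
        base.append(Menu(next_id, complexity, fosd=True))
        next_id += 1
        menus.extend(base)
    rng.shuffle(menus)
    return menus  # 2 x (7 + 2 + 1) = 20 menus
```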
Intervention Start Date
2025-06-17
Intervention End Date
2025-06-18

Primary Outcomes

Primary Outcomes (end points)
noise rates, FOSD violations
Primary Outcomes (explanation)
1. Each implementer faces two simple menus and two complex menus that are each presented twice. For each repeated menu, we create an indicator equal to 1 if the implementer chose a different lottery across the two presentations and 0 otherwise. We call these inconsistent choices "noise." We compare the aggregate noise rates between the simple and complex lotteries (outcomes 1, 2, and 5 are sketched in code after this list).

2. We conduct a similar analysis at the individual level. For each implementer, we can say whether they were noisier in the complex menus, noisier in the simple menus, or equally noisy. We test whether implementers were noisier in the complex vs. simple menus.

3. We use the measures of procedural decision-making developed in our earlier study (e.g., whether a message is "perfectly replicable") to test whether procedures are less noisy than non-procedures.

4. Restricting to the set of comparable lotteries with 10 outcomes, we test whether simple or complex messages lead to more noise and whether procedures interact with this effect.

5. We test whether implementers violate FOSD more for simple or complex lotteries (both at the individual level and in aggregate). We also test whether procedures interact with this effect.
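The rate comparisons in outcomes 1, 2, and 5 can be illustrated with a short sketch. This assumes long-format pandas DataFrames with one row per implementer-menu pair; the column names and the specific tests (chi-square for aggregate rates, Wilcoxon signed-rank for within-implementer differences) are illustrative assumptions rather than the registered analysis.

```python
import pandas as pd
from scipy import stats

def aggregate_noise_test(df: pd.DataFrame):
    """Outcome 1: compare aggregate noise rates on simple vs. complex menus.
    `df` has one row per implementer x repeated menu, with columns
    `implementer_id`, `complexity` ("simple"/"complex"), `noisy` (0/1)."""
    table = pd.crosstab(df["complexity"], df["noisy"])
    chi2, p, _, _ = stats.chi2_contingency(table)
    return df.groupby("complexity")["noisy"].mean(), chi2, p

def individual_noise_test(df: pd.DataFrame):
    """Outcome 2: within implementer, noise rate on complex minus simple
    menus, tested with a Wilcoxon signed-rank test (illustrative choice)."""
    by_person = df.pivot_table(index="implementer_id", columns="complexity",
                               values="noisy", aggfunc="mean")
    return stats.wilcoxon(by_person["complex"] - by_person["simple"])

def fosd_violation_test(fosd: pd.DataFrame):
    """Outcome 5: compare FOSD-violation rates on simple vs. complex menus;
    `fosd` has columns `complexity` and `violation` (0/1)."""
    table = pd.crosstab(fosd["complexity"], fosd["violation"])
    chi2, p, _, _ = stats.chi2_contingency(table)
    return fosd.groupby("complexity")["violation"].mean(), chi2, p
```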

Secondary Outcomes

Secondary Outcomes (end points)
self-reported difficulty
Secondary Outcomes (explanation)
We ask implementers whether they find it easier to implement the message on simple or complex lotteries. We test whether reported difficulty differs depending on whether the implementer received a simple or a complex message, and on whether the message was a procedure (see the sketch below).
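A minimal sketch of this comparison, assuming a one-row-per-implementer DataFrame; the column names (`message`, `procedure`, `easier`) and the chi-square test of independence are assumptions for illustration.

```python
import pandas as pd
from scipy import stats

def difficulty_by_treatment(survey: pd.DataFrame, treatment: str):
    """Cross-tabulate self-reported difficulty (`easier` is "simple" or
    "complex") against a treatment column and test for independence."""
    table = pd.crosstab(survey[treatment], survey["easier"])
    chi2, p, _, _ = stats.chi2_contingency(table)
    return table, chi2, p

# difficulty_by_treatment(survey, "message")    # simple vs. complex message
# difficulty_by_treatment(survey, "procedure")  # procedure vs. non-procedure
```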

Experimental Design

Experimental Design
We vary whether implementers face simple or complex lotteries, and randomly select lottery menus to be repeated.
Experimental Design Details
Not available
Randomization Method
All randomization occurs through oTree: assignment of either a simple or a complex message, random selection of lotteries, and random selection of which lottery menus to repeat.
Randomization Unit
We randomize into treatments at the individual level: implementers receive either a simple or a complex message. This interacts with a second source of randomness: whether the message implementers receive is procedural or non-procedural. Because the complex messages contain more procedural processes, these two sources of randomness are not independent.
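A standalone sketch of this assignment logic (the actual randomization runs inside oTree; the message pool and field names here are hypothetical):

```python
import random

def assign_message(messages: list[dict], rng: random.Random) -> dict:
    """Individual-level assignment: complexity is randomized directly, while
    procedure status follows from the drawn message. Because the complex pool
    contains more procedures, the two sources of randomness are correlated."""
    complexity = rng.choice(["simple", "complex"])
    pool = [m for m in messages if m["complexity"] == complexity]
    return rng.choice(pool)

# Hypothetical pool in which complex messages are more often procedures.
messages = [
    {"id": 1, "complexity": "simple",  "procedure": False},
    {"id": 2, "complexity": "simple",  "procedure": True},
    {"id": 3, "complexity": "complex", "procedure": True},
    {"id": 4, "complexity": "complex", "procedure": True},
]
print(assign_message(messages, random.Random(0)))
```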
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The design is not clustered, so each cluster is one participant. We recruit 1000 implementers to implement simple messages and 1000 implementers to implement complex messages.
Sample size: planned number of observations
We have 2000 implementers in total. Each faces 20 menus (including two repeated simple menus, two repeated complex menus, one simple FOSD menu, and one complex FOSD menu), for 2000 x 20 = 40,000 menu-level observations.
Sample size (or number of clusters) by treatment arms
1000 implementers implement simple messages and 1000 implement complex messages; each implementer faces two repeated simple menus, two repeated complex menus, one simple FOSD menu, and one complex FOSD menu.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Caltech
IRB Approval Date
2022-11-11
IRB Approval Number
IR22-1263
IRB Name
University of Zurich
IRB Approval Date
2024-10-22
IRB Approval Number
OEC IRB # 2024-088