Procedural Decision-Making in Response to Complexity-- Fixing Approaches--Cross-domain test

Last registered on September 19, 2025

Pre-Trial

Trial Information

General Information

Title
Procedural Decision-Making in Response to Complexity-- Fixing Approaches--Cross-domain test
RCT ID
AEARCTR-0016738
Initial registration date
September 18, 2025


First published
September 19, 2025, 10:26 AM EDT


Locations


Primary Investigator

Affiliation
University of Zurich

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2025-09-15
End date
2026-12-01
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
This project follows up on AEARCTR-0010977 and AEARCTR-0016228. We ran the AEARCTR-0016228 experiment on June 17 and 18 and found that implementers were significantly noisier in complex decisions than in simple decisions. After running the experiment, we identified a natural confound: because we compared noise between 2- and 10-outcome lotteries for Simple messages, and between 10- and 20-outcome lotteries for Complex messages, the higher noise in more complex decisions could instead result from the message being tailored to the relatively simpler environment (i.e., Simple messages were written for 2- rather than 10-outcome lotteries, and Complex messages were written for 10- rather than 20-outcome lotteries). In other words, the higher noise could be driven by higher complexity, or it could be driven by moving away from the domain in which the choice process was developed.

To distinguish between these two hypotheses, we will run a nearly identical experiment, using Simple messages with 3- and 13-outcome lotteries and Complex messages with 11- and 21-outcome lotteries. This controls for the "cross-domain" effect (under the assumption that 2- and 3-outcome lotteries, and 10- and 11-outcome lotteries, are not perceived as the same domain), isolating the effect of complexity. The main outcome variable is the same as before: we will compare noise rates between simple (3- and 11-outcome) and complex (13- and 21-outcome) lotteries, within each type of message.
External Link(s)

Registration Citation

Citation
Arrieta, Gonzalo and Kirby Nielsen. 2025. "Procedural Decision-Making in Response to Complexity-- Fixing Approaches--Cross-domain test." AEA RCT Registry. September 19. https://doi.org/10.1257/rct.16738-1.0
Experimental Details

Interventions

Intervention(s)
We provide implementers with a choice process description and ask them to implement it on simple and complex lotteries, some of which are repeated and some of which are related by FOSD. We test whether implementers are noisier (i.e., less likely to choose consistently in repeated menus) in complex decisions compared to simple ones. We also test whether implementers violate FOSD more in complex lotteries. We investigate the sensitivity of these effects to specific choice processes, such as "procedures."
Intervention Start Date
2025-09-15
Intervention End Date
2026-12-01

Primary Outcomes

Primary Outcomes (end points)
noise rates, FOSD violations
Primary Outcomes (explanation)
1. Each implementer faces two simple menus that are each repeated twice and two complex menus that are each repeated twice. For each repeated menu, we create an indicator equal to 1 if the implementer chose a different lottery across the repeats and 0 otherwise. We call these inconsistent guesses "noise." We compare the aggregate rates of noise between the simple lotteries and the complex lotteries.

2. We conduct a similar analysis at the individual level. For each implementer, we can say whether they were noisier in the complex menus, noisier in the simple menus, or equally noisy. We test whether implementers were noisier in the complex vs. simple menus.

3. We test whether implementers violate FOSD more for simple or complex lotteries (both at the individual level and in aggregate). We also test whether procedures interact with this effect.


Given potential data quality concerns on Prolific, we will investigate the sensitivity of our results to indicators of low data quality. In particular, we will investigate the sensitivity of our results to removing individuals with very fast survey completion times relative to others, and those with higher comprehension question errors relative to others.
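
As a concrete illustration of outcomes 1 and 2, the following is a minimal Python sketch; the data frame, column names, and the particular tests are hypothetical stand-ins, since this registration does not include analysis code.

```python
import pandas as pd
from scipy.stats import binomtest
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical long-format data: one row per (implementer, repeated menu),
# recording the lottery chosen on each of the two repeats.
df = pd.DataFrame({
    "implementer": [1, 1, 1, 1, 2, 2, 2, 2],
    "menu_type":   ["simple", "simple", "complex", "complex"] * 2,
    "choice_rep1": ["A", "B", "A", "C", "B", "B", "A", "A"],
    "choice_rep2": ["A", "B", "C", "C", "B", "A", "A", "C"],
})

# Outcome 1: "noise" indicator = 1 if the choice differs across repeats.
df["noise"] = (df["choice_rep1"] != df["choice_rep2"]).astype(int)

# Aggregate comparison of noise rates between simple and complex menus.
counts = df.groupby("menu_type")["noise"].agg(["sum", "count"])
z, p_aggregate = proportions_ztest(counts["sum"].to_numpy(), counts["count"].to_numpy())

# Outcome 2: individual-level comparison. For each implementer, compare
# mean noise in complex vs. simple menus; run a sign test on the
# implementers who are not equally noisy in both.
wide = df.pivot_table(index="implementer", columns="menu_type", values="noise")
diff = wide["complex"] - wide["simple"]
n_complex_noisier = int((diff > 0).sum())
n_not_tied = int((diff != 0).sum())
p_individual = binomtest(n_complex_noisier, n_not_tied, p=0.5).pvalue
```

The FOSD analysis in outcome 3 would follow the same pattern, with the indicator instead flagging choices of a first-order stochastically dominated lottery.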

Secondary Outcomes

Secondary Outcomes (end points)
self-reported difficulty
Secondary Outcomes (explanation)
We ask implementers whether they find it easier to implement the message on simple or complex lotteries. We test whether reported difficulty differs depending on whether the implementer received a simple or a complex message, and on whether the message was procedural or non-procedural.


Given potential data quality concerns on Prolific, we will investigate the sensitivity of our results to indicators of low data quality. In particular, we will investigate the sensitivity of our results to removing individuals with very fast survey completion times relative to others, and those with higher comprehension question errors relative to others.
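
As a sketch of one way these difficulty comparisons could be run, assuming a per-implementer indicator for reporting the complex lotteries as harder; all variable names and the linear-probability specification are illustrative, not the registered analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-implementer data (all names are placeholders):
# harder_complex = 1 if the implementer reports that the complex
# lotteries were harder to implement; message_complex and procedural
# are indicators for the message the implementer received.
d = pd.DataFrame({
    "harder_complex":  [1, 0, 1, 1, 0, 1, 0, 1],
    "message_complex": [0, 0, 1, 1, 0, 1, 0, 1],
    "procedural":      [0, 1, 0, 1, 1, 0, 1, 0],
})

# Linear probability model: reported difficulty on message complexity,
# procedurality, and their interaction, with robust standard errors.
m = smf.ols("harder_complex ~ message_complex * procedural", data=d).fit(cov_type="HC1")
print(m.summary())
```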

Experimental Design

Experimental Design
Implementers face simple and complex lotteries. We randomly select lottery menus to be repeated.
Experimental Design Details
Not available
Randomization Method
All randomization occurs through oTree (e.g., assignment of either a simple or a complex message, the random selection of lotteries, and the random selection of which lotteries to repeat).
Randomization Unit
We randomize into treatments at the individual level: implementers receive either a simple or a complex message. This interacts with a second source of randomness: whether the message an implementer receives is procedural or non-procedural. Because the complex messages contain more procedural processes, these two sources of randomness are not independent.
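
A minimal plain-Python sketch of this assignment logic; the experiment itself implements it in oTree, and the lottery pool, function names, and seed below are illustrative.

```python
import random

N_MENUS = 20       # menus per implementer (from the design above)
N_REPEATED = 2     # repeated menus of each complexity level

def assign(participant_id: int, rng: random.Random) -> dict:
    """Individual-level randomization: draw the message, then the lotteries
    and which menus will be shown twice. All names are illustrative."""
    # Message complexity is the treatment; whether the drawn message is
    # procedural depends on the message itself, so it is not an
    # independent coin flip (complex messages are more often procedural).
    message_type = rng.choice(["simple", "complex"])
    menus = rng.sample(range(100), k=N_MENUS)   # placeholder lottery pool
    simple_half, complex_half = menus[:N_MENUS // 2], menus[N_MENUS // 2:]
    repeated = rng.sample(simple_half, k=N_REPEATED) + rng.sample(complex_half, k=N_REPEATED)
    return {"id": participant_id, "message_type": message_type,
            "menus": menus, "repeated": repeated}

rng = random.Random(0)  # fixed seed only for reproducibility of the sketch
assignments = [assign(i, rng) for i in range(10)]
```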
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The design is not clustered, so each cluster is one participant. We recruit 1650 implementers to implement simple messages and 1650 implementers to implement complex messages.
Sample size: planned number of observations
We have 3300 total implementers. Each faces 20 menus (including two repeated simple menus, two repeated complex menus, one simple FOSD menu, and one complex FOSD menu), for a total of 66,000 menu-level observations (3300 × 20).
Sample size (or number of clusters) by treatment arms
1650 implementers to implement simple messages and 1650 implementers to implement complex messages, each facing two repeated simple menus, two repeated complex menus, one simple FOSD menu, and one complex FOSD menu.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We run power calculations using alpha = 0.05 and 80% power to detect the effect size from our earlier "fixing approaches" study, pooling all messages (which showed that implementers are noisier in complex environments). These calculations indicate that we need a total sample size of 3300 observations, which we split equally between the two treatments.
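
As an illustration of this kind of calculation, a two-proportion power computation using statsmodels; the noise rates below are placeholders, since the effect size from the earlier study is not reproduced in this registration.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

alpha, power = 0.05, 0.80

# Placeholder noise rates: the target effect size comes from the earlier
# "fixing approaches" study and is not reproduced in this registry.
p_noise_simple, p_noise_complex = 0.20, 0.25

h = proportion_effectsize(p_noise_complex, p_noise_simple)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=h, alpha=alpha, power=power, ratio=1.0, alternative="two-sided"
)
print(round(n_per_arm))  # required implementers per treatment arm
```

With the actual rates from the earlier study in place of the placeholders, this is the calculation that yields the 1650-per-arm target.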
IRB

Institutional Review Boards (IRBs)

IRB Name
Caltech
IRB Approval Date
2022-11-11
IRB Approval Number
IR22-1263
IRB Name
University of Zurich
IRB Approval Date
2024-10-22
IRB Approval Number
OEC IRB # 2024-088