Procedural Decision-Making in Response to Complexity

Last registered on March 13, 2023

Pre-Trial

Trial Information

General Information

Title
Procedural Decision-Making in Response to Complexity
RCT ID
AEARCTR-0010977
Initial registration date
March 01, 2023

First published
March 13, 2023, 8:34 AM EDT

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
California Institute of Technology

Additional Trial Information

Status
In development
Start date
2023-03-06
End date
2023-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Individuals often change their decision-making in response to complexity, as has been discussed for decades in psychology and economics, but the existing literature provides little evidence on the general characteristics of these processes. We introduce an experimental methodology to show that, in the face of complexity, individuals resort to "procedural" decision-making, which we characterize as choice processes that are more describable. To experimentally measure and incentivize the describability of the choice process, we elicit how accurately another participant can replicate the decision-maker's choices. We show that procedural decision-making increases as we exogenously vary the complexity of the environment, defined by the cardinality of the choice set. This allows for procedural reinterpretations of existing findings on decision-making under complexity, such as the use of heuristics.
External Link(s)

Registration Citation

Citation
Arrieta, Gonzalo and Kirby Nielsen. 2023. "Procedural Decision-Making in Response to Complexity." AEA RCT Registry. March 13. https://doi.org/10.1257/rct.10977-1.0
Experimental Details

Interventions

Intervention(s)
We exogenously vary the complexity of decisions and measure choice processes.
Intervention Start Date
2023-03-06
Intervention End Date
2023-06-30

Primary Outcomes

Primary Outcomes (end points)
replication rates across treatments, by round
Primary Outcomes (explanation)
For each decision maker, we will construct a replication measure: the number (out of 5) of decisions that the replicator correctly guessed. We will compare the average and the distribution of this measure across treatments. We will also analyze this treatment difference *by round* in which the decision maker's message was elicited: procedural decision-making could, e.g., take time to develop, in which case the treatment difference would emerge in later rounds. An illustrative sketch of this construction follows.
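A minimal analysis sketch in Python. The file name and column names (dm_id, treatment, elicitation_round, guess_correct) are hypothetical placeholders for however the data are ultimately stored; one row represents one guessed decision.

    import pandas as pd

    # Hypothetical long-format data: one row per guessed decision, with
    # guess_correct in {0, 1} indicating whether the replicator was right.
    df = pd.read_csv("replications.csv")

    # Replication measure: number of the 5 decisions guessed correctly, per DM.
    rep = (
        df.groupby(["dm_id", "treatment", "elicitation_round"])["guess_correct"]
        .sum()
        .rename("n_correct")
        .reset_index()
    )

    # Average replication measure by treatment, and by elicitation round.
    print(rep.groupby("treatment")["n_correct"].mean())
    print(rep.groupby(["treatment", "elicitation_round"])["n_correct"].mean())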

Secondary Outcomes

Secondary Outcomes (end points)
message length and content
Secondary Outcomes (explanation)
We will analyze the length and content of the messages elicited across treatments. We will measure message length by the number of characters, and message content by the number of charity attributes mentioned in the message, as in the sketch below.
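A minimal sketch of the two message measures; the attribute keyword list is purely illustrative, standing in for the charity attributes actually shown to participants, which are not enumerated in this registration.

    # Illustrative keyword list; the real attributes are those displayed
    # on the charity menus.
    ATTRIBUTE_TERMS = ["cause", "region", "size", "rating"]

    def message_length(message: str) -> int:
        # Message length, measured in characters.
        return len(message)

    def attributes_mentioned(message: str) -> int:
        # Number of distinct charity attributes the message refers to.
        text = message.lower()
        return sum(term in text for term in ATTRIBUTE_TERMS)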

Experimental Design

Experimental Design
We exogenously vary the complexity of decisions and measure choice processes.
Experimental Design Details
In our main experiment, participants are randomized into one of two treatments that vary in choice complexity. In the Simple treatment, participants choose which charity to donate to from a menu with two charities. In the Complex treatment, participants choose from a menu with six charities. In both treatments, participants go through 25 rounds of decisions.

In both treatments, in one randomly selected round between rounds 5 and 25, we elicit the decision-maker's (DM's) choice process by asking them to send a message to another participant (the replicator), who will try to guess the DM's five previous choices; both participants are incentivized by the accuracy of the replication. Three features of the design encourage DMs to describe their decision-making process rather than individual choices. First, we do not mention the charities' names, which plausibly makes the individual charities harder to remember. Second, DMs know that replicators will see the decisions in random order and that we will randomize the positioning of the charities on the screen within each decision. Finally, the message elicitation comes as a surprise to DMs, so they have no incentive to try to remember their decisions or to change their process while choosing. A sketch of these mechanics appears below.
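The following Python sketch illustrates these mechanics under stated assumptions; the uniform round draw, the treatment of the elicitation round as one of the five guessed choices, and the variable names are our illustration, not the experiment's actual code.

    import random

    # Assumed mechanics, for illustration only.
    elicitation_round = random.randint(5, 25)   # surprise round, rounds 5-25
    decisions_so_far = [f"decision_{r}" for r in range(1, elicitation_round + 1)]
    to_replicate = decisions_so_far[-5:]        # the five most recent choices
    random.shuffle(to_replicate)                # replicator sees a random order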

Each replicator will be matched with three DMs, for a total of 15 guesses. We randomly select one decision for payment.

Comparing replication accuracy across treatments allows us to test our main hypothesis: decision-making becomes more describable as decisions get more complex. To make replication rates comparable across treatments, replicators always guess the decision from a menu that contains only two charities. This is straightforward for the Simple treatment: we simply show replicators the same two-charity menu from which the DM chose. For the Complex treatment, where menus have six charities, we create replication menus that contain the DM's chosen charity plus one other randomly selected charity. This keeps the chance benchmark for replication equal across treatments, so that higher replication rates in one treatment can appropriately be interpreted as evidence that those DMs' decision-making processes were more describable. A sketch of the menu construction follows.
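A minimal sketch of the replication-menu construction, with hypothetical data shapes (charities as single-letter labels):

    import random

    def replication_menu(menu, chosen):
        # Return the two-charity menu the replicator guesses from.
        if len(menu) == 2:
            # Simple treatment: the original two-charity menu.
            pair = list(menu)
        else:
            # Complex treatment: the chosen charity plus one random other.
            other = random.choice([c for c in menu if c != chosen])
            pair = [chosen, other]
        random.shuffle(pair)  # charity positions are randomized on screen
        return pair

    print(replication_menu(["A", "B", "C", "D", "E", "F"], chosen="C"))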

Furthermore, we construct the menus in the Simple treatment to match the replication menus from the Complex treatment. This ensures that, in aggregate, replicators across both treatments are guessing choices from the same menus but using (potentially) different messages.
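Concretely, and assuming replication menus like those produced by the sketch above, the matching step amounts to reusing the Complex treatment's replication menus as the Simple treatment's choice menus:

    # Hypothetical example data; in the experiment these pairs come from
    # the Complex treatment's replication-menu construction.
    complex_replication_menus = [["A", "D"], ["B", "F"], ["C", "E"]]
    simple_menus = [list(menu) for menu in complex_replication_menus]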

As a secondary test of the main hypothesis that decision-making becomes more describable as decisions get more complex, we compare message length and content.
Randomization Method
Individuals are randomized into treatments through oTree.
Randomization Unit
We randomize into treatments at the individual level: decision makers will face either simple or complex menus, and replicators will be matched with three decision makers from the same treatment. Since each replicator guesses 5 decisions for each of three decision makers, we cluster standard errors at the replicator level (see the sketch below).
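A sketch of the clustered comparison, using statsmodels with hypothetical column names (guess_correct, treatment, replicator_id) in a long-format dataset; the linear probability model here is one reasonable choice, not necessarily the specification we will ultimately report.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("guesses.csv")  # hypothetical file name

    # Regress guess accuracy on treatment, clustering SEs by replicator.
    model = smf.ols("guess_correct ~ C(treatment)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["replicator_id"]}
    )
    print(model.summary())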
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
The design is not clustered, so each cluster is one participant. We recruit 500 Simple decision makers, 500 Complex decision makers, and 333 replicators. We analyze the data clustering standard errors at the replicator level.
Sample size: planned number of observations
500 Simple decision makers, 500 Complex decision makers, and 333 replicators (same as the number of clusters)
Sample size (or number of clusters) by treatment arms
500 Simple decision makers, 500 Complex decision makers, 333 replicators
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A power calculation based on pilot data suggests we need at least 393 decision makers per treatment. An illustrative calculation appears below.
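For illustration only: a standard two-sample normal-approximation power calculation with a conventional small standardized effect (d = 0.2, a placeholder, not the pilot estimate, which is not reported in this registration) happens to reproduce the registered minimum of 393 per arm.

    import math
    from statsmodels.stats.power import NormalIndPower

    n_per_arm = NormalIndPower().solve_power(
        effect_size=0.2,          # hypothetical standardized effect size
        alpha=0.05,
        power=0.80,
        ratio=1.0,
        alternative="two-sided",
    )
    print(math.ceil(n_per_arm))   # 393 decision makers per treatment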
IRB

Institutional Review Boards (IRBs)

IRB Name
California Institute of Technology
IRB Approval Date
2022-11-11
IRB Approval Number
IR22-1263

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials