Procedural Decision-Making in Response to Complexity

Last registered on July 26, 2023

Pre-Trial

Trial Information

General Information

Title
Procedural Decision-Making in Response to Complexity
RCT ID
AEARCTR-0010977
Initial registration date
March 01, 2023

First published
March 13, 2023, 8:34 AM EDT

Last updated
July 26, 2023, 11:47 PM EDT

Locations

Region

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
California Institute of Technology

Additional Trial Information

Status
In development
Start date
2023-03-06
End date
2024-01-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Individuals often change their decision-making in response to complexity, as has been discussed for decades in psychology and economics, but the existing literature provides little evidence on the general characteristics of these processes. We introduce an experimental methodology to show that, in the face of complexity, individuals resort to "procedural" decision-making, which we characterize as choice processes that are more describable. We elicit accuracy in replicating decision-makers' choices to experimentally measure and incentivize the describability of the choice process. We show that procedural decision-making increases as we exogenously increase the complexity of the environment. This allows for procedural reinterpretations of existing findings on decision-making under complexity, such as the use of heuristics.
External Link(s)

Registration Citation

Citation
Arrieta, Gonzalo and Kirby Nielsen. 2023. "Procedural Decision-Making in Response to Complexity." AEA RCT Registry. July 26. https://doi.org/10.1257/rct.10977-2.0
Experimental Details

Interventions

Intervention(s)
We exogenously vary the complexity of decisions and measure choice processes.
Intervention Start Date
2023-03-06
Intervention End Date
2023-07-31

Primary Outcomes

Primary Outcomes (end points)
replication rates across treatments, by round
Primary Outcomes (explanation)
For each decision-maker, we will construct a replication measure that equals the number of decisions (out of 5) that the replicator correctly guessed. We will compare the average and distribution of this measure across treatments. We will also analyze this treatment difference *by round* in which the decision-maker's message was elicited: procedural decision-making could, for example, take time to develop, in which case the treatment difference would emerge in later rounds.
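
For concreteness, the following is a minimal sketch, in Python, of how this primary outcome could be constructed from guess-level data. It is not the authors' code; the file name and all column names are assumptions.

```python
import pandas as pd

# Hypothetical guess-level data: one row per replicator guess, with columns
# dm_id, replicator_id, treatment ("Simple"/"Complex"), elicitation_round
# (the round in which the DM's message was elicited), and correct (0/1).
guesses = pd.read_csv("guesses.csv")

# Replication measure: number of correct guesses (out of 5) per decision-maker.
per_dm = (guesses
          .groupby(["dm_id", "treatment", "elicitation_round"])["correct"]
          .sum()
          .rename("n_correct")
          .reset_index())

# Average replication measure by treatment, and by elicitation round.
print(per_dm.groupby("treatment")["n_correct"].mean())
print(per_dm.groupby(["treatment", "elicitation_round"])["n_correct"].mean())
```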

Secondary Outcomes

Secondary Outcomes (end points)
message length and content; button clicks to obtain additional information in the risk experiment; choices in the risk experiment
Secondary Outcomes (explanation)
We will analyze the length and content of the messages elicited across treatments. We will measure message length by the number of characters, and content by the number of charity or risk attributes mentioned in the message. In the risk experiment, we will also analyze the number of times individuals click the buttons to get additional information, which buttons they click, how this correlates with replication rates, and whether it matches the message categorization.
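
As a rough sketch, these message measures could be computed as below, assuming a keyword-based coding of attributes; the attribute vocabulary and column names are placeholders, not the registered coding scheme.

```python
import pandas as pd

messages = pd.read_csv("messages.csv")  # assumed columns: treatment, message

# Placeholder attribute vocabulary; the real list would reflect the actual
# charity or lottery attributes shown to participants.
ATTRIBUTES = ["health", "children", "animals", "environment", "education"]

messages["length_chars"] = messages["message"].str.len()
messages["n_attributes"] = messages["message"].str.lower().apply(
    lambda m: sum(kw in m for kw in ATTRIBUTES))

print(messages.groupby("treatment")[["length_chars", "n_attributes"]].mean())
```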

Furthermore, the risk experiment allows us to conduct more standard "choice" analysis and correlate it with procedural decision-making. We include questions that allow us to test specific hypotheses about the quality of decision-making as it becomes procedural. These questions include lotteries related by dominance, mean-preserving spreads, and repeated choices.
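
To illustrate the mean-preserving-spread relation mentioned above (in our own notation, not the authors'): lottery B is a mean-preserving spread of lottery A when the two have equal means and the integrated CDF of B lies weakly above that of A.

```python
import numpy as np

# A lottery is represented here as a list of (probability, outcome) pairs.
def mean(lottery):
    return sum(p * x for p, x in lottery)

def cdf(lottery, t):
    return sum(p for p, x in lottery if x <= t)

def is_mps(a, b, n=1000):
    """True if lottery b is a mean-preserving spread of lottery a."""
    if not np.isclose(mean(a), mean(b)):
        return False
    xs = [x for _, x in a] + [x for _, x in b]
    grid = np.linspace(min(xs), max(xs), n)
    step = grid[1] - grid[0]
    # Integrated CDFs, approximated on the grid.
    int_a = np.cumsum([cdf(a, t) for t in grid]) * step
    int_b = np.cumsum([cdf(b, t) for t in grid]) * step
    return bool(np.all(int_b >= int_a - 1e-9))

A = [(1.0, 10)]            # sure payment of 10
B = [(0.5, 5), (0.5, 15)]  # same mean, strictly more spread out
print(is_mps(A, B))        # True
```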

Experimental Design

Experimental Design
We exogenously vary the complexity of decisions and measure choice processes.
Experimental Design Details
In our charity experiment, participants are randomized into one of two treatments that vary in choice complexity. In the Simple treatment, participants choose which charity to donate to from a menu with two charities. In the Complex treatment, participants choose from a menu with six charities. In both treatments, participants go through 25 rounds of decisions.

In both treatments, in one random round between rounds 5 and 25, we elicit the decision-maker's choice process by asking them to send a message to another participant (the replicator) who will try to guess their five previous choices; both participants are incentivized by the accuracy of the replication. We take several steps to incentivize decision-makers (DMs) to describe their decision-making process rather than individual choices. First, we do not mention the charities' names, which plausibly makes the individual charities harder to remember. Second, DMs know that the replicators will see the decisions in random order and that we will randomize the positioning of the charities on the screen within a decision. Finally, the message elicitation comes as a surprise to DMs, so they have no incentive to try to remember their decisions or to change their process while choosing.

Each replicator will be matched with three DMs, for a total of 15 guesses. We randomly select one decision for payment.

Comparing replication accuracy across treatments allows us to test our main hypothesis: decision-making becomes more describable as decisions become more complex. To make replication rates comparable across treatments, replicators always guess the decision from a menu containing only two charities. This is straightforward for the Simple treatment: we simply show replicators the same two-charity menu from which the DM chose. For the Complex treatment, where menus have six charities, we create replication menus that contain the DM's chosen charity plus one other randomly selected charity. This keeps the chance benchmark for replication equal across treatments, so higher replication rates in one treatment can appropriately be interpreted as indicating that DMs' decision-making processes were more describable.
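
A minimal sketch of this replication-menu construction, under an assumed representation of menus as lists of charity labels:

```python
import random

def replication_menu(chosen, menu):
    """Build the two-charity menu shown to the replicator.

    Simple treatment: the original menu already has exactly two charities.
    Complex treatment: pair the DM's chosen charity with one other charity
    drawn at random from the remaining five.
    """
    if len(menu) == 2:
        pair = list(menu)
    else:
        other = random.choice([c for c in menu if c != chosen])
        pair = [chosen, other]
    random.shuffle(pair)  # randomize on-screen positions within the decision
    return pair

print(replication_menu("C", ["A", "B", "C", "D", "E", "F"]))
```

Either way, the replicator chooses between two alternatives, so the chance benchmark is 50% in both treatments.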

Furthermore, we construct the menus in the Simple treatment to match the replication menus from the Complex treatment. This ensures that, in aggregate, replicators across both treatments are guessing choices from the same menus but using (potentially) different messages.

As a secondary test of the main hypothesis that decision-making becomes more describable as decisions get more complex, we compare message length and content.


Risk experiment:
The analysis and details of the risk experiment are similar to the charity experiment. We elicit messages only in rounds 10-25 (rather than 5-25) to give participants more experience in choosing the lotteries and using the additional information provided.

The main difference lies in the measurement of procedural decision-making. Since the menus that replicators face differ across treatments in the risk experiment (unlike in the charity experiment), we need a control condition. In the control condition, replicators try to guess DMs' decisions *without* seeing the DM's message. A proxy measure of procedural decision-making is thus the difference in replication rates with and without the message, and we hypothesize that this difference will be larger in the Complex treatment.
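
A sketch of how this proxy could be computed, assuming a hypothetical guess-level dataset with a message/no-message indicator:

```python
import pandas as pd

# Assumed columns: treatment, has_message (boolean), correct (0/1).
guesses = pd.read_csv("risk_guesses.csv")

rates = guesses.groupby(["treatment", "has_message"])["correct"].mean().unstack()
rates["message_gain"] = rates[True] - rates[False]  # the proxy measure
print(rates)  # hypothesis: message_gain is largest in the Complex treatment
```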

Finally, for this experiment (and potentially ex post for the charity experiment), we realized that procedural decision-making is easiest to detect in menus that are less "obvious": if all DMs (and all replicators) would pick the same alternative from a given menu, there is no room for replicability to increase. We will therefore analyze "non-obvious" menus separately. We define obviousness based on DM choice probabilities, so a menu is more obvious if a larger share of DMs choose the same alternative from it.
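
A sketch of this obviousness measure, with hypothetical column names and a purely illustrative cutoff for flagging non-obvious menus:

```python
import pandas as pd

choices = pd.read_csv("dm_choices.csv")  # assumed columns: menu_id, dm_id, chosen

# Obviousness: share of DMs choosing the modal alternative from each menu.
obviousness = (choices.groupby("menu_id")["chosen"]
               .agg(lambda s: s.value_counts(normalize=True).max()))

# Flag "non-obvious" menus for separate analysis; the 0.8 cutoff is
# illustrative only, not a pre-registered threshold.
non_obvious_menus = obviousness[obviousness < 0.8].index
print(obviousness.sort_values())
```
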
Randomization Method
Individuals are randomized into treatments through oTree.
Randomization Unit
For the charity experiment:
We randomize into treatments at the individual level: decision-makers face either simple or complex menus, and replicators are matched to three decision-makers from the same treatment. Since each replicator guesses 5 decisions for each of three decision-makers, we cluster standard errors at the replicator level.

For the risk experiment:
We randomize into treatments at the individual level: decision-makers face either 2-, 3-, or 10-outcome lotteries, and replicators are matched to three decision-makers from the same treatment. Since each replicator guesses 5 decisions for each of three decision-makers, we cluster standard errors at the replicator level. Replicators are further randomized into a "message" or "no message" treatment, which varies whether the replicator sees the DM's message.
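
For illustration, a minimal sketch of the clustered comparison using statsmodels; the dataset and variable names are assumptions, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

guesses = pd.read_csv("guesses.csv")  # one row per guess (assumed format)

# Regress guess accuracy on treatment, clustering SEs at the replicator level.
model = smf.ols("correct ~ C(treatment)", data=guesses).fit(
    cov_type="cluster", cov_kwds={"groups": guesses["replicator_id"]})
print(model.summary())
```
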
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
For the charity experiment:
The design is not clustered, so each cluster is one participant. We recruit 500 Simple decision-makers, 500 Complex decision-makers, and 333 replicators. We analyze the data clustering standard errors at the replicator level.

For the risk experiment:
We recruit 500 decision-makers for each of the three treatments, for a total of 1,500. Since we split replicators into message and no-message conditions, we recruit 1,000 replicators, so that each replicator guesses for three DMs and each DM's choices are guessed in both the message and no-message treatments. We analyze the data clustering standard errors at the replicator level. Attrition in the replicator sample can leave some DMs unmatched, so we may recruit additional replicators to cover all DMs if attrition is substantial.
Sample size: planned number of observations
For the charity experiment:
500 Simple decision-makers, 500 Complex decision-makers, 333 replicators (same as clusters)

For the risk experiment:
500 2-outcome DMs, 500 3-outcome DMs, 500 10-outcome DMs, 500 replicators with message, 500 replicators without message
Sample size (or number of clusters) by treatment arms
500 Simple decision makers, 500 Complex decision makers, 333 replicators

For the risk experiment:
500 2-outcome DMs, 500 3-outcome DMs, 500 10-outcome DMs, 500 replicators with message, 500 replicators without message
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For the charity experiment:
Power calculations based on pilot data suggest we should have at least 393 decision-makers per treatment.

For the risk experiment:
Power calculations based on pilot data suggest we should have at least 417 DMs per treatment.
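
As an illustration of this kind of calculation (not the authors' actual pilot inputs), a two-sample power computation in statsmodels: with an assumed effect size of Cohen's d = 0.2, 5% significance, and 80% power, the required sample is roughly 394 per arm, in the ballpark of the figures above.

```python
import math
from statsmodels.stats.power import TTestIndPower

d = 0.2  # assumed Cohen's d; the pilot-based effect size is not public
n_per_arm = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
print(math.ceil(n_per_arm))  # 394 DMs per treatment under these assumptions
```
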
IRB

Institutional Review Boards (IRBs)

IRB Name
California Institute of Technology
IRB Approval Date
2022-11-11
IRB Approval Number
IR22-1263
IRB Name
Stanford University
IRB Approval Date
2023-03-31
IRB Approval Number
44866

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials