
Fields Changed

Registration

Abstract
Before: Individuals often change their decision-making in response to complexity, as has been discussed for decades in psychology and economics, but existing literature provides little evidence on the general characteristics of these processes. We introduce an experimental methodology to show that in the face of complexity, individuals resort to "procedural" decision-making, which we categorize as choice processes that are more describable. We elicit accuracy in replicating decision-makers' choices to experimentally measure and incentivize the choice process' describability. We show that procedural decision-making increases as we exogenously vary the complexity of the environment, defined by the choice set's cardinality. This allows for procedural reinterpretations of existing findings in decision-making under complexity, such as in the use of heuristics.
After: Identical, except the clause "defined by the choice set's cardinality" is dropped after "the complexity of the environment."

Trial End Date
Before: June 30, 2023
After: January 01, 2024

Last Published
Before: March 13, 2023 08:34 AM
After: July 26, 2023 11:47 PM

Intervention End Date
Before: June 30, 2023
After: July 31, 2023

Primary Outcomes (Explanation)
Before: For each decision maker, we will construct a replication measure that is the number (out of 5) decisions that the replicator correctly guessed. We will compare the average and distribution of this measure across treatments. We will also analyze this treatment difference *by round* in which the decision maker's message was elicited. We conjecture that procedural decision-making could, e.g., take time to develop, and therefore the treatment difference would emerge in later rounds.
After: Identical, except "the number (out of 5) decisions" is corrected to "the number of (out of 5) decisions."

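As a concrete illustration of this outcome construction, a minimal analysis sketch follows (Python; the guesses.csv file and its column names replicator_id, dm_id, treatment, and correct are hypothetical, not part of the registration):

    import pandas as pd

    # Hypothetical input: one row per guess, with an indicator for
    # whether the replicator correctly guessed the DM's choice.
    guesses = pd.read_csv("guesses.csv")

    # Replication measure: number of the 5 decisions guessed correctly,
    # one value per decision maker.
    replication = (
        guesses.groupby(["dm_id", "treatment", "replicator_id"])["correct"]
        .sum()
        .rename("n_correct")
        .reset_index()
    )

    # Average and full distribution of the measure, by treatment.
    print(replication.groupby("treatment")["n_correct"].mean())
    print(replication.groupby("treatment")["n_correct"].value_counts(normalize=True))
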
Randomization Unit
Before: We randomize into treatments at the individual level: decision makers will face either simple or complex menus, and replicators will be matched to three decision makers from the same treatment. Since replicators are asked to guess 5 decisions per decision maker, for three different decision makers, we cluster standard errors at the replicator level.
After: The charity experiment keeps the text above (now prefixed "For the charity experiment:"). For the risk experiment: We randomize into treatments at the individual level: decision makers will face either 2-, 3-, or 10-outcome lotteries, and replicators will be matched to three decision makers from the same treatment. Since replicators are asked to guess 5 decisions per decision maker, for three different decision makers, we cluster standard errors at the replicator level. Replicators are further randomized into a "message" or "no message" treatment, which varies whether or not the replicator can see the DM's message.

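The clustered treatment comparison could then be run as in this sketch, assuming the replication data frame from the sketch above (statsmodels OLS with a cluster-robust covariance at the replicator level):

    import statsmodels.formula.api as smf

    # Each replicator guesses 5 decisions for each of three DMs, so
    # observations are clustered at the replicator level.
    model = smf.ols("n_correct ~ C(treatment)", data=replication).fit(
        cov_type="cluster", cov_kwds={"groups": replication["replicator_id"]}
    )
    print(model.summary())
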
Planned Number of Clusters
Before: The design is not clustered, so each cluster is one participant. We recruit 500 Simple decision-makers, 500 Complex decision-makers, and 333 replicators. We analyze the data clustering std. errors at the replicator level.
After: The text above is unchanged, with the following addition. For the risk experiment: We recruit 500 decision-makers for each of the three treatments, for a total of 1,500. Since we have to split replicators into message/no-message, we recruit 1,000 replicators, so we have three replicators per DM in both the message and no-message treatments. We analyze the data clustering std. errors at the replicator level. Attrition in the replicator sample can generate unmatched DMs, so we may recruit further replicators to cover all DMs if attrition is substantial.

Planned Number of Observations
Before: 500 Simple decision makers, 500 Complex decision makers, 333 replicators (same as cluster)
After: As before, plus: For the risk experiment: 500 2-outcome DMs, 500 3-outcome DMs, 500 10-outcome DMs, 500 replicators with message, 500 replicators without message

Sample size (or number of clusters) by treatment arms
Before: 500 Simple decision makers, 500 Complex decision makers, 333 replicators
After: As before, plus: For the risk experiment: 500 2-outcome DMs, 500 3-outcome DMs, 500 10-outcome DMs, 500 replicators with message, 500 replicators without message

Power calculation: Minimum Detectable Effect Size for Main Outcomes
Before: Power calculation based on pilot data suggests we should have at least 393 decision makers per treatment.
After: As before, plus: For the risk experiment: Power calculation based on pilot data suggests we should have at least 417 DMs per treatment.

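The registration does not report the pilot effect size behind these figures, but a standard two-sample calculation of this kind can be sketched with statsmodels; the effect size below is a placeholder, not the authors' pilot estimate (for reference, a standardized effect of d = 0.2 at 5% significance and 80% power yields roughly 393 participants per arm, matching the charity figure):

    from statsmodels.stats.power import TTestIndPower

    # Placeholder standardized effect size (Cohen's d); the registration
    # does not state the pilot estimate used.
    d = 0.2
    n_per_arm = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(round(n_per_arm))  # about 393 decision makers per treatment
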
Intervention (Hidden)
Before: In our main experiment, participants are randomized into one of two treatments that vary in choice complexity. In the Simple treatment, participants choose which charity to donate to from a menu with two charities. In the Complex treatment, participants choose from a menu with six charities. In both treatments, participants go through 25 rounds of decisions. In one random round between rounds 5 and 25, we elicit the decision-maker's choice process by asking them to send a message to another participant who will try to guess their five previous choices; both participants are incentivized by the accuracy of the replication. To incentivize DMs to describe their decision-making process rather than individual choices, we do not mention the charities' names, which plausibly makes the individual charities harder to remember. Second, DMs know that the replicators will see the decisions in random order and that we will randomize the positioning of the charities on the screen within a decision. Finally, the message elicitation comes as a surprise to the DMs, so they have no incentive to attempt to remember their decisions or change their process while choosing. Replicators will be matched with three DMs for a total of 15 guesses. We randomly select one decision to pay.
After: The charity experiment keeps the text above (now prefixed "Charity experiment:", with "our main experiment" changed to "our charity experiment"). Risk experiment: In a second experiment, we will conduct a similar exercise with lottery choice. We will vary complexity by changing the number of outcomes in the lotteries: we will have treatments with 2-outcome, 3-outcome, and 10-outcome lotteries. While making choices, DMs can click buttons to get the following additional information about the lotteries: expected value, variance, maximum payment, minimum payment, and the chance of the maximum payment. In one random round between rounds 10 and 25, we will ask decision-makers to describe their choice processes as described above. Finally, after all 25 rounds, we will ask DMs whether their message represents one of 11 pre-hypothesized simple procedures, such as maximizing expected value. We will ask DMs to categorize their message and will ask a replicator to categorize the same message in the same way; both are incentivized to match the other's response. Replicators will be matched with three DMs for a total of 15 guesses. We randomly select one decision to pay.

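A minimal sketch of the elicitation timing and display randomization described above (the round ranges follow the registration; the variable names and everything else are hypothetical implementation details):

    import random

    # Surprise elicitation round: drawn between rounds 5 and 25 in the
    # charity experiment (10 and 25 in the risk experiment).
    elicitation_round = random.randint(5, 25)

    # The replicator guesses the DM's five previous choices, shown in
    # random order.
    rounds_to_guess = list(range(elicitation_round - 5, elicitation_round))
    random.shuffle(rounds_to_guess)

    # Within a decision, the positions of the options on the screen are
    # also randomized (two charities in Simple, six in Complex).
    menu = ["charity_1", "charity_2"]
    random.shuffle(menu)
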
Secondary Outcomes (End Points)
Before: message length and content
After: message length and content; button clicks to get additional information in the risk experiment; choices in the risk experiment

Secondary Outcomes (Explanation)
Before: We will analyze the length and content of messages elicited across treatments. We will measure message length by number of characters, and we will measure content through the number of charity attributes mentioned in the message.
After: As before, with "charity attributes" broadened to "charity or risk attributes," plus: In the risk experiment, we will also analyze the number of times individuals click the buttons to get additional information, which buttons they click, how this correlates with replication rates, and whether it matches the message categorization. Furthermore, the risk experiment allows us to conduct more standard "choice" analysis and correlate it with procedural decision-making. We include questions that let us test specific hypotheses on the quality of decision-making as it becomes procedural. These questions include lotteries related by dominance, mean-preserving spreads, and repeated choices.

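Message length and a simple attribute-mention count could be computed as in this sketch (the attribute keyword list is illustrative, not the registered coding scheme):

    # Illustrative attribute keywords; the registration does not list the
    # coding scheme used to count attribute mentions.
    ATTRIBUTES = ["cause", "overhead", "rating", "expected value", "variance"]

    def message_stats(message: str) -> dict:
        text = message.lower()
        return {
            "length_chars": len(message),
            "n_attributes": sum(1 for a in ATTRIBUTES if a in text),
        }

    print(message_stats("I always picked the option with the highest expected value."))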

IRBs

IRB Name: Stanford University
IRB Approval Date: March 31, 2023
IRB Approval Number: 44866