Experimental Design Details
In our charity experiment, participants are randomized into one of two treatments that vary in choice complexity. In the Simple treatment, participants choose which charity to donate to from a menu with two charities. In the Complex treatment, participants choose from a menu with six charities. In both treatments, participants go through 25 rounds of decisions.
In both treatments, in one randomly selected round between rounds 5 and 25, we elicit the decision-maker's (DM's) choice process by asking them to send a message to another participant (the replicator), who will try to guess the DM's five previous choices; both participants are incentivized by the accuracy of the replication. We take three steps to incentivize DMs to describe their decision-making process rather than individual choices. First, we do not mention the charities' names, which plausibly makes the individual charities harder to remember. Second, DMs know that replicators will see the decisions in random order and that we will randomize the positioning of the charities on the screen within a decision. Finally, the message elicitation comes as a surprise to the DMs, so they have no incentive to memorize their decisions or change their process while choosing.
Each replicator is matched with three DMs, for a total of 15 guesses. We randomly select one decision for payment.
Comparing replication accuracy across treatments allows us to test our main hypothesis: decision-making becomes more describable as decisions get more complex. To make replication rates comparable across treatments, replicators always guess the decision from a menu that contains only two charities. This is straightforward in the Simple treatment: we simply show replicators the same two-charity menu from which the DM chose. In the Complex treatment, where menus have six charities, we create replication menus that contain the DM's chosen charity plus one other charity randomly selected from the same menu. This keeps the chance-level replication benchmark equal across treatments, so a higher replication rate in one treatment can appropriately be interpreted as evidence that DMs' decision-making processes were more describable there.
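The construction of the Complex-treatment replication menus can be sketched as follows. This is an illustrative sketch, not our actual implementation; the function name `build_replication_menu` and the letter labels for charities are hypothetical.

```python
import random

def build_replication_menu(chosen_charity, full_menu):
    """Build a two-charity replication menu: the DM's chosen charity
    plus one other charity drawn at random from the original menu."""
    others = [c for c in full_menu if c != chosen_charity]
    distractor = random.choice(others)
    menu = [chosen_charity, distractor]
    random.shuffle(menu)  # randomize on-screen positions within the decision
    return menu

# Complex treatment: six-charity menu, DM chose "C"
menu = build_replication_menu("C", ["A", "B", "C", "D", "E", "F"])
```

Because the chosen charity is always included, a replicator guessing at random is correct with probability 1/2, matching the benchmark in the Simple treatment.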
Furthermore, we construct the menus in the Simple treatment to match the replication menus from the Complex treatment. This ensures that, in aggregate, replicators across both treatments are guessing choices from the same menus but using (potentially) different messages.
As a secondary test of the main hypothesis that decision-making becomes more describable as decisions get more complex, we compare message length and content.
Risk Experiment
The analysis and details of the risk experiment are similar to the charity experiment. We elicit messages only in rounds 10-25 (rather than 5-25) to give participants more experience in choosing the lotteries and using the additional information provided.
The main difference lies in the measurement of procedural decision-making. Since the menus that replicators face differ across treatments in the risk experiment (unlike in the charity experiment), we need a control condition, in which replicators try to guess DMs' decisions *without* seeing the DM's message. Our proxy measure of procedural decision-making is thus the difference in replication rates with and without the message, and we hypothesize that this difference will be larger in the Complex treatment.
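The proxy measure above can be computed as a simple difference in accuracy rates. A minimal sketch, assuming replication outcomes are coded as 0/1 per guess; the function names and the example data are hypothetical.

```python
def replication_rate(guesses_correct):
    """Share of correct guesses (outcomes coded 1 = correct, 0 = incorrect)."""
    return sum(guesses_correct) / len(guesses_correct)

def procedural_proxy(with_message, without_message):
    """Proxy for procedural decision-making: the gain in replication
    accuracy from seeing the DM's message over the no-message control."""
    return replication_rate(with_message) - replication_rate(without_message)

# Hypothetical outcomes for one treatment cell
proxy = procedural_proxy([1, 1, 1, 0, 1], [1, 0, 0, 1, 0])  # 0.8 - 0.4 = 0.4
```

The treatment comparison would then contrast this difference between the Simple and Complex treatments.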
Finally, for this experiment (and potentially ex post for the charity experiment), we have realized that procedural decision-making is easiest to detect in menus that are less "obvious." If all DMs (and all replicators) would pick the same alternative from a given menu, there is no room for replicability to increase. Thus, we will analyze "non-obvious" menus separately. We define obviousness based on DM choice probabilities: a menu is more obvious when a larger share of DMs choose the same alternative from it.
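The obviousness measure just described can be sketched as the share of DMs picking a menu's modal alternative. This is an illustrative sketch; the function name `obviousness` and the example choices are hypothetical.

```python
from collections import Counter

def obviousness(dm_choices):
    """Obviousness of a menu: the share of DMs who pick its modal
    alternative. Values near 1 leave little room for replicability gains."""
    counts = Counter(dm_choices)
    return max(counts.values()) / len(dm_choices)

# Hypothetical choices by ten DMs from one menu
score = obviousness(["A"] * 9 + ["B"])  # 0.9: a nearly "obvious" menu
```

"Non-obvious" menus would then be those below some cutoff on this score, to be analyzed separately.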