Experimental Design
In this section, we explain the main modifications to the original experimental design in detail.
First, we extend all three one-round treatments into multiple rounds in the lab with feedback. The purpose is to examine whether subjects’ behavior is consistent over time and whether learning emerges across rounds. In particular, each of the NoChoice treatments consists of 10 rounds, while the Choice treatment consists of 20 rounds. Subjects are randomly assigned the role of either advisor or client at the beginning of the experiment and keep that role throughout. Both advisor and client subjects receive the same experimental instructions (which are read out loud), including the information order advisors experience (or their choice over it), the probability distribution of product B quality, and the advisor incentives. After each round, subjects are told the outcome (including the true quality of product B, the product signal, the advisor’s incentives and recommendation, the client’s choice, and the payoff realization) before being randomly rematched to a different opponent for the next round.
Second, we operationalize the three Choice treatments (ChoiceFree, Incentive First Costly, and Quality First Costly) as one. Specifically, we show a row of four cards on the screen, two of which are preset to black and red (Figure 2). Advisor subjects choose among two costless options, three black and one red (3B1R) or three red and one black (3R1B), and two costly options, all black (4B) or all red (4R). Choosing 3B1R (adding two black cards) or 3R1B (adding two red cards) costs nothing. Choosing 4B or 4R (changing all cards to black or red) requires paying a small cost, but the computer tosses a fair coin to determine whether the last card is changed (see Appendix B for a screenshot). Hence, paying the cost results in receiving either 3B1R/3R1B or 4B/4R with equal chance.
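The probabilities implied by the card mechanism can be made explicit. Assuming that one of the four cards is drawn at random to determine the information order (our reading of the mechanism, consistent with the probabilities reported later in this section), a costless option yields the preferred order with probability 3/4, while a costly option, after the coin toss, yields it with probability:

```latex
% Preferred-order probability under a costly option:
% with prob 1/2 all four cards match the preferred color,
% with prob 1/2 only three of the four do.
\Pr(\text{preferred order})
  = \tfrac{1}{2}\cdot\tfrac{4}{4} + \tfrac{1}{2}\cdot\tfrac{3}{4}
  = \tfrac{7}{8} = 87.5\%.
```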
Third, we redesign the experiment as a within-subject study to observe how NoChoice behavior relates to the subsequent choice of information order. Specifically, subjects first complete the two NoChoice treatments, each consisting of 10 rounds, with the order randomized across sessions; they then proceed to the Choice treatment, which consists of 20 rounds. We expected 10 rounds in each NoChoice treatment to be sufficient for subjects’ behavior to display some degree of consistency. For the Choice treatment, we use more rounds to obtain sufficient observations of subjects being randomly assigned to their preferred and non-preferred information orders.
Following Saccardo and Serra-Garcia (2023), our lab experiment retains the original payoff structure but denominates it in ECU at 1 ECU = $0.01, so all amounts are scaled by a factor of 100. Hence, a $2 payoff becomes 200 ECU, and a $0.15 commission becomes 15 ECU. There are, however, two exceptions in our design. The first concerns the advisors’ base payment. In Saccardo and Serra-Garcia (2023), advisors received $0.50 per round in the NoChoice treatments but $1 per round in the Choice treatment. To maintain consistency across treatments, we instead set the advisors’ base payment at 50 ECU per round in all treatments. The second exception concerns the cost of changing the information order in the Choice treatment. In the original study, the “cost” in the Incentive First Costly and Quality First Costly treatments is giving up the opportunity to earn an additional $0.05, or one-third of the commission; this equals 5 ECU in our study, where the commission is 15 ECU. This “cost” shifts the chance of seeing the preferred information order from 25% to 75%. In our design, we adopt the literal sense of “cost” and explicitly state that choosing a costly option requires paying 1.25 ECU. Note that our costly options shift the chance of seeing the preferred information order from 75% to 87.5%. Hence, paying 1.25 ECU to shift the probability by 12.5 percentage points is comparable to paying 5 ECU to shift it by 50 percentage points in the original study.
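The comparability of the two cost structures can be verified by computing the cost per percentage point of probability shifted (our arithmetic, using the figures stated above):

```latex
% Original design: 5 ECU buys a 50-point shift (25% -> 75%).
% Our design:      1.25 ECU buys a 12.5-point shift (75% -> 87.5%).
\frac{5\ \text{ECU}}{50\ \text{pp}} = 0.10\ \text{ECU/pp}
\qquad
\frac{1.25\ \text{ECU}}{12.5\ \text{pp}} = 0.10\ \text{ECU/pp}.
```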
Finally, our matching and payment procedures also differ from those in Saccardo and Serra-Garcia (2023). In their design, one out of ten advisors is randomly selected, and the selected advisor’s recommendation is then passed to a client who chooses a product. In contrast, in our experiment each advisor is matched with a client in every round, and subjects are rematched across rounds. Moreover, to better understand clients’ decision-making, we employ the strategy method: clients state their choices contingent on each of the two possible recommendations before receiving the advisor’s actual recommendation. This allows us to observe client responses systematically and to identify their underlying decision rules. Lastly, advisors are paid for all rounds, whereas clients are paid for half of the rounds in each treatment (NoChoice See Incentive First, NoChoice Assess Quality First, and Choice), to balance the expected payoffs of advisors and clients.