Enabling or Limiting Cognitive Flexibility? : A Replication and Extension of Saccardo and Serra-Garcia (2023) in the Lab

Last registered on April 29, 2026

Pre-Trial

Trial Information

General Information

Title
Enabling or Limiting Cognitive Flexibility? : A Replication and Extension of Saccardo and Serra-Garcia (2023) in the Lab
RCT ID
AEARCTR-0016108
Initial registration date
April 27, 2026

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
April 29, 2026, 3:57 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
National Taiwan University

Other Primary Investigator(s)

PI Affiliation
National Taiwan University
PI Affiliation
National Taiwan University

Additional Trial Information

Status
In development
Start date
2026-02-22
End date
2027-02-22
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study replicates the experiment of Saccardo and Serra-Garcia (2023) in the laboratory, extending their single-round design to a multi-round setting and converting the between-subject manipulation into a within-subject design. Additionally, we replace the perfect correlation between the incentive and product B’s quality with independence, and make all information transparent to both advisors and clients. The significance of this study lies in testing whether the original findings are robust in a more transparent setting with greater experimental control, and in evaluating the effects of learning.
External Link(s)

Registration Citation

Citation
Lai, Chien-Yu, Joseph Tao-yi Wang and Yu-Tang Yang. 2026. "Enabling or Limiting Cognitive Flexibility? : A Replication and Extension of Saccardo and Serra-Garcia (2023) in the Lab." AEA RCT Registry. April 29. https://doi.org/10.1257/rct.16108-1.0
Experimental Details

Interventions

Intervention(s)
Saccardo and Serra-Garcia (2023) conduct a one-shot, sender-receiver game experiment online. In this game, an advisor recommends one of two products, A or B, to an uninformed client. Each product is an urn containing five balls, and the client receives as payoff the value of a ball randomly drawn from the chosen product. Product A has a fixed payoff structure: three $2 balls and two $0 balls (expected return $1.20). Product B’s payoff depends on an unknown quality state, high (H) or low (L), which occurs with equal probability. If quality is high, B contains four $2 balls (expected return $1.60); if low, it contains only two (expected return $0.80). Before the recommendation, the advisor sees a ball randomly drawn from product B as a signal of its quality. The client decides whether to follow the advice and chooses a product. The advisor earns $0.50 for the recommendation and can receive an additional $0.15 commission for recommending a particular product. This payment structure may create a conflict of interest when the commission incentive and the quality signal point in different directions. In that case, each choice maximizes the payoff of either the advisor or the client.
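As a cross-check of the payoff figures above, a minimal sketch (values are taken from the description; the helper name is ours):

```python
# Expected payoffs of the two five-ball urns described above.
def expected_value(balls):
    return sum(balls) / len(balls)

product_a = [2, 2, 2, 0, 0]        # fixed: three $2 balls, two $0 balls
product_b_high = [2, 2, 2, 2, 0]   # high quality: four $2 balls
product_b_low = [2, 2, 0, 0, 0]    # low quality: two $2 balls

print(expected_value(product_a))       # 1.2
print(expected_value(product_b_high))  # 1.6
print(expected_value(product_b_low))   # 0.8
```

Ex ante, product B is worth 0.5 × 1.60 + 0.5 × 0.80 = $1.20, the same as product A, so the advisor's signal is what makes the recommendation informative.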

The NoChoice experiment includes two treatments. In See Incentive First, the advisor first learns which product is incentivized and then sees the quality signal for product B. In Assess Quality First, the order is reversed. In both treatments, the advisor then makes a recommendation while the incentive is displayed but the signal is not.

The Choice experiment includes three treatments. The first is Choice Free, in which the advisor states a preference between seeing the incentive first or assessing quality first at no cost. The remaining two are Incentive First Costly and Quality First Costly, in which stating a preference to see the incentive first or the quality signal first, respectively, incurs an additional $0.05 fee. In all cases, stating a preference increases the probability of being assigned to the preferred order from 25% to 75%.

There are several notable features of their experimental design. First, they conduct the experiment online and subjects play only one round, making it impossible to observe learning. In addition, they incentivize product B whenever it is low-quality and product A whenever product B is high-quality, which “maximized the number of cases in which advisors faced a conflict of interest” (Saccardo and Serra-Garcia, 2023, p. 412). Incentives are therefore perfectly correlated with the quality of product B. Although this correlation is not disclosed, advisors cannot learn it given the one-shot nature of the online experiment. Additionally, since their focus is solely on advisors’ behavior, clients receive no information about the products or how the recommendation is made. Finally, they operationalize the costly choice as forgoing the additional earnings ($0.05) of the alternative choice in See Incentive First and Assess Quality First.

Motivated by their findings, we aim to replicate their experiment with several key modifications: First, the one-round design is extended into a multi-round experiment with feedback conducted in the computer lab. Second, product B’s quality and the incentivized product are independently determined. Third, clients are fully informed about product details and the advisor’s decision process. Lastly, we use a within-subject design: advisors first play the NoChoice experiment with both information orders, and then proceed to the Choice experiment. Half of the sessions will experience See Incentive First after Assess Quality First, while the other half will experience Assess Quality First after See Incentive First.
Intervention Start Date
2026-02-22
Intervention End Date
2027-02-22

Primary Outcomes

Primary Outcomes (end points)
The main variables of interest in this study include:
RecommendIncentive (Binary): Indicator for whether the advisor recommended the incentivized product.
Preference (Binary): Indicator for whether the advisor chooses to see the incentive first.
Selfishness (Standardized): The number of times the advisor chose to recommend the incentivized product in the five moral cost decisions, standardized to mean 0 and standard deviation 1.
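The standardization of the Selfishness count can be sketched as a plain z-score (a minimal illustration; whether the population or sample standard deviation is used in the registered analysis is not stated here, so the population version below is an assumption):

```python
def standardize(xs):
    """Z-score a list of counts to mean 0 and standard deviation 1.

    Uses the population SD (dividing by n); this is an assumption,
    not necessarily the convention of the registered analysis.
    """
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

# Counts of incentivized recommendations across the five moral cost decisions.
print(standardize([0, 1, 2, 3, 4]))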

See the trial's Pre-Analysis Plan for more details.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
In this section, we explain the main modifications to the original experimental design in detail.

First, we extend all three one-round treatments into multi-round treatments in the lab with feedback. The purpose is to examine whether subjects’ behavior is consistent over time and whether learning emerges across rounds. In particular, each of the NoChoice treatments consists of 10 rounds, while the Choice treatment consists of 20 rounds. Subjects are randomly assigned to be either an advisor or a client at the beginning of the experiment, and retain that role throughout. Both advisor and client subjects receive the same experimental instructions (which are read aloud), including the information order advisors experience (or the choice of this order) and the probability distributions of product B’s quality and advisor incentives. After each round, subjects are told the outcome (including the true quality of product B, the product signal, the advisor’s incentive and recommendation, the client’s choice, and the payoff realization) before being randomly re-matched to a different opponent for the next round.

Second, we consolidate the three Choice treatments (Choice Free, Incentive First Costly, and Quality First Costly) into one. Specifically, we show a row of four cards on the screen, two of which are preset to black and red (Figure 2). Advisor subjects choose among two costless options, three black one red (3B1R) or three red one black (3R1B), and two costly options, all black (4B) or all red (4R). Choosing 3B1R (adding two black cards) or 3R1B (adding two red cards) costs nothing. Choosing 4B or 4R (changing all cards to black/red) requires paying a small cost, but the computer tosses a fair coin to determine whether the last card is actually changed (see Appendix B for a screenshot). Hence, paying the cost results in receiving either 3B1R/3R1B or 4B/4R with equal chance.
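Under our reading of this mechanism (one of the four cards is drawn uniformly at random to determine the realized information order; the card colors and the "preferred = 3-card color" mapping are illustrative assumptions), a small simulation recovers the assignment probabilities:

```python
import random

def realized_preferred_share(pay_cost, trials=100_000, seed=1):
    """Simulate the four-card mechanism: one card is drawn at random.

    Costless option: 3 cards of the preferred color, 1 of the other.
    Costly option: a fair coin decides whether the 4th card is also
    changed to the preferred color (4-0) or left as is (3-1).
    Returns the share of trials where the drawn card is the preferred color.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        preferred_cards = 3
        if pay_cost and rng.random() < 0.5:
            preferred_cards = 4
        if rng.randrange(4) < preferred_cards:
            hits += 1
    return hits / trials
```

The costless options give the preferred order with probability about 0.75, and the costly options about 0.875 (= 0.5 × 1 + 0.5 × 0.75), matching the 75% and 87.5% figures stated in the design.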

Third, we redesign the experiment as a within-subject study to observe how NoChoice results relate to the subsequent choice of information order. Specifically, subjects first complete the two NoChoice treatments, each consisting of 10 rounds, with the order randomized across sessions. They then proceed to the Choice treatment, which consists of 20 rounds. We expect 10 rounds in each NoChoice treatment to be sufficient for subjects’ behavior to display some degree of consistency. For the Choice treatment, we use more rounds to obtain sufficient observations of subjects being randomly assigned to their preferred and non-preferred orders.

Following Saccardo and Serra-Garcia (2023), our lab experiment retains the original payoff structure but in cents (so the amounts are scaled by a factor of 100). Hence, a $2 payoff becomes 200 ECU, and a $0.15 commission becomes 15 ECU. There are, however, two exceptions in our design. The first concerns the advisors’ base payment. In Saccardo and Serra-Garcia (2023), advisors received $0.50 per round in the NoChoice treatments but $1 per round in the Choice treatment. To maintain consistency across treatments, we instead set the advisors’ base payment at 50 ECU per round for all treatments. The second exception concerns the cost of changing the information order in the Choice treatment. In the original study, the “cost” in the Incentive First Costly and Quality First Costly treatments is giving up the opportunity to earn an additional $0.05, or one-third of the commission. This equals 5 ECU in our study, where the commission is 15 ECU. Such a “cost” shifts the chance of seeing the preferred information order from 25% to 75%. In our design, we adopt the literal sense of “cost” and explicitly state that choosing a costly option requires paying 1.25 ECU. Note that our costly option shifts the chance of seeing the preferred information order from 75% to 87.5%. Hence, paying 1.25 ECU to shift the probability by 12.5 percentage points is comparable to paying 5 ECU to shift it by 50 percentage points in the original study.
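As a quick arithmetic check of this comparability claim (all numbers from the text):

```python
# ECU cost per percentage point of probability shift.
original = 5.0 / (75 - 25)       # original study: 5 ECU buys a 50-point shift
ours = 1.25 / (87.5 - 75.0)      # our design: 1.25 ECU buys a 12.5-point shift
print(original, ours)            # both 0.1 ECU per percentage point
```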

Finally, our matching and payment procedures also differ from those in Saccardo and Serra-Garcia (2023). In their design, one out of ten advisors is randomly selected, and the selected advisor’s recommendation is then passed to a client who chooses a product. In contrast, in our experiment each advisor is matched with a client in every round, and subjects are rematched across rounds. Moreover, to better understand clients’ decision-making, we employ the strategy method, requiring clients to state their choices contingent on two possible recommendations before receiving the advisor’s actual recommendation. This allows us to observe client responses systematically and identify their underlying decision rules. Lastly, advisors are paid for all rounds, whereas clients are paid for half of the rounds in each treatment, including NoChoice See Incentive First, NoChoice Assess Quality First, and the Choice treatment, to balance the expected payoffs of advisors and clients.
Experimental Design Details
Not available
Randomization Method
In our within-subject design, randomization concerning the order of the two information conditions, See Incentive First and Assess Quality First, is implemented at the session level. Hence, all subjects in the same session experience the same order of conditions. We perform pairwise randomization of sessions. When we schedule a batch of sessions, we pair them based on similar characteristics, such as day of the week, date proximity, and morning/afternoon timing. Then, we randomly assign one session in each pair to each order of conditions. If a paired session is canceled (e.g., due to insufficient turnout), we replace it with a new session scheduled to match the original session’s characteristics as closely as possible.
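The pairwise assignment step can be sketched as follows (a minimal illustration; session labels and the function name are ours, and the matching of sessions into pairs is assumed to have been done beforehand):

```python
import random

def assign_orders(session_pairs, seed=123):
    """Randomly assign condition orders within each matched pair of sessions.

    One session in each pair runs See Incentive First before Assess
    Quality First; the other runs the reverse order.
    """
    rng = random.Random(seed)
    assignment = {}
    for a, b in session_pairs:
        first, second = (a, b) if rng.random() < 0.5 else (b, a)
        assignment[first] = "See Incentive First, then Assess Quality First"
        assignment[second] = "Assess Quality First, then See Incentive First"
    return assignment

# Hypothetical sessions paired on weekday and time of day.
pairs = [("Mon-AM-1", "Mon-AM-2"), ("Tue-PM-1", "Tue-PM-2")]
print(assign_orders(pairs))
```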
Randomization Unit
Session
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
Data will be collected in sessions of 12 to 26 subjects per session (with an even number of subjects). Depending on the show-up rate, we will conduct between 12 sessions (if we recruit 26 subjects per session) and 26 sessions (if we recruit 12 subjects per session), both yielding at least 312 subjects. Half of the sessions will first experience See Incentive First, while the other half will first go through Assess Quality First.
Sample size: planned number of observations
We plan to recruit at least 300 subjects in total.
Sample size (or number of clusters) by treatment arms
Depending on the show-up rate, we will conduct between 12 sessions (if we recruit 26 subjects per session) and 26 sessions (if we recruit 12 subjects per session), both yielding at least 312 subjects. Half of the sessions will first experience See Incentive First, while the other half will first go through Assess Quality First.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
See the equations and tables in the attached pre-analysis plan.

To test Hypothesis 1, we estimate a linear probability model (LPM) with a subject-level random intercept (Equation 1, p. 14). We then use the coefficients from Table 3 (p. 15) to perform a power analysis with Monte Carlo simulation. In the LPM, we use the intraclass correlation coefficient (ICC) to translate a chosen ICC into the random-intercept variance for the simulation. Using Table 3, we fix the total variance at σ²_u + σ²_ε = 0.154 and vary the ICC over [0.10, 0.90]. For each ICC, we set σ²_u = ICC × 0.154 and σ²_ε = (1 − ICC) × 0.154. We test the two-sided hypothesis H0: θ_C1 = 0 at α = 0.05 with R = 10,000 replications, targeting statistical power of at least 0.80. We evaluate sample sizes N = 20, ..., 300 at the pair level and report, in Table 4 (p. 16), the minimum N that achieves 80% power for each ICC under a fixed total variance of 0.154. Because each unit is a pair, the required number of individual participants is 2N (i.e., twice the reported minimum N).

To test Hypothesis 2, we estimate a linear probability model with a subject-level random intercept (Equation 2, p. 16). We use the coefficients from Table 1 (p. 6) to perform a power analysis with Monte Carlo simulation. We fix the total variance at σ²_u + σ²_ε = 0.237 (the residual variance from Table 1) and vary the ICC over [0.10, 0.90]. We test the two-sided hypothesis H0: γ_Self = 0 at α = 0.05 with R = 10,000 replications, targeting statistical power of at least 0.80. We evaluate sample sizes N = 10, ..., 1000 at the pair level and report, in Table 5 (p. 17), the minimum N that achieves 80% power for each working-scale ICC under a fixed total variance of 0.237. Because each unit is a pair, the required number of individual participants is 2N.

To test Hypothesis 3, we estimate a linear probability model with a subject-level random intercept (Equation 3, p. 18). We use the coefficients from column (3) of Table 2 (p. 9) to conduct a power analysis using Monte Carlo simulation. We fix the total variance at σ²_u + σ²_ε = 0.170 (the residual variance from Table 2) and vary the ICC over [0.10, 0.90]. The data-generating process uses all estimated coefficients from Table 2 as parameters. We test the two-sided hypothesis H0: τ_P = 0 at α = 0.05 with R = 10,000 replications, targeting statistical power of at least 0.80. We evaluate sample sizes N = 10, ..., 1000 at the pair level and report, in Table 6 (p. 19), the minimum N that achieves 80% power for each working-scale ICC under a fixed total variance of 0.170. Because each unit is a pair, the required number of individual participants is 2N.

To conclude, Tables 5 and 6 indicate that power remains below 80% for N ≤ 300. This is likely because the effects of interest are modest (0.075 in Table 1 and 0.227 in Table 2), even though Saccardo and Serra-Garcia (2023) have more than 3,000 participants. Since our primary analysis focuses on Hypothesis 1 regarding the NoChoice experiment, and Table 4 shows that power exceeds 80% when N ≥ 127, we set the target sample size to 300 subjects (N = 150 pairs). The corresponding power for Hypotheses 2 and 3 reported in Table 7 (p. 19) is very low (except for ICC = 0.10), so we do not expect to replicate Hypotheses 2 and 3. See "Section 3.3 Treatment Effect" in the trial's pre-analysis plan for more details.
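The simulation loop described above can be sketched as follows. This is a simplified illustration rather than the registered analysis code: instead of fitting the random-intercept LPM, it uses within-subject mean differences (which cancel the subject intercept u_i by construction) with a normal-approximation z-test, and the effect size theta, the round counts, and the replication number are placeholders.

```python
import math
import random

def simulated_power(n_subjects, theta, icc, total_var=0.154,
                    rounds=10, alpha=0.05, reps=1000, seed=0):
    """Fraction of simulated experiments rejecting H0: theta = 0.

    DGP: y_it = theta * T_it + u_i + e_it, with Var(u) = icc * total_var
    and Var(e) = (1 - icc) * total_var. Each subject plays `rounds`
    rounds per condition (within-subject); u_i cancels in the
    within-subject mean difference.
    """
    rng = random.Random(seed)
    sd_u = math.sqrt(icc * total_var)
    sd_e = math.sqrt((1 - icc) * total_var)
    rejections = 0
    for _ in range(reps):
        diffs = []
        for _ in range(n_subjects):
            u = rng.gauss(0, sd_u)
            y0 = [u + rng.gauss(0, sd_e) for _ in range(rounds)]
            y1 = [theta + u + rng.gauss(0, sd_e) for _ in range(rounds)]
            diffs.append(sum(y1) / rounds - sum(y0) / rounds)
        mean = sum(diffs) / n_subjects
        var = sum((d - mean) ** 2 for d in diffs) / (n_subjects - 1)
        z = mean / math.sqrt(var / n_subjects)
        # Two-sided p-value from the standard normal CDF.
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        rejections += p < alpha
    return rejections / reps
```

Sweeping `n_subjects` upward until the returned power first exceeds 0.80, for each ICC on a grid, reproduces the logic behind the minimum-N tables described above.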
IRB

Institutional Review Boards (IRBs)

IRB Name
Research Ethics Committee National Taiwan University
IRB Approval Date
2026-01-20
IRB Approval Number
202501HS017
Analysis Plan

There is information in this trial unavailable to the public.