Infinitely Repeated Games with Changing Discount Rates: An Experimental Study.

Last registered on December 17, 2020

Pre-Trial

Trial Information

General Information

Title
Infinitely Repeated Games with Changing Discount Rates: An Experimental Study.
RCT ID
AEARCTR-0005481
Initial registration date
February 19, 2020


First published
February 19, 2020, 3:01 PM EST


Last updated
December 17, 2020, 5:16 PM EST


Locations

Primary Investigator

Affiliation
Florida State University

Other Primary Investigator(s)

PI Affiliation
Florida State University

Additional Trial Information

Status
Ongoing
Start date
2020-02-20
End date
2021-06-30
Secondary IDs
Abstract
Most formal studies of infinitely repeated games assume a fixed discount rate, but it is clear that discount rates vary over time. This changes the dynamic incentives faced by agents and should therefore affect cooperation rates. We are particularly interested in two questions: with changing discount rates, will subjects account for the change in dynamic incentives? Will behavior change beyond what can be explained by changing dynamic incentives? On the latter question, existing research on coordination games leads us to conjecture that cooperation rates will converge to high levels regardless of the current discount rate.
External Link(s)

Registration Citation

Citation
Cooper, David and Matt Gentry. 2020. "Infinitely Repeated Games with Changing Discount Rates: An Experimental Study." AEA RCT Registry. December 17. https://doi.org/10.1257/rct.5481-1.4
Experimental Details

Interventions

Intervention(s)
Most formal studies of infinitely repeated games assume a fixed discount rate, but it is clear that discount rates vary over time. This changes the dynamic incentives faced by agents and should therefore affect cooperation rates. We are particularly interested in two questions: with changing discount rates, will subjects account for the change in dynamic incentives? Will behavior change beyond what can be explained by changing dynamic incentives? On the latter question, existing research on coordination games leads us to conjecture that cooperation rates will converge to high levels regardless of the current discount rate.
Intervention Start Date
2020-02-20
Intervention End Date
2021-06-30

Primary Outcomes

Primary Outcomes (end points)
1) We are interested in what cooperation rates are reached by treatment.
2) We plan to fit a structural model related to the Strategy Frequency Estimation Method (SFEM) and are interested in the distribution of strategies identified in the dataset as a function of the treatments.
Primary Outcomes (explanation)
Cooperation rates are measured directly from the choices of experimental subjects. We will use both individual cooperation rates and mutual cooperation rates. The former are more useful for considering individual strategies while the latter are better suited for studying transitions within a supergame. The two measures are highly correlated, so it is largely a matter of convenience which is used.

We also plan to fit a structural model to the data to estimate the distribution of strategies. We will fit SFEM using software developed by Dal Bo and Frechette (forthcoming), and also plan to modify SFEM to allow for sophisticated Bayesian learners.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We plan to conduct laboratory experiments systematically studying the effects of varying discount rates in supergames.
Experimental Design Details
We propose a laboratory experimental design that allows for varying discount rates. We do this in a deliberately simple manner: (1) subjects never face more than two possible values of δ, (2) they always know the current value of δ, and (3) they know the stochastic process governing changes to δ. These features are unlikely to hold in most relevant field settings, but we view this design as a necessary first step in exploring the effects of a varying δ.

All subjects play infinitely repeated prisoner's dilemma (IRPD) games with perfect monitoring using the payoff matrix shown below. Treatments vary the value(s) of δ and whether δ is fixed or can change at the beginning of a new stage game. We use a between-subjects design: each subject participates in a single treatment, with random assignment of subjects to treatments. Our design contains the following treatments.

PD Payoff Matrix (row player's payoffs)

        C    D
   C  156    5
   D  259  110

Baseline (Low and High): The value of δ is fixed at 0.75 throughout a session in Baseline Low and at 0.90 in Baseline High. This implies that the expected length of a supergame is only four stage games in Baseline Low versus ten in Baseline High. Combining δ with the payoff matrix, the size of the basin of attraction of always defect (BAD) is 0.75 in Baseline Low versus 0.25 in Baseline High. Based on past results, we expect cooperation rates to be low in Baseline Low and substantially higher in Baseline High.

Switching: At the beginning of each supergame, an initial continuation probability is randomly selected, either δ = 0.75 or δ = 0.90, with each starting value equally likely. At the end of each stage game in which δ = 0.75, there is a 20% chance that δ switches to 0.90 for the next stage game. Likewise, if δ = 0.90, there is a 20% chance it switches to 0.75 for the next stage game. Any change in the value of δ is common knowledge.

Control (Low and High): The dynamic incentives in Baseline Low and the switching treatments are not the same even in stage games where δ = 0.75 in both cases. In the Baseline Low treatment, the subjects know that this low continuation probability will hold in all future stage games, but in the switching treatments they know that δ might switch. This implies that the expected length of the supergame is less in Baseline Low than in the switching treatments when δ = 0.75. As a result, we do not expect cooperation rates to be the same between Baseline Low and Switching Low even if random switching has no effect beyond altering the dynamic incentives. Similar logic applies to the comparison of Baseline High with Switching High.
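The expected-length comparison can be checked directly. Assuming the continuation draw uses the current δ and a switch, if one occurs, applies from the next stage game onward (a timing assumption; the registration is not fully explicit), expected supergame lengths solve a two-state linear system:

```python
# Expected supergame length (in stage games) under the Switching process:
#   E_lo = 1 + d_lo * ((1 - q) * E_lo + q * E_hi)
#   E_hi = 1 + d_hi * ((1 - q) * E_hi + q * E_lo)
# where q is the per-stage switching probability.

def expected_lengths(d_lo=0.75, d_hi=0.90, q=0.20):
    """Solve the 2x2 linear system above by Cramer's rule."""
    a = 1 - d_lo * (1 - q)   # coefficient on E_lo in the first equation
    b = -d_lo * q            # coefficient on E_hi in the first equation
    c = -d_hi * q            # coefficient on E_lo in the second equation
    d = 1 - d_hi * (1 - q)   # coefficient on E_hi in the second equation
    det = a * d - b * c
    return (d - b) / det, (a - c) / det   # (E_lo, E_hi); both RHS are 1

e_lo, e_hi = expected_lengths()
print(round(e_lo, 2), round(e_hi, 2))
# E_lo is about 5.06 versus 1/(1 - 0.75) = 4 in Baseline Low, and E_hi is
# about 6.82 versus 1/(1 - 0.90) = 10 in Baseline High: supergames starting
# at the low delta are longer in expectation under Switching, as argued above.
```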

We therefore added two additional control treatments, Control Low and Control High. We calculate the value of BAD for the initial stage game of Switching with δ = 0.75, accounting for the possibility of switching, and then calibrate the fixed value of δ (rounded to the nearest hundredth) that gives the same value of BAD. The resulting value is δ = 0.80, yielding BAD = 0.56, as in the Switching treatment for stage games where δ = 0.75. Performing the same exercise for Switching with δ = 0.90 yields a fixed value of δ = 0.85 and BAD = 0.40. The value of δ is therefore fixed at 0.80 throughout a session in Control Low and at 0.85 in Control High.
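The fixed-δ BAD values quoted above can be reproduced. The registration does not spell out the formula, so the sketch below assumes BAD is the standard size of the basin of attraction of always defect against grim trigger, as in the Dal Bo and Frechette literature:

```python
# Size of the basin of attraction of always defect (BAD), assuming the
# standard definition: the probability p of the opponent playing grim
# trigger at which a player is indifferent between grim trigger and
# always defect. Payoffs: r = mutual cooperation, s = sucker,
# t = temptation, p_dd = mutual defection.

def size_bad(r, s, t, p_dd, delta):
    v_gg = r / (1 - delta)                  # grim vs grim: cooperate forever
    v_ga = s + delta * p_dd / (1 - delta)   # grim vs AD: suckered once, then DD
    v_ag = t + delta * p_dd / (1 - delta)   # AD vs grim: tempted once, then DD
    v_aa = p_dd / (1 - delta)               # AD vs AD: defect forever
    # Indifference: p*v_gg + (1-p)*v_ga = p*v_ag + (1-p)*v_aa, solved for p.
    return (v_aa - v_ga) / ((v_gg - v_ga) - (v_ag - v_aa))

for label, delta in [("Baseline Low", 0.75), ("Baseline High", 0.90),
                     ("Control Low", 0.80), ("Control High", 0.85)]:
    print(label, round(size_bad(156, 5, 259, 110, delta), 2))
# Matches the registered values: 0.75, 0.25, 0.56, and (approximately) 0.40.
```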

Subjects in all treatments will play ten supergames using strangers matching, with payment for one randomly selected supergame. Payoffs are calibrated to achieve an average payoff of $12-15 per hour. Random seeds for the probability of continuation and the probability of switching δ are matched across treatments. This does not generate identical supergame lengths, since the continuation probabilities vary, but it does imply a similar pattern of when relatively long and short supergames occur within a session. We plan five sessions of each treatment, with each session split into two matching groups, and will recruit at least 20 subjects per session. This gives a minimum of 100 subjects and ten matching groups per treatment. We plan to double the sample for the Switching treatment: it is of greater interest than the others, and the extra data will ease fitting of the structural model.

Edit: 3/3/20

After an initial pilot session, we have decided to change the payoff table to the following:

        C    D
   C  140   10
   D  240  100

We have also changed the value of the continuation probabilities to 0.8 and 0.95 for the low and high values respectively. These changes were made for two reasons.

1) The pilot indicated lower than expected cooperation rates. This would make detection of treatment effects quite difficult. The changes lower the value of BAD, making cooperation more likely.

2) We rescaled the payoffs so we could pay in pennies rather than having to convert ECUs to dollars. This was done to reduce subject confusion.
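Under a basin-of-attraction reading of BAD (an assumption; the registration does not give the formula), the revised payoffs and continuation probabilities do lower BAD relative to the original design:

```python
# Check that the revised parameters lower BAD (the size of the basin of
# attraction of always defect, assuming the standard definition: the
# opponent-grim probability at which grim trigger and always defect give
# equal expected payoffs).

def size_bad(r, s, t, p_dd, delta):
    v_gg = r / (1 - delta)                  # grim vs grim
    v_ga = s + delta * p_dd / (1 - delta)   # grim vs always defect
    v_ag = t + delta * p_dd / (1 - delta)   # always defect vs grim
    v_aa = p_dd / (1 - delta)               # always defect vs always defect
    return (v_aa - v_ga) / ((v_gg - v_ga) - (v_ag - v_aa))

old_lo = size_bad(156, 5, 259, 110, 0.75)   # original low state: 0.75
old_hi = size_bad(156, 5, 259, 110, 0.90)   # original high state: 0.25
new_lo = size_bad(140, 10, 240, 100, 0.80)  # revised low state: 0.60
new_hi = size_bad(140, 10, 240, 100, 0.95)  # revised high state: 0.12
print(new_lo < old_lo and new_hi < old_hi)  # both values drop, as intended
```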

Edit 5/19/20

We've had a long delay due to coronavirus, but are ready to start running sessions in a serious way. After finishing piloting, we have made a final tweak to the payoff table to make cooperation a bit easier.

        C    D
   C  145   30
   D  255  100

Due to coronavirus, we will be running sessions online. Budget and subject pool permitting, we hope to complete the final few sessions in the lab so we can check whether being online is affecting behavior. At this point in time, it is very unclear whether we will be able to run sessions in the lab or not.
Randomization Method
We use the ORSEE recruiting system. This randomly picks a subset of the FSU subject pool and invites them to a session. Given that the invitation contains no detailed information about the treatment or experimental design, this generates random assignment to treatments.
Randomization Unit
experimental session (matching group)
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
We plan on running a total of 30 sessions, with each session split into 2 matching groups.

[12/17/20] Due to the pandemic, we ran this online using smaller sessions. We have currently run 30 sessions, but plan for an additional 18. With one exception (due to high turnout)
Sample size: planned number of observations
We plan on a total of 600 experimental subjects.
Sample size (or number of clusters) by treatment arms
All treatments are planned to contain 5 sessions (10 matching groups) except for Switching, where these numbers will be doubled.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
A formal minimum detectable effect size calculation is not particularly relevant here. For differences in cooperation rates, we will use Wilcoxon rank-sum tests with the matching group as the unit of observation; the structural model uses standard maximum likelihood techniques.
IRB

Institutional Review Boards (IRBs)

IRB Name
Florida State University Institutional Review Board
IRB Approval Date
2019-10-08
IRB Approval Number
STUDY00000514

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials