Cooperation with the future in an OLG framework

Last registered on August 18, 2022


Trial Information

General Information

Cooperation with the future in an OLG framework
Initial registration date
August 14, 2022


First published
August 18, 2022, 3:18 PM EDT



Primary Investigator

University of Birmingham

Other Primary Investigator(s)


Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
People living today often have little to no incentive to contribute to multi-generational public goods that (primarily) benefit later generations. Prior experimental tests of this proposition have used different versions of the intergenerational goods game to show that many members of today’s generation indeed follow the Nash prediction of zero contributions. Yet, these existing frameworks and experiments abstract from some important features of intergenerational goods (IGG) problems. This paper adds a critical missing feature: multiple generations can overlap. With overlapping generations, there is a richer strategy space that may make cooperation with future generations sustainable, even among purely self-interested agents. In this project, we aim to study theoretically and experimentally the mechanisms behind why people do (and do not) contribute to intergenerational public goods. Across several experimental conditions, we consider and test mechanisms such as the role of social information, reciprocity, fear of retaliation, and efficiency concerns that can affect the decision to give to the past, present or future – or not give at all.
External Link(s)

Registration Citation

Freitas Groff, Zach et al. 2022. "Cooperation with the future in an OLG framework." AEA RCT Registry. August 18.
Experimental Details


We conduct an online experiment using a modified form of the IGG. In this experiment, participants are randomized into sequences of decision-makers of indefinite length (with a continuation probability of 80% after every round). Depending on the treatment, each participant can make allocation choices that affect the experimental earnings of themselves, of other members of their sequence, and/or of members of a different sequence. Depending on the treatment, participants may also have knowledge about the prior choices of participants in their own sequence or in a different sequence.

We test two main categories of interventions and how they affect participants’ contributions to future generations. All interventions are fully discussed in the hidden section.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
1) Allocations to Account C (“FIG” – i.e. the amount given to future generations)
2) Whether participants allocate any positive amount (>0) to Account C (FIG)
3) For BIG-FIG arms, the percentage of pairs of consecutive participants in a sequence with slack budgets (i.e. how many pairs choose inefficient strategies). We define the strategies within a pair of consecutive participants as inefficient if money from their respective private accounts (Account A in the experiment) could be redistributed to Accounts C and B of the respective players without making either player in the pair financially worse off.
4) For BIG-FIG arms, comparing the average and median of each of the following across all pairs of consecutive players:
The maximum amount the first player could be made better off by a Pareto improvement assuming risk neutrality.
The maximum amount the second player could be made better off by a Pareto improvement assuming risk neutrality.
5) The average payoff per player in a sequence relative to (i) the maximally attainable average amount and (ii) the maximally attainable amount sustainable as a (Nash) equilibrium of the game
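A minimal sketch of how outcomes 3) and 4) could be computed for one pair of consecutive players (our own illustration, not the registered analysis code; it assumes risk neutrality, integer point transfers, the 3x/5x multipliers described in the design section, and that both transfers within a realized pair are delivered with certainty):

```python
def pair_is_slack(a1, a2):
    """True if a pair of consecutive players has a slack budget.

    With the x3 (backward, Account B) and x5 (forward, Account C)
    multipliers, shifting one point each out of the two private
    accounts makes both players strictly better off, so any pair in
    which both players kept points in Account A is inefficient.
    """
    return a1 > 0 and a2 > 0


def max_pareto_gain_player1(a1, a2):
    """Largest payoff gain for the first player from a Pareto improvement.

    Player 1 moves x points from A to C; player 2 moves y points from
    A to B. Player 1's payoff changes by -x + 3y, player 2's by
    -y + 5x. We grid-search integer transfers that leave player 2 no
    worse off and maximise player 1's gain.
    """
    best = 0
    for x in range(a1 + 1):
        for y in range(a2 + 1):
            if -y + 5 * x >= 0:               # player 2 not worse off
                best = max(best, -x + 3 * y)  # player 1's gain
    return best
```

The symmetric function for the second player swaps the roles of the two constraints.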

We plan to test the effects of choice sets, information received, and which generation receives the allocations on these five primary outcomes by comparing across treatments.

Primary Outcomes (explanation)
The rationale for these end points is that they allow us to understand how different information conditions and allocation possibilities affect giving to goods that benefit future generations, i.e. the FIG.

Secondary Outcomes

Secondary Outcomes (end points)
To understand the different mechanisms underlying the treatment effects, we will also test (in those treatments where sufficient information is available and the respective accounts are part of the choice set):

1) FIG allocations, conditional on the FIG allocations in the past generation(s)
2) FIG allocations, conditional on BIG allocations in the past generation
3) Unconditional levels of BIG contributions and whether any positive amount (>0) is allocated to the BIG account
4) BIG allocations, conditional on FIG allocations in the past generation
5) BIG allocations, conditional on BIG allocations in the past generation
6) Versions of primary outcome #4 that account for different levels of risk aversion. We will use the EG task we employ in the questionnaire to (i) classify participants into risk-neutral, risk averse and risk seeking and (ii) to understand the strength of risk averse preferences
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
All aspects of the design and procedures are discussed in the hidden section of the pre-registration
Experimental Design Details
Basic structure and decision task
We will be conducting this experiment online on the Prolific platform. Our eligibility criteria are: UK participants only, with an approval rate above 95% on previous submissions and at least 100 prior submissions.

Participants will be part of the modified version of the IGG. Each participant will be randomly allocated to a sequence of participants, some participating before them and some (potentially) after them. There is an 80% chance that another participant will play after them and a 20% chance that the sequence stops after them. Each participant’s position in the sequence is random and unknown to them. We will determine the length of each sequence before the experiment using a random number generator.
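With an 80% continuation probability, the pre-determined lengths follow a geometric distribution with mean 1/(1-0.8) = 5. A minimal sketch of how such lengths could be drawn (illustrative only; this is not our actual generator, and the seed is arbitrary):

```python
import random

def draw_sequence_length(p_continue=0.8, rng=None):
    """Draw one sequence length: at least one participant, then the
    sequence continues with probability p_continue after each one
    (geometric distribution with mean 1 / (1 - p_continue) = 5)."""
    rng = rng or random.Random()
    length = 1
    while rng.random() < p_continue:
        length += 1
    return length

# pre-determine 50 lengths for one treatment, as in the registration
rng = random.Random(2022)  # fixed seed for reproducibility; value is illustrative
lengths = [draw_sequence_length(rng=rng) for _ in range(50)]
```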

On Prolific, we will recruit participants of the next generation for all active sequences in all treatments once all data are collected for the current generation, which will be handled automatically by the software. We will stop recruiting new participants to a sequence when it becomes inactive, i.e. when its maximum length has been reached.

Participants will receive 20 points and will choose how to allocate these points across three different accounts, A, B and C. All 20 points must be allocated before continuing. Any points a participant ends up receiving will be paid as a bonus payment at a rate of 1 point = £0.04. As we describe above, the accounts differ in who benefits from the allocation, depending on the treatment. In our main condition (BIG-FIG):
Account A: Points allocated to this account will be given to the participant. Points in this account will not be multiplied.
Account B: Points allocated to this account will be multiplied by 3 and then given to the participant directly before this participant in the sequence.
Account C: Points allocated to this account will be multiplied by 5 and given to the participant directly after this participant in the sequence (which occurs with an 80% chance).

Note: The multipliers in each account are the same across all treatments. In CUR, amounts contributed to Account B (respectively C) will be given to another participant taking part in the study. In FIG+2, the amount allocated to Account C will be given to a participant taking part two “generations” after the decision-maker.
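Under these BIG-FIG rules, a participant's realized bonus can be sketched as follows (function and variable names are ours; the successor's Account B transfer is zero if the sequence stopped after the participant):

```python
POINT_VALUE_GBP = 0.04  # 1 point = £0.04

def bonus_gbp(own_a, successor_b=0, predecessor_c=0):
    """Realized earnings of one participant in the BIG-FIG condition, in pounds.

    own_a         -- points the participant kept in Account A (x1)
    successor_b   -- the next participant's Account B allocation (x3, paid backwards)
    predecessor_c -- the previous participant's Account C allocation (x5, paid forwards)
    """
    points = own_a + 3 * successor_b + 5 * predecessor_c
    return points * POINT_VALUE_GBP
```

For example, a participant who keeps 10 points, whose successor gives 5 points backwards, and whose predecessor gave 4 points forwards earns 10 + 15 + 20 = 45 points, i.e. £1.80.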

Depending on the treatment (see above), participants may also receive information about what other participants in the study did prior to submitting their allocations.

First participant’s information

In our experiment, participants do not know their position in the sequence. We inform all participants that they could be the first participant in a sequence. In this case, the information they see on the information screen will not come from prior participants in their own sequence. Rather, we will show information on participants from additional pre-experiments (seed sequences). These include the same decision task but have a fixed length. In total, we will run 50 seed sequences using the BIG-FIG condition with a fixed length of 7. We use the decisions of players in positions 4, 5, and 6 to generate the information shown in the main sequences. For each main sequence, there is a corresponding seed sequence.

In treatments where this is relevant, we also inform participants that if they are in the first position of a sequence, their BIG contribution will still go to a real participant specifically invited for this purpose, and they will receive a FIG bonus originating from this participant. Crucially, we inform participants that they will not be able to infer their position in the sequence from looking at the information/decision screens.

Attention checks and comprehension questions are included on every instruction screen. Participants are only able to proceed to the following screen once they answer the comprehension questions correctly.

After this, the participants make their allocation decisions. (To see how each treatment differs in the allocation space and information space, see the table in the interventions section.)

Before the end of the experiment, participants will be asked several questions about their decisions and their beliefs. In five of these questions, they will be incentivised to state accurate beliefs, as we will award them 1 bonus point for each correct guess. These include:

How they think the participant before them allocated their points
How they think the participant after them will allocate their points
How they think the participant two steps before them allocated their points
How they think the participant three steps before them allocated their points
The Krupka & Weber social norm coordination task for different allocations.

Further questions pertain to participants' demographics (age, gender, race). Furthermore, we use incentivised choice paradigms to elicit basic economic preferences:

Risk preferences: A version of the Eckel and Grossman (2007) lottery selection task.

Pro-social preferences: We will ask participants to pick a charity from a dropdown list. Then, using the elicitation task in Exley (2015), participants will make decisions between receiving 10 points themselves and the charity receiving some points (where the charity's points increase by 2 with each progressive row). The row at which a participant switches to giving to their preferred charity is an indication of their altruistic preferences, with higher switching points indicating more self-regarding preferences.

Other-regarding preferences: Adapted survey questions from Falk (2016) for reciprocity, punishment, patience, and altruism.

Basic questions about task comprehension and motivation

An attention check

We will use data collected via the questionnaire as control variables in regressions and for exploratory analysis, e.g. to understand heterogeneous treatment effects and the role of beliefs for our results.

Other Notes (Follow-up Treatments):

If we find that contributions to account C (on either the extensive or intensive margins) are significantly different in BIG-FIG full info compared to BIG-FIG+2, then we will consider conducting a follow-up experiment (conditional on funding).

We will run a new treatment (BIG-FIG full info 80% FIG). This treatment is the same as BIG-FIG full info, except there is only an 80% chance that the FIG transfer (Account C) is actually delivered. This means there is only a 64% chance that the FIG transfer occurs (80% chance the next generation exists × 80% delivery chance). This treatment will help us disentangle why contributions to Account C are lower in BIG-FIG+2 compared to BIG-FIG full info: whether it is the reciprocity link between generations being broken, or the lower (64%) chance that the FIG transfer occurs.

Randomization Method
Subjects are randomized to treatments via a computer program embedded in the experimental software. Within each treatment, subjects will also be randomly allocated to a specific sequence within the treatment.

In each treatment, we have 50 sequences. We will pre-determine the length of each sequence (the number of participants before the sequence ends; there is a 20% chance the sequence stops after each participant) by randomly generating 50 sequence lengths with a random number generator (see below).
Randomization Unit
Randomisation will occur at the individual level: once entering the study, participants will be randomized to a treatment and sequence within treatment.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
We plan to have 50 sequences in each treatment.

With eight treatments, this will result in 50 × 8 = 400 sequences.
Sample size: planned number of observations
Within each sequence, there is a 20% chance that the sequence stops after each participant. Thus, in expectation, the average sequence will consist of 5 participants. In the actual implementation of our random number generator, we arrived at a realized average sequence length of 4.7, with the following distribution (sequence length: number of sequences):

1: 9, 2: 8, 3: 4, 4: 7, 5: 6, 6: 7, 7: 2, 9: 2, 12: 2, 13: 1, 14: 1, 15: 1

This means that we will recruit 235 participants to each of our eight treatment conditions, for a total of 1,880 individuals. Additionally, we recruit 350 participants to the seed sequences, plus further participants who take the role of recipients for starting players in the respective conditions (but do not make active decisions in the experiment).
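The realized counts can be checked directly from the distribution of sequence lengths (a quick sketch; variable names are ours):

```python
# realized sequence-length distribution per treatment: {length: number of sequences}
lengths = {1: 9, 2: 8, 3: 4, 4: 7, 5: 6, 6: 7, 7: 2, 9: 2, 12: 2, 13: 1, 14: 1, 15: 1}

n_sequences = sum(lengths.values())                       # 50 sequences
n_participants = sum(l * k for l, k in lengths.items())   # 235 participants
avg_length = n_participants / n_sequences                 # 4.7
```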
Sample size (or number of clusters) by treatment arms
In each treatment, there will be 50 sequences with 235 participants in total, according to the randomly predetermined list of sequence lengths shown above.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Comparisons will be made both at the individual level (N=235 per treatment, total N=1,880) and at the sequence level (N=50 per treatment, total N=400). When we make comparisons on a generation-by-generation level (e.g. how behaviour changed in the first x generations of the game), we will use data for all sequences up to a length of 6.

In the paper we will report non-parametric tests for treatment-by-treatment comparisons as well as regression-based tests for all outcome variables specified above. Regressions will be conducted both at the individual level and at the sequence-average level. When using individual-level data in regressions, we will cluster standard errors by sequence to account for the non-independence of information across different generations within a sequence. We will also include variables to capture sequence- and time-specific fixed effects.

Below we present power calculations for all single-treatment comparisons using the appropriate non-parametric tests. All calculations (two-tailed) have been made using Stata's power command with α=0.05 and power=0.8.

For comparisons of amounts contributed (Mann-Whitney rank-sum tests):
Minimal detectable treatment effect size (single treatment comparison, individual level): d = 0.26
Minimal detectable treatment effect size (single treatment comparison, individual level, allowing for clustering within 50 sequences): d = 0.44
Minimal detectable treatment effect size (single treatment comparison, sequence level): d = 0.57

For comparisons of proportions (Chi2 tests):
Minimal detectable treatment effect size (single treatment comparison, individual level): d = 0.13
Minimal detectable treatment effect size (single treatment comparison, individual level, allowing for clustering within 50 sequences): d = 0.22
Minimal detectable treatment effect size (single treatment comparison, sequence level): d = 0.27
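As a rough cross-check of the unclustered figures, the standard two-sample normal-approximation formula for the minimum detectable effect size reproduces the individual-level and sequence-level numbers (a sketch using only the Python standard library, not the registered Stata code; Stata's rank-sum calculation adds a small asymptotic-relative-efficiency correction, so its figures are slightly larger):

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(n_per_group, alpha=0.05, power=0.8):
    """Approximate minimum detectable effect size (Cohen's d) for a
    two-sided, two-sample comparison with equal group sizes:
    d = (z_{1-alpha/2} + z_{power}) * sqrt(2 / n)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n_per_group)

# individual level (n = 235 per arm): roughly d = 0.26
# sequence level (n = 50 per arm): roughly d = 0.56, close to the rank-sum figure of 0.57
```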

Institutional Review Boards (IRBs)

IRB Name
Stanford IRB
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication


Is public data available?

Program Files

Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials