Time-Based and Milestone-Based Feedback in Public Goods Provision (In-Person Laboratory Experiments)

Last registered on May 03, 2022


Trial Information

General Information

Initial registration date
May 01, 2022

First published
May 03, 2022, 9:44 AM EDT

Last updated
May 03, 2022, 9:59 AM EDT



Primary Investigator

University of East Anglia

Other Primary Investigator(s)

PI Affiliation
University of Melbourne
PI Affiliation
University of Melbourne

Additional Trial Information

Ongoing
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
The goal of this research is to use insights from behavioral economics to mitigate the free-riding problem in public goods games. We examine the role of real-time feedback versus intermediate goals (milestones) in shaping dynamic contributions to a public good. Individuals can contribute to the same public good from a fixed endowment over multiple time periods, and they receive different types of feedback at the end of each period depending on treatment assignment. This research will contribute to the well-established literature on voluntary contribution mechanisms.
External Link(s)

Registration Citation

Erkal, Nisvan, Boon Han Koh and Nguyen Lam. 2022. "Time-Based and Milestone-Based Feedback in Public Goods Provision (In-Person Laboratory Experiments)." AEA RCT Registry. May 03. https://doi.org/10.1257/rct.9334
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
1) Total group contributions at the end of each sequence
2) Total group contributions at the end of each round within the sequence
3) Proportion of groups with total contributions meeting specific thresholds/milestones at the end of each sequence
Primary Outcomes (explanation)
1) Individual contributions by group members in each round within the sequence

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
[This study supersedes AEARCTR-0008445, which was previously designed as an online experiment. The previous study was withdrawn after the easing of COVID-19 restrictions in Australia, which gave us the opportunity to implement the experiment face-to-face in the laboratory. Session and group sizes were changed to accommodate the laboratory's capacity constraints, and the parameters of the public goods (PG) game were adjusted accordingly.]

Participants take part in 10 sequences of a public goods game with dynamic contributions. At the beginning of each sequence, participants are divided into groups of three (with random rematching across sequences), and each group member is given an endowment of 30 tokens. Each sequence consists of 6 rounds; in each round, group members decide how much of their remaining endowment to contribute to the Group Account. Any unassigned tokens at the end of the 6th round remain in the members' Private Accounts. Group members receive their payoffs from the Group and Private Accounts only at the end of the 6th round.

Payoffs follow the standard VCM design: for every token remaining in a group member's Private Account, that member receives 1 token, and for every token in the Group Account, each group member receives 0.4 tokens.
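The payoff rule above can be sketched as follows. This is an illustrative sketch of the standard linear VCM formula with a marginal per-capita return of 0.4, not code from the study's software; the function and variable names are our own.

```python
# Illustrative sketch of the payoff rule described above (linear VCM).
# Names and structure are hypothetical, not taken from the experiment's code.

ENDOWMENT = 30   # tokens per group member per sequence
MPCR = 0.4       # marginal per-capita return from the Group Account

def payoffs(contributions):
    """Given each member's total contribution over the 6 rounds,
    return each member's end-of-sequence payoff in tokens."""
    group_account = sum(contributions)
    return [(ENDOWMENT - c) + MPCR * group_account for c in contributions]

# Example: full free-riding vs. full contribution in a group of three.
print(payoffs([0, 0, 0]))     # everyone simply keeps the 30-token endowment
print(payoffs([30, 30, 30]))  # each member earns 0.4 * 90 = 36 tokens
```

Note the free-riding tension this creates: full contribution raises every member's payoff from 30 to 36 tokens, but since the MPCR (0.4) is below 1, each individual token contributed returns only 0.4 tokens to the contributor.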

The main treatments vary the information that group members receive at the end of each round of the sequence, with feedback being either time-based or milestone-based. Where feedback is milestone-based, we also vary the intervals of the milestones.
Experimental Design Details
Not available
Randomization Method
Randomization is done at the session level. Treatment assignment is randomly pre-determined before the session is run, and participants select into sessions randomly without knowing the pre-determined treatment assignment.
Randomization Unit
The unit of randomization is the experimental session.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
18 to 24 (depending on show-up rate for each session)
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
120 (between 6 and 8 sessions per treatment, each with 15 participants)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Business and Economics & Melbourne Business School Human Ethics Advisory Group (University of Melbourne)
IRB Approval Date
IRB Approval Number
IRB Name
School of Economics Research Ethics Subcommittee (University of East Anglia)
IRB Approval Date
IRB Approval Number