
Observability and Peer Effects: Theory and Evidence from a Field Experiment
Last registered on July 27, 2020

Pre-Trial

Trial Information
General Information
Title
Observability and Peer Effects: Theory and Evidence from a Field Experiment
RCT ID
AEARCTR-0006237
Initial registration date
July 27, 2020
Last updated
July 27, 2020 10:25 AM EDT
Location(s)
Region
Primary Investigator
Affiliation
Lingnan University
Other Primary Investigator(s)
PI Affiliation
Chinese University of Hong Kong
PI Affiliation
Lingnan University
Additional Trial Information
Status
Completed
Start date
2014-05-05
End date
2014-08-08
Secondary IDs
nil
Abstract
This paper incorporates differential observability into the study of peer effects with multiple dimensions of outputs. It posits that an increase in the observability of quality prompts workers to produce outputs of higher quality while reducing quantity. Based on a field experiment in China, our empirical investigation provides support for this theoretical prediction. Moreover, our additional experiment on the switch of workers from a team-based incentive pay to an individual-based scheme shows that while workers responded to the change of incentive schemes, their behaviors can be partly explained by the lingering peer effects of team production.
External Link(s)
Registration Citation
Citation
Fan, Simon, Xiangdong Wei and Junsen Zhang. 2020. "Observability and Peer Effects: Theory and Evidence from a Field Experiment." AEA RCT Registry. July 27. https://doi.org/10.1257/rct.6237-1.0.
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details
Interventions
Intervention(s)
Our empirical analysis was based on data from a company with new recruits, enabling us to identify clear-cut peer effects under differential observability. To achieve the objectives of this study in the most cost-effective manner, a new internship scheme was set up, and a company was commissioned to run the scheme in Shenzhen, Guangdong Province, China. Internship experience was a strict graduation requirement at many Chinese universities, although the evaluation of one's performance on the job was immaterial. Even a very small company therefore found it easy to hire university students as summer interns, and the interns usually had no connection with the company after their internship ended. Observing interns' behavior at a small company is thus ideal for identifying peer effects, since other effects (e.g., career concerns) are weak or entirely absent.
The internship involved a data input job of real survey questionnaires collected for another project. The company we commissioned is a small IT company in Shenzhen whose main business is software design and data processing. As a part of our agreement, the company allowed us to send two research assistants (RAs) to work as “managers” of the company and run the internship program during our study period.
Our experiments lasted for one month. Our investigation required workers who would take the job seriously and were happy with short-term jobs. The internship opportunity provided by our project was welcomed by the Student Service Centre of Shenzhen University, which helped us advertise it on its intranet. All the interns were officially hired by the commissioned company, which also promised to issue internship certificates to all interns after they had completed the job.
Over 200 applications were received for the internship job, from which we randomly selected 40 candidates. Through further random draws, the participants were divided into two groups of 20 workers each. The first group worked in the morning from 8:30 to 12:00, while the second group worked in the afternoon from 1:30 to 5:00. The participants worked from Monday to Friday for 3.5 hours per day. For the first 10 days (two weeks), team-based performance pay was used: worker pay was determined by a formula that increases with total quantity and decreases with the average error rate (a measure of quality). Every day, the two managers (our RAs) conducted a 5-minute review of the previous day's work. For the first three days, each individual's quantity of output was made public along with the group's mean quality of output; information on individual quality was known only to the worker concerned. For the next seven days, the same procedure continued for the morning team (control group), but both individual quantity and quality data were made public for the afternoon team (treatment group). Accordingly, the morning team had observability only over quantity of output, whereas the afternoon team had full observability over both the quantity and quality of output for each of its members.
Intervention Start Date
2014-05-05
Intervention End Date
2014-08-08
Primary Outcomes
Primary Outcomes (end points)
First, during the first three days, when both groups were treated identically, no significant difference was observed between them in average quantity or average quality produced. This finding suggests that the average productivities of the workers in the two groups were largely the same and that the randomization into two groups was successful; it also suggests that productivity in the morning was similar to that in the afternoon. Second, the groups' incentives began to differ on day 4, when the morning group continued to learn only their individual quantity of output, whereas the afternoon group learned both their quantity and quality of output. Over the remaining seven days, the afternoon group produced a lower quantity of output at higher quality. This finding supports our theoretical prediction that peer pressure is associated with observability and that workers respond strategically to peer pressure.
Primary Outcomes (explanation)
This suggests that the scope for using peer effects to mitigate the agency problem in firms is limited by the observability of workers' outputs.
Secondary Outcomes
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
Experimental Design
Experimental Design
We used a field experiment for our empirical test. A field experiment is essential for the purpose of this paper because employee- or firm-level survey data are often contaminated by various endogeneity problems due to self-selection and sorting. Moreover, acquiring information on peer pressure and quality of output is usually difficult in such datasets. Also, a laboratory experiment may not provide a strong sense of “realism” that can generate sufficient peer effects.
Similar to Gneezy and List (2006) and Bellemare, Lepage, and Shearer (2010), our field experiment involved hiring students to perform real jobs rather than participate in a laboratory experiment. A firm in Shenzhen, China was hired to implement a summer internship scheme by recruiting students at a local university. The firm first advertised a data encoding job through the student union's intranet and promised that an internship certificate would be issued to all students who were selected and completed the internship program. Internship experience was a graduation requirement at the university, although the evaluation of one's performance on the job was immaterial.
The target recruitment was 40, and over 200 university students applied. All applicants were invited to participate in a random draw for the 40 positions. The 40 successful candidates then participated in another random draw that divided them into two groups of 20 interns each: a morning group and an afternoon group. The interns were informed that the internship would last for a month (20 working days) and that working hours would be 3.5 hours per day (8:30–12:00 for the morning group and 1:30–5:00 for the afternoon group). The interns were responsible for entering data from a survey we had collected for another project on rural education in China.
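The two-stage random draw described above can be sketched as follows. This is a minimal illustration, not the investigators' actual procedure; the function name, applicant IDs, and seed are all hypothetical.

```python
import random

def assign_interns(applicant_ids, n_positions=40, seed=2014):
    """Two-stage draw: select 40 interns from the applicant pool,
    then split them at random into morning and afternoon groups of 20."""
    rng = random.Random(seed)
    selected = rng.sample(applicant_ids, n_positions)  # stage 1: draw 40 of 200+ applicants
    rng.shuffle(selected)                              # stage 2: random split into two groups
    half = n_positions // 2
    return selected[:half], selected[half:]            # (morning group, afternoon group)

morning, afternoon = assign_interns(list(range(1, 201)))
# Two disjoint groups of 20, each drawn at random from the applicant pool.
```

Because both stages are uniform random draws, each applicant has the same chance of ending up in either group, which is what makes the baseline comparison between morning and afternoon groups meaningful.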
The survey data had already been carefully encoded and checked, so the quality of the input (the error rate) could be quickly verified by a computer using a custom-made program. We emphasized at the outset that both quantity (measured by the number of questionnaires inputted) and quality (measured by the percentage error rate) were important. Regarding quality, the interns were told that our managers would randomly check each individual's daily output to establish his or her daily quality measure: the error rate. Pay for the first two weeks (10 working days) was based on group/team performance, using the following formula:

Pay = (1 − 1.5 × error rate) × N × $1/20

where error rate is the average percentage of entries inputted with errors per questionnaire, and N is the number of questionnaires inputted by the group. Note that each error is penalized at 1.5 times its weight. The company we hired suggested this formula based on its usual practice. Given that the error rate is very low (about 2.6% on average), the fine is quite small relative to overall pay.
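The pay rule can be sketched numerically as follows. The function name is hypothetical, and the sketch assumes the $1/20 term means $0.05 per questionnaire, which is one reading of the formula as stated.

```python
def team_daily_pay(n_questionnaires, avg_error_rate, rate_per_questionnaire=1.0 / 20):
    """Team pay per the registered formula: (1 - 1.5 * error_rate) * N * $1/20.

    avg_error_rate is the average fraction of entries inputted with errors
    per questionnaire (e.g. 0.026 for the reported 2.6% average).
    """
    return (1 - 1.5 * avg_error_rate) * n_questionnaires * rate_per_questionnaire

# At the reported average error rate of 2.6%, a team inputting 400
# questionnaires earns (1 - 0.039) * 400 * 0.05, i.e. about $19.22,
# illustrating that the quality fine is small relative to overall pay.
```

The same formula was reused in the second phase with N and the error rate measured at the individual rather than the team level.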
In the following two weeks (9 working days; see footnote 7 for an explanation), an individual piece rate system was adopted, with the same pay formula applied except that the quantity and quality of output were measured at the individual level. This second phase of the experiment enabled us to further test the effects of the change in peer pressure brought about by the individual piece rate.
We hired two RAs who acted as managers sent by the firm. One RA/manager was in charge of monitoring workers and conducted a short daily meeting with each group's interns before the start of work to provide feedback on the previous day's work. The other mainly dealt with technical problems faced by the interns and compiled the daily quantity and quality figures. The first three days of the first two weeks served as the baseline period: for both groups, each individual's quantity of output was announced to the whole group together with the group's average error rate. Starting on day 4, the format changed for the afternoon (treatment) group: in addition to each individual's daily quantity of output, each individual's quality information was also announced to the whole group. By doing so, we expected the peer pressure felt on the two dimensions of production to differ between the groups, and therefore the effort devoted by team members to the two tasks to differ as well. This provided the experimental environment to test our theory.
Experimental Design Details
Randomization Method
Over 200 applications were received for the internship job, from which we randomly selected 40 candidates. Through a further random draw, the participants were divided into two groups of 20 workers each: the first group worked in the morning from 8:30 to 12:00, and the second worked in the afternoon from 1:30 to 5:00.
Randomization Unit
Individual
Was the treatment clustered?
No
Experiment Characteristics
Sample size: planned number of clusters
n/a
Sample size: planned number of observations
40 workers
Sample size (or number of clusters) by treatment arms
360
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
100
IRB
INSTITUTIONAL REVIEW BOARDS (IRBs)
IRB Name
Ethical Committee of Lingnan University
IRB Approval Date
2012-01-01
IRB Approval Number
n/a
Post-Trial
Post Trial Information
Study Withdrawal
Intervention
Is the intervention completed?
No
Is data collection complete?
Data Publication
Data Publication
Is public data available?
No
Program Files
Program Files
Reports, Papers & Other Materials
Relevant Paper(s)
REPORTS & OTHER MATERIALS