Designing Performance Indicators for Career Incentives

Last registered on December 10, 2017

Pre-Trial

Trial Information

General Information

Title
Designing Performance Indicators for Career Incentives
RCT ID
AEARCTR-0002621
Initial registration date
December 07, 2017


First published
December 10, 2017, 9:31 PM EST


Locations

Primary Investigator

Affiliation
University of California, Berkeley

Other Primary Investigator(s)

PI Affiliation
Hong Kong University of Science and Technology
PI Affiliation
Renmin University
PI Affiliation
University of California, Berkeley
PI Affiliation
University of California, Berkeley

Additional Trial Information

Status
Ongoing
Start date
2017-09-01
End date
2018-08-31
Secondary IDs
Abstract
In this study, we test predictions from the contract theory and personnel economics literatures that subjective performance evaluations can suffer from two major issues: (1) delegation to an evaluating leader can induce favoritism and influence activities, which can be mitigated by creating ex ante uncertainty about the identity of the evaluating leader; and (2) the principal and agent can hold misaligned beliefs about the agent's performance, which can be addressed by creating timely feedback between them.
We partner with the provincial governments of Henan and Guangdong in China and conduct a field experiment that randomly assigns different evaluation schemes to 4,000 College Graduate Village Officials (CGVOs) in these two provinces. After informing each CGVO at the beginning of the evaluation year which evaluation scheme he faces, we observe whether his performance changes accordingly over the evaluation year, as predicted by theory. We use non-incentivized dimensions of performance as benchmark measures for welfare.
External Link(s)

Registration Citation

Citation
et al. 2017. "Designing Performance Indicators for Career Incentives." AEA RCT Registry. December 10. https://doi.org/10.1257/rct.2621
Former Citation
et al. 2017. "Designing Performance Indicators for Career Incentives." AEA RCT Registry. December 10. https://www.socialscienceregistry.org/trials/2621/history/23851
Sponsors & Partners

There are documents in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We helped the government design a five-arm evaluation scheme, introducing uncertainty about the identity of the evaluating leader and mid-term feedback between the evaluating leader and the CGVOs. The government randomly assigns each evaluation scheme to different CGVOs. The schemes are announced by the government at the beginning of the evaluation cycle, and outcomes are observed after one year.
Intervention Start Date
2017-09-01
Intervention End Date
2018-08-31

Primary Outcomes

Primary Outcomes (end points)
Leader evaluation, colleague evaluation, self evaluation, attendance records, extra hours, villager satisfaction rates.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We have five evaluation schemes, randomly assigned to state employees in two provinces. Compared to the status quo, these arms introduce uncertainty about the identity of the evaluating leader and interim feedback between principal and agent, both of which, in theory, improve the motivating effect of a subjective evaluation.
Experimental Design Details

T1: Objective Measures + Subjective Measures with known evaluator (One of the two township leaders is randomly chosen as the evaluator, and the CGVO is told ex ante who his evaluator will be. The evaluation criteria are re-announced every three months.)
We also ask every CGVO in the baseline survey which of the two leaders he would prefer to be evaluated by, a good measure of (perceived) favoritism. Since the actual evaluator is randomly chosen, this gives us two randomized sub-arms in T1:
T1_0: CGVOs evaluated by the leader who favors them.
T1_1: CGVOs evaluated by the leader who does not favor them.


T2: Objective Measures + Subjective Measures with unknown evaluator (The CGVO is not told ex ante who his evaluator will be, but is informed that the evaluator will be chosen by a random draw from the two township leaders, conducted only at the end of the year. The evaluation criteria are re-announced every three months.)

T3: Objective Measures + Subjective Measures with known evaluator + trimonthly feedback from the chosen evaluator (Based on T1, but the randomly chosen evaluator additionally provides trimonthly feedback on the CGVO's performance to the CGVO himself, in exactly the same form as a subjective evaluation; these trimonthly evaluation results are not linked to any final evaluation or rewards.)
T3_0: CGVOs get feedback from the leader who favors them.
T3_1: CGVOs get feedback from the leader who does not favor them.


T4: Objective Measures + Subjective Measures with known evaluator + trimonthly feedback from the non-evaluator (Based on T1, but the randomly chosen non-evaluator provides trimonthly feedback on the CGVO's performance to the CGVO himself, in exactly the same form as a subjective evaluation; these trimonthly evaluation results are not linked to any final evaluation or rewards.)
T4_0: CGVOs get feedback from the leader who favors them.
T4_1: CGVOs get feedback from the leader who does not favor them.


T5: Objective Measures + Subjective Measures with unknown evaluator + trimonthly feedback from both leaders (Based on T2, but both leaders provide trimonthly feedback on the CGVO's performance to the CGVO himself, in exactly the same form as a subjective evaluation; these trimonthly evaluation results are not linked to any final evaluation or rewards.)
Randomization Method
Randomization done in office by a computer.
Randomization Unit
Randomization is done at government unit level.
Was the treatment clustered?
Yes
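As an illustration, computer randomization of clusters into five equally sized arms can be sketched in a few lines. This is only a sketch under stated assumptions: the unit IDs, seed, and round-robin balancing below are hypothetical, not the government's actual procedure.

```python
import random

def assign_arms(unit_ids, arms=("T1", "T2", "T3", "T4", "T5"), seed=2621):
    """Randomly assign each government unit (cluster) to one of five arms,
    keeping the number of clusters per arm as balanced as possible."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    shuffled = list(unit_ids)
    rng.shuffle(shuffled)
    # Deal shuffled units round-robin so each arm gets an equal share.
    return {uid: arms[i % len(arms)] for i, uid in enumerate(shuffled)}

units = [f"unit_{i:04d}" for i in range(1400)]  # hypothetical unit IDs
assignment = assign_arms(units)
```

Because assignment happens at the unit level, all state employees within a government unit share the same evaluation scheme, which is why the treatment is clustered.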

Experiment Characteristics

Sample size: planned number of clusters
1,400 government units.
Sample size: planned number of observations
4,000 state employees.
Sample size (or number of clusters) by treatment arms
800 T1, 800 T2, 800 T3, 800 T4, 800 T5
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
0.08 S.D. in satisfaction rate.
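For context, an MDE of this order can be sanity-checked with the standard two-arm formula inflated by the cluster design effect. The sketch below is a back-of-envelope approximation only: the intra-cluster correlation (ICC) is a hypothetical value not reported in the registry, and the registered MDE may additionally reflect covariate adjustment or pooled arm comparisons.

```python
import math

def mde(n_per_arm, avg_cluster_size, icc):
    """Approximate minimum detectable effect (in SD units) for a two-sided
    5% test with 80% power, comparing two equal arms, inflated by the
    design effect 1 + (m - 1) * icc."""
    z = 1.96 + 0.84                      # z_{0.975} + z_{0.80}
    deff = 1 + (avg_cluster_size - 1) * icc
    se = math.sqrt(2.0 / n_per_arm)      # SE of a difference in means, SD units
    return z * se * math.sqrt(deff)

# 4,000 employees in 1,400 units -> about 2.9 per cluster; ICC is hypothetical.
print(round(mde(800, 4000 / 1400, icc=0.05), 3))
```

With small clusters of roughly three employees, the design effect is modest, so the clustered MDE stays close to the unclustered benchmark.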
IRB

Institutional Review Boards (IRBs)

IRB Name
Designing Performance Indicators for Career Incentives
IRB Approval Date
2017-09-25
IRB Approval Number
2017-07-10117

Post-Trial

Post Trial Information

Study Withdrawal

There are documents in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials