Friends Don't Lie: Allocating Jobs via Peer Information

Last registered on March 24, 2022


Trial Information

General Information

Friends Don't Lie: Allocating Jobs via Peer Information
Initial registration date
March 22, 2022


First published
March 24, 2022, 4:42 PM EDT




Primary Investigator

Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
We often wish to target costly interventions (e.g. scholarships, financial aid, entrepreneurial loans) to the most suitable (e.g. hardworking, poor, trustworthy) individuals. However, suitability often remains unobserved, and self-reports permit strategic misreporting. Peer prediction mechanisms, such as Robust Bayesian Truth Serum, aim to elicit the suitability of an individual based on peer reports. However, these mechanisms are not robust to coordination among peers, which makes them inapplicable to digital screening problems. In this paper, we describe Algorithmic Truth Serum (ATS), the first peer prediction mechanism robust to coordination among peers. This mechanism would enable the automated, digital elicitation of peer information for the cost-effective targeting of interventions. To test this mechanism, we perform an experiment in which Indian job seekers digitally apply for a text transcription job. We elicit predictions from an applicant's peers and select the applicants with the highest predicted performance. To evaluate the performance of ATS, we compare eliciting peer predictions via ATS to not using incentives in an RCT. By improving screening, Algorithmic Truth Serum could aid the growth of digital job markets. Other possible applications include digital credit and the targeting of poverty programs.
External Link(s)

Registration Citation

Metzger, Jonas and Mark Walsh. 2022. "Friends Don't Lie: Allocating Jobs via Peer Information." AEA RCT Registry. March 24.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


We test a novel information elicitation mechanism, Algorithmic Truth Serum (ATS), for settings in which a large number of individuals, about whom we have little verifiable information, apply for a desirable allocation (such as a loan, job, or scholarship). We test this mechanism on recommenders for digital job applicants. Our mechanism incentivizes recommenders to reveal additional information about the applicant, which allows us to make a more efficient allocation decision than would be possible based on the verifiable information alone. The mechanism consists of two machine learning models, both estimated without access to any additional ‘ground-truth’ data: the first is a payment rule incentivizing truthful reports, which is estimated via a conditional moment restriction; the second is a decision rule that is robust to strategic misreports.
Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Word Error Rate (WER), Match Error Rate (MER), and Word Information Lost (WIL)
Primary Outcomes (explanation)
Since the respondents will be transcribing connected speech, we will use the Viterbi alignment procedure. WER is the proportion of word errors to words processed. MER is the probability of a given Input/Output word match being an error. WIL is an approximation of the statistical dependence between the input and output words. Refer to Morris, Maier and Green (2004) for the exact formulas used.
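All three metrics in Morris, Maier and Green (2004) are functions of the hit (H), substitution (S), deletion (D), and insertion (I) counts produced by a word-level alignment. As a minimal sketch (using a plain Levenshtein alignment rather than the HTK/Viterbi implementation the paper describes; function names are ours):

```python
def align_counts(ref_words, hyp_words):
    """Word-level Levenshtein alignment; returns (hits, subs, dels, ins)."""
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    # Backtrack through the table to count each operation type.
    H = S = D = I = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1]):
            if ref_words[i - 1] == hyp_words[j - 1]:
                H += 1
            else:
                S += 1
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D, i = D + 1, i - 1
        else:
            I, j = I + 1, j - 1
    return H, S, D, I

def error_metrics(reference, hypothesis):
    """WER, MER, and WIL per Morris, Maier and Green (2004)."""
    H, S, D, I = align_counts(reference.split(), hypothesis.split())
    wer = (S + D + I) / (H + S + D)                 # errors per reference word
    mer = (S + D + I) / (H + S + D + I)             # P(an aligned word is an error)
    wil = 1 - H * H / ((H + S + D) * (H + S + I))   # word information lost
    return wer, mer, wil
```

For example, `error_metrics("the cat sat on the mat", "the cat sat mat")` has two deletions against six reference words, so WER = MER = WIL = 1/3.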

Secondary Outcomes

Secondary Outcomes (end points)
Quality of selected applicant index; Caste of selected applicants; MTurk experience of selected applicants; Recommenders’ reported truthfulness; Recommenders’ estimated truthfulness; Satisfaction with application process
Secondary Outcomes (explanation)
Quality of selected applicant index: An index constructed from WER, MER and WIL on practice transcription task and a short-version Raven’s matrices test.

Recommenders’ estimated truthfulness: This will be estimated from the differences between recommenders’ reports and the reports predicted by an algorithm trained on the experimental data.
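One simple residual-based version of such an estimate can be sketched as follows. The linear predictive model, the simulated data, and all variable names here are our assumptions; the registration says only that an algorithm is fit to the experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # applicant observables
true_coef = np.array([0.5, -0.2, 0.1])
reports = X @ true_coef + rng.normal(scale=0.1, size=200)  # recommender reports

# Fit a predicted-report model, then score each recommender by how far
# their report deviates from the model's prediction.
beta, *_ = np.linalg.lstsq(X, reports, rcond=None)
residuals = reports - X @ beta
truthfulness = -np.abs(residuals)  # closer to 0 = report closer to prediction
```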

Satisfaction with application process: Measured through reports of applicants in follow-up surveys after the experiment.

Experimental Design

Experimental Design
We will recruit digital job seekers to apply to a transcription task. In the application, the job seekers will choose a recommender, answer a few questions about themselves and take tests of their ability. We will send a recommendation survey to the chosen recommenders. In the treatment arm, the recommender will be incentivized to tell the truth via Algorithmic Truth Serum. In the control arm, the recommender will receive a fixed payment for completing the survey. Then, we will select the top 50% of job seekers in each arm based on the application and recommender surveys. The selected job seekers will then be able to access the transcription task and to collect their payment.
Experimental Design Details
The applicants will be recruited through a job posting for a “Newspaper Transcription" task on Amazon Mechanical Turk (MTurk). The applicants will need the "Newspaper Transcription" MTurk qualification to accept the job. To earn the qualification, the applicants will need to fill out the applicant survey. The applicant survey will ask the applicant to provide the MTurk worker ID of someone who would be willing to be their recommender. The applicant will also be asked to send a link to a "Recommender" MTurk qualification to their recommender. The "Recommender" MTurk qualification will require the recommender to fill out a survey asking questions about the characteristics of the applicant. There will be two arms in this study, the Algorithmic Truth Serum (ATS) arm and the control (C) arm. In the treatment arm, recommenders are paid based on the ATS mechanism, while they are paid a fixed amount in the control arm. After completing the survey, the recommender will be granted the "Recommender" MTurk qualification. This qualification will allow the recommender to accept the "Recommender Incentive Payments and Followup" MTurk job task. In the "Recommender Incentive Payments and Followup" job task, the recommender will only need to fill out a short follow-up survey about their impressions of the experiment before receiving their incentive payments, which will range from 1 to 5 USD.

Once the recommenders fill out their surveys, we will use their reports to determine which applicants to select for the job. The selected applicants will receive the "Newspaper Transcription" MTurk qualification. This qualification will allow the applicants to accept the "Newspaper Transcription" job task. In this job task, the selected applicants will have three hours to transcribe and correct typos in sentences from English language newspapers. The sentences from the newspapers will be randomly chosen for each applicant. The selected applicants will then need to fill out a short follow-up survey. They will be paid 15 USD upon completion of the task.
Randomization Method
Randomization is done through the randomizer function in Qualtrics survey forms.
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Stanford Institutional Review Board
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials