Friends Don't Lie: Allocating Jobs via Peer Information

Last registered on March 24, 2022

Pre-Trial

Trial Information

General Information

Title
Friends Don't Lie: Allocating Jobs via Peer Information
RCT ID
AEARCTR-0009109
Initial registration date
March 22, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 24, 2022, 4:42 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
GiveWell

Other Primary Investigator(s)

PI Affiliation
Stanford University

Additional Trial Information

Status
In development
Start date
2022-04-04
End date
2024-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We often wish to target costly interventions (e.g. scholarships, financial aid, entrepreneurial loans) to the most suitable (e.g. hardworking, poor, trustworthy) individuals. However, suitability often remains unobserved, and self-reports permit strategic misreporting. Peer prediction mechanisms, such as Robust Bayesian Truth Serum, aim to elicit an individual's suitability from peer reports. However, these mechanisms are not robust to coordination among peers, which makes them inapplicable to digital screening problems. In this paper, we describe Algorithmic Truth Serum (ATS), the first peer prediction mechanism robust to coordination among peers. This mechanism would enable the automated, digital elicitation of peer information for the cost-effective targeting of interventions. To test this mechanism, we perform an experiment in which Indian job seekers digitally apply for a text transcription job. We elicit predictions from an applicant's peers and select the applicants with the highest predicted performance. To evaluate the performance of ATS, we compare eliciting peer predictions via ATS against eliciting them without incentives in an RCT. By improving screening, Algorithmic Truth Serum could aid the growth of digital job markets. Other possible applications include digital credit and the targeting of poverty programs.
External Link(s)

Registration Citation

Citation
Metzger, Jonas and Mark Walsh. 2022. "Friends Don't Lie: Allocating Jobs via Peer Information." AEA RCT Registry. March 24. https://doi.org/10.1257/rct.9109-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We test a novel information elicitation mechanism, Algorithmic Truth Serum (ATS), for settings in which a large number of individuals, about whom we have little verifiable information, apply for a desirable allocation (such as a loan, job, or scholarship). We test this mechanism on recommenders for digital job applicants. Our mechanism incentivizes recommenders to reveal additional information about the applicant, which allows the mechanism to make a more efficient allocation decision than would be possible based on the verifiable information alone. The mechanism consists of two machine learning models, both estimated without access to any additional ‘ground-truth’ data: the first is a payment rule that incentivizes truthful reports and is estimated via a conditional moment restriction; the second is a decision rule that is robust to strategic misreports.
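The registration does not spell out the estimation details of the payment rule. Purely as an illustrative sketch of the general idea, assuming a simplified setting in which payments reward proximity between a recommender's report and a model's prediction of that report from verifiable application data, one could do something like the following (all variable names, the model choice, and the payment formula are hypothetical, not the study's implementation):

```python
# Illustrative sketch only, not the registered ATS mechanism: fit a predictor of
# the recommender's report from verifiable applicant features (no ground-truth
# outcome data), then pay the recommender for proximity to that prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Simulated data: verifiable application features and recommender reports
# (e.g., a 1-5 rating of the applicant's diligence).
n_applicants = 1000
X = rng.normal(size=(n_applicants, 5))          # verifiable features (assumed)
true_quality = X @ rng.normal(size=5)           # latent suitability (simulated)
reports = np.clip(np.round(3 + true_quality + rng.normal(scale=0.5, size=n_applicants)), 1, 5)

# "Payment rule" component: a predictor of the report given verifiable information.
model = GradientBoostingRegressor().fit(X, reports)
predicted = model.predict(X)

# Pay more the closer the report is to the model's prediction (bounded bonus;
# the 1-5 USD range is taken from the design details below, the mapping is assumed).
base_pay, max_bonus = 1.0, 4.0
payments = base_pay + max_bonus * np.exp(-np.abs(reports - predicted))
print(payments[:5])
```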
Intervention (Hidden)
In the treatment arm of the experiment, recommenders will be told:

“You will be paid more if your reports appear reasonable to our algorithm with access to additional sources of information about the applicant. The algorithm was designed by Stanford University PhD students in economics and computer science.
The algorithm will use the information about the applicant to predict which response is most likely to be truthful for each question. The closer your responses are to the algorithm's prediction, the higher your payout. Your responses will be anonymized and not shared with anyone outside of the research team.

Each question will have a NOTE above it informing you how much you could be paid if you matched the algorithm's prediction of the truthful response.”

Then, the recommenders will go through a practice module to ensure they understand the mechanism (in the control group, recommenders will go through a similar practice module but with the references to ATS stripped out) before answering a series of questions about the applicant. The questions are chosen to make it difficult for recommenders to guess how their answers will influence the applicant’s likelihood of being selected for the job task. The ATS mechanism will use the recommenders’ responses to predict which applicants are likely to perform best at the job and will offer those applicants the opportunity to complete the job task.
Intervention Start Date
2022-04-04
Intervention End Date
2022-04-18

Primary Outcomes

Primary Outcomes (end points)
Word Error Rate (WER), Match Error Rate (MER), and Word Information Lost (WIL)
Primary Outcomes (explanation)
Since the respondents will be transcribing connected speech, we will use the Viterbi alignment procedure to align each hypothesis transcript with its reference. WER is the proportion of word errors to words processed. MER is the probability of a given input/output word match being an error. WIL approximates the proportion of word information lost, derived from the statistical dependence between the input and output words. Refer to Morris, Maier and Green (2004) for the exact formulas used.
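For concreteness, here is a minimal sketch of the three metrics computed from a word-level minimum-edit alignment; the formulas follow Morris, Maier and Green (2004), while the dynamic-programming alignment below stands in for the exact alignment procedure the study will apply:

```python
# Sketch: count hits (H), substitutions (S), deletions (D) and insertions (I)
# from a minimum-edit word alignment, then compute WER, MER and WIL.

def align_counts(ref_words, hyp_words):
    """Return (H, S, D, I) for a minimum-edit alignment of reference vs. hypothesis."""
    n, m = len(ref_words), len(hyp_words)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    h = s = d = ins = 0
    i, j = n, m
    while i > 0 or j > 0:  # backtrace one optimal alignment path
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1]):
            if ref_words[i - 1] == hyp_words[j - 1]:
                h += 1
            else:
                s += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            d += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return h, s, d, ins

def error_rates(ref, hyp):
    h, s, d, i = align_counts(ref.split(), hyp.split())
    wer = (s + d + i) / (h + s + d)                 # word error rate
    mer = (s + d + i) / (h + s + d + i)             # match error rate
    wil = 1 - h * h / ((h + s + d) * (h + s + i))   # word information lost
    return wer, mer, wil

print(error_rates("the quick brown fox", "the quick brown dog jumps"))
```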

Secondary Outcomes

Secondary Outcomes (end points)
Quality of selected applicant index; Caste of selected applicants; MTurk experience of selected applicants; Recommenders’ reported truthfulness; Recommenders’ estimated truthfulness; Satisfaction with application process
Secondary Outcomes (explanation)
Quality of selected applicant index: An index constructed from WER, MER and WIL on a practice transcription task and from a short-form Raven’s matrices test (one possible construction is sketched below).

Recommenders’ estimated truthfulness: This will be estimated from the differences between the recommenders’ reports and the reports predicted by an algorithm using the experimental data.

Satisfaction with application process: Measured through reports of applicants in follow-up surveys after the experiment.
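The registration does not specify how the components of the quality index will be aggregated. As a minimal sketch of one conventional choice, an equally weighted average of standardized components with error rates sign-flipped so that higher values mean higher quality (the aggregation and weights are assumptions, not the study's definition):

```python
# Sketch of a z-score composite quality index; weighting and signs are assumed.
import numpy as np

def quality_index(wer, mer, wil, raven_score):
    """Each argument is an array over selected applicants."""
    components = [
        -np.asarray(wer), -np.asarray(mer), -np.asarray(wil),  # lower error = better
        np.asarray(raven_score, dtype=float),                   # higher score = better
    ]
    z = [(c - c.mean()) / c.std() for c in components]          # standardize each component
    return np.mean(z, axis=0)                                    # equally weighted average

print(quality_index([0.1, 0.3], [0.1, 0.25], [0.2, 0.4], [8, 5]))
```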

Experimental Design

Experimental Design
We will recruit digital job seekers to apply for a transcription task. In the application, the job seekers will choose a recommender, answer a few questions about themselves, and take tests of their ability. We will send a recommendation survey to the chosen recommenders. In the treatment arm, the recommender will be incentivized to tell the truth via Algorithmic Truth Serum. In the control arm, the recommender will receive a fixed payment for completing the survey. We will then select the top 50% of job seekers in each arm based on the application and recommender surveys. The selected job seekers will then be able to access the transcription task and collect their payment.
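A minimal sketch of the within-arm selection step described above, assuming each applicant has already been assigned a predicted-performance score (how that score is constructed differs between arms and is not shown; field names are illustrative):

```python
# Sketch: within each arm, invite the top half of applicants by predicted performance.
import pandas as pd

applicants = pd.DataFrame({
    "worker_id": ["A1", "A2", "A3", "A4", "A5", "A6"],
    "arm": ["ATS", "ATS", "ATS", "control", "control", "control"],
    "predicted_performance": [0.8, 0.2, 0.5, 0.6, 0.9, 0.1],
})

# Keep applicants at or above their arm's median score (a tie-breaking rule
# would be needed in practice to hit exactly 50%).
applicants["selected"] = (
    applicants.groupby("arm")["predicted_performance"]
    .transform(lambda s: s >= s.median())
)
print(applicants[applicants["selected"]])
```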
Experimental Design Details
The applicants will be recruited through a job posting for a "Newspaper Transcription" task on Amazon Mechanical Turk (MTurk). The applicants will need the "Newspaper Transcription" MTurk qualification to accept the job. To earn the qualification, the applicants will need to fill out the applicant survey. The applicant survey will ask the applicant to provide the MTurk worker ID of someone willing to be their recommender. The applicant will also be asked to send a link to a "Recommender" MTurk qualification to their recommender. The "Recommender" MTurk qualification will require the recommender to fill out a survey asking questions about the characteristics of the applicant. There will be two arms in this study: the Algorithmic Truth Serum (ATS) arm and the control (C) arm. In the treatment arm, recommenders are paid based on the ATS mechanism, while in the control arm they are paid a fixed amount. After completing the survey, the recommender will be granted the "Recommender" MTurk qualification. This qualification will allow the recommender to accept the "Recommender Incentive Payments and Followup" MTurk job task. In that task, the recommender will only need to fill out a short follow-up survey about their impressions of the experiment before receiving their incentive payments, which will range from 1 to 5 USD.

Once the recommenders fill out their surveys, we will use their reports to determine which applicants to select for the job. The selected applicants will receive the "Newspaper Transcription" MTurk qualification. This qualification will allow the applicants to accept the "Newspaper Transcription" job task. In this job task, the selected applicants will have three hours to transcribe and correct typos in sentences from English-language newspapers. The sentences from the newspapers will be randomly chosen for each applicant. The selected applicants will then need to fill out a short follow-up survey. They will be paid 15 USD upon completion of the task.
Randomization Method
Randomization is done through the randomizer function in Qualtrics survey forms; a sketch of the resulting applicant-level assignment appears below.
Randomization Unit
Applicant
Was the treatment clustered?
No
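For illustration only, the individual-level 1:1 assignment that the Qualtrics randomizer effectively performs could be sketched as follows (the study itself randomizes inside Qualtrics; this code is not part of the study):

```python
# Sketch: balanced applicant-level assignment to the ATS and control arms.
import random

def assign_arms(worker_ids, seed=0):
    """Assign applicants 1:1 to the ATS and control arms at the individual level."""
    ids = list(worker_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {w: ("ATS" if i < half else "control") for i, w in enumerate(ids)}

print(assign_arms(["A1", "A2", "A3", "A4"]))
```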

Experiment Characteristics

Sample size: planned number of clusters
1000
Sample size: planned number of observations
1000
Sample size (or number of clusters) by treatment arms
500 applicants per arm (ATS and control)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
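The MDE field is left blank in the registration. Purely as an illustration of the standard two-arm MDE formula, under assumed parameters (alpha = 0.05, power = 0.80, 500 applicants per arm, outcome standard deviation normalized to 1) and not as the study's own calculation:

```python
# Sketch: generic MDE for a two-arm individually randomized comparison.
from scipy.stats import norm

def mde(n_per_arm, sd=1.0, alpha=0.05, power=0.80):
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sd * (2 / n_per_arm) ** 0.5

print(round(mde(500), 3))  # roughly 0.18 standard deviations under these assumptions
```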
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford Institutional Review Board
IRB Approval Date
2021-05-27
IRB Approval Number
60394

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials