Friends Don't Lie: Allocating Jobs via Peer Information

Last registered on March 24, 2022

Trial Information

General Information

Title
Friends Don't Lie: Allocating Jobs via Peer Information
RCT ID
AEARCTR-0009109
Initial registration date
March 22, 2022

First published
March 24, 2022, 4:42 PM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Stanford University

Other Primary Investigator(s)

PI Affiliation
Stanford University

Additional Trial Information

Status
In development
Start date
2022-04-04
End date
2024-05-01
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
We often wish to target costly interventions (e.g. scholarships, financial aid, entrepreneurial loans) to the most suitable (e.g. hardworking, poor, trustworthy) individuals. However, suitability often remains unobserved, and self-reports permit strategic misreporting. Peer prediction mechanisms, such as Robust Bayesian Truth Serum, aim to elicit an individual's suitability from peer reports. However, these mechanisms are not robust to coordination among peers, which makes them inapplicable to digital screening problems. In this paper, we describe Algorithmic Truth Serum (ATS), the first peer prediction mechanism robust to coordination among peers. This mechanism would enable the automated, digital elicitation of peer information for the cost-effective targeting of interventions. To test it, we run an experiment in which Indian job seekers digitally apply for a text transcription job. We elicit predictions from each applicant's peers and select the applicants with the highest predicted performance. To evaluate ATS, we compare eliciting peer predictions via ATS against eliciting them without incentives in an RCT. By improving screening, Algorithmic Truth Serum could aid the growth of digital job markets. Other possible applications include digital credit and the targeting of poverty programs.
External Link(s)

Registration Citation

Citation
Metzger, Jonas and Mark Walsh. 2022. "Friends Don't Lie: Allocating Jobs via Peer Information." AEA RCT Registry. March 24. https://doi.org/10.1257/rct.9109-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We test a novel information elicitation mechanism, Algorithmic Truth Serum (ATS), for the setting in which a large number of individuals, about whom we have little verifiable information, apply for a desirable allocation (such as a loan, job, or scholarship). We test this mechanism on recommenders for digital job applicants. Our mechanism incentivizes recommenders to reveal additional information about the applicant, which allows the mechanism to make a more efficient allocation decision than would be possible based on the verifiable information alone. The mechanism consists of two machine learning models, both of which are estimated without access to any additional ‘ground-truth’ data: the first is a payment rule that incentivizes truthful reports and is estimated via a conditional moment restriction; the second is a decision rule that is robust to strategic misreports.
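Purely as an illustrative sketch of this two-component structure (not the actual ATS estimator), assuming a toy setting with simulated verifiable signals and recommender reports; all names, the payment heuristic, and the scoring rule below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for n applicants: verifiable application signals X
# (e.g. test scores) and one recommender report R per applicant.
n = 500
X = rng.normal(size=(n, 3))
R = X[:, 0] + rng.normal(size=n)

# Stand-in for the payment rule: reward recommenders whose report is
# predictive of the applicant's other verifiable signals, so that
# informative, truthful reports earn more in expectation.
design = np.c_[np.ones(n), R]
beta, *_ = np.linalg.lstsq(design, X[:, 1:], rcond=None)
payment = 1.0 - ((X[:, 1:] - design @ beta) ** 2).mean(axis=1)

# Stand-in for the decision rule: score applicants from verifiable info
# plus the report and allocate the job to the top 50%.
score = X.mean(axis=1) + 0.5 * R
selected = score >= np.median(score)
print(f"selected {selected.sum()} of {n}; mean recommender payment {payment.mean():.3f}")
```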
Intervention Start Date
2022-04-04
Intervention End Date
2022-04-18

Primary Outcomes

Primary Outcomes (end points)
Word Error Rate (WER), Match Error Rate (MER), and Word Information Lost (WIL)
Primary Outcomes (explanation)
Since the respondents will be transcribing connected speech, we will use the Viterbi alignment procedure. WER is the proportion of word errors to words processed. MER is the probability that a given input/output word match is an error. WIL approximates the proportion of word information lost, based on the statistical dependence between the input and output words. Refer to Morris, Maier and Green (2004) for the exact formulas used.
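A minimal sketch of how these metrics follow from the alignment counts (hits H, substitutions S, deletions D, insertions I), using the formulas in Morris, Maier and Green (2004); the helper name error_rates is illustrative:

```python
def error_rates(hits, subs, dels, ins):
    """Compute WER, MER and WIL from alignment counts:
    hits (H), substitutions (S), deletions (D), insertions (I)."""
    n_ref = hits + subs + dels                   # words in the reference transcript
    n_out = hits + subs + ins                    # words in the submitted transcript
    errors = subs + dels + ins
    wer = errors / n_ref                         # Word Error Rate (can exceed 1)
    mer = errors / (hits + errors)               # Match Error Rate, in [0, 1]
    wil = 1.0 - (hits / n_ref) * (hits / n_out)  # Word Information Lost, in [0, 1]
    return wer, mer, wil

# Example with made-up counts: 80 hits, 10 substitutions, 5 deletions, 8 insertions.
print(error_rates(80, 10, 5, 8))   # -> (~0.242, ~0.223, ~0.313)
```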

Secondary Outcomes

Secondary Outcomes (end points)
Quality of selected applicant index; Caste of selected applicants; MTurk experience of selected applicants; Recommenders’ reported truthfulness; Recommenders’ estimated truthfulness; Satisfaction with application process
Secondary Outcomes (explanation)
Quality of selected applicant index: An index constructed from WER, MER, and WIL on a practice transcription task and a short-form Raven's matrices test (one possible construction is sketched below).

Recommenders’ estimated truthfulness: This will be estimated from the differences between the recommenders’ reports and the reports predicted by an algorithm using the experimental data.

Satisfaction with application process: Measured through reports of applicants in follow-up surveys after the experiment.
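A minimal sketch of one possible construction of the quality index, assuming an equal-weighted average of standardized components (error metrics sign-flipped so that higher index values mean higher quality); the function name and weighting are hypothetical, not the prespecified construction:

```python
import numpy as np

def quality_index(wer, mer, wil, raven):
    """Hypothetical equal-weighted z-score index of applicant quality."""
    parts = np.column_stack([-np.asarray(wer, float), -np.asarray(mer, float),
                             -np.asarray(wil, float), np.asarray(raven, float)])
    z = (parts - parts.mean(axis=0)) / parts.std(axis=0)
    return z.mean(axis=1)

# Illustration with fabricated numbers for two applicants.
print(quality_index(wer=[0.12, 0.30], mer=[0.10, 0.25],
                    wil=[0.20, 0.40], raven=[9, 5]))
```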

Experimental Design

Experimental Design
We will recruit digital job seekers to apply to a transcription task. In the application, the job seekers will choose a recommender, answer a few questions about themselves and take tests of their ability. We will send a recommendation survey to the chosen recommenders. In the treatment arm, the recommender will be incentivized to tell the truth via Algorithmic Truth Serum. In the control arm, the recommender will receive a fixed payment for completing the survey. Then, we will select the top 50% of job seekers in each arm based on the application and recommender surveys. The selected job seekers will then be able to access the transcription task and to collect their payment.
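A minimal sketch of the assignment and selection step (applicant-level randomization, then selection of the top 50% within each arm), assuming a single combined application score per applicant; in the actual experiment randomization is done in Qualtrics, and all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical applicant-level data: treatment assignment and a combined
# score from the application and recommender surveys.
n = 1000
treated = rng.integers(0, 2, size=n).astype(bool)   # ATS arm vs. fixed-payment arm
app_score = rng.normal(size=n)

# Select the top 50% of applicants within each arm.
selected = np.zeros(n, dtype=bool)
for arm in (False, True):
    idx = np.where(treated == arm)[0]
    selected[idx] = app_score[idx] >= np.median(app_score[idx])

print("selected per arm:", int(selected[~treated].sum()), int(selected[treated].sum()))
```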
Experimental Design Details
Not available
Randomization Method
Randomization is done through the randomizer function in Qualtrics survey forms.
Randomization Unit
Applicant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
1000
Sample size: planned number of observations
1000
Sample size (or number of clusters) by treatment arms
500 applicants per arm
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Stanford Institutional Review Board
IRB Approval Date
2021-05-27
IRB Approval Number
60394