Social learning: the Role of Organizational Prestige and Incentives

Last registered on July 28, 2023

Pre-Trial

Trial Information

General Information

Title
Social learning: the Role of Organizational Prestige and Incentives
RCT ID
AEARCTR-0011534
Initial registration date
July 26, 2023

First published
July 28, 2023, 2:06 PM EDT

Locations

Region

Primary Investigator

Affiliation
Pontificia Universidad Catolica de Chile

Other Primary Investigator(s)

PI Affiliation
Pontificia Universidad Catolica de Chile
PI Affiliation
London Business School

Additional Trial Information

Status
Ongoing
Start date
2023-07-04
End date
2023-09-01
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Abstract
We study the role of organizational prestige in fostering workers' learning. In a field study, we hire around 400 to 600 workers who are alumni or current students of different higher-education institutions to evaluate the performance of sales executives. On an online evaluation platform, workers listen to executives' recorded conversations to predict each executive's success rate. To test for the role of organizational prestige in learning, we measure how workers' predictions change as they are informed about the predictions of other people who belong to their own institution, a higher-prestige institution, or a lower-prestige institution. In the control, workers are informed of an average prediction without reference to any institution. To test for the moderating effect of incentives, we randomize a small monetary bonus for prediction accuracy across the prestige treatments.
External Link(s)

Registration Citation

Citation
Brahm, Francisco, Rosario Macera and Joaquin Poblete. 2023. "Social learning: the Role of Organizational Prestige and Incentives." AEA RCT Registry. July 28. https://doi.org/10.1257/rct.11534-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We hire workers to evaluate the performance of sales executives from a charity in Chile. On an online platform, workers listen to audio files of conversations between executives and pedestrians. They then answer 18 questions giving their subjective assessment of the executive's ability to persuade the pedestrian to become a permanent donor. Once they complete the questionnaire, workers predict the executive's overall success rate by guessing how many pedestrians, out of 100, they think the executive would be able to persuade (the "prediction" question). Workers are paid a fixed wage of $20,000 CLP to evaluate audio files for four hours. This is the market wage for a half-day blue-collar job in Chile. All workers are alumni or current students of higher-education institutions in Chile.

In a within-subjects design, we inform subjects, at the audio level, about the average prediction of people from higher-education institutions of varying prestige. Workers start the job by evaluating six audios to get familiar with the questionnaire. In the following four audios ("treatment audios"), after subjects have answered the prediction question, they receive a message delivering the average prediction of people from a top-prestige institution ("High-prestige" treatment), a low-prestige institution ("Low-prestige" treatment), their own institution ("Belonging" treatment), or an average prediction with no institution mentioned ("No institution" control). The platform then offers them the possibility to reconsider their prediction and answer the prediction question again. The four messages are randomly assigned to the four audios, and the order of the four treatment audios is also randomized.

To study whether incentives moderate prestige-induced learning, we also randomize a small monetary bonus for prediction accuracy. In two of the four treatment audios, workers can earn $1,000 CLP (around 1.2 US dollars) if their prediction matches the actual success rate of the executive in the audio, and $500 CLP if their prediction is within three percentage points of the truth. The incentive is randomized either to the first and third audio/message or to the second and fourth.
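Under the reading that the $500 tier covers any prediction within three percentage points of the truth, the bonus rule can be sketched as follows (the function name is ours, not from the registration):

```python
def accuracy_bonus(prediction: int, true_success_rate: int) -> int:
    """Bonus in CLP for a prediction on an incentivized audio.

    Assumption: the $500 tier applies to any prediction within three
    percentage points of the truth (our reading of "three percentage
    points above or below").
    """
    error = abs(prediction - true_success_rate)
    if error == 0:
        return 1000  # exact match: $1,000 CLP (around 1.2 USD)
    if error <= 3:
        return 500   # near miss: $500 CLP
    return 0         # no bonus otherwise
```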
Intervention Start Date
2023-07-04
Intervention End Date
2023-09-01

Primary Outcomes

Primary Outcomes (end points)
(1) To measure the influence of the message on the prediction, the dependent variable is the difference between the prediction after observing the message and before observing it.

We will study this variable in the following versions:
* Absolute change in the prediction.
* A binary version measuring whether the worker changed her prediction after seeing the message/incentive.
* A binary version measuring whether the prediction increased.
* A binary version measuring whether the prediction decreased.

(If the share of subjects changing their predictions is large enough, we will also study the third and fourth variables in their continuous versions.)
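As a minimal sketch, the versions listed above can be constructed from the pre- and post-message predictions (variable and function names are ours, not from the survey):

```python
def prediction_change_outcomes(before: int, after: int) -> dict:
    """Versions of outcome (1): the change in the prediction
    induced by the message (post-message minus pre-message)."""
    change = after - before
    return {
        "change": change,              # signed change (continuous)
        "abs_change": abs(change),     # absolute change
        "changed": int(change != 0),   # any revision at all
        "increased": int(change > 0),  # prediction went up
        "decreased": int(change < 0),  # prediction went down
    }
```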

(2) To measure the impact of the message on the extent of copy/learning, the dependent variable is the distance between the prediction shown in the message and that chosen by the workers after observing it.

We will study this variable in the following versions:
* The absolute distance.
* A binary version measuring whether the absolute distance differs from zero.
* A binary version measuring whether the prediction exceeds the message.
* A binary version measuring whether the prediction is lower than the message.

We will also compute this variable as a ratio, i.e., the distance between the updated prediction and the message's prediction as a share of the distance before the message:

abs(prediction after message - prediction in message) / abs(prediction before message - prediction in message)

if prediction before message ≠ prediction in the message. If prediction before message = prediction in the message, the variable takes the value 1. Values lower than one reflect copying; values greater than one show that a subject moves away from the message.

We will also study this variable in its binary version (1 if lower than one).

To distinguish whether the updated prediction gets closer to or surpasses that in the message (as a measure of overreaction to the message), we will drop the absolute values in the formula above, so that:
* The variable takes a positive value if the message is above/below the initial prediction and the subject updates her prediction closer to the message but still above/below it.
* The variable takes a negative value if the message is above or below the initial prediction and the subject updates her prediction beyond the message.

We will consider this variable in its continuous and binary versions.
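The ratio and its signed (overreaction) variant can be sketched as follows, on a 0-to-1 scale where 0 means full adoption of the message and 1 means no movement, with the edge case set to 1 following the convention above (function names are ours):

```python
def copy_share(before: float, after: float, message: float) -> float:
    """Remaining distance to the message's prediction as a share of
    the initial distance: 0 = the subject fully copies the message,
    1 = no movement, above 1 = the subject moves away."""
    if before == message:
        return 1.0  # convention when no movement is possible
    return abs(after - message) / abs(before - message)


def signed_copy_share(before: float, after: float, message: float) -> float:
    """Same ratio with the absolute values dropped: positive while the
    updated prediction stays on the initial side of the message,
    negative when it overshoots past the message."""
    if before == message:
        return 1.0
    return (after - message) / (before - message)
```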

(3) To measure the impact of the message on the extent of learning, we will re-calculate all the variables above relative to the real success rate rather than the prediction in the message.

(4) Finally, as exploratory work, we will also study the treatment effects on how fast subjects make their predictions (as a proxy for how certain they are of their assessments). To this end, all relevant questions in the Qualtrics survey have a time stamp.

Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
We are also interested in the heterogeneity of the treatment effects. To this end, we will explore heterogeneity in the following dimensions:

* The extent of the concern for prestige (measured by subjects' assessments of the prestige of their own institution and of the high- and low-prestige institutions, and by the overlapping-circles measures described in the exit survey).
* The extent of their self-perception as "sentimental" people (measured by the Likert scale in the exit survey).
* Type of higher-education institution (university or vocational, i.e., CFTs and IPs in Chile) [conditional on having enough observations in each group].
* The time spent in the higher-education institution.
* Number of years since graduation.
* Alumni versus current students.
* Gender and occupational status.

We will study heterogeneity using traditional interactions and median splits in our linear regressions.

Since we also have further data from subjects' CVs, the exit survey, and their full evaluations of the audios (the original questionnaire), we will also consider using machine learning techniques that are agnostic about the source of heterogeneity (e.g., Chernozhukov, Demirer, Duflo, and Fernández-Val (2023); Athey and Wager (2019)).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
(1) Workers’ recruitment
The job ad is posted on online job boards and on social media pages advertising part-time jobs (Facebook and Instagram). All applicants who are older than 18, have a computer with an internet connection, are native Spanish speakers, and have (at some point) studied at a higher-education institution (a university or a vocational institution, i.e., IPs and CFTs in Chile) are eligible. Since payment is processed through a wire transfer, we also require workers to have an account at a bank operating in Chile.

(2) The job
Workers are hired to evaluate audio files containing the executive-pedestrian conversations for four hours. The job takes place entirely online, on an evaluation platform programmed in Qualtrics for the purpose of this study.

After listening to a given audio, workers answer 18 questions related to the executive's performance (for instance, whether the executive delivered information about the charity's programs, the emotions they were able to generate, etc.). For this study, the relevant question is the second-to-last, where workers are asked to predict the executive's future success rate:

“Finally, if this [executive] were to talk to 100 more [clients], how many of those 100 do you think she/he would be able to convince to become members? (That is, to deliver a monthly contribution)”

Randomization of the information and incentives occurs in this question.

(3) Treatments and randomization
Treatments take place in a within-subject design. Randomization, therefore, takes place at the audio level.

Workers evaluate three blocks of audios. The first block contains six fixed audios that serve as practice. The third and final block presents random audios until the four hours of work elapse.

Randomization occurs in the second block, which contains four fixed audios that are the same for all participants. In these four audio files, we randomize four messages informing the worker about the average prediction of people from different higher-education institutions: a top-prestige institution ("High-prestige" treatment), a low-prestige institution ("Low-prestige" treatment), their own institution ("Belonging" treatment), and a message with an average prediction but no institution (Control).

To study how incentives affect prestige-induced learning, we also randomize a monetary bonus for prediction accuracy. In two of the four treatment audios, workers can earn $1,000 CLP (around 1.2 US dollars) if their prediction matches the real success rate of the executive in the audio, and $500 CLP if their prediction is within three percentage points of the truth. The incentive is randomized either to the first and third audio/message or to the second and fourth. As a reference for the size of the incentive, the overall fixed payment for evaluating audios for 4.5 hours is $20,000 CLP.

In the four treatment audio files, subjects make an initial prediction without seeing the message; they then see the message/incentive and are asked to re-evaluate their answer. Messages are randomly assigned to audios, and the order of the audios is also randomized. To avoid confounding learning with accuracy, the reference institutions in the high-prestige, low-prestige, and control treatments have the same average prediction across the four audios, so that predictions are, on average, equally accurate.
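The randomization scheme described above can be sketched as follows; this is an illustrative stand-in for the Qualtrics randomizer, and all names are ours:

```python
import random

MESSAGES = ["high_prestige", "low_prestige", "belonging", "no_institution"]

def draw_treatment_schedule(rng: random.Random) -> list:
    """One worker's second block: the four fixed audios are shown in
    random order, each paired with one of the four messages, and the
    accuracy bonus lands on the 1st & 3rd or the 2nd & 4th position."""
    messages = MESSAGES[:]
    rng.shuffle(messages)                        # message-to-audio assignment
    audio_order = rng.sample(range(4), 4)        # order of the four audios
    incentivized = rng.choice([(0, 2), (1, 3)])  # which positions pay a bonus
    return [
        {"audio": audio_order[pos], "message": messages[pos],
         "bonus": pos in incentivized}
        for pos in range(4)
    ]
```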
Experimental Design Details
Randomization Method
Randomization at every level (audio order, message to each audio, and incentive to audio) is done by the software used to program the survey, Qualtrics.
Randomization Unit
Randomization occurs at the audio level, specifically in the second block of four audios and the two subsequent audios. Ours is a within-subjects design: all subjects receive the prestige treatments (three plus the control) and the incentive treatment.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Our unit is the worker, around 400 to 600 for each treatment. Treatments are not clustered.
Sample size: planned number of observations
The unit is at the worker level. For each unit, we will observe six baseline evaluation audios, plus four treatment audios and two extra audios testing for order effects. Total observations: 4800 to 7200, depending on the data collection.
Sample size (or number of clusters) by treatment arms
Around 400 to 600 workers, collected in one or two arms depending on funding.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
We cannot do power calculations, as we are not aware of any other study from which to borrow standard deviations.
IRB

Institutional Review Boards (IRBs)

IRB Name
Comite de Etica Pontificia Universidad Catolica de Chile
IRB Approval Date
2023-06-12
IRB Approval Number
230217001

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials