Numbers Tell, but Words Sell: Evidence from Research Communication

Last registered on July 17, 2024

Pre-Trial

Trial Information

General Information

Title
Numbers Tell, but Words Sell: Evidence from Research Communication
RCT ID
AEARCTR-0013947
Initial registration date
July 15, 2024

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
July 17, 2024, 2:02 PM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
UCL

Other Primary Investigator(s)

PI Affiliation
University of Warwick
PI Affiliation
MIT

Additional Trial Information

Status
In development
Start date
2024-07-16
End date
2025-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Researchers must often choose whether to communicate using numbers or language when sharing information with policymakers. While there are many reasons why experts might send messages using language or numbers, our study emphasizes the role of message precision: numbers represent precise estimates, whereas language is imprecise because one word can be interpreted as many numerical values. We run an experiment in which we recruit academic researchers and vary their incentives either to directionally persuade or to convey accurate information to a sample of policymakers that we also recruit. We analyze how researchers choose between message formats, testing whether directional incentives increase their likelihood of communicating with language rather than with numeric estimates, and we explore the effects of these messages on policymakers.
External Link(s)

Registration Citation

Citation
Thaler, Michael, Mattie Toma and Victor Yaneng Wang. 2024. "Numbers Tell, but Words Sell: Evidence from Research Communication." AEA RCT Registry. July 17. https://doi.org/10.1257/rct.13947-1.0
Experimental Details

Interventions

Intervention(s)
For experimental details, see the Experimental Design (Public) section.
Intervention Start Date
2024-07-16
Intervention End Date
2024-09-24

Primary Outcomes

Primary Outcomes (end points)
H1. We hypothesize that researchers who face a directional incentive are less likely to send number messages, and thus more likely to send language messages.
Primary Outcomes (explanation)
H1. The outcome is a binary indicator for whether a language message was sent. Our main specification controls for study fixed effects and the following individual-level covariates:
- Binary indicators for position (PhD student, Postdoc, Assistant Professor or equivalent, Associate Professor or equivalent, Professor or equivalent, Other), provided there are sufficient observations in each group
- Binary indicators for gender (indicators for man and woman if there are sufficient observations in the other groups; only an indicator for woman otherwise)
- A binary indicator for whether most of their research projects involve working with data (Yes/No)
Standard errors are clustered (here and everywhere else) at the individual level.
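As a concrete (hypothetical) sketch of this specification, the regression below uses simulated data in place of the real sender-round panel; all variable names (`language_message`, `directional`, `sender_id`, and the rest) and the assumption of 3 rounds per sender are illustrative, not the study's actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the sender-round panel (150 senders x 3 rounds).
rng = np.random.default_rng(0)
n_senders, rounds = 150, 3
df = pd.DataFrame({
    "sender_id": np.repeat(np.arange(n_senders), rounds),
    "directional": np.repeat(rng.integers(0, 2, n_senders), rounds),
    "study": rng.integers(0, 6, n_senders * rounds),
    "woman": np.repeat(rng.integers(0, 2, n_senders), rounds),
    "works_with_data": np.repeat(rng.integers(0, 2, n_senders), rounds),
})
# Simulated outcome: an indicator for sending a language message.
df["language_message"] = (
    rng.random(len(df)) < 0.4 + 0.2 * df["directional"]
).astype(int)

# Linear probability model with study fixed effects and the controls above.
model = smf.ols(
    "language_message ~ directional + C(study) + woman + works_with_data",
    data=df,
)
# Cluster standard errors at the individual (sender) level.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["sender_id"]})
```

The position indicators from the registration are omitted from the simulated panel for brevity; they would enter the formula the same way as the other binary controls.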

Secondary Outcomes

Secondary Outcomes (end points)
We may conduct exploratory analyses as well:

H2. We hypothesize that senders with directional incentives will be more likely to send number messages for larger (good news) compared to smaller (bad news) effect sizes. Note that this specification will not control for study-level fixed effects.

H3. We may explore measuring slant of language and numbers in future waves by benchmarking language to numbers. We hypothesize that senders slant more with directional incentives than with aligned incentives, conditional on their preferred message format. We will also explore whether senders slant more with directional incentives within message formats.

H4. We may test whether receivers are persuaded by senders; that is, whether receiver predictions are higher when senders faced directional compared with aligned incentives. Then, we may look at whether senders with directional incentives increase receivers' predictions when language messages are sent relative to when number messages are sent.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Recruitment:

This experiment will be conducted in two waves: The first wave involves "Senders" (recruited from a pool of academic researchers) and the second wave involves "Receivers" (recruited from a pool of policymakers in government using the Warwick Policymakers Lab).

We will recruit senders via emails to academic social-science researchers over the course of two weeks, or until we reach 150 senders, whichever comes first. If we do not reach 50 senders during this time period, we will extend the time period. At the end of the study, we will ask senders if they previously knew about our main hypothesis, and will not include senders who answer "yes" to this question in our main analysis.
We will recruit receivers from participants in the Warwick Policymakers Lab over the subsequent eight weeks, or until we reach 75 receivers, whichever comes first.


Experiment Timing:

Senders complete multiple rounds. In each round, they learn the treatment effect size of an academic research study. Then, they choose how to communicate this effect size to a participant in the receiver wave as detailed below.

Receivers complete multiple rounds. In each round, they are first asked to predict the treatment effect of one of the studies. Then, they are matched with a sender and shown the sender's message. Finally, they are asked to predict the effect size of the study again, to report how likely they would be to share the information with their colleagues, and to indicate whether they would like to receive an infographic about the study that they can share with their colleagues.

At the end of the study, we will ask subjects a set of questions, including about demographics, which we may use in exploratory analyses.

We will include all completed rounds in our analysis, even in cases when the participant does not finish the survey.


Incentives:

Senders and receivers are paid in Amazon gift cards.

For receivers, one of the questions is randomly selected, and they earn a bonus payment if their prediction of the effect size is sufficiently close to the actual effect size.

For senders, we randomly vary whether their payments are:
- The same as their receiver's ("aligned"), or
- Increasing in their receiver's predictions ("directional").
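A minimal sketch of these two payment schemes. The exact formulas below are hypothetical stand-ins: the registration specifies only that "aligned" pay equals the receiver's bonus and that "directional" pay is increasing in the receiver's prediction.

```python
def receiver_bonus(prediction, true_effect, tolerance=1.0):
    """Receiver earns a bonus if the prediction is close to the truth.

    The tolerance and bonus amount are illustrative, not from the study.
    """
    return 1.0 if abs(prediction - true_effect) <= tolerance else 0.0

def sender_payment(scheme, prediction, true_effect):
    if scheme == "aligned":
        # Sender is paid the same as the receiver.
        return receiver_bonus(prediction, true_effect)
    if scheme == "directional":
        # Any function increasing in the prediction would do here.
        return 0.1 * prediction
    raise ValueError(f"unknown scheme: {scheme}")
```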

Next, we describe what messages senders can send. Messages can come in one of two "formats":
- Number messages have the format: "The treatment led to an increase of X percentage points."
- Language messages have the format: "The treatment led to a [word/phrase] increase."

Senders make two choices. They first choose the X they want to communicate for the Number message and the word they want to communicate for the Language message. Then, they choose whether they would rather send their Number or their Language message. One of their two messages is always the one delivered, but the delivered message will sometimes not be in their preferred format.

Receivers face these treatments as well. That is, they are matched with senders who have either aligned or directional incentives, but they do not know which type of sender they face.
Experimental Design Details
Not available
Randomization Method
Randomization for all treatments is done by computer.
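A minimal sketch of such a computerized assignment, assuming an even 50/50 split of senders between the two incentive arms; the function and variable names are illustrative, not the study's actual code.

```python
import random

def assign_incentives(sender_ids, seed=2024):
    """Randomly assign half of the senders to each incentive arm."""
    rng = random.Random(seed)
    ids = list(sender_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {sid: ("aligned" if rank < half else "directional")
            for rank, sid in enumerate(ids)}

# Example: 150 senders split evenly across the two arms.
arms = assign_incentives(range(150))
```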
Randomization Unit
We randomize the incentives treatment (aligned or directional) between senders; receivers may face senders with different incentives.
There are six studies, and we randomize the studies that participants see.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Senders: 150 participants (see notes on recruitment and sample size above)
Receivers: 75 participants (see notes on recruitment and sample size above)
Sample size: planned number of observations
Senders: 900 message choices (450 number messages and 450 language messages), 450 message format choices
Receivers: 150 predictions before messages, 150 predictions after messages
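These totals are mutually consistent under the assumption (inferred from the stated totals, not stated in the registration) of 3 rounds per sender and 2 prediction rounds per receiver:

```python
# Consistency check on the planned observation counts.
senders, sender_rounds = 150, 3      # rounds per sender inferred
receivers, receiver_rounds = 75, 2   # rounds per receiver inferred

number_messages = senders * sender_rounds    # one number message per round
language_messages = senders * sender_rounds  # one language message per round
format_choices = senders * sender_rounds     # one format choice per round

predictions_before = receivers * receiver_rounds
predictions_after = receivers * receiver_rounds
```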
Sample size (or number of clusters) by treatment arms
Senders: 1/2 have aligned incentives and 1/2 have directional incentives.
Receivers: 1/2 face senders with aligned incentives and 1/2 face senders with directional incentives.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Warwick Humanities & Social Sciences Research Ethics Committee
IRB Approval Date
2024-06-24
IRB Approval Number
219/23-24