Exploring the Determinants of Central Bank Credibility

Last registered on February 24, 2022

Pre-Trial

Trial Information

General Information

Title
Exploring the Determinants of Central Bank Credibility
RCT ID
AEARCTR-0009026
Initial registration date
February 24, 2022


First published
February 24, 2022, 4:47 PM EST


Locations

Region

Primary Investigator

Affiliation
University of Oxford

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2022-02-25
End date
2022-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Managing expectations via central bank communication has become a primary component of monetary policy. A necessary condition for effective communication is central bank credibility. Despite this, we know surprisingly little about the determinants and properties of credibility. This paper aims to fill this gap by introducing a novel experimental framework that allows for nuanced study of central bank credibility. We employ this framework to study how variations in a central bank’s recent inflation forecasting performance influence subjects’ perceptions of the bank’s credibility.
External Link(s)

Registration Citation

Citation
McMahon, Michael. 2022. "Exploring the Determinants of Central Bank Credibility." AEA RCT Registry. February 24. https://doi.org/10.1257/rct.9026
Experimental Details

Interventions

Intervention(s)
We study how the timing and magnitude of central bank forecasting errors impact the perceived credibility of the central bank. We first expose participants to a 12-quarter history of realized inflation alongside corresponding central bank forecasts. After viewing this history, participants form their own inflation forecast (our elicitation captures information about the first three central moments of subjects’ inflation expectations) for the next quarter (quarter 13). We then reveal the central bank’s quarter-13 inflation forecast and allow subjects to update their own inflation forecast (or not). We measure central bank credibility as the degree to which subjects update their forecast in the direction of the central bank’s forecast. We normalize this measure so that we may compare it cleanly across different economic histories.
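The registration describes credibility as a normalized measure of how far a subject moves from their initial forecast toward the central bank's forecast, without specifying the exact formula. A minimal sketch of one standard convention (the updating weight as the fraction of the gap closed; the function name and normalization here are illustrative assumptions, not the study's definition):

```python
def credibility_weight(initial, revised, cb_forecast):
    """Illustrative normalized weight placed on the central bank forecast.

    Returns 0.0 if the subject does not move toward the CB forecast,
    1.0 if the subject adopts the CB forecast fully, and values in
    between for partial updating. Returns None when the CB forecast
    coincides with the subject's initial forecast, since there is no
    scope for updating and the weight is undefined.
    """
    gap = cb_forecast - initial
    if gap == 0:
        return None  # no disagreement, so updating is uninformative
    return (revised - initial) / gap
```

Under this convention a subject who starts at 2.0%, sees a central bank forecast of 3.0%, and revises to 2.5% receives a weight of 0.5. Because the weight is expressed as a fraction of the initial gap, it is comparable across economic histories with differently sized forecast disagreements.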
Intervention Start Date
2022-02-25
Intervention End Date
2022-05-31

Primary Outcomes

Primary Outcomes (end points)
Central Bank Credibility as measured by the weight that individuals place on the central bank forecast in forming their expectations.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
We use a within-subjects design to study how the timing and magnitude of central bank forecasting errors impact the perceived credibility of the central bank. To do this, we first expose participants to a 12-quarter history of realized inflation alongside corresponding central bank forecasts. After viewing this history, participants form their own inflation forecast (our elicitation captures information about the first three central moments of subjects’ inflation expectations) for the next quarter (quarter 13). We then reveal the central bank’s quarter-13 inflation forecast and allow subjects to update their own inflation forecast (or not). We measure central bank credibility as the degree to which subjects update their forecast in the direction of the central bank’s forecast. We normalize this measure so that we may compare it cleanly across different economic histories.

We introduce treatment variation via the economic histories that precede participant forecasts. We vary whether the bulk of central bank forecast errors in the 12-quarter history occur early, occur late, or are distributed consistently. Our interest is in how the timing of errors impacts the degree to which subjects incorporate the central bank’s quarter-13 forecast into their own updated forecast. We also vary whether central banks have a relatively good or bad forecasting history (i.e., we vary the average absolute forecasting error).

This design yields a total of 12 treatments. Each subject experiences either the good-forecast or the bad-forecast history, but never both. Within that history, each subject experiences only one of the six permutations of the Early, Late, and Consistent forecast histories. This within-subject design ensures that idiosyncratic subject characteristics (overconfidence, etc.) cannot drive our effects. Further, treating each permutation as its own treatment allows us to net out any ordering or learning effects from our final results.
Experimental Design Details
Randomization Method
Randomisation is performed via a randomisation algorithm.
Randomization Unit
Subjects participate individually. We deploy our experiment online via Prolific.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
2400
Sample size: planned number of observations
2400
Sample size (or number of clusters) by treatment arms
200 observations per treatment × 12 treatments:
Good: Early-Consistent-Late
Good: Early-Late-Consistent
Good: Consistent-Early-Late
Good: Consistent-Late-Early
Good: Late-Early-Consistent
Good: Late-Consistent-Early
Bad: Early-Consistent-Late
Bad: Early-Late-Consistent
Bad: Consistent-Early-Late
Bad: Consistent-Late-Early
Bad: Late-Early-Consistent
Bad: Late-Consistent-Early
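The 12 arms above are the cross of the two forecasting histories with the six orderings of the three error-timing conditions. A short sketch confirming the enumeration (labels follow the list above; the variable names are illustrative):

```python
from itertools import permutations

# Three error-timing conditions, each experienced once per subject
timings = ["Early", "Consistent", "Late"]
# Between-subject variation in overall forecasting performance
histories = ["Good", "Bad"]

# Each arm is one history crossed with one ordering of the timings
arms = [
    f"{h}: {'-'.join(p)}"
    for h in histories
    for p in permutations(timings)
]
# 2 histories x 3! orderings = 12 treatment arms, 200 subjects each
```

At 200 planned observations per arm, this reproduces the planned total of 2,400 observations.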
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
University of Oxford Central University Research Ethics Committee
IRB Approval Date
2019-06-07
IRB Approval Number
ECONCIA19-20-09

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials