Expert Reviews and Professional Learning: Evidence from a Physician Drug Rating Platform

Last registered on March 20, 2025

Pre-Trial

Trial Information

General Information

Title
Expert Reviews and Professional Learning: Evidence from a Physician Drug Rating Platform
RCT ID
AEARCTR-0014751
Initial registration date
November 07, 2024

First published
November 15, 2024, 1:37 PM EST

Last updated
March 20, 2025, 9:12 PM EDT

Locations

Region

Primary Investigator

Affiliation
Keio University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-11-11
End date
2025-07-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study examines how expert reviews influence product discovery and observational learning in a professional healthcare setting. While previous research has documented how consumers learn about product quality through expert reviews in consumer goods markets, little is known about whether these learning mechanisms persist in professional settings where reviewers have substantial domain expertise. In markets with a vast array of products, expert reviews can serve as a crucial mechanism for product discovery and quality assessment. We conduct a randomized controlled trial on an online medical platform exclusively for licensed physicians, hosting over 600,000 drug reviews. Focusing on a selected set of widely prescribed medications, we causally identify how exposure to reviews from different types of experts—top reviewers, veteran prescribers, and recognized opinion leaders—affects other physicians' rating behaviors and learning processes about drug quality. By focusing on a setting with medical professionals, this study provides novel insights into the role of expert reviews in professional observational learning and contributes to our understanding of how physicians discover and evaluate pharmaceutical products.

External Link(s)

Registration Citation

Citation
Nakajima, Ryo. 2025. "Expert Reviews and Professional Learning: Evidence from a Physician Drug Rating Platform." AEA RCT Registry. March 20. https://doi.org/10.1257/rct.14751-2.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
The intervention is conducted on a secure web-based medication review platform that is accessible exclusively to licensed physicians. On this platform, physicians evaluate medications by submitting a numerical rating (on a five-point overall satisfaction scale) and providing detailed written comments based on their clinical experiences.

To examine how expert reviews influence physicians’ evaluations, participants are randomly assigned at the review level to one of five experimental groups:

Treatment Groups
Group A: Shown evaluations from Veteran Reviewers
Group B: Shown evaluations from Top Reviewers
Group C: Shown evaluations from Authority Reviewers
Group D: Shown evaluations with reviewer identity anonymized

Control Group
Group O: Shown no expert evaluations

Each expert evaluation includes both a numerical satisfaction rating and a written comment describing the expert's experience with the medication.
Intervention (Hidden)
The intervention is implemented on a web-based medication review platform where licensed physicians routinely share their experiences with various drugs through structured ratings and written narratives. In the experiment, expert evaluations are displayed immediately before the participant begins their own assessment. Expert reviewers are identified using objective and pre-specified criteria as follows:

- Top Reviewers: Physicians ranked in the top 10% based on total “likes” received from other physicians on the platform
- Veteran Reviewers: Physicians ranked in the top 10% by both prescription volume and patient count
- Authority Reviewers: Recognized clinical leaders such as Key Opinion Leaders (KOLs) or Area Opinion Leaders (AOLs) in the relevant therapeutic domain

The full experimental design includes all five arms described in the public protocol (Groups A through D, and Group O). However, due to practical considerations such as time constraints or budget limitations, a partial implementation may be adopted. The partial design comprises only three groups: Group A (Veteran Reviewers), Group D (Anonymous Reviewer Information), and Group O (Control).

The initial experiment timeline was set from November 11, 2024 to June 30, 2025. However, in coordination with our partner MedPeer Inc., the launch was rescheduled to April 11, 2025 to accommodate refinements in the experimental procedure. Prior to the full-scale implementation, a pilot study involving a single medication will be conducted between March 12 and March 20, 2025.
Intervention Start Date
2025-04-14
Intervention End Date
2025-05-31

Primary Outcomes

Primary Outcomes (end points)
1. Physicians' medication satisfaction ratings (5-point scale)
2. Semantic similarity between general physicians' review comments and expert review comments, measured using natural language processing techniques
Primary Outcomes (explanation)
The semantic similarity between review comments will be calculated using embeddings from a medical-domain-specific Japanese BERT model and an appropriate similarity metric such as cosine similarity. This method allows us to quantify the degree to which general physicians' review content mirrors that of expert reviewers.
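As a minimal sketch of the similarity computation described above: assuming review comments have already been encoded into fixed-length embedding vectors (by a Japanese BERT model or otherwise), the cosine-similarity step itself reduces to the following. The function names here are illustrative, not part of the registered protocol.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_pairwise_similarity(general_embs, expert_embs):
    """Average cosine similarity over all (general, expert) comment pairs."""
    sims = [cosine_similarity(g, e) for g in general_embs for e in expert_embs]
    return float(np.mean(sims))
```

In practice each comment would first be passed through the sentence encoder; the pairwise averaging above then yields one similarity score per treatment group for the ANCOVA comparison.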

Secondary Outcomes

Secondary Outcomes (end points)
1. Time spent completing medication reviews
2. Complexity measures of review comments, including:
- Average sentence length and token count
- Vocabulary diversity (Type-Token Ratio)
- Linguistic sophistication measures
Secondary Outcomes (explanation)
These outcomes will help assess whether exposure to expert reviews leads to more thorough and detailed evaluations by general physicians.
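The complexity measures listed above (token count, average sentence length, type-token ratio) can be sketched as follows. This is an illustrative implementation with naive whitespace tokenization; actual Japanese review text would require a morphological analyzer, and the exact measures are specified in the Pre-Analysis Plan.

```python
import re

def complexity_measures(text):
    """Compute simple comment-complexity metrics for a review text.

    Uses naive sentence splitting and word tokenization; a morphological
    analyzer would be needed for Japanese text in the actual study.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    return {
        "token_count": len(tokens),
        "avg_sentence_length": len(tokens) / max(len(sentences), 1),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
    }
```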

Experimental Design

Experimental Design
The study employs a randomized controlled trial design with four treatment arms and one control arm. The study participants are licensed physicians registered on the web-based review board who actively prescribe medications in the relevant therapeutic areas. Based on their member ID, participating physicians are randomly assigned when evaluating medications to one of four treatment conditions (a review from one of the three expert reviewer types, or a review with reviewer type information withheld) or to the control condition (no expert review). The study focuses on medications commonly used for treating major chronic diseases and psychiatric disorders.
Experimental Design Details
The experiment includes detailed analyses of:
- Impact of expert reviews on rating scores and comment content
- Differential effects across expert reviewer types
- Heterogeneous effects by physician characteristics (gender, age group, practice type)
- Variation in effects by review timing (time of day, day of week)
- Impact on review thoroughness and complexity

The study targets medications that have received the highest number of physician reviews on the platform, including those used for treating chronic kidney disease, constipation, schizophrenia, diabetes, and obesity. These therapeutic areas were selected based on their high review volume, ensuring sufficient statistical power for the analyses. Only reviews from physicians who are actively prescribing medications in these therapeutic areas are included in the study.
Randomization Method
Randomization is conducted by computer at the time physicians submit medication reviews on the web-based review board. Using a random number generator, each reviewing physician is randomly assigned to one of the five experimental groups (four treatment groups or control) before they begin their medication evaluation. The randomization is conducted at the physician level to ensure that each physician consistently receives or does not receive expert reviews throughout the study period, preventing potential contamination between treatment and control conditions.
Randomization Unit
The unit of randomization is the reviewing physician. When a physician initiates a medication review on the platform, they are randomly assigned to either one of the four treatment groups (shown a review) or the control group (no review shown).

Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Not applicable as randomization occurs at the individual physician-review level rather than cluster level.
Sample size: planned number of observations
For each medication analyzed, the target minimum sample size is 270 reviews (54 reviews × 5 groups) if control variables are included, or 135 reviews (27 reviews × 5 groups) if control variables are not included. However, these sample sizes apply to the full experimental scheme, which includes four treatment groups (A, B, C, D) and one control group. If only the partial experimental scheme (with Treatment Groups A, D, and the Control Group) is conducted, the total sample size will be smaller. Nevertheless, we aim to secure at least 54 reviews per group if control variables are included, or 27 reviews per group if control variables are not included.
Sample size (or number of clusters) by treatment arms
The study will collect data from medication reviews across multiple therapeutic areas. For each medication analyzed, we target at least 54 reviews per group (treatment groups A, B, C, D and control group O) to achieve 80% power at a 0.05 significance level for detecting a medium effect size. In this analysis, we perform separate regression analyses, each comparing one treatment group (A, B, C, or D) with the control group (O).

For each medication analyzed:
- Treatment Group A (Veteran Reviewer): 54 reviews
- Treatment Group B (Top Reviewer): 54 reviews
- Treatment Group C (Authority Reviewer): 54 reviews
- Treatment Group D (Non-specified Reviewer): 54 reviews
- Control Group O: 54 reviews

This setup allows for sufficient statistical power across four distinct comparisons: (A vs. O), (B vs. O), (C vs. O), and (D vs. O).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
For each medication analyzed, we aim to detect a medium effect size at a 0.05 significance level with 80% power, focusing on two main outcomes: physician rating scores and comment content similarity.

(1) Rating scores (5-point scale): we can detect a minimum effect size of 0.15 (Cohen's f2), equivalent to a 0.3-point difference. When including control variables (e.g., gender, age, employment type), a sample size of 108 reviews (divided evenly between treatment and control) is required; without control variables, 54 reviews (also divided evenly between treatment and control) are sufficient. The regression model will estimate the influence of expert ratings on general physicians' scores, adjusting for demographic factors.

(2) Comment similarity: we can detect a minimum effect size of 0.25 (Cohen's f), requiring 270 pairwise comparisons per group. This translates to a minimum of 16 comments per group, or 80 comments in total, to achieve sufficient pairwise comparisons. An ANCOVA will compare group differences in similarity, assessing whether exposure to different expert types influences the language used in physician reviews.

For further details, refer to the Pre-Analysis Plan document.
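The regression power calculation for the rating-score outcome can be reproduced with a standard noncentral-F computation: for an F test of a single coefficient with effect size f2, the noncentrality parameter is f2 times the total sample size. The sketch below is illustrative only; the registered calculation may include additional covariates that change the degrees of freedom.

```python
from scipy.stats import f as f_dist, ncf

def required_n(f2, df_num=1, alpha=0.05, target_power=0.80):
    """Smallest total N achieving the target power for an F test of a
    regression coefficient, with noncentrality lambda = f2 * N."""
    n = df_num + 2
    while True:
        df_den = n - df_num - 1
        crit = f_dist.ppf(1 - alpha, df_num, df_den)
        power = 1 - ncf.cdf(crit, df_num, df_den, f2 * n)
        if power >= target_power:
            return n
        n += 1
```

With f2 = 0.15 this yields a total N in the mid-50s, consistent with the 54 reviews (split between treatment and control) stated above for the model without control variables.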
IRB

Institutional Review Boards (IRBs)

IRB Name
Institutional Review Board, Institute for Economic Studies, Keio University
IRB Approval Date
2024-10-07
IRB Approval Number
24010R
IRB Name
Institutional Review Board, Institute for Economic Studies, Keio University
IRB Approval Date
2025-02-23
IRB Approval Number
24020
Analysis Plan

There is information in this trial unavailable to the public.

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials