Competitive Rating Systems in Healthcare Credence Goods Markets

Last registered on March 15, 2022

Pre-Trial

Trial Information

General Information

Title
Competitive Rating Systems in Healthcare Credence Goods Markets
RCT ID
AEARCTR-0009086
Initial registration date
March 11, 2022

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
March 15, 2022, 8:32 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Technical University of Munich, School of Management

Other Primary Investigator(s)

PI Affiliation
University of Innsbruck
PI Affiliation
UMIT Tirol
PI Affiliation
ETH Zurich
PI Affiliation
ESCP Business School

Additional Trial Information

Status
In development
Start date
2022-03-14
End date
2022-12-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
A key characteristic of health care markets is the information asymmetry between patients and physicians: physicians know more about the disease and the appropriate treatment than patients. This may result in different forms of physician misbehavior: providing more treatment than necessary, i.e. overtreatment; providing less treatment than necessary, i.e. undertreatment; or charging for more treatment than provided, i.e. overcharging. Patients therefore have to trust that physicians provide appropriate treatment, which is why health services are often referred to as credence goods (Darby and Karni 1973, Dulleck and Kerschbamer 2006).
The provision of feedback on rating platforms and the associated reputation building has gained more and more attention in the past two decades in the context of physician-patient interactions. In Germany, for instance, about 70% of physician-rating website users are influenced by the rating in their physician choice (Emmert and Meszmer 2018).
Based on the credence goods framework established by Dulleck and Kerschbamer (2006) and Dulleck et al. (2011), we introduce a toy model that enables us to derive hypotheses and test them in a laboratory experiment. In total, three conditions of market interactions are planned with 144 undergraduate students in the role of either physicians or patients. In the baseline condition (B), no reputation building is possible between physicians and patients. In two rating conditions, we introduce the possibility to rate physicians on a rating scale between zero and five stars. The rating is based on the payoff information of patients resulting from the interaction between physician and patient. In the first rating condition (R-noComp), physicians observe their own public rating when they make their decisions, but not that of other physicians. In the second rating condition (R-Comp), physicians see their own public rating as well as the public ratings of other physicians.
Our design allows us to investigate the effect of a public rating mechanism on outcomes in healthcare credence goods markets. Furthermore, it enables us to explore whether strategic considerations regarding the relative ranking in the rating system influence physician behaviour.

References
Darby, M. R. and E. Karni (1973). "Free Competition and the Optimal Amount of Fraud." Journal of Law & Economics 16(1): 67-88.
Dulleck, U. and R. Kerschbamer (2006). "On Doctors, Mechanics, and Computer Specialists: The Economics of Credence Goods." Journal of Economic Literature 44(1): 5-42. DOI: https://doi.org/10.1257/002205106776162717.
Dulleck, U., R. Kerschbamer and M. Sutter (2011). "The Economics of Credence Goods: An Experiment on the Role of Liability, Verifiability, Reputation, and Competition." American Economic Review 101(2): 526-555. DOI: https://doi.org/10.1257/aer.101.2.526.
Emmert, M. and N. Meszmer (2018). "Eine Dekade Arztbewertungsportale in Deutschland: Eine Zwischenbilanz Zum Aktuellen Entwicklungsstand." Gesundheitswesen 80(10): 851-858. DOI: https://doi.org/10.1055/s-0043-114002.


External Link(s)

Registration Citation

Citation
Angerer, Silvia et al. 2022. "Competitive Rating Systems in Healthcare Credence Goods Markets." AEA RCT Registry. March 15. https://doi.org/10.1257/rct.9086-1.0
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
We experimentally investigate the effect of a public rating system in healthcare credence goods markets. Therefore, we plan to employ a laboratory experiment framed in a healthcare context, where experts are called physicians and consumers are called patients, using a student sample from the University of Innsbruck.
To start with, we plan to run three experimental conditions. In the baseline condition, there is no feedback mechanism in place. Next, we introduce a public rating mechanism into the market, where patients can rate their interactions with physicians on a five-star rating scale. To explore whether strategic considerations influence physician behaviour, we employ one condition where physicians see only their own public rating, but not that of other physicians, and a second condition where they see both their own rating and the public ratings of others.
Our design allows us to investigate the effect of a public rating mechanism on outcomes in healthcare credence goods markets. Furthermore, it enables us to explore whether strategic considerations regarding the relative rank in the rating system influence physician behaviour.
Intervention (Hidden)
The experiment will be conducted at the Econ Lab at the University of Innsbruck. We build our experimental design on the credence goods framework of Dulleck and Kerschbamer (2006) and slightly adapt it to better resemble healthcare markets. Throughout the experiment, we implement a health care framing in which we refer to consumers of the credence good as patients and sellers as physicians, respectively.

Experiment

The basic set-up and parameterization:
In our basic set-up, patients and physicians are grouped in markets of 8 subjects (4 patients and 4 physicians). Patients suffer from a major health problem with probability h = 0.5 and a minor one with probability (1-h); the probability h = 0.5 is common knowledge. Patients choose a physician knowing that they suffer from some health problem in every period, but they receive no information about the severity of their health problem. Physicians diagnose their patients' health problem with certainty and at zero cost. They provide one of two treatments, a simple or an intensive one.
The cost for the physician to provide the intensive treatment (cI) is 10 ECU (Experimental Currency Units); the cost for the simple treatment (cS) is 5 ECU. Treatment prices, paid by an insurance company (not represented by participants in the lab), are either 20 ECU (pI) or 10 ECU (pS). Patients pay an insurance premium of 15 ECU. The intensive treatment cures both the major and the minor health problem, while the simple treatment cures only the minor one. Patients obtain 25 ECU (v) if cured, and zero if treated insufficiently.
The payoff for patients consulting a physician is the obtained value v minus the insurance premium and a disutility that depends on the type of treatment: patients who go through the intensive treatment bear a disutility of 5 ECU (dI), while the disutility of the simple treatment is zero ECU (dS). For physicians, the payoff is the spread between the price charged (pI or pS) and the cost of the chosen treatment (cI or cS). Patients have to choose exactly one physician in every round; physicians receive oPhy = 0 if they do not interact with any patient in a given round.
Throughout our experiment, we implement verifiability, that is, physicians can only charge the price for the treatment they perform (i.e. overcharging is ruled out by design). Furthermore, physicians have to provide sufficient treatment to patients (i.e. undertreatment is ruled out by design). Participants are not identifiable in the experimental setting, so reputation building is possible only in the conditions with feedback mechanisms.
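The payoff arithmetic above can be sketched in a few lines of code (a minimal illustration: the function and variable names are ours, while the parameter values are taken directly from the design):

```python
# Stage-game payoffs under the stated parameterization (ECU).
P_I, P_S = 20, 10   # treatment prices paid by the insurance company
C_I, C_S = 10, 5    # physician's costs for intensive / simple treatment
V = 25              # patient's value if cured
PREMIUM = 15        # insurance premium paid by patients
D_I, D_S = 5, 0     # patient's disutility from intensive / simple treatment

def physician_payoff(treatment):
    """Spread between the price charged and the treatment cost
    (with verifiability, the price always matches the treatment)."""
    return (P_I - C_I) if treatment == "intensive" else (P_S - C_S)

def patient_payoff(problem, treatment):
    """Value if cured, minus the premium and the treatment disutility."""
    cured = treatment == "intensive" or problem == "minor"
    value = V if cured else 0
    disutility = D_I if treatment == "intensive" else D_S
    return value - PREMIUM - disutility

# A minor-problem patient is cured by either treatment, but the
# physician earns a higher margin from overtreating:
print(physician_payoff("intensive"))         # 20 - 10 = 10
print(physician_payoff("simple"))            # 10 -  5 =  5
print(patient_payoff("minor", "intensive"))  # 25 - 15 - 5 =  5
print(patient_payoff("minor", "simple"))     # 25 - 15 - 0 = 10
```

The last four lines make the incentive conflict explicit: overtreating a minor problem doubles the physician's margin while halving the patient's payoff.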
The structure of the stage-game is as follows:
1) For each patient, nature draws the type of health problem. With probability h patients have a major health problem, and with probability (1-h) patients have a minor health problem.
2) Patients choose one physician from a list of four.
3) Physicians are informed about the health problem and provide a treatment (q_I or q_S). If a patient has a major health problem, physicians have to provide the intensive treatment (q_I).
4) Patients and physicians observe their payoff in the respective period. Note that patients cannot infer whether their physician treated them appropriately, they only learn which treatment was chosen.
5) In the conditions with a public rating system: after learning the payoff for the respective period, patients decide whether to rate the interaction with their treating physician. If they decide to rate, they choose a rating on a scale from 0 to 5 stars, which is shown to the treating physician afterwards.
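The five steps of the stage game can be simulated as follows. This is a sketch under illustrative assumptions: the physicians' policy for minor problems (overtreat with probability `overtreat_prob`) and all function names are ours, not part of the registered design; parameter values are taken from the set-up above.

```python
import random

H = 0.5                                # probability of a major health problem
PRICE = {"intensive": 20, "simple": 10}  # prices paid by the insurer (ECU)
COST = {"intensive": 10, "simple": 5}    # physician's treatment costs
V, PREMIUM = 25, 15                      # value if cured; insurance premium
DISUTILITY = {"intensive": 5, "simple": 0}

def play_period(rng, overtreat_prob=0.5):
    """One period in a market of 4 patients and 4 physicians."""
    doc_payoffs = [0, 0, 0, 0]         # physicians without patients earn 0
    patient_payoffs = []
    for _ in range(4):
        # Step 1: nature draws the health problem.
        problem = "major" if rng.random() < H else "minor"
        # Step 2: the patient chooses one of the four physicians
        # (random choice here, for illustration only).
        doc = rng.randrange(4)
        # Step 3: major problems require the intensive treatment;
        # for minor problems the physician may overtreat.
        if problem == "major":
            treatment = "intensive"
        else:
            treatment = ("intensive" if rng.random() < overtreat_prob
                         else "simple")
        # Step 4: both sides observe their payoffs. Since undertreatment
        # is ruled out, patients are always cured.
        doc_payoffs[doc] += PRICE[treatment] - COST[treatment]
        patient_payoffs.append(V - PREMIUM - DISUTILITY[treatment])
    return patient_payoffs, doc_payoffs

patients, physicians = play_period(random.Random(7))
print(patients, physicians)
```

Step 5 (the rating decision) is omitted here because the design leaves the mapping from payoffs to stars entirely to the patients.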


[Treatment Variation]
As explained above, we plan to run three treatments:
[Experimental Condition 1] — No Feedback-Mechanism
[Experimental Condition 2] — Public Feedback-Mechanism without Feedback-Competition
[Experimental Condition 3] — Public Feedback-Mechanism with Feedback-Competition
Intervention Start Date
2022-03-15
Intervention End Date
2022-05-31

Primary Outcomes

Primary Outcomes (end points)
Overtreatment rates
Primary Outcomes (explanation)
Overtreatment rates are defined as the share of interactions in which patients with a minor health problem receive the intensive treatment.

Secondary Outcomes

Secondary Outcomes (end points)
Market Efficiency.
Secondary Outcomes (explanation)
For each interaction, efficiency equals one if the patient was treated appropriately and zero otherwise (i.e. if the patient was overtreated).
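Since overcharging and undertreatment are ruled out by design, average efficiency reduces to one minus the overtreatment rate. A minimal sketch, using a hypothetical list of (health problem, treatment) records:

```python
# Toy interaction records: (health_problem, treatment) per patient-period.
interactions = [
    ("minor", "intensive"),  # overtreated -> inefficient
    ("minor", "simple"),     # appropriate -> efficient
    ("major", "intensive"),  # appropriate -> efficient
    ("minor", "intensive"),  # overtreated -> inefficient
]

overtreated = [p == "minor" and t == "intensive" for p, t in interactions]
overtreatment_rate = sum(overtreated) / len(interactions)
avg_efficiency = 1 - overtreatment_rate

print(overtreatment_rate)  # 0.5
print(avg_efficiency)      # 0.5
```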

Experimental Design

Experimental Design
We plan to use a student sample from the University of Innsbruck and run each experimental condition with 48 subjects (as suggested by our power analysis). We therefore plan to run two sessions of 24 subjects each in every experimental condition. All sessions are computerized using z-Tree, and students are recruited using hroot. Participants do not know which experiment they will participate in when they register; they only receive information about the expected duration of the experiment (2h).
Our experiment is structured as follows for all our conditions:

Stage 1: The experimenter explains the experiment and participants read the instructions.
Stage 2: Participants answer several control questions to ensure they understood the game.
Stage 3: The computer randomly assigns roles and markets to participants.
Stage 4: Participants play the game for 16 periods.
Stage 5: Participants answer several questions concerning their strategy in the game and beliefs about others.
Stage 6: Participants participate in additional games: an individual risk preference task, a dictator game, a lying task, and a trust game.
Stage 7: Participants fill out a questionnaire.
Experimental Design Details
Randomization Method
Randomization is carried out in the experiment by a computer.
Randomization Unit
at the session level
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
6 clusters of 8 individuals per experimental condition.
Sample size: planned number of observations
48 (6 x 8) individuals per experimental condition.
Sample size (or number of clusters) by treatment arms
144 (3 x 48) individuals (students at the University of Innsbruck).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Based on previous findings, we performed a power calculation indicating that we need six clusters of 8 subjects per experimental condition to achieve a power of 80%.
IRB

Institutional Review Boards (IRBs)

IRB Name
Leopold-Franzens-Universität Innsbruck, Certificate of good standing.
IRB Approval Date
2017-10-18
IRB Approval Number
40/2017

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials