Generic Aversion in the Over-the-Counter Drug Market

Last registered on January 03, 2023


Trial Information

General Information

Generic Aversion in the Over-the-Counter Drug Market
Initial registration date
December 27, 2022

Initial registration date is when the trial was registered, i.e., when the registration was submitted to the Registry to be reviewed for publication.

First published
January 03, 2023, 5:28 PM EST

First published corresponds to when the trial was first made public on the Registry after being reviewed.


Primary Investigator

Montana State University

Other Primary Investigator(s)

PI Affiliation
UC Berkeley

Additional Trial Information

Start date
End date
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
Through a labeling intervention at a national retailer, we test three hypotheses for consumer aversion to generic over-the-counter (OTC) drugs: lack of information on the comparability of generic and brand drugs, inattention to their price differences, and uncertainty about generic quality that can be reduced with information on peer purchase rates. With a difference-in-differences strategy, we find that posted information on the purchases of other customers increases generic purchase shares significantly, while other treatments have mixed results. Consumers without prior generic purchases appear particularly responsive to this information. These findings have policy implications for promoting evidence-based, cost-effective choices.
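The difference-in-differences strategy mentioned above can be illustrated with a minimal 2x2 sketch: the change in treated stores' generic share minus the change in control stores' share. All numbers below are hypothetical, not results from the trial.

```python
# Illustrative 2x2 difference-in-differences estimate of a labeling effect
# on generic purchase shares. The share values are hypothetical placeholders.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Change in treated stores minus change in control stores."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean generic shares before and after labels were posted.
effect = did_estimate(treat_pre=0.60, treat_post=0.68,
                      control_pre=0.61, control_post=0.62)
print(round(effect, 3))  # 0.07
```

The actual analysis would use weekly product-by-store sales with store and week controls, but the treated-minus-control contrast is the same.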
External Link(s)

Registration Citation

Carrera, Mariana and Sofia Villas-Boas. 2023. "Generic Aversion in the Over-the-Counter Drug Market." AEA RCT Registry. January 03.
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details


For four weeks, labels were posted beneath the price tags of generic products in treated drug classes in treated stores. The content of the labels differed across three experimental arms.
Test 1. To test the hypothesis that consumers lack basic information on brand/generic drug comparability, we created labels that described the similarity between brand and generic products as specifically as possible. The strongest statement we used was: “The FDA determined this product to be therapeutically equivalent and bioequivalent to [corresponding brand product],” taken verbatim from the FDA approval letter, for drugs with such approval letters available on the FDA website. The second statement used was “This product contains the same active ingredient as [corresponding brand product] and has been approved by the FDA,” shown with the reference number and date of FDA approval. This label appeared on products for which we found notices of FDA approval, but either no electronically available letter, or a letter that did not include any statement about bioequivalence. The third statement, which was posted for older-generation drugs whose manufacturers need not seek explicit approval from the FDA prior to marketing a generic, was “This product contains the same active ingredient as [corresponding brand product].”
Test 2a. To test for inattention to price differences, we posted labels stating “Customers who choose this product save X%” with a footnote specifying that the savings was relative to the specified brand product per dose. X ranged from 14% to 68% in the products labeled.
Test 2b. In another store, we highlighted the price differences in a different way, by stating “Customers who choose [corresponding brand product] pay Y% more than the generic alternative.” In this type of label, the price difference is framed as a loss rather than a gain. Also, for the same brand and generic prices, Y will be a larger number than X, because the generic price is a smaller denominator. For these reasons, we hypothesized that Test 2b would have a stronger effect than Test 2a. Note, however, that the label was placed below the generic product, as we were not permitted to place labels below branded products.
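The arithmetic behind the prediction that Y exceeds X can be made concrete: both labels report the same dollar gap, but Test 2b divides it by the smaller generic price. The prices below are hypothetical examples, not from the trial.

```python
# Why the loss framing (Test 2b) shows a larger percentage than the gain
# framing (Test 2a) for the same pair of prices. Prices are hypothetical.

def savings_pct(brand, generic):
    """Test 2a: percent saved by choosing the generic, relative to the brand price."""
    return 100 * (brand - generic) / brand

def premium_pct(brand, generic):
    """Test 2b: percent more paid for the brand, relative to the generic price."""
    return 100 * (brand - generic) / generic

brand, generic = 8.00, 5.00
print(savings_pct(brand, generic))   # 37.5
print(premium_pct(brand, generic))   # 60.0
```

For any brand price above the generic price, the premium percentage is mechanically larger than the savings percentage, since the numerator is identical and the denominator is smaller.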
Test 3a. To test for observational learning, we posted labels stating “X% of customers in this store choose this product instead of [corresponding brand product].” The values of this share were calculated for each product and each store, using either the previous year’s sales data (Jan-Dec 2011) or the first three months of the current year (Jan-March 2012). To obtain quasi-exogenous variation in the value of the share displayed, holding constant the product and the store, we alternated which method of calculation was used in each store’s labels, each week.
Test 3b. An alternate way to frame the information displayed in Test 3a is to report the share of customers who buy the brand product, e.g., “Y% of customers in this store choose [corresponding brand product] instead of this product.” If the mere act of bringing attention to the purchase of a specific product leads consumers to buy it, or if the statement is read as an implicit endorsement of a particular product, then Test 3b could have a different effect than Test 3a. If, instead, both labels only affect purchases insofar as they shift customers’ beliefs about what others buy, then Tests 3a and 3b should have the same effect on consumer purchases.

Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
Total number of purchases and generic share of purchases
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The six stores to be treated were selected by convenience, as the research team needed to travel to each store once per week to post the labels. The six control stores were pre-selected as a similar set of stores based on the prior year's purchasing statistics. Treatments 1-3 were assigned randomly to the six selected treatment stores, as were the drug classes chosen to be treated.
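The office randomization described above can be sketched as follows. The store labels, arm list, and drug class names are placeholders for illustration; the trial's actual identifiers and the exact mapping of arms to stores are not public in this record.

```python
# Sketch of the computer randomization: shuffle treatment arms across the
# six treatment stores and draw half of the drug class groups for treatment.
# All names are hypothetical placeholders, not the trial's identifiers.
import random

random.seed(0)  # fixed seed so the assignment is reproducible

stores = [f"store_{i}" for i in range(1, 7)]
# Five label arms over six stores; the repeated slot is an assumption made
# only so the illustration has one arm per store.
arms = ["Test 1", "Test 2a", "Test 2b", "Test 3a", "Test 3b", "Test 1"]
drug_classes = [f"class_{i}" for i in range(1, 9)]

random.shuffle(arms)
store_assignment = dict(zip(stores, arms))

# 4 of the 8 drug class groups are treated, 4 left untreated.
treated_classes = sorted(random.sample(drug_classes, 4))

print(store_assignment)
print(treated_classes)
```

A fixed seed is used here only so the sketch is reproducible; the trial record does not describe the actual seeding or assignment procedure beyond "done in office by a computer."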
Experimental Design Details
Randomization Method
Randomization was done in the office by a computer.
Randomization Unit
Drug class and store.
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
8 drug class groups, 6 stores treated, 6 stores control.
Sample size: planned number of observations
12 stores x 34 drugs x 10 weeks = 4080 weekly sales observations
Sample size (or number of clusters) by treatment arms
8 drug class groups (4 treated; 4 untreated), 12 stores (6 treated, 6 control).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
University of California Berkeley
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Collection Completion Date
December 31, 2012, 12:00 +00:00
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?

There is information in this trial unavailable to the public.

Program Files

Program Files
Program Files URL
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials