An Economic Experimental Study on the Impact of AI Tags on Knowledge Dissemination

Last registered on January 22, 2026

Pre-Trial

Trial Information

General Information

Title
An Economic Experimental Study on the Impact of AI Tags on Knowledge Dissemination
RCT ID
AEARCTR-0017605
Initial registration date
January 12, 2026

First published
January 22, 2026, 5:55 AM EST

Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
Hunan University

Other Primary Investigator(s)

PI Affiliation

Additional Trial Information

Status
In development
Start date
2026-01-31
End date
2026-04-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates how different labeling regimes for AI-generated content affect user engagement and perceived credibility through a randomized controlled trial (RCT) on a major knowledge-sharing platform. Addressing market inefficiencies caused by “AI discrimination” and inconsistent labeling standards, we evaluate the causal impact of various explicit tags (e.g., Pure AI, Human-AI Collaborative) on user behavior. We employ a stratified block-randomized design with five arms across diverse content domains, strictly controlling for content quality and account characteristics. We combine objective interaction data (e.g., click-through rates, likes) with subjective credibility ratings from post-experiment questionnaires. Our analysis focuses on three dimensions: (1) the differential negative impact of tag types on perceived credibility; (2) the mediating role of perceived credibility between labeling and user engagement; and (3) heterogeneous effects across user demographics and content fields. The results aim to inform optimized labeling strategies for platforms and to provide empirical support for digital content regulation.
External Link(s)

Registration Citation

Citation
Deng, Weiguang and Fangxu Qian. 2026. "An Economic Experimental Study on the Impact of AI Tags on Knowledge Dissemination." AEA RCT Registry. January 22. https://doi.org/10.1257/rct.17605-1.0
Experimental Details

Interventions

Intervention(s)
A randomized controlled experiment is conducted on the Zhihu platform with five experimental groups (one control group and four treatment groups) that manipulate the characteristics of AI labels. The experiment covers two types of content fields (fields with high-frequency AI inquiries and general fields) to explore how AI labels affect users' acceptance of knowledge and their interaction behavior.
Intervention Start Date
2026-03-31
Intervention End Date
2026-04-30

Primary Outcomes

Primary Outcomes (end points)
User interaction behavior indicators (number of likes, comments, collections, and click-through rate);
Users' perceived-credibility rating of the content.
Primary Outcomes (explanation)
User Interaction Behavior Indicators: collected from Zhihu's back-end platform data, these reflect users' active acceptance of content. Click-through rate is calculated as the number of clicks divided by the number of exposures; likes, comments, and collections are measured as absolute counts of user actions.
Perceived-Credibility Score: obtained through user questionnaires (following the trust scale of Ovsyannikova et al. (2025)), using a 5-point Likert scale (1 = extremely unreliable to 5 = extremely reliable) to measure users' subjective trust in the content. The score is the average of several neutrally worded items (e.g., "Please rate the credibility of this content"), phrased to avoid leading respondents.
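For concreteness, the two primary outcomes could be constructed as in the minimal sketch below. All field names (clicks, exposures, item_1, ...) are hypothetical placeholders, not the platform's actual schema:

```python
import pandas as pd

# Hypothetical per-answer interaction log exported from the platform back end.
interactions = pd.DataFrame({
    "answer_id":   [1, 2],
    "clicks":      [120, 45],
    "exposures":   [2000, 1500],
    "likes":       [30, 8],
    "comments":    [5, 2],
    "collections": [12, 3],
})

# Click-through rate = number of clicks / number of exposures.
interactions["ctr"] = interactions["clicks"] / interactions["exposures"]

# Hypothetical questionnaire responses: several 5-point Likert items per user.
survey = pd.DataFrame({
    "user_id": [101, 102],
    "item_1":  [4, 2],  # e.g., "Please rate the credibility of this content"
    "item_2":  [5, 3],
    "item_3":  [4, 2],
})

# Perceived-credibility score = mean of the Likert items
# (1 = extremely unreliable, 5 = extremely reliable).
survey["credibility"] = survey[["item_1", "item_2", "item_3"]].mean(axis=1)
```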

Secondary Outcomes

Secondary Outcomes (end points)
Heterogeneity in user responses across age groups, regions, and content fields.
Secondary Outcomes (explanation)
Heterogeneous Response Differences: using user background data (age, IP-based location) and content-field type, differences in the primary outcomes across subgroups are analyzed to identify the boundary conditions of AI-label effects.
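One common way to implement such a subgroup analysis is with treatment-by-subgroup interaction terms. The sketch below uses synthetic data and hypothetical column names (credibility, arm, age_group, region, field, content_cluster); it illustrates the technique, not the registered analysis plan:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for the experimental dataset (all names hypothetical).
df = pd.DataFrame({
    "credibility":     rng.normal(3.5, 1.0, n),
    "arm":             rng.choice(["CG", "T1", "T2", "T3", "T4"], n),
    "age_group":       rng.choice(["18-29", "30-44", "45+"], n),
    "region":          rng.choice(["east", "central", "west"], n),
    "field":           rng.choice(["ai_high_freq", "general"], n),
    "content_cluster": rng.integers(0, 40, n),
})

# Interacting treatment arm with age group estimates heterogeneous label
# effects; Treatment('CG') sets the control group as the reference category.
# Standard errors are clustered at the content-cluster level.
model = smf.ols(
    "credibility ~ C(arm, Treatment('CG')) * C(age_group) + C(region) + C(field)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["content_cluster"]})
print(model.summary())
```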

Experimental Design

Experimental Design
A randomized controlled trial (RCT) with five parallel groups, using stratified random assignment of users to experimental arms. The core intervention variable is the type of AI label; content quality, account characteristics, and release time are controlled as confounding variables. The analysis compares user behavior and perceived credibility across groups.
Experimental Design Details
Not available
Randomization Method
Computer-generated randomization. Using Zhihu's user background-data system, users are first stratified by age and region and then randomly assigned within each stratum, so that the user composition of each experimental group is balanced and assignment remains random.
Randomization Unit
Individual user, assigned within age-by-region strata.
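A minimal sketch of this kind of stratified assignment follows. The strata labels, arm names, and roster are placeholders; the actual assignment runs on Zhihu's back-end system:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2026)
ARMS = ["CG", "T1", "T2", "T3", "T4"]

# Hypothetical user roster with the two stratification variables.
users = pd.DataFrame({
    "user_id":   range(1000),
    "age_group": rng.choice(["18-29", "30-44", "45+"], 1000),
    "region":    rng.choice(["east", "central", "west"], 1000),
})

def assign_within_stratum(stratum: pd.DataFrame) -> pd.Series:
    """Shuffle the users in one age-by-region stratum and deal them evenly
    across the five arms, keeping each arm's user mix balanced."""
    arms = np.tile(ARMS, len(stratum) // len(ARMS) + 1)[: len(stratum)]
    return pd.Series(rng.permutation(arms), index=stratum.index)

users["arm"] = (
    users.groupby(["age_group", "region"], group_keys=False)
         .apply(assign_within_stratum)
)
```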
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
40 content clusters (2 sub-fields × 20 hot topics per sub-field). Each content cluster appears in all five experimental groups to control for the influence of content characteristics on the results.
Sample size: planned number of observations
1,000 individual users (200 per experimental group). The sample size is based on comparable studies (e.g., 1,235 valid observations in a study of human-machine difference effects) and on power calculations, ensuring sufficient statistical power to detect medium effect sizes.
Sample size (or number of clusters) by treatment arms
Control Group (CG): 200 users, covering 40 content clusters;
Treatment Group 1 (T1): 200 users, covering 40 content clusters;
Treatment Group 2 (T2): 200 users, covering 40 content clusters;
Treatment Group 3 (T3): 200 users, covering 40 content clusters;
Treatment Group 4 (T4): 200 users, covering 40 content clusters.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Unit of main outcome: standard deviation (SD) of the perceived-credibility score (SD assumed to be 1, based on the 5-point Likert-scale data);
Minimum detectable effect size (MDE): 0.2 SD (approximately a 20% relative effect versus the control group);
Statistical parameters: significance level α = 0.05; statistical power 1 − β = 0.8; intra-cluster correlation coefficient (ICC) = 0.05 (accounting for correlated user responses to the same content cluster);
Rationale: given prior experimental evidence that AI labels may reduce click-through rates by 20%-40%, the MDE is set to 0.2 SD, a practically meaningful effect size. With stratification and clustering taken into account, the planned 200 users per group should be able to detect an effect of this size.
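For reference, the MDE under these parameters can be reproduced with the standard two-sample formula inflated by a design effect for clustering. This is a back-of-the-envelope sketch under the stated assumptions; the realized MDE also depends on repeated observations per user and the exact analysis model, so this simplified figure need not match the registered 0.2 SD:

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80   # significance level and statistical power
sigma = 1.0                 # SD of the credibility score (assumed)
n_per_arm = 200             # users per experimental group
m = n_per_arm / 40          # users per content cluster within an arm (assumed)
icc = 0.05                  # intra-cluster correlation coefficient

# Design effect inflating the variance for clustered responses.
deff = 1 + (m - 1) * icc

# Two-sample MDE: (z_{1-alpha/2} + z_{1-beta}) * sqrt(2 * sigma^2 / n) * sqrt(DEFF)
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
mde = z * (2 * sigma**2 / n_per_arm) ** 0.5 * deff**0.5
print(f"MDE ≈ {mde:.2f} SD")  # ≈ 0.31 SD under these simplifying assumptions
```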
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number