When Humans Encounter LLMs: An Experiment on Copy Writing and User Perception on Social Media

Last registered on May 17, 2023


Trial Information

General Information

When Humans Encounter LLMs: An Experiment on Copy Writing and User Perception on Social Media
Initial registration date
May 12, 2023


First published
May 17, 2023, 2:48 PM EDT




Primary Investigator

Peking University

Other Primary Investigator(s)

PI Affiliation
Peking University

Additional Trial Information

In development
Start date
End date
Secondary IDs
Prior work
This trial is based on or builds upon one or more prior RCTs.
This study explores competition and collaboration between humans and Large Language Models (LLMs; e.g., ChatGPT), as well as humans' attitudes towards LLMs. We plan to run a randomized controlled trial on a social media platform. Four accounts will be operated simultaneously from scratch, with the copywriting for three of the accounts generated separately by humans, LLMs, and human-AI collaboration. The copy for the fourth account will be randomly selected from the copy generated by LLMs or by human-AI collaboration, but we will indicate at the beginning of each post that the copy was generated or co-generated by LLMs. Afterwards, we will allocate traffic to the four accounts and observe the views, likes, reposts, and comments each account acquires. We will examine: (1) In the context of copywriting, who gains more traffic and performs better: humans, LLMs, or human-AI collaboration? (2) Compared to copywriting produced by humans, do social media users have different attitudes towards copywriting generated or co-generated by LLMs? (3) For people who collaborate with LLMs to write copy, do they adopt LLMs more in their later work, and what is the effect of LLM adoption on performance?
External Link(s)

Registration Citation

Huang, Huaqing and Juanjuan Meng. 2023. "When Humans Encounter LLMs: An Experiment on Copy Writing and User Perception on Social Media." AEA RCT Registry. May 17. https://doi.org/10.1257/rct.11347-1.0
Experimental Details


Intervention Start Date
Intervention End Date

Primary Outcomes

Primary Outcomes (end points)
the number of views, likes, reposts, and comments for each post; the number of new followers for each account; the number of purchases if the post includes a purchase link.

humans' perceptions of and attitudes towards LLMs; the adoption of LLMs; performance before and after the adoption of LLMs.
Primary Outcomes (explanation)
Humans' perceptions and attitudes towards LLMs will be elicited from surveys.

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Each group corresponds to one social media account, and four accounts will be operated simultaneously from scratch.
⚫ Pure Human Group: Humans generate copywriting independently.
⚫ Pure AI Group: LLMs generate copywriting independently.
⚫ Human-AI Collaboration Group: There is no specific limitation on the collaboration method. It can be multiple rounds of interaction between humans and LLMs, or humans modifying the copywriting generated by LLMs.
⚫ Labeled Group: Randomly select copywriting from the Pure AI Group and Human-AI Collaboration Group for posting. When posting, indicate at the beginning that "This copywriting is generated by LLMs" or "This copywriting is co-generated by humans and LLMs".

These four accounts will be used to promote the same brand and product managed by a company. We plan to recruit employees from this company to generate or co-generate copy at a designated time and place. An initial questionnaire will collect these employees' basic information and, in particular, survey their usage of LLMs and other AI tools. A follow-up questionnaire, released two weeks after the copywriting task, will survey their subsequent adoption and usage of LLMs. We will combine these responses with the employees' performance evaluation data from the company to analyze the productivity effect of LLMs.

Some of the copy will be paired with a product image, a purchase link, or both. We will then publish these posts and allocate traffic to them, purchasing the same number of exposures for each post, and observe the final counts of views, likes, shares, and comments. When the experiment is finished, a questionnaire surveying social media users' perceptions of and attitudes towards these posts will be released through the four accounts.
Experimental Design Details
Randomization Method
For the people we recruit to write copy, the randomization between the "Pure Human Group" and the "Human-AI Collaboration Group" is done in the office by a computer.
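As a minimal sketch only (not the registered procedure; the participant IDs, seed, and balanced fifty-fifty split are illustrative assumptions), the computer randomization between the two writer groups could look like:

```python
import random

def assign_groups(participant_ids, seed=2023):
    """Randomly split recruited writers into the Pure Human Group and the
    Human-AI Collaboration Group (balanced halves).
    NOTE: the seed and balanced split are hypothetical, not from the registration."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "Pure Human Group": sorted(ids[:half]),
        "Human-AI Collaboration Group": sorted(ids[half:]),
    }

# 150 recruited individuals, as in the planned sample
assignment = assign_groups(range(1, 151))
print(len(assignment["Pure Human Group"]),
      len(assignment["Human-AI Collaboration Group"]))  # 75 75
```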
Randomization Unit
Was the treatment clustered?

Experiment Characteristics

Sample size: planned number of clusters
150 individuals.
Sample size: planned number of observations
150 individuals, each generating or co-generating two posts (300 posts); another 300 posts will be generated by LLMs or randomly selected from the existing posts, for 600 posts in total. We plan to purchase 10,000-20,000 exposures for each post, so 6,000,000-12,000,000 social media users in total will be exposed to the copy across the four accounts.
Sample size (or number of clusters) by treatment arms
150 posts generated by humans independently; 150 posts generated by LLMs independently; 150 posts generated by human-AI collaboration; 150 posts randomly selected from the posts generated or co-generated by LLMs and labeled "generated/co-generated by LLMs".
1,500,000-3,000,000 users exposed to the copy for each account.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)

Institutional Review Boards (IRBs)

IRB Name
Guanghua School of Management, Peking University
IRB Approval Date
IRB Approval Number


Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.


Is the intervention completed?
Data Collection Complete
Data Publication

Data Publication

Is public data available?

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials