Measuring AI Adoption Across Randomized Nudges

Last registered on October 28, 2024

Pre-Trial

Trial Information

General Information

Title
Measuring AI Adoption Across Randomized Nudges
RCT ID
AEARCTR-0014569
Initial registration date
October 22, 2024


First published
October 28, 2024, 12:59 PM EDT


Locations

Information on trial locations is not available to the public.

Primary Investigator

Affiliation

Other Primary Investigator(s)

PI Affiliation
Microsoft
PI Affiliation
Microsoft

Additional Trial Information

Status
In development
Start date
2024-10-07
End date
2025-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Several studies have examined the uses and impact of generative AI applications aimed at improving people's productivity in the workplace and other settings. However, little is known about the contexts in which people are more or less willing to seek and adopt AI feedback. This study aims to determine how perceptions of AI affect whether people use it. We randomly vary the way an AI feature is introduced to users via text-based “nudges,” and then measure how users respond in terms of their willingness to try out the feature.
External Link(s)

Registration Citation

Citation
Ahmad, Wajeeha, Adam Fourney and Eric Horvitz. 2024. "Measuring AI Adoption Across Randomized Nudges." AEA RCT Registry. October 28. https://doi.org/10.1257/rct.14569-1.0
Experimental Details

Interventions

Intervention(s)
For this field experiment, we are working with a company that has developed a feature providing AI-generated feedback to users who are typing an email message. This tool gives users feedback on attributes of their language, such as tone, reader sentiment, and clarity. Users can then choose to incorporate this feedback when communicating with others via email. The company’s product team has developed an initial text-based nudge: brief text, shown within the email window while a user is writing a message, that encourages the user to try the AI feature. We propose a randomized roll-out of different text-based variants of this nudge within the email product, such that users in our experiment either receive no nudge or receive one variant of a text-based nudge describing the AI feature’s capability while they are writing emails. We aim to identify which types of text-based nudges lead to greater usage of the AI feature.
Intervention Start Date
2024-10-07
Intervention End Date
2025-01-07

Primary Outcomes

Primary Outcomes (end points)
The primary outcome of interest is whether our randomized nudges influence users’ decisions about using the AI feature.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
As a secondary outcome, we will measure whether people retain the feedback provided by the AI tool, as measured by whether users press the "apply suggestions" button after trying the tool.
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Information on the experimental design is hidden until the end of the trial.
Experimental Design Details
Not available
Randomization Method
Randomization is performed programmatically by the company's experimentation platform (see the illustrative sketch after this block).
Randomization Unit
Randomization is at the level of the individual user.
Was the treatment clustered?
No
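
The platform's assignment code is not public; as an illustration only, the sketch below shows a hash-based, user-level assignment of the kind experimentation platforms commonly use. All identifiers are hypothetical, and the one-control/six-variant split is an assumption consistent with the seven arms described under Experiment Characteristics.

import hashlib

# Hypothetical split: one no-nudge control arm plus six text variants,
# consistent with the seven randomized arms described in this registration.
ARMS = ["no_nudge"] + [f"nudge_variant_{i}" for i in range(1, 7)]

def assign_arm(user_id: str, salt: str = "ai-nudge-rct") -> str:
    """Deterministically map a stable user ID to one of the seven arms."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    # A well-mixed hash gives near-uniform buckets, so arms end up evenly sized.
    return ARMS[int(digest, 16) % len(ARMS)]

print(assign_arm("user-12345"))  # the same user always lands in the same arm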

Experiment Characteristics

Sample size: planned number of clusters
N/A.
Sample size: planned number of observations
The nudge will be rolled out to the company's worldwide users who are eligible for the AI tool, approximately 1 million users in total.
Sample size (or number of clusters) by treatment arms
Users are evenly distributed across the seven randomized arms, with approximately 145,000 nudge-eligible users in each group.

However, the number of users who actually receive a nudge is likely smaller than the number who are eligible, because nudges are triggered only when specific criteria in the product’s code are met. In an initial test of the nudge system, approximately 10,000 nudges were triggered for 30,000 users over a two-week period, i.e., roughly one nudge per three users. Assuming nudges are triggered at a similar rate during the experiment, we can expect at least 48,000 nudges per two-week period in each treatment group assigned a nudge. (Note that nudges may be triggered several times for the same user.)
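
The expected-nudge figure follows from scaling the pilot trigger rate to the per-arm sample; a minimal sketch of the arithmetic, using the pilot numbers given above:

pilot_nudges = 10_000     # nudges triggered in the two-week pilot
pilot_users = 30_000      # users covered by the pilot
users_per_arm = 145_000   # eligible users per randomized arm

rate = pilot_nudges / pilot_users    # ~0.33 nudges per user per two weeks
expected = rate * users_per_arm      # ~48,333 nudges per arm per two weeks
print(f"{expected:,.0f} expected nudges per nudge arm per two-week period")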

We plan to examine outcomes for all users eligible for the nudge as well as the subset who ended up receiving nudges.
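
A minimal sketch of how these two analysis populations might be compared, assuming a simple per-user log; the column names and toy data are hypothetical, since the registered design details are hidden:

import pandas as pd

# Hypothetical per-user log: assigned arm, whether a nudge was actually
# triggered for the user, and whether the user tried the AI feature.
df = pd.DataFrame({
    "arm":             ["no_nudge", "nudge_variant_1", "nudge_variant_1", "no_nudge"],
    "nudge_triggered": [False,      True,              False,             False],
    "tried_feature":   [False,      True,              False,             True],
})

# Intent-to-treat: feature uptake by assigned arm across all eligible users.
itt = df.groupby("arm")["tried_feature"].mean()

# Subset analysis: uptake among only those users for whom a nudge was triggered.
received = df[df["nudge_triggered"]].groupby("arm")["tried_feature"].mean()

print(itt, received, sep="\n\n")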
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
Microsoft Research Ethics Review Program and IRB
IRB Approval Date
2024-08-09
IRB Approval Number
ERP 10837 / R&CT 7374
IRB Name
Administrative Panels for the Protection of Human Subjects, Stanford University
IRB Approval Date
2024-08-29
IRB Approval Number
76749