Manipulation-Proof Machine Learning
Initial registration date
August 30, 2019
February 04, 2020 10:34 PM EST
Other Primary Investigator(s)
Additional Trial Information
An increasing number of decisions are guided by machine learning algorithms. An individual’s behavior is typically used as input to an estimator that determines future decisions. But when an estimator is used to allocate resources, individuals may strategically alter their behavior to achieve a desired outcome. This paper develops a new class of estimators that are stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, we test decision rules estimated with our strategy-robust method.
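The strategic behavior described above can be illustrated with a stylized model (this is a sketch for intuition only, not the paper's actual estimator): suppose the decision rule is a transparent linear score and each behavior has a quadratic manipulation cost. The weights `w`, costs `c`, and value `v` below are invented for illustration.

```python
import numpy as np

# Stylized setup: a linear score s(x) = w @ x over observed behaviors.
# Under quadratic manipulation cost c_j per unit change in behavior j,
# an agent who values a one-unit score increase at v chooses the
# manipulation d_j maximizing v * w_j * d_j - 0.5 * c_j * d_j**2,
# which gives the best response d_j = v * w_j / c_j.

def best_response(w, c, v=1.0):
    """Equilibrium manipulation of each behavior under a transparent linear rule."""
    return v * w / c

w = np.array([0.8, 0.5, 0.1])   # hypothetical weights on three sensed behaviors
c = np.array([0.5, 2.0, 10.0])  # hypothetical cost of manipulating each behavior

d = best_response(w, c)
print(d)  # [1.6, 0.25, 0.01]
```

Cheap-to-manipulate behaviors move the most; a manipulation-proof rule would shift weight toward behaviors that are costly to manipulate.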
The subject population consists of Kenyans aged 18 years or older who own a smartphone and are able to travel to the Busara center in Nairobi. A baseline survey will be conducted with all consenting participants in the Busara offices in Nairobi. At the time of enrollment, participants will install a "Sensing" app on their smartphone. More details on the app and the baseline survey are provided below.
While enrolled in the study, each participant will be dynamically assigned to a treatment each week. Users will be sent both a text message (SMS) and an app push notification on a weekly basis directing them to the app. After a user opens the app, it will ask them to opt in to a "challenge". If they accept, they will see a challenge that offers a monetary incentive to change their behavior, as observed through the Sensing app.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Note that our RCT is nonstandard in that its primary objective is not to estimate the average treatment effect of an intervention on primary and secondary outcomes. Instead, the RCT is designed to allow us to estimate:
- The cost of manipulating different behaviors, as observed through the Sensing app (e.g., making a phone call, using multiple WiFi hotspots)
- The ease with which subjects can "game" algorithmic decision rules
- Whether transparent decision rules are easier to game than opaque ones
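The first quantity above could in principle be recovered from randomized incentives. As a hypothetical sketch (not the registered estimation strategy): if a challenge offers a randomized incentive v per unit of a behavior and the quadratic-cost model holds, the induced behavior change is d = v / c plus noise, so the manipulation cost c is the reciprocal of the slope from regressing d on v. The cost value and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 500 challenge-weeks with randomized incentives v_i and
# observed behavior changes d_i generated from a quadratic-cost model.
c_true = 2.0
v = rng.uniform(0.5, 5.0, size=500)              # randomized incentive sizes
d = v / c_true + rng.normal(0, 0.05, size=500)   # observed behavior changes

# OLS slope of d on v recovers 1/c; invert to estimate the manipulation cost.
slope, intercept = np.polyfit(v, d, 1)
c_hat = 1.0 / slope
```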
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
The subject population consists of Kenyans aged 18 years or older who own a smartphone and are able to travel to the Busara center in Nairobi. We anticipate recruiting roughly 1,200 subjects for the main study, primarily through in-person solicitation in public spaces (e.g., public markets) by Busara research staff.
Informed consent is requested from each subject, following the procedures registered with the Committee for the Protection of Human Subjects at U.C. Berkeley. Subjects must actively decide to participate in the research; no pressure or undue influence will be applied to induce subjects to participate. A baseline survey will be conducted with all consenting participants in the Busara offices in Nairobi. At the time of enrollment, participants will install a "Sensing" app on their smartphone. While enrolled in the study, each participant will be dynamically assigned to a treatment each week. Users will be sent both a text message (SMS) and an app push notification on a weekly basis directing them to the app. After a user opens the app, it will ask them to opt in to a "challenge". These challenges may use either a naive or manipulation-proof decision rule, and we will experimentally vary the level of transparency of the algorithm to the subject.
Experimental Design Details
Randomization done in office by a computer
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Approximately 50 simple challenges and 10 complex challenges
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
U.C. Berkeley Committee for the Protection of Human Subjects
IRB Approval Date
IRB Approval Number
Kenya Medical Research Institute
IRB Approval Date