Manipulation-Proof Machine Learning

Last registered on July 02, 2021

Pre-Trial

Trial Information

General Information

Title
Manipulation-Proof Machine Learning
RCT ID
AEARCTR-0004649
Initial registration date
August 30, 2019

First published
September 03, 2019, 1:39 PM EDT

Last updated
July 02, 2021, 1:00 PM EDT

Locations

Region

Primary Investigator

Affiliation
U.C. Berkeley

Other Primary Investigator(s)

PI Affiliation
Brown University
PI Affiliation
Brown University

Additional Trial Information

Status
Completed
Start date
2019-07-01
End date
2021-01-01
Secondary IDs
Abstract
An increasing number of decisions are guided by machine learning algorithms. An individual’s behavior is typically used as input to an estimator that determines future decisions. But when an estimator is used to allocate resources, individuals may strategically alter their behavior to achieve a desired outcome. This paper develops a new class of estimators that are stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, we test decision rules estimated with our strategy-robust method.
External Link(s)

Registration Citation

Citation
Bjorkegren, Daniel, Joshua Blumenstock, and Samsun Knight. 2021. "Manipulation-Proof Machine Learning." AEA RCT Registry. July 02. https://doi.org/10.1257/rct.4649-1.4
Sponsors & Partners

Experimental Details

Interventions

Intervention(s)
The subject population consists of Kenyans aged 18 years or older who own a smartphone and are able to travel to the Busara center in Nairobi. A baseline survey will be conducted with all consenting participants in the Busara offices in Nairobi. At the time of enrollment, participants will install a "Sensing" app on their smartphone. More details on the app and the baseline survey are provided below.

While enrolled in the study, each participant will be dynamically assigned to a treatment each week. Users will be sent both a text message (SMS) and an app push notification on a weekly basis directing them to the app. After a user opens the app, it will ask them to opt in to a "challenge". If they accept, they will be shown a challenge that offers a monetary incentive to change their behavior, as observed through the Sensing app.
Intervention Start Date
2019-07-01
Intervention End Date
2020-02-14

Primary Outcomes

Primary Outcomes (end points)
Note that our RCT is non-standard in that our primary objective is not to estimate the average treatment effect of an intervention on primary and secondary outcomes. Instead, the RCT is designed to allow us to estimate:
- The cost of manipulating different behaviors, as observed through the Sensing app (e.g., making a phone call, using multiple WiFi hotspots)
- The ease with which subjects can "game" algorithmic decision rules
- Whether transparent decision rules are easier to game than opaque ones
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
The subject population consists of Kenyans aged 18 years or older who own a smartphone and are able to travel to the Busara center in Nairobi. We anticipate recruiting roughly 1,200 subjects for the main study, primarily through in-person solicitation in public spaces (e.g., public markets) by Busara research staff.

Informed consent is requested from each subject, following the procedures registered with the Committee for the Protection of Human Subjects at U.C. Berkeley. Subjects must actively decide to participate in the research; no pressure or undue influence will be applied to induce subjects to participate.

A baseline survey will be conducted with all consenting participants in the Busara offices in Nairobi. At the time of enrollment, participants will install a "Sensing" app on their smartphone. While enrolled in the study, each participant will be dynamically assigned to a treatment each week. Users will be sent both a text message (SMS) and an app push notification on a weekly basis directing them to the app. After a user opens the app, it will ask them to opt in to a "challenge". These challenges may use either a naive or a manipulation-proof decision rule, and we will experimentally vary how transparent the algorithm is to the subject.
Experimental Design Details
See attached PDF
Randomization Method
Randomization done in office by a computer
Randomization Unit
Individual-week
Was the treatment clustered?
No
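Because the randomization unit is the individual-week, each participant's treatment is re-drawn independently every week rather than fixed at enrollment. A minimal sketch of this kind of assignment, assuming hypothetical arm labels and a fixed seed for reproducibility (the actual arms, probabilities, and implementation are not described in the registration):

```python
# Sketch of individual-week randomization: every (participant, week) cell
# gets an independent treatment draw. Arm labels below are illustrative
# placeholders, not the study's actual arms.
import random

ARMS = ["naive_rule", "manipulation_proof_rule"]  # hypothetical labels

def assign_weekly_arms(participant_ids, n_weeks, seed=0):
    """Return {(participant_id, week): arm} with one independent draw per cell."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {
        (pid, week): rng.choice(ARMS)
        for pid in participant_ids
        for week in range(1, n_weeks + 1)
    }

# Example: 2 participants over 3 weeks -> 6 individual-week assignments.
assignments = assign_weekly_arms(["P001", "P002"], n_weeks=3)
```

Drawing per individual-week (rather than per individual) is what makes the week, not the person, the unit of randomization, consistent with the "No" answer to treatment clustering above.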

Experiment Characteristics

Sample size: planned number of clusters
1500 individuals
Sample size: planned number of observations
21600 individual-weeks
Sample size (or number of clusters) by treatment arms
Approximately 50 simple challenges and 10 complex challenges
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
U.C. Berkeley Committee for the Protection of Human Subjects
IRB Approval Date
2018-07-19
IRB Approval Number
041671-001
IRB Name
Kenya Medical Research Institute
IRB Approval Date
2018-08-17
IRB Approval Number
624
Analysis Plan

Analysis Plan Documents

Busara_Manipulation_PAP_v3.pdf

MD5: 973080ba90348431af95fb0a9c870471

SHA1: 9a5674ed1d03588b615b358482add37c8b0e2529

Uploaded At: February 04, 2020

Post-Trial

Post Trial Information

Study Withdrawal

Intervention

Is the intervention completed?
No
Data Collection Complete
Yes
Data Collection Completion Date
Final Sample Size: Number of Clusters (Unit of Randomization)
Was attrition correlated with treatment status?
Final Sample Size: Total Number of Observations
Final Sample Size (or Number of Clusters) by Treatment Arms
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Abstract
An increasing number of decisions are guided by machine learning algorithms. In many settings, from consumer credit to criminal justice, those decisions are made by applying an estimator to data on an individual's observed behavior. But when consequential decisions are encoded in rules, individuals may strategically alter their behavior to achieve desired outcomes. This paper develops a new class of estimator that is stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, we show that decision rules estimated with our strategy-robust method outperform those based on standard supervised learning approaches.
Citation
Björkegren, D., Blumenstock, J. E., & Knight, S. (2020). Manipulation-Proof Machine Learning. arXiv:2004.03865 [cs, econ]. http://arxiv.org/abs/2004.03865

Reports & Other Materials