Disentangling Device and Observability Effects in Dishonesty: Evidence from Probability-Equivalent Randomization Tasks

Last registered on April 01, 2026

Pre-Trial

Trial Information

General Information

Title
Disentangling Device and Observability Effects in Dishonesty: Evidence from Probability-Equivalent Randomization Tasks
RCT ID
AEARCTR-0018177
Initial registration date
March 26, 2026
First published
April 01, 2026, 9:58 AM EDT

Locations

There is information in this trial unavailable to the public.

Primary Investigator

Affiliation
Universidad Autónoma de Madrid

Other Primary Investigator(s)

PI Affiliation
Universidad Autónoma de Madrid

Additional Trial Information

Status
In development
Start date
2026-03-30
End date
2026-05-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This project studies dishonest behavior in environments in which individuals privately observe outcomes from randomization devices and self-report them for monetary gain. We design a set of eight treatments that isolate the causal effect of the structure and representation of randomization devices by equalizing outcome probabilities and payoff distributions across mechanisms. The treatment variations include comparisons between binary and multi-outcome devices, between simple and cognitively complex randomization procedures, and between physical and digital formats. In addition, the design incorporates treatments that vary the observability of outcomes, allowing for both aggregate inference of dishonesty and direct observation of individual misreporting.

The study examines how these features affect the level and structure of dishonest reporting. Beyond standard distributional measures, we leverage treatments with observable outcomes to characterize heterogeneity in reporting behavior and classify individuals into behavioral types. The project contributes to the literature by disentangling device effects from incentives and by linking aggregate evidence on dishonesty to underlying individual-level strategies.
External Link(s)

Registration Citation

Citation
Pintér, Ágnes and Nuria Rodríguez Priego. 2026. "Disentangling Device and Observability Effects in Dishonesty: Evidence from Probability-Equivalent Randomization Tasks." AEA RCT Registry. April 01. https://doi.org/10.1257/rct.18177-1.0
Experimental Details

Interventions

Intervention(s)
The study consists of a series of incentivized laboratory experiments in which participants observe realizations from randomization devices and report the outcome for monetary compensation. The objective is to study how the structure and implementation of the randomization device affect dishonest reporting, holding constant the probability distribution of outcomes and the associated payoffs.

Participants are assigned to one of several experimental conditions that vary along three main dimensions. First, we compare binary and multi-outcome environments by implementing coin-based and die-based tasks with equivalent winning probabilities. Second, we vary the cognitive complexity of the randomization process by contrasting simple devices with composite mechanisms that generate identical distributions through sequential steps. Third, we manipulate the format of the randomization device, distinguishing between conditions in which participants generate outcomes themselves (e.g., by flipping a coin or rolling a die) and conditions in which outcomes are generated by the computer within the experimental software.
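
To illustrate what probability equivalence between a simple and a composite device means, the sketch below contrasts a fair six-sided die with a hypothetical two-step composite: a coin flip selects the low half {1,2,3} or the high half {4,5,6} of the faces, and a uniform draw then picks a face within the chosen half. The specific composite mechanisms used in the experiment are not disclosed in this registration, so this construction is an assumption for illustration only.

```python
from fractions import Fraction
from itertools import product

def simple_die():
    # Simple device: a fair six-sided die, each face with probability 1/6.
    return {face: Fraction(1, 6) for face in range(1, 7)}

def composite_die():
    # Hypothetical sequential composite (illustrative, not the experiment's
    # actual mechanism): a fair coin chooses the low half {1,2,3} or the
    # high half {4,5,6}, then one of the three faces in the chosen half is
    # drawn uniformly.
    dist = {face: Fraction(0) for face in range(1, 7)}
    for coin, inner in product(("low", "high"), range(3)):
        face = inner + 1 if coin == "low" else inner + 4
        dist[face] += Fraction(1, 2) * Fraction(1, 3)
    return dist

# Both devices induce the same distribution over reported outcomes, so any
# difference in reporting behavior cannot stem from incentives.
assert simple_die() == composite_die()
```

Because the two devices induce identical outcome distributions and payoffs, any treatment difference in reports can be attributed to the device's structure rather than to expected earnings.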

When outcomes are generated by the participant, only reported values are observed, and dishonesty can be inferred only from aggregate deviations from the theoretical distribution. When outcomes are generated by the computer, the realization is recorded by the software, allowing the researcher to compare realized and reported outcomes at the individual level. Although all decisions remain anonymous and unsupervised, the computer-generated implementation may affect participants’ perception of privacy.

Each participant completes multiple independent rounds of the task within a single session, and monetary payoffs depend on reported outcomes. The design does not involve deception and maintains identical incentives across comparable treatments.
Intervention Start Date
2026-03-30
Intervention End Date
2026-04-30

Primary Outcomes

Primary Outcomes (end points)
The primary outcome is the reported outcome in each round. At the aggregate level, we analyze mean reported values and distributional deviations. In treatments with observable outcomes, we additionally construct individual-level measures of misreporting, including indicators for any misreporting, upward misreporting, and maximal misreporting.
Primary Outcomes (explanation)
The reported outcome constitutes the fundamental behavioral variable and is recorded either as a binary indicator (in coin-based tasks) or as a numerical value (in multi-outcome tasks). Aggregate dishonesty is constructed by comparing the empirical distribution of reported outcomes with the theoretical distribution implied by truthful reporting. This includes differences in mean reported outcomes relative to the expected value under honesty and formal distributional comparisons.

In treatments where outcomes are generated and recorded by the computer, individual-level dishonesty is directly observed. A binary indicator of dishonesty is defined as whether the reported outcome differs from the realized outcome. In addition, the magnitude of dishonesty is measured as the difference between reported and true outcomes, allowing for an analysis of the frequency, intensity and motives of misreporting.
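
The aggregate and individual-level measures described above can be sketched as follows. The data are hypothetical and the variable names are illustrative; the aggregate measures are the gap between the mean report and the honest expectation (3.5 for a fair die) and a Pearson chi-square statistic against the uniform distribution, while the individual-level measures require the realized outcomes recorded in the computer-generated treatments.

```python
from collections import Counter

# Hypothetical die-task data for illustration only.
reports  = [6, 5, 6, 3, 6, 4, 6, 6, 2, 6, 5, 6]
realized = [4, 5, 2, 3, 1, 4, 6, 3, 2, 5, 5, 1]  # recorded only in computer treatments

# Aggregate measures: deviation of the mean report from the honest
# expectation, and a chi-square statistic against uniform reporting.
expected = len(reports) / 6
counts = Counter(reports)
chi_sq = sum((counts.get(f, 0) - expected) ** 2 / expected for f in range(1, 7))
mean_gap = sum(reports) / len(reports) - 3.5

# Individual-level measures (observable treatments only): any misreporting,
# its signed magnitude, and maximal (report the top outcome) misreporting.
misreported = [r != x for r, x in zip(reports, realized)]
magnitude   = [r - x for r, x in zip(reports, realized)]
maximal     = [r == 6 and x != 6 for r, x in zip(reports, realized)]

print(round(mean_gap, 2), round(chi_sq, 2), sum(misreported), sum(maximal))
```

In the treatments where only reports are observed, only the aggregate measures are identified; the individual-level vectors exist only where the software recorded the realization.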

Secondary Outcomes

Secondary Outcomes (end points)
Secondary outcomes include measures related to the cognitive and individual correlates of reporting behavior. These comprise response times during the reporting decision, individual characteristics such as gender, measures of cognitive ability, and survey-based measures capturing moral attitudes, risk preferences, and background characteristics (including field of study and demographics).
Secondary Outcomes (explanation)
Response times are recorded for each reporting decision and will be used as a proxy for the cognitive effort associated with reporting. In particular, longer response times may reflect the additional cognitive cost of manipulating a reported outcome relative to truthfully reporting the realized value.

Individual characteristics are used to study heterogeneity in dishonest behavior. Gender will be included as a pre-specified dimension of heterogeneity, given its relevance in the literature on ethical decision-making. Cognitive ability measures collected in the post-experimental questionnaire will be used to examine whether the propensity to misreport varies with participants’ ability to process probabilistic or sequential tasks.

Survey-based measures of moral attitudes will be constructed from questionnaire responses and used to analyze whether individual differences in moral views are associated with reporting behavior. Risk preferences and field of study will be included as additional covariates in heterogeneity analyses.

These secondary outcomes are not the primary focus of the study but are intended to provide complementary evidence on the mechanisms underlying dishonest reporting and to explore potential sources of heterogeneity across participants.

Experimental Design

Experimental Design
The study employs a between-subjects experimental design in which participants are randomly assigned to one of eight treatments. Each participant takes part in a single experimental session and is exposed to only one treatment condition. Within each session, participants complete five independent repetitions of a reporting task in which they observe the realization of a randomization device and report the outcome for monetary compensation.

The treatments vary along two key dimensions. The first dimension concerns the structure of the randomization device. Participants are assigned either to binary-outcome tasks or to multi-outcome tasks, and within each category, the randomization process is implemented either through a simple device or through a sequential composite mechanism that generates an equivalent probability distribution. The second dimension concerns the mode of implementation of the randomization device. In some treatments, the outcome is generated by the participant using a physical device (such as a coin or die), while in others the outcome is generated by the experimental software (z-Tree).

In all treatments, the probability distribution of outcomes and the associated payoff structure are held constant across comparable conditions. This ensures that any differences in reporting behavior can be attributed to the structure or implementation of the randomization device rather than to differences in expected monetary incentives.

Participants receive monetary payoffs based on their reported outcomes in one randomly selected round. The experimental design does not involve deception, and all decisions are made anonymously and without direct supervision by the experimenter.
Experimental Design Details
Not available
Randomization Method
Participants are randomly assigned to treatments through a computerized randomization process that ensures that each participant has an equal probability of being allocated to any of the treatment conditions. Randomization is performed independently for each participant and does not depend on participant characteristics or on the behavior of other participants.
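
A minimal sketch of such an assignment rule is given below, assuming eight hypothetical arm labels (the actual treatment names are not specified in this registration): each participant receives an independent, equal-probability draw that depends only on a seed and the participant's identifier, not on characteristics or on other participants.

```python
import random

# Hypothetical labels for the eight arms; the registration does not name them.
TREATMENTS = [
    "coin-simple-physical", "coin-simple-digital",
    "coin-composite-physical", "coin-composite-digital",
    "die-simple-physical", "die-simple-digital",
    "die-composite-physical", "die-composite-digital",
]

def assign(participant_id: int, seed: int = 0) -> str:
    # Independent draw per participant, equal probability for each arm.
    # Seeding on (seed, id) makes the assignment reproducible and ensures
    # it cannot depend on other participants' behavior.
    rng = random.Random(seed * 1_000_003 + participant_id)
    return rng.choice(TREATMENTS)
```

With simple randomization like this, realized arm sizes fluctuate around 60; achieving exactly 60 per arm as planned would instead require blocked assignment within sessions, which the registration does not detail.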
Randomization Unit
The unit of randomization is the individual participant. Each participant is assigned to exactly one treatment and remains in that treatment throughout the session.
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
480 participants
Sample size: planned number of observations
Each of the 480 participants completes five independent rounds of the reporting task, resulting in a total of 2,400 observations at the decision level.
Sample size (or number of clusters) by treatment arms
The sample is evenly distributed across the eight treatments, with 60 participants assigned to each treatment condition. Since each participant completes five rounds, each treatment yields 300 observations at the decision level.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
ELTE CERS Research Ethics Committee
IRB Approval Date
2026-03-17
IRB Approval Number
1Főig/11-1/2026