Man vs robot: are self-reported emotions congruent with software output?

Last registered on December 03, 2024

Pre-Trial

Trial Information

General Information

Title
Man vs robot: are self-reported emotions congruent with software output?
RCT ID
AEARCTR-0014661
Initial registration date
November 27, 2024


First published
December 03, 2024, 1:35 PM EST


Locations

Some information in this trial is not available to the public.

Primary Investigator

Affiliation
Sapienza University of Rome

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2024-11-28
End date
2025-02-28
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Advancements in technology are transforming data collection in experimental economics, enabling more sophisticated insights into human behavior. Tools that monitor and report subjects' emotions during experiments provide a richer understanding of economic decision-making and the psychological mechanisms driving it. This marks a significant improvement over traditional approaches that relied solely on self-reported emotional evaluations, which are prone to biases and inaccuracies. By integrating self-reported evaluations with emotion-tracking technologies, researchers can obtain a more precise estimate of subjects' emotions. Understanding the differences between these two methods is crucial for developing accurate measures that enhance the reliability and depth of analysis in experimental economics.
External Link(s)

Registration Citation

Citation
De Santis, Gianmarco. 2024. "Man vs robot: are self-reported emotions congruent with software output?." AEA RCT Registry. December 03. https://doi.org/10.1257/rct.14661-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2024-11-28
Intervention End Date
2025-02-28

Primary Outcomes

Primary Outcomes (end points)
Score differences between self-reported and software-based evaluations for a set of emotions
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
PANAS-C questionnaire;
Film clip;
Three multiple price lists;
PANAS-C questionnaire;
Probability update task;
Incentivized questionnaire on financial and sustainability literacy;
Socio-demographic questionnaire
Experimental Design Details
Not available
Randomization Method
Randomization done by computer
Randomization Unit
Individual randomization; session randomization
Was the treatment clustered?
Yes

Experiment Characteristics

Sample size: planned number of clusters
1 university
Sample size: planned number of observations
100 university students
Sample size (or number of clusters) by treatment arms
50/50
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
IRB

Institutional Review Boards (IRBs)

IRB Name
IRB Approval Date
IRB Approval Number