In a first experiment, I used a variation of the observed game presented by Gneezy et al. (2018). However, the lying rates in the online implementation were too low. To check whether the results change when participants are given more privacy, I designed this second experiment.
In this experiment, two participants play sequentially. At the beginning, the first-mover (P1) chooses a color out of five in their head. Then, they click on a box on the screen and one of the five colors is revealed, so the probability that the chosen and drawn colors match is 0.2. After observing the draw, the first-mover reports to the second-mover (P2) whether the colors match. Once the second-mover learns this report, they follow the same process as P1.
Participants' payoffs depend on the reports of both members of a dyad. In particular, if either of the two participants reports that the colors match, both of them receive 2.5 pounds; otherwise, both receive 0.3 pounds. I use this payoff structure because I am interested in lying at the extensive margin.
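The payoff rule and the match probability above can be sketched in a few lines of code. This is only an illustration of my reading of the design: the color names and the simulation are assumptions, not part of the experiment's implementation.

```python
import random

COLORS = ["red", "blue", "green", "yellow", "purple"]  # illustrative color set
HIGH, LOW = 2.5, 0.3  # payoffs in pounds

def payoffs(p1_reports_match: bool, p2_reports_match: bool) -> tuple[float, float]:
    """Both dyad members earn 2.5 pounds if either report claims a match,
    and 0.3 pounds otherwise (the extensive-margin payoff rule)."""
    if p1_reports_match or p2_reports_match:
        return (HIGH, HIGH)
    return (LOW, LOW)

# Sanity check: a privately chosen color matches an independent uniform
# draw from five colors 20% of the time.
random.seed(1)
trials = 100_000
matches = sum(random.choice(COLORS) == random.choice(COLORS) for _ in range(trials))
print(round(matches / trials, 2))   # close to 0.2
print(payoffs(False, True))         # a single "match" report suffices for the high payoff
```

Note that under this rule each participant's report is pivotal only when the other reports no match, which is what makes P2 an implicit agent for P1 in the Baseline.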
In the Baseline, I hypothesize that first-movers will take advantage of their position and thereby avoid the cost of lying. To test whether truth-telling by P1 is driven by the possibility of having another participant lie on their behalf, I replace the second-mover's report with a random variable instead of a decision made by P2. Specifically, in the treatment No Avoidance, after the second-mover learns P1's report, they choose a color and the drawn color is revealed. Then, the computer reports whether the selected color is the same as the drawn color. Thus, in No Avoidance, the first-mover cannot rely on the second-mover's incentive to lie.
In No Avoidance, the only element that changes relative to the Baseline is P1's beliefs about the second-mover's report. However, I am interested in whether this result holds even when the positive externality generated by the first-mover's report is removed. With the previous treatments alone, I cannot tell whether P1's report in No Avoidance is driven purely by the impossibility of using P2 as an implicit agent, or by the handy excuse of generating a benefit for P2. In other words, it may be that in the Baseline the first-mover does not use the positive externality as a justification because both participants generate positive externalities, and P1 simply takes full advantage of their position. In contrast, in No Avoidance the positive impact of P1's report is more salient.
Hence, in the treatment No Externality, I modify the payoff scheme to eliminate the positive externality generated by P1 in the previous treatments. In No Externality, the game sequence and main features, including the first-mover's pecuniary payoffs, are identical to those in No Avoidance. However, P2's payoff depends only on the report made by the computer.
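The modified payoff scheme can be sketched as follows. This is a hedged reading of the text: P1's payoff is kept as in No Avoidance (high if either P1's report or the computer's report claims a match), while P2's payoff tracks only the computer's report; the function name and exact rule for P1 are my assumptions.

```python
def payoffs_no_externality(p1_reports_match: bool,
                           computer_reports_match: bool) -> tuple[float, float]:
    """Sketch of the No Externality payoff rule (assumed from the text):
    P1 earns the high payoff if either report claims a match, as in
    No Avoidance, but P2's payoff depends only on the computer's report."""
    p1_pay = 2.5 if (p1_reports_match or computer_reports_match) else 0.3
    p2_pay = 2.5 if computer_reports_match else 0.3
    return (p1_pay, p2_pay)

print(payoffs_no_externality(True, False))  # P1 earns 2.5, but P2 earns only 0.3
```

The key contrast with the earlier treatments is visible in the example call: a "match" report by P1 no longer raises P2's payoff, so P1 can no longer justify lying as a favor to P2.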
In No Avoidance and No Externality, I control for different components that may explain why the first-mover reports truthfully in the Baseline. However, there is another component that is worth ruling out: the fact that participants play sequentially. In the Simultaneous treatment, participants decide at the same time. The impact of this treatment on the first-mover's strategy is similar to that of No Avoidance, but it also allows me to examine the role of the moral signal that the first-mover sends about whether lying is acceptable. Thus, this treatment makes it possible to confirm both the first-mover's willingness to avoid the cost of lying and the impact of the moral signal on second-mover behavior.