Field: Trial Start Date
Before: September 19, 2019
After: March 13, 2020
Field: Trial End Date
Before: March 30, 2020
After: May 16, 2020
Field: Last Published
Before: March 09, 2020 07:37 PM
After: March 09, 2020 08:09 PM
Field: Intervention Start Date
Before: September 19, 2019
After: March 13, 2020
Field: Intervention End Date
Before: March 30, 2020
After: May 16, 2020
Field: Randomization Method
Before:
Disasters randomized by computer during sessions.
Business risk levels randomized by computer during sessions.
Treatment order randomized in office by computer. To balance the sample, after generating a random ordering of treatments 1 2 3 4, we also conduct sessions ordered 4 3 2 1, 2 1 4 3, and 3 4 1 2. Each ordering is used for three sessions.
After:
Disasters randomized by computer during sessions.
Business risk levels randomized by computer during sessions.
Treatment order randomized in office by computer, with each session having a different order. All orders will be tested.
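The revised ordering scheme (each session gets a different treatment order, and all orders are tested) could be implemented along the following lines. This is an illustrative sketch, not the study's actual code; the function name, seed, and cycling rule are assumptions.

```python
import itertools
import random

def assign_session_orders(treatments, n_sessions, seed=0):
    """Give each session its own treatment order, covering every
    possible order (cycling through them if sessions outnumber orders)."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(treatments))
    rng.shuffle(orders)  # randomize which session gets which order
    return [orders[i % len(orders)] for i in range(n_sessions)]

# With 3 treatments there are 3! = 6 orders, so 6 sessions cover them all.
sessions = assign_session_orders([1, 2, 3], n_sessions=6)
```

With six sessions and three treatments, each of the six possible orders is used exactly once.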
Field: Planned Number of Clusters
Before: 120 subjects
After: 60 subjects
Field: Planned Number of Observations
Before: 120 subjects * 3 businesses * 25 periods * 4 treatments = 24,000 observations
After: 60 subjects * 2 businesses * 20 periods * 3 treatments = 7,200 observations
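The revised observation count multiplies out as stated; a quick check (variable names are illustrative):

```python
# Each subject contributes one observation per business, period, and treatment.
subjects, businesses, periods, treatments = 60, 2, 20, 3
observations = subjects * businesses * periods * treatments
print(observations)  # 7200
```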
Field: Intervention (Hidden)
Before:
Treatments differ based on the degree of reinforcement associated with "good" actions and the value of "good" actions. We vary these by changing the reward level (which changes both reinforcement and value) and then separately manipulating reinforcement in several ways.
There are two mechanisms which we use to manipulate reinforcement.
One mechanism reduces reinforcement by not reporting disasters for the player's businesses in a given period. In such periods, the players instead learn about disasters impacting a hypothetical neighbor's businesses. The neighbor's businesses share risk type with the player's, so the information value is identical, but the information is not associated with payoffs.
Another mechanism increases reinforcement by telling the player about additional prizes he has a chance of receiving after taking an action. After taking a reinforced action (either insuring when a disaster occurs or not insuring when no disaster occurs), the player learns about a high bonus prize. After taking a non-reinforced action, the player learns about an on-average lower bonus prize. The actual chance of different prizes does not depend on which is revealed, so the reinforcement does not add value to the information.
There are four treatments:
1. High value, high reinforcement - This treatment has no reinforcement modification and a prize of $0.25
2. High value, low reinforcement - This treatment has a 50% chance of seeing the neighbor's shock and a prize of $0.25
3. Low value, low reinforcement - This treatment has no reinforcement modification and a prize of $0.12
4. Low value, high reinforcement - This treatment has a bonus prize of $0.13 and a prize of $0.12
After:
Treatments differ based on the degree of reinforcement associated with "good" actions and the value of "good" actions. We vary these by changing the reward level (which changes both reinforcement and value) and then separately manipulating reinforcement in several ways.
There are two mechanisms which we use to manipulate reinforcement.
One mechanism reduces reinforcement by not reporting disasters for the player's businesses in a given period. In such periods, the players instead learn about disasters impacting a hypothetical neighbor's businesses. The neighbor's businesses share risk type with the player's, so the information value is identical, but the information is not associated with payoffs.
Another mechanism increases reinforcement by telling the player about additional prizes he has a chance of receiving after taking an action. After taking a reinforced action (either insuring when a disaster occurs or not insuring when no disaster occurs), the player learns about a bonus prize. After taking a non-reinforced action, the player does not. The actual chance of receiving a prize does not depend on whether it is revealed, so the reinforcement does not add value to the information.
There are three treatments:
1. High value, high reinforcement - This treatment has no reinforcement modification and a prize of $0.25
2. High value, low reinforcement - Players see the neighbor's shock; the prize is $0.25
3. Low value, high reinforcement - This treatment has a bonus prize of $0.20 and a prize of $0.05
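The revised bonus-prize mechanism (reveal the bonus only after a reinforced action, while drawing it regardless) can be sketched as follows. This is a hypothetical illustration: the bonus probability and all names are assumptions, not details from the registry.

```python
import random

BONUS_PROB = 0.5  # hypothetical; the registry does not state this probability

def bonus_round(insured, disaster, rng):
    """A reinforced action is insuring when a disaster occurs, or not
    insuring when none occurs. The bonus is drawn independently of the
    action; only its *reveal* depends on the action, so the reveal adds
    reinforcement without adding information value."""
    reinforced = insured == disaster           # "good" action taken
    won_bonus = rng.random() < BONUS_PROB      # drawn either way
    revealed = reinforced                      # shown only if reinforced
    return won_bonus, revealed

won, shown = bonus_round(insured=True, disaster=True, rng=random.Random(1))
```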
Field: Secondary Outcomes (Explanation)
Before: We are also interested in the degree to which individuals
After: We are also interested in the degree to which individuals overweight newer information both when purchasing and not purchasing insurance.