|
Field
Abstract
|
Before
In this study we aim to answer the following research question: "Do investors react to information on the emission reduction potential (ERP) of climate tech start-ups?" There is growing awareness of, and a growing set of methodologies for estimating, the potential of different technologies and start-ups to reduce carbon emissions. Collaborating with a company which, for privacy reasons, we will identify as "X" --- a platform dedicated to providing information on climate tech innovation and that has investors, corporations, researchers, and others as customers --- we will randomize information about ERP in their monthly newsletter. The treatment will be mentioning ERP in the header of the email and adding a section in the newsletter devoted to ERP, which includes a link to a webpage with additional information (general information about ERP, a showcase of the ERP metrics that X provides on its platform, and a downloadable document). The newsletter is sent to investors but also to many other subscribers (e.g., corporates, academics, the climate curious, etc.), and thus we will stratify the randomization by type of recipient. We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release; ii) number of clicks on links in the newsletter within 10 days of the newsletter release (this captures whether ERP was "additive" in terms of newsletter engagement); iii) for the treated group, number of clicks on links in the ERP section of the newsletter (also within 10 days of the release of the newsletter); in combination with ii), this outcome will allow us to explore whether the attention drawn by ERP came at the cost of crowding out attention to the rest of the newsletter (crowding out by ERP = clicks on links in the control group / clicks on non-ERP links in the treated group); iv) for recipients that are not customers of X, we will study whether there is an increase in the likelihood of becoming a customer of X within 4 months of the release of the newsletter; and v) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks.
|
After
In this study we aim to answer the following research question: "Do investors react to information on the emission reduction potential (ERP) of climate tech start-ups?" There is growing awareness of, and a growing set of methodologies for estimating, the potential of different technologies and start-ups to reduce carbon emissions. Collaborating with a company which, for privacy reasons, we will identify as "X" --- a platform dedicated to providing information on climate tech innovation and that has investors, corporations, researchers, and others as customers --- we will randomize information about ERP in their monthly newsletter. The treatment group will be subject to the following manipulation: i) the header of the email that delivers the newsletter will be "Which climatech technology reduces emissions the most?"; ii) the newsletter will have a section devoted to ERP, placed at the top of the newsletter, which includes a link to a webpage with additional information (general information about ERP, a showcase of the ERP metrics that X provides on its platform, and a downloadable document). The control group instead will receive: i) a different email header, decided through the usual process that company "X" uses for its newsletter; ii) the ERP section placed at the bottom of the newsletter. The newsletter is sent to investors but also to many other subscribers (e.g., corporates, academics, the climate curious, etc.), and thus we will stratify the randomization by type of recipient. We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release (this outcome is driven by the header of the email); ii) number of clicks on links in the ERP section versus other sections of the newsletter within 10 days of the newsletter release (this outcome is driven by the salience of the ERP section, i.e., whether it is located at the top or at the bottom of the newsletter); and iii) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks. Outcome iii) will only be analyzed if the opening rates of the emails are large (larger than 50%) and if the effect of the salience of ERP is considerable.
|
|
Field
Trial End Date
|
Before
October 06, 2024
|
After
July 27, 2024
|
|
Field
Last Published
|
Before
July 09, 2024 08:53 AM
|
After
July 09, 2024 10:02 AM
|
|
Field
Intervention (Public)
|
Before
In this study we aim to answer the following research question: "Do investors react to information on the emission reduction potential (ERP) of climate tech start-ups?" There is growing awareness of, and a growing set of methodologies for estimating, the potential of different technologies and start-ups to reduce carbon emissions. Collaborating with a company which, for privacy reasons, we will identify as "X" --- a platform dedicated to providing information on climate tech innovation and that has investors, corporations, researchers, and others as customers --- we will randomize information about ERP in their monthly newsletter. The treatment will be mentioning ERP in the header of the email and adding a section in the newsletter devoted to ERP, which includes a link to a webpage with additional information (general information about ERP, a showcase of the ERP metrics that X provides on its platform, and a downloadable document). The newsletter is sent to investors but also to many other subscribers (e.g., corporates, academics, the climate curious, etc.), and thus we will stratify the randomization by type of recipient. We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release; ii) number of clicks on links in the newsletter within 10 days of the newsletter release (this captures whether ERP was "additive" in terms of newsletter engagement); iii) for the treated group, number of clicks on links in the ERP section of the newsletter (also within 10 days of the release of the newsletter); in combination with ii), this outcome will allow us to explore whether the attention drawn by ERP came at the cost of crowding out attention to the rest of the newsletter (crowding out by ERP = clicks on links in the control group / clicks on non-ERP links in the treated group); iv) for recipients that are not customers of X, we will study whether there is an increase in the likelihood of becoming a customer of X within 4 months of the release of the newsletter; and v) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks.
|
After
Collaborating with a company which, for privacy reasons, we will identify as "X" --- a platform dedicated to providing information on climate tech innovation and that has investors, corporations, researchers, and others as customers --- we will randomize information about ERP in their monthly newsletter. The treatment group will be subject to the following manipulation: i) the header of the email that delivers the newsletter will be "Which climatech technology reduces emissions the most?"; ii) the newsletter will have a section devoted to ERP, placed at the top of the newsletter, which includes a link to a webpage with additional information (general information about ERP, a showcase of the ERP metrics that X provides on its platform, and a downloadable document). The control group instead will receive: i) a different email header, decided through the usual process that company "X" uses for its newsletter; ii) the ERP section placed at the bottom of the newsletter. The newsletter is sent to investors but also to many other subscribers (e.g., corporates, academics, the climate curious, etc.), and thus we will stratify the randomization by type of recipient. We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release (this outcome is driven by the header of the email); ii) number of clicks on links in the ERP section versus other sections of the newsletter within 10 days of the newsletter release (this outcome is driven by the salience of the ERP section, i.e., whether it is located at the top or at the bottom of the newsletter); and iii) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks. Outcome iii) will only be analyzed if the opening rates of the emails are large (larger than 50%) and if the effect of the salience of ERP is considerable.
|
|
Field
Intervention Start Date
|
Before
June 06, 2024
|
After
July 17, 2024
|
|
Field
Intervention End Date
|
Before
June 13, 2024
|
After
July 27, 2024
|
|
Field
Primary Outcomes (End Points)
|
Before
We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release; ii) number of clicks on links in the newsletter within 10 days of the newsletter release (this captures whether ERP was "additive" in terms of newsletter engagement); iii) for the treated group, number of clicks on links in the ERP section of the newsletter (also within 10 days of the release of the newsletter); in combination with ii), this outcome will allow us to explore whether the attention drawn by ERP came at the cost of crowding out attention to the rest of the newsletter (crowding out by ERP = clicks on links in the control group / clicks on non-ERP links in the treated group); iv) for recipients that are not customers of X, we will study whether there is an increase in the likelihood of becoming a customer of X within 4 months of the release of the newsletter; and v) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks.
|
After
We will track the following outcomes: i) rate of e-mail opening within 10 days of the newsletter release (this outcome is driven by the header of the email); ii) number of clicks on links in the ERP section versus other sections of the newsletter within 10 days of the newsletter release (this outcome is driven by the salience of the ERP section, i.e., whether it is located at the top or at the bottom of the newsletter); and iii) for recipients that are customers of X, we will study several daily outcomes within 30 days of the release of the newsletter: times the customer logged into the X platform, times the customer browsed a start-up with ERP information (X provides ERP information for some start-ups), times the customer browsed a start-up without ERP information, and other general engagement metrics that X tracks. Outcome iii) will only be analyzed if the opening rates of the emails are large (larger than 50%) and if the effect of the salience of ERP is considerable.
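As an illustration of how outcomes i) and ii) could be computed from the newsletter logs, here is a minimal Python sketch. The file names, column names, and the "ERP" section label are hypothetical placeholders rather than X's actual data schema.

```python
import pandas as pd

# Hypothetical exports from X's email platform; all field names are illustrative.
opens = pd.read_csv("email_opens.csv", parse_dates=["opened_at"])      # email, opened_at
clicks = pd.read_csv("email_clicks.csv", parse_dates=["clicked_at"])   # email, clicked_at, link_section
assignment = pd.read_csv("assignment.csv")                             # email, arm

release = pd.Timestamp("2024-07-17")
window_end = release + pd.Timedelta(days=10)

# Outcome i): e-mail opened within 10 days of the newsletter release.
opened = opens.loc[opens["opened_at"].between(release, window_end), "email"].unique()
assignment["opened_10d"] = assignment["email"].isin(opened).astype(int)

# Outcome ii): clicks on the ERP section versus the other sections, within 10 days.
clicks_10d = clicks[clicks["clicked_at"].between(release, window_end)]
erp = clicks_10d[clicks_10d["link_section"] == "ERP"].groupby("email").size()
other = clicks_10d[clicks_10d["link_section"] != "ERP"].groupby("email").size()
assignment["erp_clicks"] = assignment["email"].map(erp).fillna(0)
assignment["other_clicks"] = assignment["email"].map(other).fillna(0)

# Simple treated vs. control comparison of the three engagement measures.
print(assignment.groupby("arm")[["opened_10d", "erp_clicks", "other_clicks"]].mean())
```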
|
|
Field
Experimental Design (Public)
|
Before
We will randomize the treatment (described above) in the newsletter that X will send to its recipients in June 2024. The treatment will be stratified by the type of recipient that X identifies (investor, corporate executive/employee, Government/NGO, Start-up founder/operator, Student/academic, Climate curious, other) and by whether the recipients are customers of X or not.
Control and treatment groups:
40% of recipients will be assigned to the control group.
30% of recipients will be assigned to the treatment 1 group.
30% of recipients will be assigned to the treatment 2 group.
Treatment 1: The subject of the email will be "Emission reduction potential, topic2, topic3, and more". The section about ERP will be at the top of the newsletter (out of 5 sections).
Treatment 2: The subject of the email will be "topic2, topic3, Emission reduction potential, and more". The section about ERP will be the 4th of 5 sections in the newsletter.
|
After
IMPORTANT NOTE: The experiment that was originally pre-registered suffered from a mistake in the execution on the part of company "X". In particular, instead of spelling out "emission reduction potential" in the email header, they wrote "ERP", which the audience did not know stood for "emission reduction potential". This led to a very small opening rate in the treated group (<7%), compared with their usual opening rates (~30%). Given that problem, we decided to discard that first experiment, run a new one, and register the changes to the experiment. The changes concern the dates, the treatments, and the outcomes we track (the failed first run also gave us the opportunity to learn).
We will randomize the treatment (described above) in the newsletter that X will send to its recipients on July 17, 2024. The treatment will be stratified by the type of recipient that X identifies (investor, corporate executive/employee, Government/NGO, Start-up founder/operator, Student/academic, Climate curious, other) and by whether the recipients are customers of X or not. We will also stratify by the treatment assignment created for the failed first run of the experiment (control, treatment 1, and treatment 2).
Control and treatment groups:
50% of recipients will be assigned to the control group.
50% of recipients will be assigned to the treatment group.
Treatment: The subject of the email will be "Which climatech technology reduces emissions the most?". The section about ERP will be at the top of the newsletter (out of 5 sections).
Control: The subject of the email will be decided by company X (but will not include the word "climatech"). The section about ERP will be the 4th of 5 sections in the newsletter.
|
|
Field
Randomization Method
|
Before
Randomization will be performed using a computer and the list of recipients, which X will send a week ahead of the release date of the newsletter.
|
After
Randomization will be performed using Stata and the list of recipients, which X will send a few days ahead of the release date of the newsletter.
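The registration specifies Stata for the assignment; purely as an illustration, the sketch below implements the same stratified 50/50 draw in Python (strata: recipient type, customer status, and first-run arm). The file name, column names, and seed are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical recipient list sent by X; column names are placeholders.
recipients = pd.read_csv("recipients.csv")   # email, recipient_type, is_customer, first_run_arm

rng = np.random.default_rng(20240717)        # fixed seed so the draw is reproducible

def assign_within_stratum(group: pd.DataFrame) -> pd.DataFrame:
    """Shuffle one stratum and split it 50/50 into treatment and control."""
    shuffled = group.sample(frac=1, random_state=rng)
    n_treat = len(shuffled) // 2
    shuffled["arm"] = ["treatment"] * n_treat + ["control"] * (len(shuffled) - n_treat)
    return shuffled

strata = ["recipient_type", "is_customer", "first_run_arm"]
assignment = (recipients
              .groupby(strata, group_keys=False)
              .apply(assign_within_stratum))

# Sanity check: the split should be close to 50/50 overall and within each stratum.
print(assignment["arm"].value_counts(normalize=True))
assignment.to_csv("assignment.csv", index=False)
```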
|
|
Field
Power calculation: Minimum Detectable Effect Size for Main Outcomes
|
Before
We can calculate the minimum detectable effect (MDE) for outcome i), the rate of e-mail opening, for which we have information from previous months of X's newsletters. The average email opening rate that X experiences is 28%, which implies a standard deviation of 44.9%. Our sample size of 2,500 recipients allows us to detect an MDE of 11% of a standard deviation, or approximately 5 percentage points, in a simple treated vs. control analysis; for the investor recipient type, our sample of 580 allows us to detect an MDE of around 25%, or 11 percentage points. If we add a baseline and allow for a difference-in-differences analysis, using the opening rate of the newsletter immediately preceding our treatment (May 2024), we can reduce these MDEs to 9.5% and 21%, respectively (depending, of course, on assumptions about autocorrelation).
For outcomes ii) and iii) regarding the number of clicks, in past newsletters the average number of clicks per recipient is 0.25 (standard deviation of 0.43; the vast majority make 0 or 1 clicks). The sample size of 2,500 allows us to detect an MDE of 11% of a standard deviation (equivalent to 0.047 clicks); with a difference-in-differences analysis, the MDE is 9.5% of a standard deviation (equivalent to 0.041 clicks). For investors, the sample of 580 allows an MDE of 23% of a standard deviation (equivalent to 0.099 clicks); with a difference-in-differences analysis, the MDE is 21.5% of a standard deviation (equivalent to 0.093 clicks).
|
After
We can calculate the minimum detectable effect (MDE) for outcome i), the rate of e-mail opening, for which we have information from previous months of X's newsletters. The average email opening rate that X experiences is 28%, which implies a standard deviation of 44.9%. Our sample size of 2,500 recipients allows us to detect an MDE of 11% of a standard deviation, or approximately 5 percentage points, in a simple treated vs. control analysis; for the investor recipient type, our sample of 580 allows us to detect an MDE of around 25%, or 11 percentage points. If we add a baseline and allow for a difference-in-differences analysis, using the opening rate of the newsletter immediately preceding our treatment (May 2024), we can reduce these MDEs to 9.5% and 21%, respectively (depending, of course, on assumptions about autocorrelation).
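For reference, a minimal sketch of the MDE computation behind these numbers, assuming a two-sided test at alpha = 0.05 with 80% power and an even treatment/control split (these parameters are not stated in the registration, so the output only approximately reproduces the figures above):

```python
from math import sqrt
from scipy.stats import norm

def mde_two_arm(n_total, share_treated=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect, in standard-deviation units, of a two-arm
    comparison of means (for a binary outcome, sd = sqrt(p * (1 - p)))."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_t = n_total * share_treated
    n_c = n_total * (1 - share_treated)
    return z * sqrt(1 / n_t + 1 / n_c)

p_open = 0.28                            # historical opening rate reported by X
sd_open = sqrt(p_open * (1 - p_open))    # ~0.449, i.e., 44.9%

for n in (2_500, 580):                   # full sample and investor subsample
    mde_sd = mde_two_arm(n)
    print(f"N={n}: MDE = {mde_sd:.1%} of a SD = {mde_sd * sd_open:.1%} points")
# Yields ~11% of a SD (~5 pp) for N=2,500 and ~23% of a SD (~10 pp) for N=580,
# close to the 11% / 5 pp and ~25% / 11 pp reported above.
```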
|
|
Field
Intervention (Hidden)
|
Before
|
After
IMPORTANT NOTE: The experiment that was originally pre-registered suffered from a mistake in the execution on the part of company "X". In particular, instead of spelling out "emission reduction potential" in the email header, they wrote "ERP", which the audience did not know stood for "emission reduction potential". This led to a very small opening rate in the treated group (<7%), compared with their usual opening rates (~30%). Given that problem, we decided to discard that first experiment, run a new one, and register the changes to the experiment. The changes concern the dates, the treatments, and the outcomes we track (the failed first run also gave us the opportunity to learn).
|