Initial registration date
April 05, 2019
Last updated
August 03, 2020 9:33 AM EDT
This section is unavailable to the public.
London School of Economics & Political Science
Other Primary Investigator(s)
Additional Trial Information
A factor underlying the UK's poor productivity performance in recent years has been underinvestment in new technologies. This underinvestment has been particularly pronounced amongst SMEs and in the “low wage sectors”, which include retail and hospitality, both large employers. As yet, little is known about the causal effects of policies that seek to stimulate the adoption of new technologies, or about their impacts on business performance. This randomised controlled trial will seek to understand what type of intervention (light-touch versus more targeted) works for encouraging SME adoption of cutting-edge but tried and tested artificial intelligence (AI) technologies, and for stimulating innovation and productivity improvements more broadly in London's retail and hospitality sectors.
This trial will seek to understand what type of intervention (light-touch versus more targeted) works for encouraging small and medium enterprises (SMEs) to adopt cutting-edge but tried and tested artificial intelligence (AI) technologies, and stimulating innovation and productivity improvements more broadly.
We will test two methods of catalysing the adoption of chatbots and marketing automation technology, against a control group and against each other. Stream 1 is a market convening methodology, and Stream 2 is a more targeted intervention based on the idea that access to independent, tailored advice and some financial support is needed in order to increase uptake of AI amongst SMEs. We will also incentivise involvement in the project for all companies (including those in the control group) by giving all who sign up access to a 'Guide to AI for SMEs' document. This means that all participants in this programme, including the control group, receive a minimal level of intervention that is likely to have some impact on knowledge/awareness.
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
1. Technology assessment process
2. Intentions to adopt chatbots/marketing automation technologies.
3. Actual adoption / decision to adopt chatbots/marketing automation technologies
Primary Outcomes (explanation)
We set out three primary outcomes, which relate to different stages of adoption (persuasion, decision, implementation).
1. Technology assessment process
Variable: Answer to “Have you allocated staff time or resource to exploring the possible adoption of chatbots / marketing automation technologies?” Binary variable 0,1. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether firms are committing resources to moving forwards within the persuasion stage.
2. Intentions to adopt chatbots/marketing automation technologies
Variable: Answer to “How likely are you to adopt chatbots/marketing automation technologies over the next 12 months?” Score 1-5, 1=very unlikely, 5=very likely. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether firms are moving from persuasion towards decision.
3. Actual adoption of chatbots/marketing automation technologies / decision to adopt
Variable: Answer to “Have you adopted, or taken the decision to adopt, a chatbot/marketing automation technology over the past 6 months?” Binary variable 0,1 (analysed as a level at endline). This variable would give us information on whether a firm has moved into decision/implementation. Given the short time frame of this project, it should be realistic to see movement on this margin. Where firms indicate that they have taken the decision to adopt, we will ask some more specific questions about the commitment mechanism (e.g. signed off by management, included in the business plan, etc.).
Secondary Outcomes (end points)
1. Attitudes towards chatbots/marketing automation technologies.
2. A measure of the change in intentions to adopt chatbots/marketing automation technologies.
3. Broader adoption of technologies or innovative organisational practices/decision to adopt.
4. Self-reported performance measures (short term): ln(turnover), ln(turnover per employee), number of customers.
Secondary Outcomes (explanation)
The secondary measures we set out here are intended to allow us to explore mechanisms and wider impacts.
1. Attitudes towards chatbots/marketing automation technologies
Variable: Answers to a series of questions such as “Please indicate whether you agree with the following statements: chatbots or marketing automation technologies would lead to increased (a) revenues (b) profitability (c) customers” score 1-5, 1=strongly disagree, 5=strongly agree. This variable can be analysed both as a level at endline and as a change from baseline.
2. A measure of the change in intentions to adopt chatbots / marketing automation technologies
Variable: Answer to the question “Are you now more likely to adopt chatbots / marketing automation technologies than before the intervention?” Binary variable 0,1. This variable would be useful in the case of low adoption rates, but where a high proportion of SMEs state that they are likely to adopt in the baseline survey. At endline, it will explicitly ask SMEs to consider whether they have become more likely to adopt during the course of the programme.
3. Broader adoption of technologies or innovative organisational practices / decision to adopt
Variable: Answer to “Have you adopted, or taken the decision to adopt, any other innovative technologies or innovative organisational practices over the past 6 months?” Binary variable 0,1. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether a firm has moved into, or committed to, a decision/implementation in a broader sense. Where firms indicate that they have taken the decision to adopt, we will ask some more specific questions about the commitment mechanism (e.g. signed off by management, included in the business plan, etc.).
4. Self-reported performance measures (short term): ln(turnover), ln(turnover per employee), number of customers. We would ask for the average of turnover or customers over the last 3 months. The technologies within the scope of this project seek to increase revenues by increasing the number of customers or by better converting enquiries to sales. Therefore, revenues and number of customers are the most appropriate performance variables to measure. Normalising by employment to give a productivity measure seems sensible. We will also consider impacts on employment itself. We note that we expect the impacts of technology adoption on firm performance to occur over a longer timeframe than that captured by the endline survey. Nevertheless, we will obtain basic self-reported performance measures (revenues, customers, employees) to conduct some exploratory performance analysis. In our analysis, we will take the natural log of turnover and turnover per employee, as these variables tend to be highly skewed. Longer term analysis (beyond the scope of the initial evaluation report) will seek to analyse impacts by tracking firms in administrative data.
This is a parallel randomised controlled trial design. There are three arms to the study: a control group and two treatment groups.
Experimental Design Details
Stratified randomisation done in office by a computer (using Stata software).
Stratified by firm size band and subsector (Retail vs Hospitality).
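The registration specifies that the randomisation is done in Stata; as an illustration only, the same stratified assignment logic can be sketched in Python. The field names (`size_band`, `subsector`, `id`), the fixed seed, and the mechanics of the 1:1:2 Stream 1/Stream 2/Control weighting are our assumptions, not the trial's actual code.

```python
import random

# "Control" appears twice so each block of four allocates firms 1:1:2
# across Stream 1 / Stream 2 / Control, matching the planned sample sizes.
ARMS = ("Stream 1", "Stream 2", "Control", "Control")

def stratified_assign(firms, seed=42):
    """Assign firms to arms within strata defined by size band x subsector.

    Within each stratum, firms are shuffled and then allocated cyclically
    across ARMS, which balances arm shares inside every stratum.
    """
    rng = random.Random(seed)
    strata = {}
    for firm in firms:
        key = (firm["size_band"], firm["subsector"])  # hypothetical fields
        strata.setdefault(key, []).append(firm)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        for i, firm in enumerate(members):
            assignment[firm["id"]] = ARMS[i % len(ARMS)]
    return assignment
```

With 400 eligible firms this yields roughly 100/100/200 across the three arms, with balance holding within each size-band-by-subsector stratum.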
Firms (eligible SMEs).
Was the treatment clustered?
Sample size: planned number of clusters
Sample size: planned number of observations
Sample size (or number of clusters) by treatment arms
Stream 1: 100
Stream 2: 100
Control: 200 (or more if more eligible firms apply and are happy to proceed in the programme)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
First we provide estimates of the minimum detectable effect sizes for primary outcome (3): adoption of chatbot/marketing automation technology. The same logic applies to primary outcome (1), which measures whether the firm has entered into a technology assessment process.
We would expect some firms in the control group to adopt AI, as being in the trial means they are potentially interested in these technologies. Let us assume that this proportion is relatively low, at 1%. With alpha=.05, power=.8, and a two-sided chi-squared test comparing either of Streams 1 and 2 (N=100) with the control group (NC=200), the minimum detectable effect would be around 7 percentage points. A higher proportion of control group adoption leads to a larger minimum detectable effect: if 5% of the control group adopt, the MDE is 10 percentage points; and if 10% adopt, the MDE is 12 percentage points.
It will be harder to detect smaller differences between the two treatment arms. For example, if 15% of Stream 1 adopt, we will only be able to detect a 17 percentage point higher adoption rate in Stream 2.
Primary outcome (2), a measure of intentions, will be a score 1-5. If we standardise this, so that the variable has a mean of zero and standard deviation 1, we will be able to detect an increase in the standardised intentions score of 0.34 standard deviations between either treatment group and the control.
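The power calculations above can be checked with a short script. This is a sketch using the arcsine (Cohen's h) normal approximation rather than an exact chi-squared power calculation, so the proportion results may differ from the registration's figures by a percentage point or so; the function names are ours.

```python
import math

Z_ALPHA = 1.959964  # two-sided critical value for alpha = 0.05
Z_BETA = 0.841621   # corresponds to power = 0.80

def mde_proportions(p_base, n_treat, n_control):
    """MDE (in proportion points) for a two-sample comparison of
    proportions, via Cohen's arcsine (h) approximation."""
    h = (Z_ALPHA + Z_BETA) * math.sqrt(1 / n_treat + 1 / n_control)
    phi = math.asin(math.sqrt(p_base))
    return math.sin(phi + h / 2) ** 2 - p_base

def mde_standardised(n_treat, n_control):
    """MDE in standard-deviation units for a standardised
    (mean 0, SD 1) outcome such as the intentions score."""
    return (Z_ALPHA + Z_BETA) * math.sqrt(1 / n_treat + 1 / n_control)

# Stream vs control (N = 100 vs 200), 1% control-group adoption:
print(round(mde_proportions(0.01, 100, 200), 3))  # roughly 6-7 pp
# Stream 1 vs Stream 2 (100 vs 100), 15% baseline adoption:
print(round(mde_proportions(0.15, 100, 100), 3))  # roughly 16-17 pp
# Standardised intentions score, treatment vs control:
print(round(mde_standardised(100, 200), 2))       # 0.34 SD
```

The standardised-score result reproduces the 0.34 standard deviations quoted above exactly, since it depends only on the sample sizes and the alpha/power constants.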
INSTITUTIONAL REVIEW BOARDS (IRBs)
LSE Ethics Committee
IRB Approval Date
IRB Approval Number