
AI for SMEs

Last registered on September 06, 2019

Pre-Trial

Trial Information

General Information

Title
AI for SMEs
RCT ID
AEARCTR-0003999
Initial registration date
April 05, 2019


First published
April 06, 2019, 3:08 PM EDT


Last updated
September 06, 2019, 6:54 AM EDT


Locations

Primary Investigator

Anna Valero
Affiliation
London School of Economics and Political Science

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2019-04-15
End date
2020-06-30
Secondary IDs
Abstract
A factor underlying the UK's poor productivity performance in recent years has been underinvestment in new technologies. This has been a particular issue amongst SMEs and in the “low wage sectors”, which include retail and hospitality, both large employers. As yet, little is known about the causal effects of policies that seek to stimulate the adoption of new technologies, or about their impacts on business performance. This randomised controlled trial seeks to understand what type of intervention (light-touch versus more targeted) works for encouraging SME adoption of cutting-edge but tried and tested artificial intelligence (AI) technologies, and for stimulating innovation and productivity improvements more broadly in London's retail and hospitality sectors.

Registration Citation

Citation
Valero, Anna. 2019. "AI for SMEs." AEA RCT Registry. September 06. https://doi.org/10.1257/rct.3999-2.0
Former Citation
Valero, Anna. 2019. "AI for SMEs." AEA RCT Registry. September 06. https://www.socialscienceregistry.org/trials/3999/history/52912
Sponsors & Partners

There is information in this trial unavailable to the public.
Experimental Details

Interventions

Intervention(s)
This trial will seek to understand what type of intervention (light-touch versus more targeted) works for encouraging small and medium enterprises (SMEs) to adopt cutting-edge but tried and tested artificial intelligence (AI) technologies, and stimulating innovation and productivity improvements more broadly.

We will test two methods of catalysing the adoption of chatbots and marketing automation technology, comparing each against a control group and against each other. Stream 1 uses a market-convening methodology, and Stream 2 is a more targeted intervention based on the idea that access to independent, tailored advice and some financial support is needed in order to increase uptake of AI amongst SMEs.

We will also incentivise involvement in the project for all companies (including those in the control group) by giving all who sign up access to a 'Guide to AI for SMEs' document. This implies that all participants in this programme, including the control group, get a minimal level of intervention that is likely to have some impact on knowledge/awareness.
Intervention Start Date
2019-06-10
Intervention End Date
2020-03-31

Primary Outcomes

Primary Outcomes (end points)
1. Technology assessment process
2. Intentions to adopt chatbots/marketing automation technologies.
3. Actual adoption / decision to adopt chatbots/marketing automation technologies
Primary Outcomes (explanation)
We set out three primary outcomes, which relate to different stages of adoption (persuasion, decision, implementation).

1. Technology assessment process
Variable: Answer to “Have you allocated staff time or resource to exploring the possible adoption of chatbots / marketing automation technologies?” Binary variable 0,1. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether firms are committing resources to moving forwards within the persuasion stage.

2. Intentions to adopt chatbots/marketing automation technologies
Variable: Answer to “How likely are you to adopt chatbots/marketing automation technologies over the next 12 months?” Score 1-5, 1=very unlikely, 5=very likely. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether firms are moving from persuasion towards decision.

3. Actual adoption of chatbots/marketing automation technologies/ decision to adopt
Variable: Answer to “Have you adopted, or taken the decision to adopt, a chatbot/marketing automation technology over the past 6 months?” Binary variable 0,1 (analysed as a level at endline). This variable will give us information on whether a firm has moved into decision/implementation. Given the short time frame of this project, it should be realistic to see movement on this margin. Where firms indicate that they have taken the decision to adopt, we will ask some more specific questions about the commitment mechanism (e.g. signed off by management/in business plan etc.).
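The registration describes analysing each outcome both as a level at endline and as a change from baseline, but does not spell out an estimating equation. Purely as an illustration, the sketch below shows one common way such comparisons are implemented, with treatment indicators and the randomisation strata as controls; the column names (y_end, y_base, stream1, stream2, subsector, size_band) and the use of robust standard errors are assumptions for this example, not details taken from the registration.

```python
# Illustrative only: the registration does not specify the regression.
# Assumed columns: y_end (endline outcome), y_base (baseline outcome),
# stream1/stream2 (0/1 treatment indicators), subsector, size_band (strata).
import pandas as pd
import statsmodels.formula.api as smf

def estimate_effects(df: pd.DataFrame):
    """Endline-level and change-from-baseline regressions with treatment
    indicators and the randomisation strata as controls."""
    level = smf.ols(
        "y_end ~ stream1 + stream2 + C(subsector) + C(size_band)", data=df
    ).fit(cov_type="HC1")

    change = smf.ols(
        "I(y_end - y_base) ~ stream1 + stream2 + C(subsector) + C(size_band)",
        data=df,
    ).fit(cov_type="HC1")
    return level, change
```

In this sketch the coefficients on stream1 and stream2 would be read as the average effect of each stream relative to the control group.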

Secondary Outcomes

Secondary Outcomes (end points)
1. Attitudes towards chatbots/marketing automation technologies.
2. A measure of the change in intentions to adopt chatbots/marketing automation technologies.
3. Broader adoption of technologies or innovative organisational practices/decision to adopt.
4. Self-reported performance measures (short term): ln(turnover), ln(turnover per employee), number of customers.
Secondary Outcomes (explanation)
The secondary measures we set out here are intended to allow us to explore mechanisms and wider impacts.

1. Attitudes towards chatbots/marketing automation technologies
Variable: Answers to a series of questions such as “Please indicate whether you agree with the following statements: chatbots or marketing automation technologies would lead to increased (a) revenues (b) profitability (c) customers”, score 1-5, 1=strongly disagree, 5=strongly agree. This variable can be analysed both as a level at endline and as a change from baseline.

2. A measure of the change in intentions to adopt chatbots / marketing automation technologies
Variable: Answer to the question “Are you now more likely to adopt chatbots / marketing automation technologies than before the intervention?” Binary variable 0,1. This variable would be useful in the case of low adoption rates, but where a high proportion of SMEs state that they are likely to adopt in the baseline survey. At endline, it will explicitly ask SMEs to consider whether they have become more likely to adopt during the course of the programme.

3. Broader adoption of technologies or innovative organisational practices/ decision to adopt
Variable: Answer to “Have you adopted, or taken the decision to adopt, any other innovative technologies or innovative organisational practices over the past 6 months?” Binary variable 0,1. This variable can be analysed both as a level at endline and as a change from baseline, and will give us information on whether a firm has moved into, or committed to, a decision/implementation in a broader sense. Where firms indicate that they have taken the decision to adopt, we will ask some more specific questions about the commitment mechanism (e.g. signed off by management/in business plan etc.).

4. Self-reported performance measures (short term): ln(turnover), ln(turnover per employee), number of customers – we would ask for the average of turnover or customers over the last 3 months.

The technologies within the scope of this project seek to increase revenues by increasing the number of customers or by better converting enquiries into sales. Therefore, revenues and number of customers are the most appropriate performance variables to measure. Normalising by employment to give a productivity measure also seems sensible. We will also consider impacts on employment itself.

We note that we expect that the impacts of technology adoption on firm performance will occur over a longer timeframe than that captured by the endline survey. Nevertheless, we will obtain basic self-reported performance measures (revenues, customers, employees) to conduct some exploratory performance analysis. In our analysis, we will take the natural log of turnover or turnover/employees as these variables tend to be highly skewed. Longer term analysis (beyond the scope of the initial evaluation report) will seek to analyse impacts by tracking firms in administrative data.



Experimental Design

Experimental Design
This is a parallel randomised controlled trial design. There are three arms to the study: a control group and two treatment groups.
Experimental Design Details
This is a parallel RCT design, where the unit of randomisation is an SME. There are three arms to the study: a control group and two treatment groups. Given budget constraints on sample size, we plan to have a larger control group (200 firms). Stream 1 will have 100 participants and Stream 2 will have 100.

Sept 2019 update: it has proved extremely challenging to recruit sufficient businesses using the GLA/business networks. In response, the team decided to move towards a cohort model, so that 3 cohorts would be sequentially recruited and the interventions delivered to each in turn. This is to avoid excessive delay and hence the risk of losing companies that signed up early in the process. In addition, the team moved to a direct approach whereby companies are contacted over the phone; this proved more successful than other methods, and the intention is to adopt this approach for future cohorts.

The intention is still to achieve N=400 across 3 cohorts. If insufficient firms are recruited to have a larger control group, firms will be randomised equally across streams and control.

We initially assumed that we would have sufficient sign-ups to hold firms on reserve lists for Streams 1 and 2, in case of drop-out before the interventions are delivered, but given recruitment difficulties this is unlikely to be feasible.

Given that our sample is relatively small, we plan to stratify by two features: subsector (retail vs hospitality) and size band (firms with fewer than 10 employees versus those with 10 or more).
Randomization Method
Stratified randomisation done in office by a computer (using Stata software).

Randomisation is stratified by firm size band and subsector (retail vs hospitality).
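The assignment itself is planned in Stata, as noted above. Purely as an illustration of stratified assignment by size band and subsector, the following sketch uses Python; the firm data, column names and the exact 2:1:1 Control/Stream 1/Stream 2 allocation within each stratum are assumptions for the example, not the study team's code.

```python
# Illustrative sketch only: the actual assignment is planned in Stata. The firm
# data, column names and the 2:1:1 Control/Stream 1/Stream 2 pattern within
# each stratum are assumptions for this example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(20190610)  # fixed seed so the draw is reproducible

# Hypothetical frame of eligible SMEs with the two stratification variables
firms = pd.DataFrame({
    "firm_id": range(1, 401),
    "subsector": rng.choice(["Retail", "Hospitality"], size=400),
    "size_band": rng.choice(["<10 employees", "10+ employees"], size=400),
})

assigned = []
for _, stratum in firms.groupby(["subsector", "size_band"]):
    # Shuffle firms within the stratum, then allocate them in a repeating
    # Control, Control, Stream 1, Stream 2 pattern (a 2:1:1 split).
    shuffled = stratum.sample(frac=1, random_state=int(rng.integers(1_000_000_000)))
    pattern = ["Control", "Control", "Stream 1", "Stream 2"]
    arms = np.tile(pattern, (len(shuffled) + 3) // 4)[: len(shuffled)]
    assigned.append(shuffled.assign(arm=arms))

firms = pd.concat(assigned).sort_values("firm_id")
print(firms.groupby(["subsector", "size_band"])["arm"].value_counts())
```

Shuffling within each stratum and allocating in a fixed 2:1:1 pattern keeps the arm shares close to the planned 200/100/100 split within every subsector-by-size-band cell.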
Randomization Unit
Firms (eligible SMEs).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
400 SMEs
Sample size: planned number of observations
400 SMEs
Sample size (or number of clusters) by treatment arms
Stream 1: 100
Stream 2: 100
Control: 200 (or more if more eligible firms apply and are happy to proceed in the programme)
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
First, we provide estimates of the minimum detectable effect sizes for primary outcome (3): adoption of chatbot/marketing automation technology. The same logic applies to primary outcome (1), which measures whether the firm has entered into a technology assessment process. We would expect some firms in the control group to adopt AI, as being in the trial means they are potentially interested in these technologies. Let us assume that this proportion is relatively low, at 1%. With alpha=.05, power=.8, and a two-sided chi-squared test, comparing either of Streams 1 and 2 (N=100) with the control group (N=200), the minimum detectable effect would be around 7 percentage points. A higher proportion of control group adoption leads to a larger minimum detectable effect: if 5% of the control group adopt, the MDE is 10 percentage points; and if 10% of the control group adopt, the MDE is 12 percentage points.

It will be harder to detect smaller differences between the two treatment arms. For example, if 15% of Stream 1 adopt, we will only be able to detect a 17 percentage point higher adoption rate in Stream 2.

Primary outcome (2), a measure of intentions, will be a score of 1-5. If we standardise this, so that the variable has a mean of zero and a standard deviation of 1, we will be able to detect an increase in the standardised intentions score of 0.34 standard deviations between either treatment group and the control.
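As a rough cross-check on the figures quoted above, the calculations can be reproduced with a normal approximation to the two-sample test of proportions (the registration refers to a chi-squared test, so exact values may differ slightly). The sketch below is an illustrative recalculation under the stated assumptions (alpha = 0.05, power = 0.8, two-sided, 100 firms per treatment stream, 200 controls); it is not the study team's original power calculation code.

```python
# Illustrative recalculation of the minimum detectable effects quoted above,
# using a normal approximation to the two-sample test of proportions.
import numpy as np
from scipy.stats import norm

ALPHA, POWER = 0.05, 0.80
z_a, z_b = norm.ppf(1 - ALPHA / 2), norm.ppf(POWER)

def power_two_props(p_c, n_c, p_t, n_t):
    """Approximate power of a two-sided two-sample proportion test."""
    p_bar = (n_c * p_c + n_t * p_t) / (n_c + n_t)
    se0 = np.sqrt(p_bar * (1 - p_bar) * (1 / n_c + 1 / n_t))      # SE under H0
    se1 = np.sqrt(p_c * (1 - p_c) / n_c + p_t * (1 - p_t) / n_t)  # SE under H1
    return norm.cdf((abs(p_t - p_c) - z_a * se0) / se1)

def mde(p_c, n_c=200, n_t=100):
    """Smallest detectable difference in adoption rates at 80% power,
    found by scanning treatment-group rates above the control rate."""
    for p_t in np.arange(p_c + 0.001, 1.0, 0.001):
        if power_two_props(p_c, n_c, p_t, n_t) >= POWER:
            return p_t - p_c
    return np.nan

for p_c in (0.01, 0.05, 0.10):
    print(f"control adoption {p_c:.0%}: MDE ~ {100 * mde(p_c):.1f} percentage points")

# Comparing the two treatment arms (100 vs 100), with 15% adoption in Stream 1
print(f"Stream 1 vs Stream 2: MDE ~ {100 * mde(0.15, n_c=100, n_t=100):.1f} percentage points")

# Standardised intentions score (mean 0, SD 1): MDE in standard deviations
print(f"intentions score: MDE ~ {(z_a + z_b) * np.sqrt(1 / 100 + 1 / 200):.2f} SD")
```

Under this approximation the detectable differences come out close to the registered figures (roughly 7, 10 and 12 percentage points for control adoption rates of 1%, 5% and 10%, about 17 percentage points between the two streams, and about 0.34 standard deviations for the standardised intentions score).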
IRB

Institutional Review Boards (IRBs)

IRB Name
LSE Ethics Committee
IRB Approval Date
2018-12-11
IRB Approval Number
000808

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials