Service Design Training vouchers for manufacturing SMEs
Initial registration date
March 08, 2017
March 08, 2017 11:17 AM EST
Dialogic, Utrecht University
Other Primary Investigator(s)
Dialogic, Utrecht University
Additional Trial Information
Despite evidence that switching to service-based business models might increase their competitiveness, many manufacturing SMEs experience difficulties when trying to do so. Due to a lack of awareness of its potential, SMEs are often unable to identify the tools and parties relevant for successful ‘servitisation’. To investigate whether policy support can make a difference, a Dutch pragmatic trial issues vouchers for manufacturing SMEs to receive service design training from public or private knowledge suppliers.
Aiming at more than 240 applications, the RCT is designed to grant 80 submissions and reject the remaining 160 or more. An initial benchmark survey is followed up (amongst both the intervention group and the control groups) 1 month and 13 months after the treatment period ends. This, in combination with several case studies, provides a basis for examining the extent to which policy-backed service design training truly helps manufacturing SMEs to strengthen their competitiveness.
Registration Citation
The core of the experiment is a voucher scheme for manufacturing SMEs to receive service design training from creative industries’ firms or knowledge institutes with relevant expertise. SMEs that are granted a randomly assigned voucher can approach a qualifying knowledge supplier with a ‘knowledge question’ related to service design. This question can range from “what is service design” in general, to more specific questions related to management, tools, or actual project development. The voucher is worth max. €4000, of which the SME has to contribute 25%. In practice, this implies that a voucher project provides a participating SME with a few days of interaction with one or more experts from the knowledge supplier. There is considerable freedom when it comes to the form and content of the training, as long as it meets the characteristics of service design training.
The primary research question is: Do manufacturing SMEs that receive vouchers for obtaining service design training become more competitive than those that do not receive such support?
Intervention Start Date
Intervention End Date
Primary Outcomes (end points)
Do firms in the intervention group, in comparison to those in the control group:
A. have a better understanding of what service design is, and how it might be relevant for their particular case;
B. have a better understanding of the challenges they need to overcome to succeed in developing competitive product-service combinations;
C. have a better overview of the types of partners that might be able to support the servitisation process;
D. introduce more (and more advanced) services;
E. achieve higher customer satisfaction;
F. yield more turnover and prove more profitable (in general / due to their services)?
Primary Outcomes (explanation)
Secondary Outcomes (end points)
Secondary Outcomes (explanation)
To assess the effect of the service design training vouchers, the performance of the manufacturing SMEs (the unit of analysis) receiving the vouchers is compared against that of firms not receiving any treatment. The trial thus concerns an RCT with a parallel-group design.
The general set-up of the RCT involves a baseline survey, followed by two outcome surveys in later stages. The focal unit of analysis is the firm level. The items in the questionnaire are based on an extensive review of the servitisation literature. All items, typically yes/no questions or 5-point Likert scales, are adopted from existing studies such as the European Manufacturing Survey (EMS). This not only guarantees the validity of the scales, but also opens up possibilities to benchmark the applicants against the responses in those other studies. In line with the approach followed in the EMS, the benchmark and outcome survey items reflect a preference for surveying the implementation of actual practices rather than only asking for perceptions.

A first outcome survey is conducted 1 month after the 4-month treatment period is over, and a second one follows 13 months later. The surveys will be distributed amongst all applicants. If there are only a few of them, control groups 2 and 3 will also be addressed. Control group 2 is made up of early bird survey respondents who did not submit an actual application for a voucher. Control group 3 is the collection of firms that were not involved at all, but belong to the communities targeted in the recruitment strategy. Strictly speaking, involving this last group would no longer fit the principles of an RCT.
Experimental Design Details
At an abstract level, the treatment offered by the voucher scheme is always a service design training. How this is operationalised can differ per voucher project, as SMEs have the freedom to ask knowledge suppliers for training content and forms that meet their specific situation. Although the treatment is thus more heterogeneous than a sum of money or a handed-out iPad, it is common to conduct valid RCTs based on training interventions. After all, the firm’s situation will differ from case to case as well, and the top priority is to attract sufficient applications. To ensure fidelity, the trial protocol contains clear instructions (for the knowledge suppliers) on what types of treatment can be provided and what procedures should be followed.
There is a minor particularity in the design of the voucher scheme. There are two intervention groups: one group (a) consists of firms filling out the baseline survey before the applications open, while the other group (b) consists of firms that fill out the baseline survey as part of their application. The same dichotomy holds for our primary control group. Distinguishing between early and late appliers allows us to analyse whether the process of searching for an appropriate knowledge supplier (and formulating a knowledge question) in itself already has any effect. This trial arm does not concern the main analysis, but is a relevant extension of the regular comparison between the two key groups. The fact that some early birds in the end do not submit an application also provides a secondary control group of firms that can be approached with surveys 1 and 2.
From an evaluation perspective, it is relevant to know whether the service design trainings that are ultimately provided can be grouped into clearly distinct treatment categories (e.g. presentations only, skills-oriented workshops, or innovation-oriented workshops). These treatment varieties cannot be regarded as full-fledged trial arms (the respondent selects the category rather than being assigned to it), but they do provide a basis for analysing which category yielded the best results.
Finally, to anticipate a low number of applications, the project also includes some possibilities to conduct a quasi-experimental design (QED). Many of the outcome indicators are retrieved from the European Manufacturing Survey (EMS). The last EMS (2015) provides a baseline for checking whether the voucher applicants are similar to the general population of manufacturing SMEs. Results from the outcome surveys can later be compared with the EMS 2017 results, following a difference-in-differences approach. Obviously this design suffers from significant biases, and should be regarded only as a fall-back option.
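The fall-back comparison against the EMS waves boils down to a difference-in-differences estimate. A minimal sketch, using hypothetical group means rather than actual EMS figures:

```python
# Illustrative difference-in-differences estimate for the QED fall-back.
# The proportions below are hypothetical placeholders, not EMS data.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DiD = (treated post - treated pre) - (control post - control pre)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Example: share of firms offering a given service type (made-up numbers).
effect = did_estimate(treat_pre=0.20, treat_post=0.35,
                      control_pre=0.22, control_post=0.27)
print(f"Estimated treatment effect: {effect:+.2f}")  # +0.10
```

The control-group trend (here +0.05) is subtracted from the treated-group trend (+0.15), which is exactly the correction for common time trends that a raw before/after comparison would miss.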
All communication on the voucher scheme clearly states that it concerns an RCT involving randomisation. Applicants provide informed consent when filling out their submission forms to RVO. Participants passing RVO’s technical check will be assigned to ‘In’ and ‘Out’ groups by means of blocked randomisation, as this ensures that in the end exactly 80 applications are granted. At the time of the lottery, the full sample is known, so the allocation ratio and ideal block size can be determined. There are no reasons to opt for stratified randomisation, unless responses to the baseline survey suggest that there are substantial differences (on the outcome indicators in that survey) between firms of different sizes or sub-sectors. The research team at Dialogic generates the allocation sequence. For practical reasons as well as for the sake of allocation concealment, the RVO agency (where applications are received and checked) will be in charge of randomising the participants using the simple ‘aselect’ Excel function.
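The actual draw is performed by RVO with Excel’s ‘aselect’ (RAND) function; purely as an illustration of how blocked randomisation with the intended 1:2 allocation could look, here is a Python sketch (the block size of 3, the seed, and the firm IDs are our own assumptions):

```python
import random

def blocked_randomisation(applicant_ids, n_grant, block_size=3, seed=2017):
    """Shuffle 'In'/'Out' labels within consecutive blocks of applicants.
    With a 1:2 ratio, each block of 3 holds one 'In' and two 'Out' labels,
    so grants stay evenly spread across the application list."""
    rng = random.Random(seed)
    assignments = {}
    granted = 0
    for start in range(0, len(applicant_ids), block_size):
        block = applicant_ids[start:start + block_size]
        labels = ["In"] + ["Out"] * (block_size - 1)
        rng.shuffle(labels)
        for firm, label in zip(block, labels):
            if label == "In" and granted >= n_grant:
                label = "Out"  # safety cap at the budgeted number of vouchers
            if label == "In":
                granted += 1
            assignments[firm] = label
    return assignments

firms = [f"SME-{i:03d}" for i in range(240)]  # hypothetical applicant IDs
result = blocked_randomisation(firms, n_grant=80)
print(sum(v == "In" for v in result.values()))  # 80
```

With 240 applications and blocks of 3, the scheme yields exactly the 80 grants and 160 rejections the design calls for; in practice the block size would be set once the final sample size is known.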
Inclusion/exclusion criteria

The voucher scheme is open to applications from SMEs of 10-250 FTE from manufacturing industries (based on a selection of NACE categories). As part of their application, firms need to indicate at which specific knowledge supplier they intend to exchange their voucher for service design training. Knowledge suppliers can be either public research institutes or firms from selected creative industries. Part of the ongoing RCT project is the development of a recruitment plan, including outreach activities targeted at, amongst others, the RVO website, the Chamber of Commerce, industry associations, and the Topsector ‘Creative Industries’.
Was the treatment clustered?
Sample size: planned number of clusters
The funding available for the vouchers alone accounts for €280,000 of the ± €470,000 (£400,000) budget for the formulation, implementation, and evaluation of the entire policy experiment. This allows for a minimum of 80 vouchers, given that at most €3,000 is subsidised per voucher (and SMEs always have to match 25%). An intervention group of 80 firms seems sufficient for sensible statistical analyses, as there is no strict need to include control variables in extensive regressions as long as the intervention group is matched with a considerable number of firms in the control group(s). For the primary control group we aim at a size of >160 firms, implying that at least 240 sound applications need to be submitted. Unequal allocation is preferred because it takes little effort for firms in the control group to participate, and it is likely that not all of them will keep participating in the outcome surveys. According to power calculations using free online software, a sample size of 240 firms with allocation ratio 1:2 would be sufficient for finding statistically significant results (at the 95% level) when using binary survey items like “did your firm introduce service type x?”.
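The registration refers to free online power software; as a rough cross-check, the standard normal-approximation sample-size formula for comparing two proportions with 1:2 allocation can be sketched as follows (α = 0.05, power = 0.80, and the 30% → 50% effect size are illustrative assumptions, not figures from the registration):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, ratio=2.0, alpha=0.05, power=0.80):
    """Normal-approximation sample sizes for a two-proportion comparison
    with unequal allocation (ratio = n_control / n_treated)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + ratio * p2) / (1 + ratio)    # pooled proportion
    n1 = ((z_a * sqrt(p_bar * (1 - p_bar) * (1 + 1 / ratio))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2) / ratio)) ** 2
          / (p1 - p2) ** 2)
    return ceil(n1), ceil(ratio * n1)

# Hypothetical effect: treatment lifts a binary outcome from 30% to 50%.
n_treat, n_ctrl = n_per_group(p1=0.50, p2=0.30)
print(n_treat, n_ctrl)  # 69 138
```

Under these assumed inputs, roughly 69 treated and 138 control firms (about 207 in total) would suffice, which is consistent with the registration’s aim of 80 granted and >160 rejected applications once some survey attrition is priced in.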
Sample size: planned number of observations
See planned number of clusters.
Sample size (or number of clusters) by treatment arms
See planned number of clusters.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
INSTITUTIONAL REVIEW BOARDS (IRBs)
Post Trial Information
Is the intervention completed?
Is data collection complete?