Public Attitudes Towards AI in Children's Social Care: An Online Conjoint Experiment with British Adults

Last registered on July 23, 2025

Pre-Trial

Trial Information

General Information

Title
Public Attitudes Towards AI in Children's Social Care: An Online Conjoint Experiment with British Adults
RCT ID
AEARCTR-0015042
Initial registration date
June 29, 2025

First published
June 30, 2025, 6:15 AM EDT

Last updated
July 23, 2025, 6:27 PM EDT

Locations

Primary Investigator

Affiliation
Birkbeck College

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-07-21
End date
2025-08-31
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This trial aims to investigate the trade-off between procedural safeguards and accuracy in how citizens perceive AI systems in the context of children's social care. The relevant procedural safeguards examined are transparency regarding the overall system, transparency in the decision-making process for specific children, and the role of the system in decision-making. The measures of accuracy studied are whether the system reduces false negatives (i.e., how many fewer at-risk children had their cases incorrectly closed) and false positives (i.e., how many fewer safe children had their cases incorrectly progressed). Trade-offs will be studied in terms of the willingness to "pay" for procedural safeguards in relation to each accuracy measure, e.g., how much reduction in false positives will respondents be willing to give up to ensure transparency in the overall AI system? To achieve this, statistical models known as mixed logit models will be employed, which measure the degree to which individuals vary in their willingness to "pay" for accuracy without needing to observe the factors that determine that variation, while also allowing for consideration of whether preferences for the different safeguards are correlated. The differences in willingness to pay associated with trust, both in general institutions (such as local councils and the UK's legal system) and in social workers, will be studied, with trust in social workers broken down into competency, motivation, and resources to undertake their work. As a more exploratory analysis, differences in willingness to pay in terms of age and education level will also be explored.
External Link(s)

Registration Citation

Citation
Gibbons, Daniel. 2025. "Public Attitudes Towards AI in Children's Social Care: An Online Conjoint Experiment with British Adults." AEA RCT Registry. July 23. https://doi.org/10.1257/rct.15042-1.1
Sponsors & Partners

Sponsors

Experimental Details

Interventions

Intervention(s)
Conjoint experiment: each task presents a table comparing two AI systems, constructed from the following materials.

Instructions:
Children's services often receive reports, for example, from schoolteachers, about a child who might be at risk of abuse or neglect. The local council wants to use a computer program that learns from past cases to help social workers. This program helps predict if the child might be harmed if no action is taken.

Now, imagine you are a council worker tasked with choosing which program to pick. On the following pages, please indicate which of the two programs shown you would personally prefer to see used by the council.

You will see seven pairs of programs; please choose one from each pair.

Background:
Before the introduction of the program, social workers made correct decisions about 80 children out of every 100 children referred to them.
They made two kinds of mistakes:

"False negatives": 10 children from every 100 referred children were at risk of harm, but were missed by social workers, and their cases were closed.
"False positives": 10 children from every 100 referred children were not at risk of harm, but were incorrectly progressed to further investigation.
Below are descriptions of two programs that could help social workers with screening referrals. Please read the descriptions and indicate which of the two programs shown you would personally prefer to see used by the council.

Attributes:
- Use of the AI system, "How the program is used by social workers in decisions made about children" (levels: The program makes a preliminary decision that the social worker must override to change; The program only makes a recommendation to the social worker)
- Measuring overall transparency, "Information about the program is:" (levels: Easy to understand and publicly available; Hard to understand without expert advice, but publicly available; Not publicly available)
- Measuring transparency in individual decisions, "For each specific child, information about how the program made a prediction is:" (levels:
- How many fewer at-risk children had their cases incorrectly closed (levels: 0, 2, 4, 6, 8, 10). For context, the background states that with social workers alone, "10 children from every 100 referred children were at risk of harm, but were missed by social workers, and their cases were closed".
- How many fewer safe children had their cases incorrectly progressed (levels: 0, 2, 4, 6, 8, 10). For context, the background states that with social workers alone, "10 children from every 100 referred children were not at risk of harm, but were incorrectly progressed to further investigation".
Intervention (Hidden)
Intervention Start Date
2025-07-21
Intervention End Date
2025-08-31

Primary Outcomes

Primary Outcomes (end points)
Forced choice of an AI system from the two options presented in each task. In terms of estimands, these will primarily be analysed as willingness to pay denominated in units of accuracy in the primary analysis, but Average Marginal Component Effects (AMCEs) and choice probabilities will also be considered.
Primary Outcomes (explanation)
Willingness to pay in a mixed logit framework can be calculated by dividing the attribute coefficients by a chosen scale or "price" coefficient (e.g., the coefficient on false-positive reduction), or by fitting the model directly in willingness-to-pay space, as explained here: https://www.jstatsoft.org/article/view/v105i10.
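A minimal sketch of the coefficient-ratio approach, using hypothetical coefficient values and attribute names for illustration only (these are not estimates from this study):

```python
# Hypothetical mixed logit mean coefficients (illustrative values only,
# not estimates from this study).
coefs = {
    "transparent_system": 0.60,   # utility of easy-to-understand, public information
    "recommendation_only": 0.35,  # utility of an advisory (vs. default-decision) role
    "fp_reduction": 0.15,         # utility per one fewer false positive (the "price")
}

# Willingness to "pay": how many units of false-positive reduction a
# respondent would give up to obtain each procedural safeguard.
price = coefs["fp_reduction"]
wtp = {k: v / price for k, v in coefs.items() if k != "fp_reduction"}
for attr, value in wtp.items():
    print(f"{attr}: worth {value:.2f} fewer false positives")
```

Fitting the model in willingness-to-pay space instead estimates these ratios directly, which can give better-behaved standard errors when the price coefficient is small.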

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
This will be an online conjoint experiment consisting of six tasks, each presenting two options whose levels are randomised independently across the five attributes described above under "Interventions". A seventh task repeats the first task with the two options flipped, as a consistency check. The order in which the attributes appear will be randomised across participants but held constant across each participant's set of tasks.
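A minimal sketch of this task structure (attribute names and levels are paraphrased from the "Interventions" section, with the individual-decision transparency attribute omitted; this is not the actual oTree implementation):

```python
import random

# Each task pairs two profiles whose levels are drawn uniformly and
# independently per attribute; a seventh task repeats the first task
# with the two profiles' positions flipped, as a consistency check.
ATTRIBUTES = {
    "role": ["preliminary decision", "recommendation only"],
    "overall_transparency": ["easy, public", "hard, public", "not public"],
    "fewer_false_negatives": [0, 2, 4, 6, 8, 10],
    "fewer_false_positives": [0, 2, 4, 6, 8, 10],
}

def draw_profile(rng):
    # One level drawn uniformly and independently for each attribute.
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def build_tasks(rng=None):
    rng = rng or random.Random()
    tasks = [(draw_profile(rng), draw_profile(rng)) for _ in range(6)]
    first_left, first_right = tasks[0]
    tasks.append((first_right, first_left))  # flipped repeat of task 1
    return tasks
```

Because each attribute is drawn independently, the two profiles in a pair can coincide on some attributes, which is standard in fully randomised conjoint designs.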
Experimental Design Details
Randomization Method
Randomisation done via oTree.
Randomization Unit
Individual
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
N/A
Sample size: planned number of observations
2,211 British online participants
Sample size (or number of clusters) by treatment arms
Conjoint experiment: levels are randomised uniformly within each attribute, independently for each accuracy measure and each procedural safeguard attribute. This should result in a roughly equal number of completed tasks across the different attribute levels.
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Given that the willingness-to-pay estimates will be produced from complex models in a novel context, it is difficult to compute an appropriate MDES here. Using the AMCEs standard in political science, the cjpower application of Freitag and Schuessler (2020), based on Schuessler and Freitag (2020), yields an MDES of 0.03, or 0.039 assuming the conservative correction for intra-respondent reliability in Clayton et al. (2023; see Statistical Methods), meaning a difference of 3 or 3.9 percentage points in choice probability can be detected. While this is larger than the smallest statistically significant effect size in Horvath et al. (2023), this is a higher-stakes setting, with perhaps correspondingly stronger preferences over attributes, and the MDES is still relatively low. Obtaining an MDES of 0.02 would require 4,904 respondents, which is beyond the scope of the available budget.
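The registered figures can be approximated with a simple two-proportion power calculation, sketched below under the assumptions of a forced-choice outcome with worst-case Bernoulli variance 0.25 and uniform level assignment across a six-level attribute; this is an illustrative approximation, not the exact cjpower formula:

```python
import math

def amce_mdes(respondents, tasks, levels, profiles_per_task=2):
    """Approximate MDES for an AMCE at alpha = 0.05 (two-sided), 80% power."""
    z_alpha = 1.959964  # Phi^{-1}(0.975)
    z_power = 0.841621  # Phi^{-1}(0.80)
    n_per_level = respondents * tasks * profiles_per_task / levels
    se = math.sqrt(2 * 0.25 / n_per_level)  # worst-case Bernoulli variance
    return (z_alpha + z_power) * se

print(round(amce_mdes(2211, 6, 6), 3))  # ~0.03 for the planned sample
print(round(amce_mdes(4904, 6, 6), 3))  # ~0.02, matching the stated requirement
```

Under these assumptions the sketch reproduces both the 0.03 MDES for the planned 2,211 respondents and the 4,904 respondents needed for an MDES of 0.02.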
Supporting Documents and Materials

Documents

Document Name
Survey Questions
Document Type
survey_instrument
Document Description
The list of survey questions for the experiment.
File
Survey Questions

MD5: 86097a6de79eee09f11c2cc729b5fe5d

SHA1: 7197ef224d31e820fea8f58ba333c9e898e3d3b1

Uploaded At: June 28, 2025

IRB

Institutional Review Boards (IRBs)

IRB Name
Birkbeck College of Social Sciences Ethics
IRB Approval Date
2025-06-27
IRB Approval Number
cmbz0kg6c009pl708mnsad1tu
Analysis Plan

Analysis Plan Documents

Trial Protocol for Prospective Conjoint Experiment

MD5: 7be2f0477140fcbb27ea965066a3c048

SHA1: 3304ba5affbabb84d4bc282e498c98bcdafc1d07

Uploaded At: June 29, 2025

Post-Trial

Post Trial Information

Study Withdrawal

There is information in this trial unavailable to the public.

Intervention

Is the intervention completed?
No
Data Collection Complete
Data Publication

Data Publication

Is public data available?
No

Program Files

Program Files
Reports, Papers & Other Materials

Relevant Paper(s)

Reports & Other Materials