Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning

Last registered on October 13, 2025

Pre-Trial

Trial Information

General Information

Title
Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning
RCT ID
AEARCTR-0017007
Initial registration date
October 12, 2025

Initial registration date is when the trial was registered.

It corresponds to when the registration was submitted to the Registry to be reviewed for publication.

First published
October 13, 2025, 11:11 AM EDT

First published corresponds to when the trial was first made public on the Registry after being reviewed.

Locations

Region

Primary Investigator

Affiliation
Liaoning University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-11-01
End date
2026-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
Trial Overview / Abstract
This study investigates how artificial intelligence (AI) interacts with human learning proximity—specifically, whether AI-driven instruction is more compatible with *Question (Q)* or *Answer (A)* proximity orientations of learners. Building on proximity theory and cognitive load frameworks, the experiment examines whether AI aligns more naturally with the learner’s tendency to construct knowledge through questions (Q-proximity) or to consolidate understanding through answers (A-proximity). The study employs a randomized 2×2 factorial design manipulating (1) proximity orientation (Q vs A) and (2) AI compatibility level (High vs Low). Participants complete microlearning tasks within an AI-assisted learning environment where AI either mirrors or mismatches their proximity orientation. The primary outcomes include engagement, arousal, uncertainty reduction, and learning performance. The results will clarify the cognitive–affective mechanisms by which AI complements or conflicts with human learning orientations, offering empirical guidance for the design of adaptive and personalized AI learning systems.
Primary Research Question
Is AI-driven instruction more compatible with Question-oriented (Q-proximity) or Answer-oriented (A-proximity) learners, and how does this compatibility influence engagement, arousal, and performance?
Hypotheses
H1: AI–learner compatibility (matching AI mode with learner proximity type) leads to higher engagement and performance than incompatibility.
H2: AI’s answer-generative nature makes it more compatible with A-proximity learners, improving near-term learning outcomes.
H3: Q-proximity learners benefit more when AI adopts a Socratic or exploratory questioning mode, supporting curiosity and transfer.
H4: The effects of compatibility on performance are mediated by arousal and uncertainty reduction.
Intervention Description
Participants are randomly assigned to one of four experimental conditions that vary in the match between their learning orientation (Q or A) and the AI’s instructional mode (Socratic questioning vs direct answer delivery). Each participant completes a 45–60 minute microlearning session via an AI-powered learning interface. The AI’s dialogue structure and feedback are scripted to either mirror or mismatch the learner’s proximity orientation. After completing the module, participants take an immediate post-test and a delayed test one week later.
Experimental Design
A between-subjects randomized controlled trial with a 2×2 factorial design:
Factor 1: Learner Proximity Orientation (Q vs A)
Factor 2: AI Compatibility (High vs Low)

Treatment Arms:
1. Q-Compatible AI (Socratic questioning matched to Q orientation)
2. Q-Incompatible AI (Direct answers given to Q orientation)
3. A-Compatible AI (Direct answers matched to A orientation)
4. A-Incompatible AI (Socratic prompts mismatched to A orientation)
Experimental Unit
Individual learner (between-subject randomization).
Sample Size
Target N = 300 participants (approximately 75 per treatment cell).
Randomization Procedure
Participants will be randomly assigned to one of the four conditions using computer-generated block randomization in the online experimental platform. Randomization will be stratified by gender, learner type (student vs professional), and prior AI familiarity.
Sampling Method and Recruitment
Voluntary participants will be recruited from university students and adult learners through email invitations and online learning platforms. Inclusion criteria: age 18+, English proficiency, basic computer literacy. Exclusion criteria: prior exposure to the experimental module or incomplete responses.
Intervention Period
Each participant engages in one 45–60 minute session (Day 1) and a 10-minute follow-up test (Day 8).

Registration Citation

Citation
Malik, Tariq. 2025. "Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning." AEA RCT Registry. October 13. https://doi.org/10.1257/rct.17007-1.0
Experimental Details

Interventions

Intervention(s)
Intervention Start Date
2025-11-15
Intervention End Date
2026-04-15

Primary Outcomes

Primary Outcomes (end points)
Primary Outcomes
1. Learning engagement (behavioral: interaction frequency; affective: interest and arousal; cognitive: inquiry depth).
2. Learning performance (immediate quiz and one-week retention test).
3. AI–learner compatibility effect (difference between matched and mismatched AI responses).
Primary Outcomes (explanation)
The central primary outcome is learning performance, measured as the participant’s accuracy and efficiency on post-learning assessments. It combines (a) the percentage of correct answers on an 8-item immediate quiz and (b) an inverse response-time efficiency score (seconds per correct response, reversed so that higher = better). Both measures are standardized (z-scores) and averaged to yield a single Learning Performance Index. This index captures the learner’s cognitive mastery and fluency following exposure to the AI-mediated instructional module.
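The Learning Performance Index computation can be sketched as below. This is a minimal illustration of the stated procedure (z-score each measure, reverse the timing measure, average); the function names are illustrative and the example values are hypothetical.

```python
import statistics

def zscores(xs):
    """Standardize a list of values (sample standard deviation)."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def performance_index(quiz_pct, sec_per_correct):
    """Combine quiz accuracy and reversed response-time efficiency.

    Both inputs are lists over participants. Seconds-per-correct is
    negated so that higher = better, then each measure is z-scored
    and the two are averaged into one index per participant.
    """
    z_quiz = zscores(quiz_pct)
    z_speed = zscores([-s for s in sec_per_correct])
    return [(a + b) / 2 for a, b in zip(z_quiz, z_speed)]
```

For example, a participant at the sample mean on both components receives an index of 0, while one a full standard deviation above the mean on both receives 1.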

Secondary Outcomes

Secondary Outcomes (end points)
Secondary Outcomes
1. Arousal (6-point self-report scale).
2. Uncertainty reduction.
3. Far-transfer problem-solving score.
4. Time-on-task and AI interaction count.
Secondary Outcomes (explanation)
The secondary outcomes capture the motivational and cognitive–affective processes that underlie learning engagement and retention in AI-mediated environments:

Learning Engagement Index: A composite of behavioral (interaction frequency, hint use, time-on-task), affective (self-reported interest and excitement), and cognitive (depth of inquiry and focus) indicators. All items are standardized and averaged to represent overall engagement.

Arousal (Mediator): The learner’s physiological and emotional activation during the module, measured using a 6-point self-report scale (e.g., “excited,” “alert,” “energized”), averaged across items to form an arousal index.

Uncertainty Reduction (Mediator): The degree to which learners feel their questions or knowledge gaps have been resolved. Constructed from standardized items on perceived clarity, understanding confidence, and reduced ambiguity.

Delayed Performance (Retention): Performance on a parallel 4-item quiz administered one week later, standardized and averaged with response-time efficiency to measure knowledge retention.

Far-Transfer Score: Learners’ ability to apply concepts in novel contexts, based on two short open-ended tasks rated on conceptual accuracy, justification, and novelty by independent coders (average of standardized scores).

AI–Learner Compatibility Index: Perceived and behavioral alignment between the learner’s proximity orientation (Q or A) and the AI’s instructional mode. Constructed from self-reported perceived match and log-based concordance (share of aligned AI turns).
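The log-based concordance component of the compatibility index, the share of aligned AI turns, can be sketched as below. The turn-mode labels ("Q" for a Socratic question, "A" for a direct answer) and function name are illustrative.

```python
def concordance(ai_turn_modes, learner_orientation):
    """Share of AI turns whose mode matches the learner's orientation.

    `ai_turn_modes` is a per-turn log of "Q" (Socratic question) or
    "A" (direct answer); `learner_orientation` is the participant's
    assessed proximity type, also "Q" or "A".
    """
    matches = sum(1 for mode in ai_turn_modes if mode == learner_orientation)
    return matches / len(ai_turn_modes)
```

For a Q-oriented learner whose session log contains three questioning turns out of four, the concordance share is 0.75.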

Experimental Design

Experimental Design

Experimental Design Details
Not available
Randomization Method
Computer-generated block randomization at the individual level, implemented in the online experimental platform and stratified by gender, learner type (student vs professional), and prior AI familiarity.
Randomization Unit

Individual learner (between-subject randomization).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
None (individual-level randomization); 300 to 500 participants (university students and adult learners) are planned.
Sample size: planned number of observations
Planned Number of Observations: Between 300 and 500 individual learner participants will be observed, depending on final recruitment and attrition rates. Each participant represents one independent observation. Participants will be randomly assigned in approximately equal numbers (75–125 per condition) across four experimental groups in a 2×2 factorial design:
1. Q-Compatible AI
2. Q-Incompatible AI
3. A-Compatible AI
4. A-Incompatible AI
The final sample size will ensure statistical power (0.80, α = 0.05) to detect small-to-moderate effects (Cohen’s f = 0.20) on the primary outcomes of learning engagement and performance.

Planned Number of Clusters: None (individual-level randomization). Each participant serves as an independent observational unit.

Cluster Definition: Not applicable. Randomization occurs at the individual level rather than by group, institution, or class. Each learner interacts independently with the AI learning environment, ensuring no within-group dependence.
Sample size (or number of clusters) by treatment arms
75 to 125 participants per treatment arm (300 to 500 total across the four arms).
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Design & test: Two-sided α = 0.05, power = 0.80, individual randomization (no clustering).

Estimator: Main effects from a balanced 2×2 design (Q vs A; Compatibility High vs Low). The main-effect contrast compares two groups of size N/2, so

SE = √(4σ²/N) = 2σ/√N,  MDE = (z₀.₉₇₅ + z₀.₈₀) · SE ≈ 2.80 · (2σ/√N).

Unit & SD: We report (a) standardized units (σ = 1) for the Learning Performance Index and (b) percentage-point equivalents for the Immediate Quiz %, assuming a baseline SD of 15 percentage points.

MDE by planned total sample size (no clustering; DEFF = 1):
N = 300: 0.323 SD (≈ 4.85 p.p. if quiz SD = 15 p.p.)
N = 400: 0.280 SD (≈ 4.20 p.p.)
N = 500: 0.250 SD (≈ 3.76 p.p.)

Interpretation: With N = 300–500, the study can detect main-effect differences of about 0.25–0.32 SD on the primary performance index (≈ 3.8–4.9 percentage points on the immediate quiz).

If clustering were introduced later (illustrative only): If, for example, learning occurred in classes of size m = 25 with ICC = 0.05, the design effect would be

DEFF = 1 + (m − 1) · ICC = 1 + 24 · 0.05 = 2.20,

inflating the MDE by √DEFF ≈ 1.48:
N = 300: 0.480 SD
N = 500: 0.372 SD
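The MDE figures above follow directly from the stated formula and can be reproduced with a short script. This is a minimal sketch assuming the two-group main-effect contrast described in the registration; the function name is illustrative.

```python
import math

Z_ALPHA = 1.959964  # z for two-sided alpha = 0.05 (97.5th percentile)
Z_POWER = 0.841621  # z for power = 0.80 (80th percentile)

def mde(n_total, sigma=1.0, deff=1.0):
    """Minimum detectable main effect for a balanced split of N into N/2 vs N/2.

    SE = sqrt(DEFF * 4 * sigma^2 / N); MDE = (z_alpha + z_power) * SE.
    DEFF = 1 corresponds to individual randomization (no clustering).
    """
    se = math.sqrt(deff * 4 * sigma**2 / n_total)
    return (Z_ALPHA + Z_POWER) * se

if __name__ == "__main__":
    for n in (300, 400, 500):
        print(n, round(mde(n), 3))
    # Illustrative clustered scenario: classes of m = 25, ICC = 0.05.
    deff = 1 + (25 - 1) * 0.05
    print(300, round(mde(300, deff=deff), 3))
```

Multiplying a standardized MDE by the assumed quiz SD of 15 percentage points gives the percentage-point equivalents reported above.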
IRB

Institutional Review Boards (IRBs)

IRB Name
Liaoning University
IRB Approval Date
2024-06-06
IRB Approval Number
N/A