Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning

Last registered on October 27, 2025

Pre-Trial

Trial Information

General Information

Title
Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning
RCT ID
AEARCTR-0017006
Initial registration date
October 24, 2025


First published
October 27, 2025, 8:57 AM EDT


Locations

Region

Primary Investigator

Affiliation
Liaoning University

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-11-01
End date
2026-06-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This study investigates how artificial intelligence (AI) interacts with human learning proximity—specifically, whether AI-driven instruction is more compatible with Question (Q) or Answer (A) proximity orientations of learners. Building on proximity theory and cognitive load frameworks, the experiment examines whether AI aligns more naturally with the learner’s tendency to construct knowledge through questions (Q-proximity) or to consolidate understanding through answers (A-proximity). The study employs a randomized 2×2 factorial design manipulating (1) proximity orientation (Q vs A) and (2) AI compatibility level (High vs Low). Participants complete microlearning tasks within an AI-assisted learning environment where AI either mirrors or mismatches their proximity orientation. The primary outcomes include engagement, arousal, uncertainty reduction, and learning performance. The results will clarify the cognitive–affective mechanisms by which AI complements or conflicts with human learning orientations, offering empirical guidance for the design of adaptive and personalized AI learning systems.

Primary Research Question

Is AI-driven instruction more compatible with Question-oriented (Q-proximity) or Answer-oriented (A-proximity) learners, and how does this compatibility influence engagement, arousal, and performance?

Hypotheses

H1: AI–learner compatibility (matching AI mode with learner proximity type) leads to higher engagement and performance than incompatibility.

H2: AI’s answer-generative nature makes it more compatible with A-proximity learners, improving near-term learning outcomes.

H3: Q-proximity learners benefit more when AI adopts a Socratic or exploratory questioning mode, supporting curiosity and transfer.

H4: The effects of compatibility on performance are mediated by arousal and uncertainty reduction.

Experimental Design

A between-subjects randomized controlled trial (2×2 factorial):

Factor 1: Learner proximity orientation (Q-proximity vs A-proximity)

Factor 2: AI compatibility (High: AI mirrors learner orientation; Low: AI mismatches it)

Each participant engages in an AI-guided learning module (e.g., concept learning in economics or statistics).
AI responses are scripted to simulate either Socratic questioning (Q-compatible) or direct answer tutoring (A-compatible).

Treatment Arms

Q-Compatible AI: AI adopts a questioning style (Socratic prompts) aligned with Q-oriented learners.

Q-Incompatible AI: AI provides direct answers to Q-oriented learners.

A-Compatible AI: AI provides structured answers aligned with A-oriented learners.

A-Incompatible AI: AI prompts learners with open questions, mismatching their A-orientation.

Primary Outcomes

Learning engagement: Behavioral (interaction rate), affective (interest/arousal), cognitive (depth of inquiry).

Learning performance: Immediate quiz, far-transfer, and one-week retention.

Mediators: Arousal and uncertainty reduction.

Expected Contribution

This study contributes to understanding human–AI compatibility in educational contexts. By isolating whether AI naturally favors answer-based or question-based learners, it informs the design of adaptive pedagogical AI capable of shifting between inquiry and explanation modes to optimize learning engagement and performance.

Key Details for Registry Form

Experimental unit: Individual learner

Sample size: N = 300 (balanced across four cells)

Randomization: Computerized block randomization

Setting: AI-enabled online learning platform (simulated or live environment)

Analysis: GLM and SEM with mediation (Arousal → Uncertainty → Performance)

Primary outcome timeframe: Immediate and one-week delayed post-tests

IRB: Liaoning University
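The mediation analysis listed above (Arousal → Uncertainty → Performance) could be approximated, for a single mediator, by bootstrapping the indirect effect a·b; this is an illustrative sketch, not the registered SEM specification, and the chained two-mediator model would require a serial extension:

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap the indirect effect a*b for a single-mediator model X -> M -> Y.

    a: slope of M on X; b: slope of Y on M controlling for X.
    Returns the point estimate and a 95% percentile interval.
    """
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))

    def ab(idx):
        xi, mi, yi = x[idx], m[idx], y[idx]
        a = np.polyfit(xi, mi, 1)[0]                          # M ~ X
        design = np.column_stack([np.ones_like(xi), xi, mi])  # Y ~ 1 + X + M
        b = np.linalg.lstsq(design, yi, rcond=None)[0][2]
        return a * b

    n = len(x)
    estimate = ab(np.arange(n))
    boots = [ab(rng.integers(0, n, n)) for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, (lo, hi)
```

A percentile interval excluding zero would be consistent with H4's mediation claim.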
External Link(s)

Registration Citation

Citation
Malik, Tariq. 2025. "Compatibility Between AI Mediation and Learner Proximity: Testing Question (Q) vs Answer (A) Orientations in AI-Driven Learning." AEA RCT Registry. October 27. https://doi.org/10.1257/rct.17006-1.0
Experimental Details

Interventions

Intervention(s)
1) Research aim & hypotheses
• Aim: Test whether brief, music-based prompts can shift students’ learning mode toward question-ownership (vs. answer-ownership) during an interactive session with teacher/peers/AI.
• H1 (primary): Treated students will show a larger increase from pre to post on a Question-Ownership Index (QOI).
• H2 (secondary): Treated students will report higher excitement and confidence, and lower anxiety about learning challenges.
2) Design overview
• Participants: ~60 undergraduates + 10 postgraduates currently taught by the investigator (n ≈ 70).
• Arms (between-subjects RCT):
o Treatment (Q-lyrics): one song version whose lyrics emphasize curiosity, question-asking, and exploration.
o Control (A-lyrics): the same melody, length, and tempo, with lyrics emphasizing obtaining correct answers/solutions.
• Exposure: 6–8 minutes total (one full play + a short reprise), with identical volume and environment.
• Timing: Pre-survey → music exposure → 10–12-minute interactive task (pair + AI or peer + teacher mini-rounds) → Post-survey.
3) Measures (pre & post)
The existing 20-item instrument serves as the backbone; its items map into outcomes as follows:
(A) Primary outcome: Question-Ownership Index (QOI)
• Build from Q1–Q11 (dyads among Teacher/Student/Peers/AI).
• Create 2 subscales:
o Who owns the Question? Q-items where the left role asks (e.g., Student–Teacher means S asks, T answers).
o Who owns the Answer? Q-items where the right role provides answers.
• Scoring rule: For each item, higher agreement = stronger endorsement of the presented configuration. Code items where the question is owned by the learner as +1 direction; configurations where the answer is owned by others as −1 direction (reverse-code so that higher always = more question-ownership). Then average and rescale 0–100.
• Report QOI total and (optionally) two facets:
o Personal Q-ownership (student-as-questioner items: S–T, S–Peers, S–AI).
o External A-ownership (teacher/peers/AI-as-answer items; reverse-coded).
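As a minimal scoring sketch of the rule above (the ±1 direction vector is hypothetical; actual item polarity comes from the Q1–Q11 wording), the reverse-coding and 0–100 rescaling could be implemented as:

```python
import numpy as np

# Hypothetical item coding: +1 where the configuration gives the learner
# question-ownership, -1 where the answer is owned by others (Q1-Q11).
ITEM_DIRECTION = np.array([+1, -1, +1, -1, +1, +1, -1, +1, -1, +1, -1])

def qoi(responses, low=1, high=5):
    """Question-Ownership Index on a 0-100 scale.

    responses: Likert ratings for Q1-Q11 (one row per participant).
    Items with direction -1 are reverse-coded so that higher always
    means more question-ownership; the item mean is then rescaled.
    """
    r = np.asarray(responses, dtype=float)
    coded = np.where(ITEM_DIRECTION == 1, r, (low + high) - r)
    return (coded.mean(axis=-1) - low) / (high - low) * 100.0

print(qoi([5, 1, 5, 1, 5, 5, 1, 5, 1, 5, 1]))  # all items maximally Q-owned -> 100.0
```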
(B) Affective & self-efficacy outcomes
• Excitement (Q18), Confidence (Q19), Anxiety (Q20, reverse); rescale 0–100.
(C) Manipulation checks (post only; add 3 quick items)
• “The song made me want to formulate questions.”
• “The song made me focus on getting correct answers.” (reverse)
• “During the activity I generated new questions.”
(Use 5-point Likert; 0–100 rescale.)
Demographics/covariates (pre only): degree (U/P), major, gender, university, GPA.
The exact wording of Q1–Q20 from the original instrument sheet is retained to preserve validity.
Intervention Start Date
2025-11-15
Intervention End Date
2026-04-15

Primary Outcomes

Primary Outcomes (end points)
Primary Outcomes of the Study
1. Question–Ownership Index (QOI)

Core construct:
Measures the extent to which students perceive and internalize the act of questioning (vs. answering) as their own role in learning interactions.

Derived from:
Survey items Q1–Q11 in Survey Design_QBL_english.docx, which assess who “owns” the question or the answer across the dyads:

Student–Teacher

Student–Peer

Student–AI

Operationalization:

Each item is rated on a Likert scale (e.g., 1–5 or 1–7).

Items where the student is the question-asker (question ownership) are coded in the positive direction (+1).

Items where the student mainly provides or seeks the answer (answer ownership) are reverse-coded (–1).

The mean composite score (0–100) represents the Question–Ownership Index (QOI).

Higher scores → greater internalization of question-ownership.

Lower scores → more answer-ownership orientation.

Hypothesis (H1):

Students exposed to the Question-Lyric (The Why Inside / 探索之歌) will show a higher increase in QOI (Pre → Post) compared to those exposed to the Answer-Lyric (The Truth I Hold / 掌握之歌).

2. Emotional Activation toward Learning (EAL)

Core construct:
Captures the affective response to the learning process — curiosity, excitement, and anxiety control.

Derived from:
Items Q18–Q20:

Q18: Excitement toward learning (positive activation)

Q19: Confidence in learning process (positive control)

Q20: Anxiety during learning (negative, reverse-coded)

Operationalization:

Compute standardized composite:

EAL = Excitement + Confidence − Anxiety

Rescale to 0–100 for interpretability.
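A sketch of the composite, assuming a 5-point Likert scale: each item is rescaled to 0–100, the raw composite (which then spans −100 to +200) is min–max mapped back onto 0–100. The final mapping is an assumed convention; the registration only says "rescale to 0–100".

```python
def rescale(x, low=1, high=5):
    # Map a Likert rating onto 0-100.
    return (x - low) / (high - low) * 100.0

def eal(excitement, confidence, anxiety, low=1, high=5):
    """Emotional Activation toward Learning on a 0-100 scale.

    Anxiety (Q20) enters negatively: EAL = Excitement + Confidence - Anxiety.
    The raw composite spans -100..+200 after item rescaling, so it is
    min-max mapped onto 0-100 (an assumed convention).
    """
    raw = (rescale(excitement, low, high)
           + rescale(confidence, low, high)
           - rescale(anxiety, low, high))
    return (raw + 100.0) / 300.0 * 100.0

print(eal(5, 5, 1))  # maximum activation, minimum anxiety -> 100.0
```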

Hypothesis (H2):

Students hearing the Question-Lyric song will report higher EAL (more curiosity, confidence, and less anxiety) than those hearing the Answer-Lyric version.
Primary Outcomes (explanation)

Secondary Outcomes

Secondary Outcomes (end points)
1. Learning Engagement Behavior (LEB)

Conceptual definition:
Observable participation in the learning process, reflected by question generation and discussion involvement.

Measurement source:

Count of student-generated questions during the interactive task phase.

Optionally supplemented by peer/teacher observation or AI log data.

Method:

Record and code the number and complexity level of questions per student.

Standardize to a 0–100 scale for comparison.

Timing: During intervention task.

Expected direction: Higher in Question-Lyric group.

2. Perceived Creativity (PC)

Conceptual definition:
Self-assessed capacity to think divergently, connect ideas, or produce novel thoughts during the learning task.

Measurement source:

Two post-task survey items such as:

“I felt creative while engaging in the discussion.”

“This activity helped me think in new ways.”

Method:

Average Likert responses (1–5) → rescale to 0–100.

Timing: Post-test only.

Expected direction: Higher in Question-Lyric group.

3. Manipulation Check (MC)

Conceptual definition:
Verifies whether the music successfully induced the intended learning frame (curiosity vs. certainty).

Measurement source: Three short post-intervention items, e.g.:

“The song made me want to ask more questions.”

“The song made me focus on getting correct answers.” (reverse-coded)

“The song encouraged me to explore possibilities.”

Method:

Compute mean (0–100).

Used for validity verification, not hypothesis testing.

Timing: Post-test only.

Expected direction: Higher curiosity orientation in Question-Lyric group; higher certainty orientation in Answer-Lyric group.

4. Knowledge Retention (KR) [Optional Extension]

Conceptual definition:
Short-term recall or understanding of content discussed during the interactive phase.

Measurement source:

Two to three factual or conceptual questions based on the discussion content.

Method:

Score = % correct (0–100).

This tests whether question-ownership interferes with or enhances recall.

Timing: Immediately post-task (optional follow-up after 24 hours).

Expected direction: Uncertain (exploratory; may favor either condition).
Secondary Outcomes (explanation)

Experimental Design

Experimental Design
Design overview

Participants: ~60 undergraduates + 10 postgraduates currently taught by the investigator (n ≈ 70).

Arms (between-subjects RCT):

Treatment (Q-lyrics): one song version whose lyrics emphasize curiosity, question-asking, and exploration.

Control (A-lyrics): the same melody, length, and tempo, with lyrics emphasizing obtaining correct answers/solutions.

Exposure: 6–8 minutes total (one full play + short reprise), identical volume and environment.

Timing: Pre-survey → music exposure → 10–12-minute interactive task (pair + AI or peer + teacher mini-rounds) → Post-survey.
Experimental Design Details
Not available
Randomization Method
Randomization Method
1. Unit of Randomization

Individual student (not class or group).
Each participant is randomly assigned to either:

Treatment (Q-lyrics): music with question-oriented lyrics, or

Control (A-lyrics): same song melody and tempo but answer-oriented lyrics.

2. Stratified Randomization

Because the sample includes both undergraduate and postgraduate students and a likely gender mix, randomization is stratified to maintain balance across these strata:

Strata variables: Degree level (U/P) × Gender (M/F).

Within each stratum, randomize approximately half to each arm.
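The stratified scheme above can be sketched as follows (the field names 'degree' and 'gender' and the dict-based participant records are illustrative):

```python
import random

def stratified_assign(participants, seed=2025):
    """Stratified randomization: within each Degree x Gender stratum,
    assign about half of the students to each arm.

    participants: list of dicts with 'id', 'degree' ('U'/'P'), 'gender' ('M'/'F').
    Returns {id: arm}.
    """
    rng = random.Random(seed)
    strata = {}
    for p in participants:
        strata.setdefault((p['degree'], p['gender']), []).append(p['id'])
    assignment = {}
    for ids in strata.values():
        half = len(ids) // 2
        arms = ['Q-lyrics'] * half + ['A-lyrics'] * (len(ids) - half)
        rng.shuffle(arms)  # random order; odd strata give A-lyrics the extra slot
        for pid, arm in zip(ids, arms):
            assignment[pid] = arm
    return assignment
```

Seeding the generator makes the allocation reproducible for the registry record.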
Randomization Unit
1. Unit of Randomization: The Individual Student

Each student (not the group, class, or pair) is the randomized unit.

Every student is randomly assigned to one of two experimental arms:

Treatment: listens to the Q-lyrics (question-oriented) version of the song.

Control: listens to the A-lyrics (answer-oriented) version of the song.

Randomization ensures that, before exposure, students in both groups are comparable on average (e.g., baseline curiosity, gender, degree, GPA).
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
Several hundred students
Sample size: planned number of observations
The survey is planned to yield thousands of observations across the country
Sample size (or number of clusters) by treatment arms
None
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
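As a hedged sketch, assuming the pilot sample of n ≈ 70 split evenly across two arms, two-sided α = .05, and 80% power, the minimum detectable standardized effect under a normal approximation is roughly:

```python
from math import sqrt
from statistics import NormalDist

def mde_two_arm(n_per_arm, alpha=0.05, power=0.80):
    """Normal-approximation minimum detectable effect (Cohen's d)
    for a two-arm, individually randomized trial with equal arms."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2.0 / n_per_arm)

print(f"{mde_two_arm(35):.2f} SD")  # ~0.67 SD with n = 70 split evenly
```

A larger sample (e.g., the N = 300 design) would shrink the detectable effect accordingly.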
IRB

Institutional Review Boards (IRBs)

IRB Name
Liaoning University
IRB Approval Date
2024-06-06
IRB Approval Number
N/A