Abstract
This study investigates how artificial intelligence (AI) interacts with human learning proximity: specifically, whether AI-driven instruction is more compatible with Question (Q) or Answer (A) proximity orientations of learners. Building on proximity theory and cognitive load frameworks, the experiment examines whether AI aligns more naturally with the learner’s tendency to construct knowledge through questions (Q-proximity) or to consolidate understanding through answers (A-proximity). The study employs a randomized 2×2 factorial design crossing (1) proximity orientation (Q vs A) with (2) AI compatibility level (High vs Low). Participants complete microlearning tasks within an AI-assisted learning environment in which the AI either mirrors or mismatches their proximity orientation. The primary outcomes are learning engagement and learning performance, with arousal and uncertainty reduction examined as mediators. The results will clarify the cognitive–affective mechanisms by which AI complements or conflicts with human learning orientations, offering empirical guidance for the design of adaptive and personalized AI learning systems.
Primary Research Question
Is AI-driven instruction more compatible with Question-oriented (Q-proximity) or Answer-oriented (A-proximity) learners, and how does this compatibility influence engagement, arousal, and performance?
Hypotheses
H1: AI–learner compatibility (matching AI mode with learner proximity type) leads to higher engagement and performance than incompatibility.
H2: AI’s answer-generative nature makes it more compatible with A-proximity learners, improving near-term learning outcomes.
H3: Q-proximity learners benefit more when AI adopts a Socratic or exploratory questioning mode, supporting curiosity and transfer.
H4: The effects of compatibility on performance are mediated by arousal and uncertainty reduction.
Experimental Design
A between-subjects randomized controlled trial (2×2 factorial):
Factor 1: Learner proximity orientation (Q-proximity vs A-proximity)
Factor 2: AI compatibility (High: AI mirrors learner orientation; Low: AI mismatches it)
Each participant engages in an AI-guided learning module (e.g., concept learning in economics or statistics).
AI responses are scripted to simulate either Socratic questioning (Q-compatible) or direct answer tutoring (A-compatible).
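To make the scripting concrete, here is a minimal illustrative sketch (in Python) of how the two AI response modes could be configured; the mode names and prompt wording are assumptions for illustration, not the study’s actual scripts.

# Hypothetical configuration of the two scripted tutoring modes (illustrative wording only).
AI_MODES = {
    "socratic": {  # Q-compatible: exploratory, question-first tutoring
        "system_prompt": (
            "Guide the learner with open-ended questions. Do not state the answer "
            "directly; prompt the learner to reason through each step."
        ),
    },
    "direct_answer": {  # A-compatible: structured, answer-first tutoring
        "system_prompt": (
            "State the correct answer first, then give a short, structured "
            "explanation of the key steps."
        ),
    },
}

In the mismatched arms, the mode opposite to the learner’s proximity orientation would simply be selected.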
Treatment Arms
Q-Compatible AI: AI adopts a questioning style (Socratic prompts) aligned with Q-oriented learners.
Q-Incompatible AI: AI provides direct answers to Q-oriented learners.
A-Compatible AI: AI provides structured answers aligned with A-oriented learners.
A-Incompatible AI: AI prompts learners with open questions, mismatching their A-orientation.
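As a minimal sketch of how participants could be allocated to these four arms under the computerized block randomization specified in the registry details (arm labels, block size, and seed are assumptions):

import random

ARMS = ["Q_compatible", "Q_incompatible", "A_compatible", "A_incompatible"]

def block_randomize(n_participants, block_size=4, seed=2024):
    """Assign participants to arms in shuffled blocks so the four cells stay balanced."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ARMS * (block_size // len(ARMS))  # one copy of each arm per block
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

allocation = block_randomize(300)  # N = 300 -> 75 per cell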
Primary Outcomes
Learning engagement: Behavioral (interaction rate), affective (interest/arousal), and cognitive (depth of inquiry); an illustrative scoring sketch follows this list.
Learning performance: Immediate quiz, far-transfer, and one-week retention.
Mediators: Arousal and uncertainty reduction.
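As one illustrative possibility for combining the engagement facets, the sketch below standardizes the three measures and averages them into a single composite; the column names and the equal-weight composite are assumptions, not a pre-specified scoring rule.

import pandas as pd

def engagement_composite(df: pd.DataFrame) -> pd.Series:
    """Z-score the three hypothetical engagement facets and average them."""
    facets = ["interaction_rate", "interest_arousal", "inquiry_depth"]
    z = (df[facets] - df[facets].mean()) / df[facets].std(ddof=0)
    return z.mean(axis=1)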
Expected Contribution
This study contributes to understanding human–AI compatibility in educational contexts. By isolating whether AI naturally favors answer-based or question-based learners, it informs the design of adaptive pedagogical AI capable of shifting between inquiry and explanation modes to optimize learning engagement and performance.
Key Details for Registry Form
Experimental unit: Individual learner
Sample size: N = 300 (balanced across the four cells, i.e., 75 per cell)
Randomization: Computerized block randomization
Setting: AI-enabled online learning platform (simulated or live environment)
Analysis: GLM for the 2×2 factorial and SEM with serial mediation (Arousal → Uncertainty → Performance); an illustrative analysis sketch follows this list
Primary outcome timeframe: Immediate and one-week delayed post-tests
IRB: Liaoning University
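The following is a minimal illustrative analysis sketch, assuming hypothetical variable names (performance, arousal, uncertainty, compatibility, orientation) and a placeholder data file; it uses ordinary least squares as a simple stand-in for the pre-specified GLM and a regression-based approximation of the SEM mediation paths.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # placeholder path

# 2x2 factorial model: main effects and interaction of AI compatibility and proximity orientation.
factorial_model = smf.ols("performance ~ C(compatibility) * C(orientation)", data=df).fit()
print(factorial_model.summary())

# Regression-based serial mediation (Arousal -> Uncertainty -> Performance);
# a full SEM would estimate these paths jointly, ideally with bootstrapped indirect effects.
path_a = smf.ols("arousal ~ C(compatibility)", data=df).fit()
path_b = smf.ols("uncertainty ~ arousal + C(compatibility)", data=df).fit()
path_c = smf.ols("performance ~ uncertainty + arousal + C(compatibility)", data=df).fit()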