
Fields Changed

Registration

Field: Abstract

Before:
Our study seeks to investigate the impact of social media, specifically recommendation algorithms and digital addiction, on the human capital accumulation of undergraduate students. The rise of short-video platforms, designed with features like 'infinite scroll' and 'dopamine-driven feedback loops', makes disengagement difficult and has been linked to impairments in attention, self-control, and executive function. This trend coincides with a global rise in mental health challenges, raising urgent questions about the psychological and behavioral impacts of these platforms. Our study aims to explore the mechanisms through which these factors affect key outcomes, including mental health, academic performance, final grades, in-class attention, sleeping time, and peer network formation. The study will also explore strategies to mitigate potential negative effects, such as limiting algorithm-driven content and managing screen time. We plan to conduct a multi-wave survey and a randomized experiment. Our target is to recruit roughly 900 undergraduate students in their first, second, and third years, who will participate voluntarily following an initial screening. The experiment will run for four weeks immediately preceding the university's final exam week. The data collection will follow a structured timeline:
1) Baseline Survey 1: Late October
2) Baseline Survey 2: Early November
3) Follow-up Survey: Mid-December
To monitor digital usage, students will be incentivized to upload screenshots or reports detailing their time use for specific applications. In addition to survey data, we will utilize administrative data from the university to analyze students' academic performance, class attendance, mental health, and other indicators of human capital.

After:
Our study seeks to investigate the impact of social media, specifically recommendation algorithms and digital addiction, on the human capital accumulation of undergraduate students. The rise of short-video platforms, designed with features like 'infinite scroll' and 'dopamine-driven feedback loops', makes disengagement difficult and has been linked to impairments in attention, self-control, and executive function. This trend coincides with a global rise in mental health challenges, raising urgent questions about the psychological and behavioral impacts of these platforms. Our study aims to explore the mechanisms through which these factors affect key outcomes, including mental health, academic performance, final grades, in-class attention, sleeping time, and peer network formation. The study will also explore strategies to mitigate potential negative effects, such as limiting algorithm-driven content and managing screen time. We plan to conduct a multi-wave survey and a randomized experiment. Our target is to recruit roughly 900 undergraduate students in their first, second, and third years, who will participate voluntarily following an initial screening. The experiment will run for four weeks immediately preceding the university's final exam week. The data collection will follow a structured timeline:
1) Baseline Survey 1: Early November
2) Baseline Survey 2: Early November
3) Follow-up Survey 1: Mid/Late December
4) Follow-up Survey 2: Early January
To monitor digital usage, students will be incentivized to upload screenshots or reports detailing their time use for specific applications. In addition to survey data, we will utilize administrative data from the university to analyze students' academic performance, class attendance, mental health, and other indicators of human capital.
Field: Trial Start Date
Before: October 30, 2025
After: November 05, 2025
Field: Last Published
Before: October 27, 2025 09:22 AM
After: October 31, 2025 11:57 AM
Field: Intervention (Public)

Before:
Our study aims to explore the mechanisms through which these factors affect key outcomes, including mental health, academic performance, final grades, in-class attention, sleeping time, and peer network formation. The study will also explore strategies to mitigate potential negative effects, such as limiting algorithm-driven content and managing screen time. To monitor digital usage, students will be incentivized to upload screenshots or reports detailing their time use for specific applications. In addition to survey data, we will utilize administrative data from the university to analyze students' academic performance, class attendance, and other indicators of human capital. We plan to conduct a multi-wave survey and a randomized experiment. Our target is to recruit roughly 900 undergraduate students in their first, second, and third years, who will participate voluntarily following an initial screening. The experiment will run for four weeks immediately preceding the university's final exam week. The data collection will follow a structured timeline:
1) Baseline Survey 1: Mid-October
2) Baseline Survey 2: Early November
3) Follow-up Survey: Mid-December
These surveys and experiments will help assess the causal effects of algorithms and digital addiction on student behavior, test our hypotheses on their impact on human capital, and identify promising interventions to address this issue.

After:
Our study aims to explore the mechanisms through which these factors affect key outcomes, including mental health, academic performance, final grades, in-class attention, sleeping time, and peer network formation. The study will also explore strategies to mitigate potential negative effects, such as limiting algorithm-driven content and managing screen time. To monitor digital usage, students will be incentivized to upload screenshots or reports detailing their time use for specific applications. In addition to survey data, we will utilize administrative data from the university to analyze students' academic performance, class attendance, and other indicators of human capital. We plan to conduct a multi-wave survey and a randomized experiment. Our target is to recruit roughly 900 undergraduate students in their first, second, and third years, who will participate voluntarily following an initial screening. The experiment will run for four weeks immediately preceding the university's final exam week. The data collection will follow a structured timeline:
1) Baseline Survey 1: Early November
2) Baseline Survey 2: Early November
3) Follow-up Survey 1: Mid/Late December
4) Follow-up Survey 2: Early January
These surveys and experiments will help assess the causal effects of algorithms and digital addiction on student behavior, test our hypotheses on their impact on human capital, and identify promising interventions to address this issue. To minimize monitoring effects, participants with Android phones will upload a screen-time screenshot each week, whereas iPhone participants will upload a single screenshot at the end of the experiment's final week. While weekly uploads may themselves induce monitoring effects, the one-time upload lets us assess, and rule out, whether any observed changes are driven by being monitored rather than by the intervention.
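Since phone operating system is not randomized, one way to probe for monitoring effects along the lines described above is to compare the treatment-control outcome gap across the two upload regimes. A minimal sketch, assuming hypothetical column names (y for an outcome, treat for any treatment arm, android for the weekly-upload regime) in an endline dataset; the registration does not specify this analysis:

```python
# Minimal sketch (hypothetical column names): compare the treatment-control
# outcome gap under weekly uploads (Android) vs a single end-of-study
# upload (iPhone). Similar gaps would suggest effects are not driven by
# being monitored; OS is not randomized, so this check is only suggestive.
import pandas as pd

df = pd.read_csv("endline.csv")  # hypothetical: y, treat (0/1), android (0/1)

gap = (
    df.groupby("android")
      .apply(lambda g: g.loc[g.treat == 1, "y"].mean()
                     - g.loc[g.treat == 0, "y"].mean())
)
print("Treatment-control gap by upload regime (0 = iPhone, 1 = Android):")
print(gap)
```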
Field: Intervention Start Date
Before: November 03, 2025
After: November 15, 2025
Field: Intervention End Date
Before: December 31, 2025
After: December 16, 2025
Field: Experimental Design (Public)

Before:
This study addresses a critical and timely issue at the intersection of technology, education, and public health. In recent years, the rapid proliferation of digital platforms, particularly those driven by recommendation algorithms and designed for high engagement, has raised significant concerns among educators, policymakers, and the general public. These concerns are particularly acute for young adults, such as undergraduate students, who are at a crucial stage of human capital development. As noted in recent research, it is difficult to distinguish whether digital platforms cause these issues or whether individuals predisposed to these challenges are simply more likely to use them heavily. Our study's randomized controlled trial (RCT) design is specifically structured to overcome this challenge of self-selection. By randomly assigning interventions, we can isolate the causal effects of recommendation algorithms and screen time on student outcomes, providing rigorous, credible evidence that is currently lacking. This research moves beyond simply identifying a problem to exploring actionable solutions. By deconstructing the digital experience into its core components, algorithmic content recommendation and duration of use, our six-arm experimental design allows us to pinpoint the specific mechanisms driving the observed effects. The findings will provide crucial insights into whether the type of content (algorithm-driven) or the sheer volume of exposure (time) is more detrimental. This nuanced understanding is essential for developing effective interventions, whether they be technological (e.g., app design changes), educational (e.g., digital literacy programs for students), or institutional (e.g., university wellness policies).

After:
Identical to the previous version, except that the design is now described as a four-arm (rather than six-arm) experimental design.
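The four arms described in the revised registration form a 2x2 factorial (algorithm off x time limit), which is what lets the design separate the content mechanism from the exposure-time mechanism. A minimal sketch of how the two effects could be disentangled, assuming hypothetical 0/1 indicators algo_off and time_limit and an outcome y; the registration does not commit to this exact specification:

```python
# Minimal sketch (hypothetical column names): the four arms map onto a
# 2x2 factorial, so a saturated OLS separates the two mechanisms.
#   control                  -> algo_off=0, time_limit=0
#   algorithm off only       -> algo_off=1, time_limit=0
#   time limit only          -> algo_off=0, time_limit=1
#   time limit & algorithm off -> algo_off=1, time_limit=1
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("endline.csv")  # hypothetical: y, algo_off, time_limit

m = smf.ols("y ~ algo_off * time_limit", data=df).fit(cov_type="HC1")
# Main effects give each mechanism alone; the interaction tests whether
# removing the algorithm and capping time reinforce or offset each other.
print(m.params[["algo_off", "time_limit", "algo_off:time_limit"]])
```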
Field: Intervention (Hidden)

Before:
We will randomly assign the students to one of six groups:
1. Control Group: Students receive no intervention and continue their digital usage as normal.
2. Mandatory Limit & No Algorithm: Students will have app algorithm recommendations turned off and will be subject to a mandatory time limit on app usage.
3. No Algorithm Only: Students will have app algorithm recommendations turned off but will have no time limit on app usage.
4. Mandatory Limit Only: Students will not have app algorithm recommendations turned off but will be subject to a mandatory time limit on app usage.
5. Voluntary Limit & No Algorithm: Students will have app algorithm recommendations turned off and will be given the freedom to choose their own time limit for app usage.
6. Voluntary Limit Only: Students will not have app algorithm recommendations turned off but will be given the freedom to choose their own time limit for app usage.

After:
We will randomly assign the students to one of four groups:
1. Control Group: Students receive no intervention and continue their digital usage as normal.
2. App Time Limit & Algorithm Off: Students will have app algorithm recommendations turned off and will be subject to a mandatory time limit on app usage.
3. Algorithm Off Only: Students will have app algorithm recommendations turned off but will have no time limit on app usage.
4. App Time Limit Only: Students will not have app algorithm recommendations turned off but will be subject to a mandatory time limit on app usage.
To minimize monitoring effects, participants with Android phones will upload a screen-time screenshot each week, whereas iPhone participants will upload a single screenshot at the end of the experiment's final week. While weekly uploads may themselves induce monitoring effects, the one-time upload lets us assess, and rule out, whether any observed changes are driven by being monitored rather than by the intervention.
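For reference, assignment to the four arms could be implemented as follows. This is a minimal sketch under stated assumptions, not the registered procedure: it assumes a hypothetical roster file with student_id and cohort_year columns and stratifies by cohort year so the three recruited class years stay balanced across arms.

```python
# Minimal sketch (hypothetical inputs): assign students to the four arms,
# stratifying by cohort year so arms stay balanced within each year.
# The registration does not specify the actual procedure; illustrative only.
import numpy as np
import pandas as pd

ARMS = ["control", "time_limit_and_algo_off", "algo_off_only", "time_limit_only"]

roster = pd.read_csv("roster.csv")  # hypothetical: student_id, cohort_year
rng = np.random.default_rng(20251115)  # fixed seed for a reproducible draw

def assign(stratum: pd.DataFrame) -> pd.Series:
    # Shuffle students within the stratum, then deal arms out in rotation.
    order = rng.permutation(len(stratum))
    return pd.Series([ARMS[i % len(ARMS)] for i in order], index=stratum.index)

roster["arm"] = roster.groupby("cohort_year", group_keys=False).apply(assign)
print(roster.groupby(["cohort_year", "arm"]).size())  # check balance
```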