Scientists as Citizens: Towards Human-First Network State

Last registered on February 06, 2025

Pre-Trial

Trial Information

General Information

Title
Scientists as Citizens: Towards Human-First Network State
RCT ID
AEARCTR-0014840
Initial registration date
January 26, 2025

First published
January 27, 2025, 10:18 AM EST

Last updated
February 06, 2025, 2:23 PM EST

Locations

Region

Primary Investigator

Affiliation
KNOWDYN

Other Primary Investigator(s)

Additional Trial Information

Status
In development
Start date
2025-02-01
End date
2025-09-30
Secondary IDs
Prior work
This trial does not extend or rely on any prior RCTs.
Abstract
This randomized controlled trial, Scientists as Citizens: Towards Human-First Network State, explores how decentralized governance models, augmented by AI for knowledge discovery and synthesis, can empower scientists as digital citizens, knowledge creators, and decision-makers. The study addresses systemic challenges in global science, such as funding inequities, irreproducibility, and peer-review inefficiencies.

The trial engages a stratified random sample of 384 scientists from the ORCID database, distributed across three experimental groups to examine varying interaction models: individual engagement (non-cooperative), team-based collaboration (semi-cooperative), and AI-assisted collective interaction (human-first network state). Participants will engage in five structured scenarios representing real-world challenges: resource allocation, collaborative research, peer review, ethical dilemmas, and policy deliberation. AI agents will play a facilitative role in enhancing data analysis, decision-making, and policy evaluation processes.

Key metrics include collective intelligence, equity in governance, trust in AI systems, ethical alignment, and participant well-being. The trial hypothesizes that the human-first network state model will outperform non-cooperative and semi-cooperative models in fostering superior behavioral and decision-making outcomes, including trust, efficiency, and ethical alignment.

This research aims to provide actionable insights into how AI-augmented governance frameworks can redefine global scientific ecosystems, promoting equitable, sustainable, and impactful practices for academia and R&D communities. By analyzing scientists' behaviors as digital citizens, the trial seeks to validate the transformative potential of the human-first network state in creating a new paradigm for decentralized governance.

Registration Citation

Citation
Saqr, Khalid. 2025. "Scientists as Citizens: Towards Human-First Network State." AEA RCT Registry. February 06. https://doi.org/10.1257/rct.14840-1.1
Experimental Details

Interventions

Intervention(s)
The trial involves five distinct scenarios tailored to test the impact of decentralized governance augmented by AI on the behavior of scientists as digital citizens. These scenarios are designed to reflect real-world challenges and decision-making processes encountered by scientists in academia and research-based organizations. The interventions aim to bridge the gap between traditional governance models and innovative, AI-powered frameworks for knowledge management and policy implementation.
Intervention Start Date
2025-04-15
Intervention End Date
2025-07-31

Primary Outcomes

Primary Outcomes (end points)
The primary outcome variables in this experiment are designed to evaluate the transformative potential of the human-first network state in facilitating equitable, participatory, and efficient human-machine interactions. Drawing inspiration from leading social experiments in human-machine collaboration, the outcomes are defined as follows:

Collective Intelligence Enhancement (CIE):
Measures the improvement in group problem-solving capacity, informed by the synergy between human participants and AI agents. This includes indicators such as solution diversity, time-to-solution efficiency, and the quality of outputs in collaborative tasks.

Equity in Governance and Participation (EGP):
Assesses the fairness and inclusiveness of decision-making processes within the network state. Key metrics include the balance of resource distribution, representation of diverse viewpoints, and engagement levels across participant demographics.

Trust and Ethical Alignment with AI (TEA):
Evaluates participant trust in AI systems and their alignment with the ethical norms of the group. Metrics include trust scores derived from surveys, adherence to ethical guidelines, and the perceived transparency of AI-mediated processes.

Behavioral Adaptation and Agency (BAA):
Tracks shifts in individual and collective behavior when interacting with AI-driven governance mechanisms. Indicators include participant autonomy, acceptance of AI recommendations, and the evolution of decision-making strategies over time.

Satisfaction and Well-being (SWB):
Gauges participant satisfaction with the network state's processes and outcomes, alongside their overall sense of well-being. Metrics include self-reported satisfaction scores, stress levels, and qualitative reflections on their experiences.
Primary Outcomes (explanation)
The human-first network state will be constructed from a synthesis of behavioral, cognitive, and ethical dimensions derived from the primary variables. The outcome framework draws on rigorous methodologies used in seminal human-machine interaction studies, ensuring a comprehensive and reproducible approach.

Outcome Construction Framework
Behavioral Analytics:

Data Sources: Interaction logs, task completion records, and recorded deliberations.
Methodology: Behavioral patterns, such as response times, decision consistency, and collaboration efficiency, will be quantified using time-series analysis and latent profile modeling. Behavioral adaptations will be tracked across scenarios to identify trends in individual and collective agency.
Cognitive and Emotional Metrics:

Data Sources: Surveys (Likert-scale trust and satisfaction measures), biometric feedback (optional), and open-ended responses.
Methodology: Using a multilevel modeling approach, we will correlate cognitive variables (e.g., perceived fairness) with emotional states (e.g., stress, satisfaction). Cognitive load and decision quality will be analyzed to assess how participants process AI-provided insights.
AI-Human Synergy Analysis:

Data Sources: AI interaction data, including the frequency of AI suggestion acceptance, overrides, and modifications.
Methodology: Employ network analysis to map the flow of decisions influenced by AI agents. Metrics such as decision consensus rates and efficiency improvements will highlight the role of AI in enhancing collective intelligence.
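As a minimal illustration of how the acceptance and consensus metrics named above could be computed from interaction logs, the sketch below tallies an AI suggestion acceptance rate and a decision consensus rate. The record schema, labels, and function name are illustrative assumptions, not part of the registered protocol:

```python
from collections import Counter

def decision_metrics(log):
    """Summarize an AI-human decision flow from interaction records.

    Each record holds the participant's final decision and the AI agent's
    suggestion for that task. Returns the AI acceptance rate (decisions
    matching the suggestion) and the consensus rate (share of participants
    backing the modal decision).
    """
    accepted = sum(1 for e in log if e["decision"] == e["ai_suggestion"])
    votes = Counter(e["decision"] for e in log)
    modal_count = votes.most_common(1)[0][1]
    return accepted / len(log), modal_count / len(log)

# Hypothetical log: three participants follow the AI suggestion, one overrides it.
log = [
    {"participant": "p1", "ai_suggestion": "fund A", "decision": "fund A"},
    {"participant": "p2", "ai_suggestion": "fund A", "decision": "fund A"},
    {"participant": "p3", "ai_suggestion": "fund A", "decision": "fund A"},
    {"participant": "p4", "ai_suggestion": "fund A", "decision": "fund B"},
]
print(decision_metrics(log))  # (0.75, 0.75)
```

A full network analysis would additionally model who influences whom; these two rates are only the summary statistics the text names.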
Ethical Alignment:

Data Sources: Case study evaluations of ethical dilemmas and governance policies.
Methodology: Ethical decision patterns will be assessed using decision-tree modeling to identify alignment between participant choices and predefined ethical guidelines. The variability between individual and group-level responses will be analyzed for coherence and divergence.
Outcome Integration and Validation:

Synthesis Framework: A Bayesian data integration model will combine metrics from behavioral, cognitive, and emotional domains to construct a dynamic representation of the network state. This model will identify the key drivers of successful human-machine collaboration and test their generalizability across groups.
Reproducibility: Standardized procedures for data collection and analysis will ensure the framework can be replicated across other human-machine interaction experiments, extending the validity of the findings.

Secondary Outcomes

Secondary Outcomes (end points)
The secondary outcomes in this experiment focus on assessing the broader implications and ancillary effects of the human-first network state on participants’ cognitive, social, and technological interactions. These outcomes aim to capture latent variables and emergent behaviors that contribute to the robustness of the primary outcomes. Key secondary outcomes include:

Technological Adoption and Usability (TAU):
Evaluates the ease of use, adoption rates, and participant engagement with AI tools in the network state. Metrics include interaction frequency, system usability scores, and feedback on AI features.

Cultural and Interdisciplinary Collaboration (CIC):
Assesses the extent to which the network state fosters cross-disciplinary and cross-cultural collaboration among participants. Indicators include diversity in team contributions, interdisciplinary knowledge integration, and inclusivity of perspectives.

Resilience and Conflict Resolution (RCR):
Measures the network state’s capacity to mediate disputes and resolve conflicts among participants. Metrics include the resolution time for conflicts, the frequency of escalations, and satisfaction with conflict outcomes.

Knowledge Synthesis and Dissemination (KSD):
Captures the ability of the network state to generate, synthesize, and disseminate actionable insights. Metrics include the quality of synthesized outputs, participant knowledge retention, and the rate of sharing outputs within and outside the system.
Secondary Outcomes (explanation)
The secondary outcomes provide a complementary layer of analysis, contextualizing the primary outcomes and uncovering emergent patterns that may not be immediately apparent. These outcomes will be constructed through the following approaches:

Technological Adoption and Usability (TAU):
Construction Framework: Usability will be evaluated through a combination of self-reported usability scales (e.g., SUS), interaction logs, and task completion rates. By integrating these metrics, we aim to map the relationship between technological ease of use and participant engagement.
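Since the SUS is named as a candidate usability scale, its standard scoring rule can be sketched directly (odd-numbered items contribute the response minus one, even-numbered items five minus the response, and the total is scaled by 2.5 to a 0-100 range); the function name is illustrative:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses (0-100)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-indexed: even i = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([3] * 10))  # a uniformly neutral respondent scores 50.0
```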

Cultural and Interdisciplinary Collaboration (CIC):
Construction Framework: Data from team-based tasks and participant surveys will be analyzed using network theory to identify patterns of collaboration and inclusivity.

Resilience and Conflict Resolution (RCR):
Construction Framework: Conflict resolution will be measured by combining timestamps of escalation and resolution events with qualitative reviews of the process. The rate of successful resolution will be modeled using survival analysis to track the "lifespan" of conflicts.
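The survival-analysis step could be sketched with a minimal Kaplan-Meier estimator over escalation-to-resolution durations, treating still-open conflicts as censored observations. The data and function name here are illustrative, and a real analysis would use a dedicated library:

```python
def kaplan_meier(durations, resolved):
    """Kaplan-Meier survival curve for conflict lifespans.

    durations: time from escalation to resolution (or to last observation).
    resolved: True if the conflict ended (event), False if still open (censored).
    Returns (time, probability a conflict is still unresolved) at each event time.
    """
    n_at_risk = len(durations)
    curve, s = [], 1.0
    for t in sorted(set(durations)):
        events = sum(1 for d, r in zip(durations, resolved) if r and d == t)
        if events:
            s *= 1 - events / n_at_risk
            curve.append((t, s))
        n_at_risk -= sum(1 for d in durations if d == t)
    return curve

# Three resolved conflicts and one censored (still open when last observed at day 3):
print(kaplan_meier([2, 3, 3, 5], [True, True, False, True]))
```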

Knowledge Synthesis and Dissemination (KSD):
Construction Framework: Outputs from team tasks will be rated for novelty, practicality, and depth by independent reviewers. Knowledge retention will be assessed through follow-up surveys and task repetition experiments.
Innovative Angle: The use of semantic similarity analysis between AI-synthesized insights and participant-generated outputs will quantify the effectiveness of knowledge integration.
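As a sketch of the semantic-similarity step: a production system would likely compare text embeddings, but a bag-of-words cosine similarity illustrates the underlying measure. The tokenization (whitespace split) and example texts are illustrative assumptions:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity: 0.0 for disjoint texts, 1.0 for identical ones."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

ai_summary = "allocate funding to open peer review pilots"
participant_output = "pilot open peer review with allocated funding"
print(round(cosine_similarity(ai_summary, participant_output), 2))
```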

Experimental Design

Experimental Design
This trial is designed to investigate the role of AI-assisted governance systems in enhancing collaborative decision-making, ethical alignment, and equitable resource allocation within a human-first network state framework. Conducted entirely online, the study focuses on how scientists interact with AI agents and each other under varying levels of collaboration.

Study Overview
Participants are randomly assigned to one of three groups to explore different interaction models:

Individual Engagement (Non-Cooperative Model): Participants work independently to complete tasks without external collaboration or AI assistance.
Team-Based Collaboration (Semi-Cooperative Model): Participants form small teams to collaboratively complete tasks with minimal AI involvement.
AI-Assisted Collective Interaction (Human-First Network State Model): Participants work together as a single team, with AI agents facilitating communication, decision-making, and conflict resolution.
The trial comprises five progressive scenarios:

Resource Allocation and Prioritization
Collaborative Research Design
Peer Review and Ethical Decision-Making
Ethical Dilemmas and Governance Policies
Policy Deliberation and Voting
Key Outcomes
The primary outcomes include:

Collective Intelligence: Measuring the quality and efficiency of decision-making processes.
Equity in Governance: Evaluating fairness in resource distribution and participation.
Trust in AI: Assessing participants’ confidence in AI-assisted systems.
Ethical Alignment: Determining the alignment of decisions with shared ethical principles.
Secondary outcomes explore technological usability, interdisciplinary collaboration, and conflict resolution.

Public Accessibility
The trial focuses on building foundational insights into how AI-augmented governance can enhance equity and efficiency in scientific and organizational contexts. While specific data and findings will remain confidential until the trial concludes, the overall design and methodology are publicly available to encourage replication and further exploration in related studies.

All participant data will remain perpetually anonymized to ensure privacy and confidentiality. Results will be disseminated after trial completion to advance the understanding of human-machine interactions in governance contexts.
Experimental Design Details
Not available
Randomization Method
Randomization is done in two stages. First, a random number is assigned to every participant upon registration; then a digital lottery algorithm assigns participants to the three groups.
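A minimal sketch of such a two-stage assignment, assuming equally sized groups; the group labels, the fixed seed, and the specific lottery (ordering by draw and slicing into thirds) are illustrative, since the registration does not specify the algorithm:

```python
import random

def assign_groups(participant_ids, group_labels, seed=None):
    """Two-stage assignment: (1) give every participant a random draw on
    registration, (2) run a lottery that orders participants by their draw
    and deals them into equally sized groups."""
    rng = random.Random(seed)
    draws = {pid: rng.random() for pid in participant_ids}  # stage 1
    ordered = sorted(participant_ids, key=draws.get)        # stage 2
    size = len(ordered) // len(group_labels)
    return {g: ordered[i * size:(i + 1) * size] for i, g in enumerate(group_labels)}

groups = assign_groups(
    participant_ids=list(range(384)),
    group_labels=["individual", "human_teams", "human_ai_teams"],
    seed=7,  # fixed seed only for reproducibility of this sketch
)
print({g: len(members) for g, members in groups.items()})
# {'individual': 128, 'human_teams': 128, 'human_ai_teams': 128}
```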
Randomization Unit
Participant
Was the treatment clustered?
No

Experiment Characteristics

Sample size: planned number of clusters
384 participants.
Sample size: planned number of observations
384 participants
Sample size (or number of clusters) by treatment arms
128 participants for the individual group, 128 for the human-teams group, and 128 for the human-AI teams group
Minimum detectable effect size for main outcomes (accounting for sample design and clustering)
Key inputs for the power calculation:

Sample size: 384 total participants; 128 per group.
Confidence level: 95% (α = 0.05, two-tailed).
Power: 80% (β = 0.20).
Clustering and design effect: assuming minimal clustering due to the random, independent assignment of participants, the design effect (DE) is set to 1; if significant clustering within groups is expected, DE > 1 should be factored in.
Outcome measurement: continuous outcomes (e.g., trust scores, equity measures) and proportions (e.g., collaboration success rate).

The minimum detectable effect size (MDE) quantifies the smallest effect (difference between group means or proportions) that can be reliably detected with the given sample size, power, and confidence level. Calculations are done separately for continuous and proportion-based outcomes:

Continuous outcomes: detectable mean difference of 0.198 units, assuming a standard deviation of 0.8.
Proportion-based outcomes: detectable difference of 12.2 percentage points, assuming a baseline proportion of 50%.
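The figures can be approximately reproduced with the standard normal-approximation MDE formulas, sketched here using only the Python standard library. The registered 0.198 appears to correspond to the per-group (one-sample) form with sd = 0.8 and n = 128; the conventional two-group form gives a larger detectable difference, and the 12.2-percentage-point figure is close to the per-group proportion form (about 12.4pp). Which form was used is an assumption on our part:

```python
from statistics import NormalDist

def mde_continuous(n_per_group, sd, alpha=0.05, power=0.80, two_sample=True):
    """Minimum detectable mean difference (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance_factor = (2 / n_per_group) if two_sample else (1 / n_per_group)
    return z * sd * variance_factor ** 0.5

def mde_proportion(n_per_group, baseline_p, alpha=0.05, power=0.80, two_sample=True):
    """Minimum detectable difference in proportions around a baseline."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = baseline_p * (1 - baseline_p) / n_per_group
    return z * ((2 * var) if two_sample else var) ** 0.5

print(round(mde_continuous(128, 0.8, two_sample=False), 3))  # 0.198, the registered figure
print(round(mde_continuous(128, 0.8), 3))                    # 0.28 for a two-group contrast
print(round(mde_proportion(128, 0.5, two_sample=False), 3))  # 0.124, near the 12.2pp figure
```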
IRB

Institutional Review Boards (IRBs)

IRB Name
KNOWDYN Review Board
IRB Approval Date
2024-11-01
IRB Approval Number
N/A