Abstract
This project examines how individuals in low- and middle-income countries perceive and respond to the rapid diffusion of generative artificial intelligence across economic, social, and governance domains. While generative AI offers substantial potential productivity gains, uncertainty about its implications for employment, privacy, fairness, and global power dynamics may shape adoption patterns. The uncertainty and risks associated with adoption also raise concerns about data governance, the unequal global distribution of AI capabilities, and the appropriate role of governments in regulating or promoting AI.
To identify the mechanisms shaping adoption, beliefs, and governance preferences, we conduct a randomized survey experiment with participants across diverse demographic and occupational groups. Respondents are randomly assigned to one of six informational treatments: (1) practical instruction on AI use and prompt engineering; (2) labor-market disruption risks; (3) labor-market opportunities; (4) algorithmic bias; (5) privacy concerns; and (6) geopolitical competition and global inequality in AI capacity. The experiment measures effects on willingness to reskill, willingness to use and invest in AI, peer contact and social effects, and willingness to supply data. It also captures policy preferences, including attitudes toward AI-related redistribution, desired government promotion and regulation of AI, the use of AI in allocation decisions, and preferences over domestic versus international AI development and governance. These outcomes allow us to assess not only behavioral adjustments but also how individuals think governments should respond to the opportunities and risks of AI.
The study aims to provide causal evidence on the belief channels shaping AI adoption, skill adaptation, and support for national and global AI governance. The results will inform policy interventions aimed at expanding access to AI’s benefits while addressing emerging socioeconomic, ethical, and geopolitical concerns.