Understanding People’s Willingness to Participate in Human-LLM Conversational Interaction Research Studies - A Scenario Study
Abstract
As Large Language Models (LLMs) become integrated into human-subjects research, understanding how participants perceive and consent to data use is central to building transparent and trustworthy research practices. We conducted a scenario-based study in which 157 participants each evaluated 12 scenarios (sampled from a pool of 864), varying across seven parameters: topic sensitivity, LLM management, data anonymization, data retention, model training, additional data access, and additional consent. We find that the effects of these parameters on participants' willingness to participate were mediated by their perceptions of comfort, usefulness, safety, appropriateness, and trust. Anonymization and explicit consent increased willingness to participate, while longer retention periods and broader data access reduced it. Surprisingly, corporate LLM management (versus federal) did not reduce willingness, and data use for model training or personalization, despite the potential for data leakage, increased it. We conclude with design implications for consent processes, transparency mechanisms, and governance practices that align research with participant expectations.
Citation
Hanna Alzughbi, Jinkyung Katie Park, and Bart Knijnenburg. 2026. Understanding People’s Willingness to Participate in Human-LLM Conversational Interaction Research Studies - A Scenario Study. In International Journal of Human-Computer Studies (IJHCS). https://doi.org/10.2139/ssrn.6235653