Hoovered up as a data point: Exploring Privacy Behaviours, Awareness, and Concerns Among UK Users of LLM-based Conversational Agents
Authors: Lisa Mekioussa Malki (University College London), Akhil Polamarasetty (University College London), Majid Hatamian (Google), Mark Warner (University College London), Enrico Costanza (University College London)
Volume: 2025
Issue: 4
Pages: 838–860
DOI: https://doi.org/10.56553/popets-2025-0160
Abstract: Large Language Models (LLMs) are widely used in conversational agents (CAs) due to their ability to generate coherent and human-like text. However, their deployment raises significant privacy concerns, as users often share sensitive data in prompts. This data can be used to train the underlying LLM, introducing memorisation risks and challenging users' right to be forgotten. While these issues have been explored from a technical standpoint, little is known about how users perceive and navigate privacy issues in their day-to-day use of LLM-based CAs. To address this research gap, we conducted a survey of UK-based CA users (n=211) that focused on their privacy behaviours, self-disclosure boundaries, concerns, and awareness of LLM-specific privacy issues. We found that engagement with protective behaviours was low overall, and that many participants held inaccurate beliefs about the effects of deleting data and opting out of model training. Although participants were generally reluctant to share sensitive information during interactions, we identified several challenges to limiting self-disclosure in practice, such as balancing privacy with app utility. Lastly, we observed a nuanced relationship between privacy awareness and concern, and identified significant demographic effects. We propose design avenues for privacy-supportive tools, and discuss the implications of our work for regulation and governance.
Keywords: Large Language Models (LLMs), AI, conversational interfaces, chatbots, usable privacy, human-computer interaction
Copyright in PoPETs articles is held by their authors. This article is published under a Creative Commons Attribution 4.0 license.
