Understanding Users’ Privacy Perceptions Towards LLM’s RAG-based Memory

  • Shuning Zhang
  • Rongjun Ma
  • Ying Ma
  • Shixuan Li
  • Yiqun Xu
  • Xin Yi*
  • Hewu Li
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review


Abstract

Large Language Models (LLMs) are increasingly integrating memory functionalities to provide personalized and context-aware interactions. However, user understanding, practices and expectations regarding these memory systems are not yet well understood. This paper presents a thematic analysis of semi-structured interviews with 18 users to explore their mental models of LLM’s Retrieval Augmented Generation (RAG)-based memory, current usage practices, perceived benefits and drawbacks, privacy concerns and expectations for future memory systems. Our findings reveal diverse and often incomplete mental models of how memory operates. While users appreciate the potential for enhanced personalization and efficiency, significant concerns exist regarding privacy, control and the accuracy of remembered information. Users express a desire for granular control over memory generation, management, usage and updating, including clear mechanisms for reviewing, editing, deleting and categorizing memories, as well as transparent insight into how memories and inferred information are used. We discuss design implications for creating more user-centric, transparent, and trustworthy LLM memory systems.

Original language: English
Title of host publication: HAIPS 2025 - Proceedings of the 1st Workshop on Human-Centered AI Privacy and Security, Co-located with CCS 2025
Editors: Tianshi Li, Toby Jia-Jun Li, Yaxing Yao, Sauvik Das
Publisher: ACM
Pages: 10-19
Number of pages: 10
ISBN (Electronic): 9798400719059
DOIs
Publication status: Published - 17 Nov 2025
MoE publication type: A4 Conference publication
Event: Workshop on Human-Centered AI Privacy and Security - Taipei, Taiwan, Republic of China
Duration: 13 Oct 2025 - 17 Oct 2025
Conference number: 1

Workshop

Workshop: Workshop on Human-Centered AI Privacy and Security
Abbreviated title: HAIPS
Country/Territory: Taiwan, Republic of China
City: Taipei
Period: 13/10/2025 - 17/10/2025

Keywords

  • Large Language Model
  • Memory
  • Personalization
  • Privacy Perception
  • Trade-offs

