Abstract

Uncertainty-aware user modeling is crucial for designing AI systems that adapt to users in real-time while addressing privacy concerns. This paper proposes a novel framework for privacy-preserving probabilistic user modeling that integrates uncertainty quantification and differential privacy (DP). Building on neural processes (NPs), a scalable latent variable probabilistic model, we enable meta-learning for user behaviour prediction under privacy constraints. By employing differentially private stochastic gradient descent (DP-SGD), our method achieves rigorous privacy guarantees while preserving predictive accuracy. Unlike prior work, which primarily addresses privacy-preserving learning for convex or smooth functions, we establish theoretical guarantees for non-convex objectives, focusing on the utility-privacy trade-offs inherent in uncertainty-aware models. Through extensive experiments, we demonstrate that our approach achieves competitive accuracy under stringent privacy budgets. Our results showcase the potential of privacy-preserving probabilistic user models to enable trustworthy AI systems in real-world interactive applications.
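The DP-SGD mechanism mentioned in the abstract clips each per-example gradient and adds calibrated Gaussian noise before the parameter update. The sketch below illustrates that idea on a toy linear model; it is not the paper's implementation, and the names `clip_norm` and `noise_multiplier` are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update for a linear model with squared loss.

    Per-example gradients are clipped to L2 norm <= clip_norm, summed,
    perturbed with Gaussian noise, averaged, and applied as an SGD step.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for xi, yi in zip(X, y):
        g = 2 * (xi @ w - yi) * xi              # per-example gradient
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_norm))  # clip to clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g_priv = (np.sum(clipped, axis=0) + noise) / len(X)  # noisy mean gradient
    return w - lr * g_priv
```

Setting `noise_multiplier=0` recovers plain clipped SGD, which makes the clipping behaviour easy to verify in isolation; the privacy budget (epsilon, delta) corresponding to a given noise multiplier would be tracked separately with a moments accountant.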

Original language: English
Pages (from-to): 3979-3989
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 286
Publication status: Published - 2025
MoE publication type: A4 Conference publication
Event: Conference on Uncertainty in Artificial Intelligence, Rio de Janeiro, Brazil
Duration: 21 Jul 2025 - 25 Jul 2025
Conference number: 41

Funding

This work was supported by the Research Council of Finland Flagship programme: Finnish Center for Artificial Intelligence FCAI and decisions 358958, 359567. Amir Sonee, Haripriya Harikumar, and Samuel Kaski were supported by the UKRI Turing AI World-Leading Researcher Fellowship, [EP/W002973/1].

Title

Privacy-Preserving Neural Processes for Probabilistic User Modeling