Privacy & Security in Learning Systems

The growing deployment of machine learning models in industrial and health contexts raises critical privacy and security concerns, since these models are built on personal data (e.g., clinical records, images, and user profiles). Our work focuses on extending differential privacy to deep neural networks, so that models trained on sensitive data can be deployed with provable privacy guarantees.
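As a minimal illustration of the underlying idea (not the group's actual method), the classic Gaussian mechanism of differential privacy adds noise calibrated to a query's sensitivity before releasing a statistic; the function names and parameter values below are illustrative:

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise scaled to the query's L2 sensitivity, using the standard
    analytic bound sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + random.gauss(0.0, sigma)

# Example: privately release the mean of 1000 records bounded in [0, 1].
# Changing one record moves the mean by at most 1/1000, so that is its sensitivity.
records = [random.random() for _ in range(1000)]
true_mean = sum(records) / len(records)
private_mean = gaussian_mechanism(true_mean, sensitivity=1 / 1000,
                                  epsilon=1.0, delta=1e-5)
```

The same principle, applied per-gradient during training (as in DP-SGD), is what makes it possible to give formal privacy guarantees for deep neural networks.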

Objectives:

  • Identify privacy vulnerabilities in existing machine learning systems, particularly those that may leak sensitive information to third parties.
  • Develop provably-robust privacy-preserving machine learning systems.

Selected Publications:

  • Truc Nguyen, Phung Lai, Nhat Hai Phan, and My T. Thai. “XRand: Differentially Private Defense against Explanation-Guided Attacks,” in AAAI, 2023.
  • Truc Nguyen, Phung Lai, Khang Tran, Nhat Hai Phan, and My T. Thai. “Active Membership Inference Attack under Local Differential Privacy in Federated Learning,” in AISTATS, 2023.
  • Nhat Hai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, and My T. Thai. “Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness,” in IJCAI, 2019.