Privacy & Security in Learning Systems

The growing deployment of machine learning models in industrial and healthcare settings raises critical privacy and security concerns, since these models are trained on personal data (e.g., clinical records, images, and user profiles). Our work focuses on extending ideas from differential privacy to deep neural networks, so that models trained on sensitive data can be deployed without leaking information about individuals.
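As a point of reference, the sketch below illustrates the classical Gaussian mechanism on which differentially private learning approaches build: a query answer is released after adding noise calibrated to its sensitivity and the privacy budget. This is a minimal illustration only; the function name, parameters, and synthetic data are assumptions for the example, not code from the group's publications.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise calibrated with the classical analytic bound
    (valid for 0 < epsilon < 1)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release the mean of a feature bounded in [0, 1]
# (e.g., a normalized clinical measurement). Data here is synthetic.
records = np.clip(np.random.default_rng(0).random(1000), 0.0, 1.0)
# Changing one record moves the mean by at most 1/n, so sensitivity = 1/n.
private_mean = gaussian_mechanism(records.mean(),
                                  sensitivity=1.0 / len(records),
                                  epsilon=0.5, delta=1e-5)
print(f"true mean: {records.mean():.4f}, private mean: {float(private_mean):.4f}")
```

In deep learning, the same idea is typically applied to clipped per-example gradients during training rather than to a single aggregate query.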

Objectives:

  • Identify privacy vulnerabilities in existing machine learning systems, particularly those that may leak sensitive information to third parties.
  • Develop provably-robust privacy-preserving machine learning systems.

Selected Publications:

  • Nhat Hai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, and My T. Thai, "Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness," in IJCAI, 2019.