Optimization Theory for Machine Learning

At the foundations of machine learning lie a number of interconnected discrete optimization problems, such as optimal point labeling and subset selection. As learning systems grow in scale and complexity, ever more complex variations on these problems must be addressed. We address this need with a study of the discrete optimization foundations of machine learning, with an emphasis on general-purpose approximation algorithms for high-dimensional discrete optimization.
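As a concrete illustration of this style of problem, consider subset selection via the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, shown here on a maximum-coverage instance. This is a generic sketch of a standard technique, not code from the publications below; the function name and example data are illustrative.

```python
# Illustrative sketch: greedy (1 - 1/e)-approximation for monotone
# submodular maximization with a cardinality constraint, applied to
# maximum coverage. Not taken from the cited papers.

def greedy_max_coverage(sets, k):
    """Pick at most k sets greedily by largest marginal coverage gain."""
    covered = set()
    chosen = []
    for _ in range(k):
        # Choose the set whose new (uncovered) elements are most numerous.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical instance: a universe of six elements split across four sets.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, 2)
# Greedy first takes {1, 2, 3}, then {4, 5, 6}, covering the whole universe.
```

Because the coverage objective is monotone and submodular, this greedy rule is guaranteed to achieve at least a 1 - 1/e fraction of the optimal coverage, which is exactly the kind of general-purpose approximation guarantee studied in this line of work.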

Objectives:

  • Develop & refine the underlying optimization theory that drives learning systems, especially in discrete contexts.

Selected Publications:

  • Minh N. Vu, Truc Nguyen, and My T. Thai. “NeuCEPT: Learn Neural Networks’ Mechanism via Critical Neurons with Precision Guarantee,” in ICDM, 2022.
  • J. David Smith, Alan Kuhnle, and My T. Thai. “An Approximately Optimal Bot for Non-Submodular Social Reconnaissance,” in Proceedings of HyperText, 2018.
  • Alan Kuhnle, J. David Smith, Victoria Crawford, and My T. Thai. “Fast Maximization of Non-Submodular, Monotonic Functions on the Integer Lattice,” in Proceedings of ICML, 2018.