Sparse Newton's Method
Thursday, March 23, 2017
11:30am - 12:30pm
D344 LSRC, Duke
Note Time Change
Newton’s method provides a powerful framework for designing efficient and parallel algorithms. However, when applied to large-scale matrices, the intermediate matrices quickly become dense, even when the input matrix is sparse. In this talk, I will present an approach that uses spectral sparsification to keep all intermediate matrices sparse while preserving the effectiveness of Newton’s method. As building blocks and applications of our sparse Newton’s method, we give the first nearly-linear-time algorithms for problems in machine learning, numerical analysis, and spectral graph theory: (1) sampling Gaussian graphical models with symmetric diagonally dominant precision matrices, (2) solving the matrix roots problem for graph Laplacians, and (3) spectral sparsification of random-walk matrix polynomials.
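To illustrate the fill-in phenomenon the abstract alludes to, here is a minimal sketch (not the speaker's algorithm) of the classic dense Newton iteration for the matrix square root, X_{k+1} = (X_k + X_k^{-1} A)/2. Each iterate involves a matrix inverse, so the iterates are generally dense even when A is sparse; the function name and example matrix below are illustrative choices.

```python
import numpy as np

def newton_sqrt(A, iters=20):
    # Classic (dense) Newton iteration for the matrix square root:
    #   X_{k+1} = (X_k + X_k^{-1} A) / 2,  starting from X_0 = A.
    # For a symmetric positive definite A this converges to A^{1/2},
    # but the inverse makes each iterate dense even for sparse A --
    # the bottleneck that spectral sparsification aims to avoid.
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.solve(X, A))
    return X

# Example: a small SPD matrix (a path-graph Laplacian plus the identity).
L = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
A = L + np.eye(3)
X = newton_sqrt(A)
print(np.allclose(X @ X, A))  # True: X squares back to A
```

Keeping such iterates sparse without losing Newton's fast convergence is, per the abstract, what the spectral sparsification step achieves.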
This is based on joint work with Dehua Cheng, Yan Liu, Richard Peng and Shang-Hua Teng.
Yu Cheng is a final-year Ph.D. student in the Computer Science Department at the University of Southern California (USC), advised by Shang-Hua Teng. He is interested in theoretical computer science, focusing on algorithmic game theory and spectral graph theory.