Unbiased Estimation of the Eigenvalues of Large Implicit Matrices
Many important problems are characterized by the eigenvalues of a large matrix. For example, the difficulty of many optimization problems, such as those arising from fitting large models in statistics and machine learning, can be investigated via the spectrum of the Hessian of the empirical loss function. Through spectral graph theory, network data can be understood via the eigenstructure of the graph Laplacian. Quantum simulations and other many-body problems are often characterized via the eigenvalues of the solution space, as are various dynamical systems. However, naive eigenvalue estimation is computationally expensive even when the matrix can be represented explicitly; in many of these situations the matrix is so large as to be available only implicitly via products with vectors. Even worse, one may have only noisy estimates of such matrix-vector products. In this talk I will discuss how several different randomized techniques can be combined into a single procedure that produces unbiased estimates of the spectral density of large implicit matrices in the presence of noise.
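The abstract does not spell out the procedure, but a standard building block for matvec-only spectral density estimation is the kernel polynomial method: Hutchinson-style stochastic trace estimation combined with a Chebyshev polynomial expansion of the density. The sketch below is illustrative rather than the speaker's method; the function names `chebyshev_moments` and `density` are my own, the matrix is assumed to be symmetric with spectrum rescaled into [-1, 1], and the randomized-truncation and noise-handling machinery that would make the estimate unbiased is omitted.

```python
import numpy as np

def chebyshev_moments(matvec, n, num_moments=50, num_probes=10, rng=None):
    """Estimate Chebyshev moments mu_k = tr(T_k(A)) / n with Hutchinson-style
    stochastic trace estimation. `matvec` applies a symmetric matrix A, assumed
    rescaled so its spectrum lies in [-1, 1], to a vector."""
    rng = np.random.default_rng(rng)
    moments = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        # Three-term recurrence: T_0(A)z = z, T_1(A)z = Az,
        # T_{k+1}(A)z = 2 A T_k(A)z - T_{k-1}(A)z.
        t_prev, t_curr = z, matvec(z)
        moments[0] += z @ t_prev
        moments[1] += z @ t_curr
        for k in range(2, num_moments):
            t_next = 2.0 * matvec(t_curr) - t_prev
            moments[k] += z @ t_next
            t_prev, t_curr = t_curr, t_next
    return moments / (num_probes * n)

def density(t, mu):
    """Evaluate the smoothed spectral density estimate at points t in (-1, 1),
    using Jackson damping to suppress Gibbs oscillations."""
    K = len(mu)
    k = np.arange(K)
    jackson = ((K - k + 1) * np.cos(np.pi * k / (K + 1))
               + np.sin(np.pi * k / (K + 1)) / np.tan(np.pi / (K + 1))) / (K + 1)
    T = np.cos(np.outer(np.arccos(t), k))       # T_k(t) = cos(k arccos t)
    coeffs = jackson * mu * np.where(k == 0, 1.0, 2.0)
    return (T @ coeffs) / (np.pi * np.sqrt(1.0 - np.asarray(t) ** 2))

# Usage: a small symmetric test matrix with known spectrum in (-1, 1),
# accessed only through matrix-vector products.
n = 200
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))
A = (Q * np.linspace(-0.9, 0.9, n)) @ Q.T       # Q diag(eigs) Q^T
mu = chebyshev_moments(lambda v: A @ v, n, num_moments=30, num_probes=20)
phi = density(np.linspace(-0.95, 0.95, 50), mu)
```

Truncating the Chebyshev expansion at a fixed order still introduces bias; the unbiasedness promised in the talk would presumably come from an additional randomization step (e.g., Russian-roulette truncation of the series), which this sketch does not attempt.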
Ryan Adams is a machine learning researcher and Professor of Computer Science at Princeton University. Ryan completed his Ph.D. in physics under David MacKay FRS at the University of Cambridge, where he was a Gates Cambridge Scholar and a member of St. John's College. Following his Ph.D., Ryan spent two years as a Junior Research Fellow at the University of Toronto as part of the Canadian Institute for Advanced Research. From 2011 to 2016, he was an Assistant Professor at Harvard University in the School of Engineering and Applied Sciences. Ryan has won paper awards at ICML, UAI, and AISTATS, and has received the DARPA Young Faculty Award and an Alfred P. Sloan Research Fellowship. He also co-hosted the popular Talking Machines podcast.