TITLE: Statistical Problems and Convex Relaxations: Dreams of a General Theory
Algorithmic problems that arise in practice are often not well described by the worst-case model of computation. Statistical
models are an avenue for building a theory that better supports computational endeavors; rather than design algorithms for the worst case, we assume that inputs come from a structured distribution. From an algorithmic perspective, statistical models are a canonical construct in machine learning; from a complexity-theoretic perspective, statistical models are a rich source of cryptographic and security primitives.
In this talk, I will describe my work towards building a theory of algorithms for statistical models using convex relaxations (and particularly the powerful sum-of-squares hierarchy). I’ll talk about my efforts in understanding the trade-off between computation and statistical information, in giving fast (often linear- or almost-linear-time) algorithms based on slower semidefinite programs, and in making a broad connection between convex programs and spectral algorithms for statistical problems.
Tselil Schramm is a postdoc in theoretical computer science at Harvard and MIT, hosted by Boaz Barak, Jon Kelner, Ankur Moitra, and Pablo Parrilo. She obtained her Ph.D. in computer science from U.C. Berkeley, advised by Prasad Raghavendra and Satish Rao, and was supported by an NSF Graduate Research Fellowship. She spent Fall 2017 as the Google Research Fellow at the Simons Institute's program on optimization. Her research interests include statistical and average-case problems, optimization via convex programs (especially the sum-of-squares hierarchy), spectral algorithms, spectral graph theory, and more.