**The IDEaS Seminar Series presents two speakers on Friday, September 22, in the TSRB Auditorium.**

At 2 pm, Michael Mahoney of UC Berkeley speaks on "Second Order Machine Learning"

At 3 pm, Ravi Kannan of Microsoft Research India speaks on "Topic Modeling: Proof to Practice"

**A reception from 4-5 pm will follow the seminars.** Food will be served at the reception. The abstracts and biographies for both speakers appear below.

**Michael W. Mahoney of the International Computer Science Institute and the Department of Statistics, UC Berkeley**
**"Second Order Machine Learning"**

**Abstract:**

A major challenge for large-scale machine learning, one that will only grow in importance as we develop models that are more and more domain-informed, is moving beyond high-variance first-order optimization methods to more robust second-order methods. Here, we consider the problem of minimizing the sum of a large number of functions over a convex constraint set, a problem that arises in many data analysis, machine learning, and more traditional scientific computing applications, along with non-convex variants of this basic problem. While this is of interest in many situations, it has received particular attention recently due to challenges associated with training so-called deep neural networks. We establish improved bounds for algorithms that incorporate sub-sampling as a way to improve computational efficiency while maintaining the original convergence properties of these algorithms. These methods exploit recent results from Randomized Linear Algebra on approximate matrix multiplication. Within the context of second-order optimization methods, they provide quantitative convergence results for variants of Newton's method, where the Hessian and/or the gradient is uniformly or non-uniformly sub-sampled, under much weaker assumptions than prior work.
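To make the idea of Hessian sub-sampling concrete, here is a minimal sketch (not the speaker's actual algorithm, and without its convergence guarantees) of one Newton step for logistic regression in which the full gradient is used but the Hessian is estimated from a uniform sub-sample of the data; the function name and the ridge constant are illustrative choices, not from the talk:

```python
import numpy as np

def subsampled_newton_step(X, y, w, sample_size, rng):
    """One Newton step for logistic regression: full gradient,
    Hessian estimated from a uniform sub-sample of the rows of X."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ w))            # model probabilities
    grad = X.T @ (p - y) / n                    # full (exact) gradient
    idx = rng.choice(n, size=sample_size, replace=False)
    Xs, ps = X[idx], p[idx]
    s = ps * (1.0 - ps)                         # per-sample curvature weights
    H = (Xs * s[:, None]).T @ Xs / sample_size  # sub-sampled Hessian estimate
    H += 1e-8 * np.eye(d)                       # tiny ridge for numerical stability
    return w - np.linalg.solve(H, grad)
```

The appeal of the approach is that forming the Hessian costs time proportional to the sub-sample size rather than the full data size, while (under conditions made precise in work like that described above) the step retains Newton-like convergence behavior.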

**Bio:**

Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he is on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 program on the Theoretical Foundations of Big Data Analysis, and he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets.

**Host:**

Dana Randall, Professor in the College of Computing, Co-Executive Director of the Institute for Data Engineering and Science at Georgia Tech

**Ravi Kannan of Microsoft Research India**
**"Topic Modeling: Proof to Practice"**

**Abstract:**

Topic modeling is used in a variety of contexts. This talk will outline from first principles the problem and the well-known Latent Dirichlet Allocation (LDA) model before moving to the main focus of the talk: recent algorithms that solve the model-learning problem with provable worst-case error and time guarantees. We present a new algorithm that enjoys both provable guarantees and the performance to scale to corpora with billions of words on a single machine. Besides corpus size, a second challenge is the growth in the number of topics. We address this with a new model in which topics lie on low-dimensional faces of the topic simplex rather than only at its vertices.
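As background for the geometric picture in the abstract, here is a minimal sketch (purely illustrative, not the talk's algorithm or model) of the standard LDA generative process: each document's word distribution is a convex combination of topic distributions, i.e. a point in the topic simplex. The function name, corpus sizes, and Dirichlet parameter below are assumptions for the example:

```python
import numpy as np

def sample_lda_corpus(topic_word, alpha, n_docs, doc_len, rng):
    """Sample a toy bag-of-words corpus from the LDA generative model.
    topic_word: (k, vocab) matrix; each row is a topic's word distribution.
    For each document, draw topic proportions theta ~ Dirichlet(alpha),
    then draw doc_len words from the mixture theta @ topic_word."""
    k, vocab = topic_word.shape
    docs = np.zeros((n_docs, vocab), dtype=int)
    for d in range(n_docs):
        theta = rng.dirichlet(alpha * np.ones(k))  # point in the topic simplex
        word_probs = theta @ topic_word            # document's word distribution
        docs[d] = rng.multinomial(doc_len, word_probs)
    return docs
```

In this picture, a small Dirichlet parameter concentrates `theta` near the simplex vertices (documents dominated by one topic); the abstract's new model instead places structure on low-dimensional faces, i.e. sparse combinations of a few topics.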

**Bio:**

Ravi Kannan is a principal researcher at Microsoft Research India, where he leads the algorithms research group. He also holds an adjunct faculty position in the computer science and automation department of the Indian Institute of Science. Before joining Microsoft, Kannan was the William K. Lanman, Jr. Professor of Computer Science and Applied Mathematics at Yale University. He has also taught at MIT and CMU. Kannan's research interests include algorithms, theoretical computer science and discrete mathematics, as well as optimization. His work has mainly focused on efficient algorithms for problems of a mathematical (often geometric) flavor that arise in computer science. He has worked on algorithms for integer programming and the geometry of numbers, random walks in n-space, randomized algorithms for linear algebra, and learning algorithms for convex sets. He was awarded the Knuth Prize in 2011 for developing influential algorithmic techniques aimed at solving long-standing computational problems, the Fulkerson Prize in 1991 for his work on estimating the volume of convex sets, and the Distinguished Alumnus award of the Indian Institute of Technology, Bombay in 1999.

**Host:**

Santosh Vempala, Frederick G. Storey Chair in Computing and Professor in the College of Computing at Georgia Tech