Jacob Abernethy is an assistant professor of computer science at Georgia Tech. He started his faculty career in the Department of Electrical Engineering and Computer Science at the University of Michigan. In October 2011, he finished a Ph.D. in the Division of Computer Science at the University of California, Berkeley, and then spent nearly two years as a Simons postdoctoral fellow in the CIS department at UPenn, working with Michael Kearns. Abernethy's primary interest is in machine learning, with a particular focus on sequential decision making, online learning, online algorithms, and adversarial learning models. He received his master's degree from TTI-C and his bachelor's degree from MIT. Abernethy's Ph.D. advisor was Professor Peter Bartlett.
A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, let the players compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template for producing new and interesting algorithms. I will describe this framework, show how it applies in several scenarios, and discuss recent work that draws a connection to the Frank-Wolfe method of optimization.
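As a minimal illustration of the recipe the abstract describes, the sketch below pits two learners against each other on a zero-sum matrix game and extracts an approximate minimax solution from their average play. The specific choice of learning algorithm (multiplicative weights / Hedge for both players), the example payoff matrix, and the parameter values are my own assumptions for the demo, not details from the talk.

```python
import numpy as np

def solve_zero_sum(A, T=5000, eta=0.01):
    """Approximate the minimax value of min_x max_y x^T A y over the
    probability simplices, by running multiplicative weights (Hedge)
    for both players and averaging their iterates.

    A, T, and eta here are illustrative choices, not from the talk.
    """
    n, m = A.shape
    wx = np.ones(n)          # row player's weights (minimizer)
    wy = np.ones(m)          # column player's weights (maximizer)
    x_avg = np.zeros(n)
    y_avg = np.zeros(m)
    for _ in range(T):
        x = wx / wx.sum()
        y = wy / wy.sum()
        x_avg += x
        y_avg += y
        # Row player treats A @ y as a loss vector and downweights
        # costly rows; column player treats A.T @ x as a payoff vector
        # and upweights rewarding columns.
        wx *= np.exp(-eta * (A @ y))
        wy *= np.exp(eta * (A.T @ x))
    x_avg /= T
    y_avg /= T
    return x_avg, y_avg, x_avg @ A @ y_avg
```

Standard no-regret arguments give the guarantee behind this template: if both players' average regret is small, the pair of average strategies is an approximate equilibrium, so the returned value is close to the game's minimax value. For instance, on the 2x2 game with payoff matrix [[3, 0], [1, 2]] (minimax value 1.5), the returned value lands near 1.5.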