Advances in generative modeling and adversarial learning have given rise to a recent surge of interest in smooth two-player games, specifically in the context of learning generative adversarial networks (GANs). Solving these games raises intrinsically different challenges than the minimization tasks the machine learning community is used to. The goal of this workshop is to bring together the several communities interested in such smooth games, in order to present what is known on the topic and to identify current open questions, such as how to handle the non-convexity appearing in GANs.
A number of problems and applications in machine learning are formulated as games. A special class of games, smooth games, has come into the spotlight recently with the advent of GANs. In a two-player smooth game, each player attempts to minimize their differentiable cost function, which also depends on the action of the other player. The dynamics of such games are distinct from the better-understood dynamics of optimization problems. For example, the Jacobian of simultaneous gradient descent on a smooth two-player game can be non-symmetric and have complex eigenvalues. Recent work by ML researchers has identified these dynamics as a key challenge for efficiently solving such problems.
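This rotational behavior is easy to see on a toy example (our own illustration, not taken from the workshop material): in the bilinear game where player 1 minimizes V(x, y) = xy and player 2 maximizes it, the Jacobian of the simultaneous-gradient vector field is non-symmetric with purely imaginary eigenvalues, and simultaneous gradient descent spirals away from the equilibrium at the origin.

```python
import numpy as np

# Toy two-player game: player 1 controls x and minimizes V(x, y) = x * y,
# player 2 controls y and minimizes -V(x, y).
# The simultaneous-gradient vector field is F(x, y) = (dV/dx, -dV/dy) = (y, -x).
def game_vector_field(z):
    x, y = z
    return np.array([y, -x])

# Jacobian of the vector field at any point: [[0, 1], [-1, 0]].
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

print(np.allclose(J, J.T))   # False: non-symmetric, unlike the Hessian of a single objective
print(np.linalg.eigvals(J))  # two purely imaginary eigenvalues, +1j and -1j

# Consequence: simultaneous gradient descent rotates around the equilibrium (0, 0),
# and the distance to it grows at every step.
z = np.array([1.0, 1.0])
eta = 0.1
for _ in range(100):
    z = z - eta * game_vector_field(z)
print(np.linalg.norm(z))     # larger than the initial norm sqrt(2): iterates spiral outwards
```

Each step multiplies the squared distance to the equilibrium by exactly 1 + eta**2, so no constant step size fixes the divergence.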
A major hurdle for relevant research in the ML community is the lack of interaction with the mathematical programming and game theory communities, where similar problems have been tackled in the past, yielding useful tools. While ML researchers are quite familiar with the convex optimization toolbox from mathematical programming, they are less familiar with the tools for solving games. For example, the extragradient algorithm for solving variational inequalities has been known in the mathematical programming literature for decades; the ML community, however, has until recently mainly appealed to gradient descent to optimize adversarial objectives.
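As a sketch of the contrast, on the same toy bilinear game min_x max_y xy (our own illustration), the extragradient method (extrapolate first, then update using the gradient evaluated at the extrapolated point) converges where simultaneous gradient descent diverges:

```python
import numpy as np

# Bilinear game min_x max_y x * y; simultaneous-gradient field F(x, y) = (y, -x).
def F(z):
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_gd = np.array([1.0, 1.0])   # plain simultaneous gradient descent iterate
z_eg = np.array([1.0, 1.0])   # extragradient iterate

for _ in range(200):
    # Gradient descent: one step along -F.
    z_gd = z_gd - eta * F(z_gd)
    # Extragradient: extrapolate, then step using the gradient
    # at the extrapolated ("lookahead") point.
    z_half = z_eg - eta * F(z_eg)
    z_eg = z_eg - eta * F(z_half)

print(np.linalg.norm(z_gd))   # grows: gradient descent spirals away from (0, 0)
print(np.linalg.norm(z_eg))   # shrinks: extragradient converges to the equilibrium
```

On this game one can check that each extragradient step contracts the squared distance to the equilibrium by a factor (1 - eta**2)**2 + eta**2 < 1 for eta < 1, whereas gradient descent expands it by 1 + eta**2.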
The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming and game theory communities to discuss the status of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We also encourage, and will devote time during the workshop to, work that identifies and discusses open, forward-looking problems of interest to the NeurIPS community.
A submission should take the form of an anonymous extended abstract (2-4 pages long, excluding references) in PDF format using the following modified NeurIPS style. The submission process will be handled via CMT. Previously published work (or work under review) is acceptable, though it needs to be clearly indicated as such when submitting: please add a footnote in the PDF itself indicating the venue where the work has been published or submitted. Submissions can be accepted as contributed talks, spotlight or poster presentations (all accepted submissions may present a poster). Extended abstracts must be submitted by Wednesday October 10, 2018 (11:59pm AoE). Final versions will be posted on the workshop website (and are archival, but do not constitute a proceedings).
Tickets for the workshop are sold on the NeurIPS website. One ticket per accepted paper has been set aside in a pool of reserved tickets to ensure that presenters can attend the NeurIPS workshops.
Improving Generative Adversarial Networks using Game Theory and Statistics
Generative Adversarial Networks (aka GANs) are a recently proposed approach for learning samplers of high-dimensional distributions with intricate structure, such as distributions over natural images, given samples from these distributions. They are trained by setting up a two-player zero-sum game between two neural networks, which learn statistics of a target distribution by adapting their strategies in the game using gradient descent. Despite their intriguing performance in practice, GANs pose great challenges to both Optimization and Statistics. Their training suffers from oscillations, and they are difficult to scale to high-dimensional settings. We study how game-theoretic and statistical techniques can be brought to bear on these important challenges. We use Game Theory towards improving GAN training, and Statistics towards scaling up the dimensionality of the generated distributions.
Constantinos Daskalakis works on computation theory and its interface with game theory, economics, probability theory, statistics and machine learning. His work has resolved long-standing open problems about the computational complexity of Nash equilibrium, the mathematical structure and computational complexity of multi-item auctions, and the behavior of machine-learning methods such as the expectation-maximization algorithm. He has obtained computationally and statistically efficient methods for statistical hypothesis testing and learning in high-dimensional settings, as well as results characterizing the structure and concentration properties of high-dimensional distributions. For his work on the complexity of Nash equilibrium, he has been awarded the Kalai prize from the Game Theory Society and the SIAM outstanding paper prize. He is also a recipient of the ACM doctoral dissertation award, a Simons investigator award, and the Rolf Nevanlinna Prize, one of the most prestigious international awards in mathematics.
Smooth Games in Machine Learning Beyond GANs
This talk will discuss a wide spectrum of recent advances in machine learning that use smooth games, beyond the phenomenally successful GANs. Such showcases include reinforcement learning, robust and adversarial machine learning, approximate Bayesian computation, maximum likelihood estimation in exponential families, etc. We show that all of these classical machine learning tasks can be reduced to solving (non-)convex-concave min-max optimization problems. Hence, it is of paramount importance to develop a good theoretical understanding and principled algorithms for min-max optimization. We will review some of the theory and algorithms for smooth games and variational inequalities in the convex regime and shed some light on their counterparts in the non-convex regime.
Niao He is an assistant professor in the Department of Industrial and Enterprise Systems Engineering and Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. Before joining Illinois, she received her Ph.D. degree in Operations Research from Georgia Institute of Technology in 2015 and B.S. degree in Mathematics from University of Science and Technology of China in 2010. Her research interests are in large-scale optimization and machine learning, with a primary focus in bridging modern optimization theory and algorithms with core machine learning topics, like Bayesian inference, reinforcement learning, and adversarial learning. She is also a recipient of the NSF CISE Research Initiation Initiative (CRII) Award and the NCSA Faculty Fellowship.
Building Algorithms by Playing Games
A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, watch how the opponents compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template to produce new and interesting algorithms. I will describe this framework and show how it applies in several scenarios, and describe recent work that draws a connection to the Frank-Wolfe algorithm and Nesterov's Accelerated Gradient Descent.
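A minimal sketch of this template (our own illustration, with rock-paper-scissors as the zero-sum game and multiplicative weights (Hedge) as the learning algorithm for both players): the time-averaged strategies extracted from the dynamics approximate the minimax equilibrium.

```python
import numpy as np

# Zero-sum matrix game: row player gets payoff A[i, j], column player gets -A[i, j].
# Rock-paper-scissors: the unique equilibrium is uniform play, with game value 0.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def hedge_update(strategy, payoffs, eta):
    """Multiplicative-weights (Hedge) step: tilt the strategy towards higher payoffs."""
    w = strategy * np.exp(eta * payoffs)
    return w / w.sum()

T, eta = 20000, 0.02
p = np.array([0.6, 0.2, 0.2])   # row player's mixed strategy (non-uniform start)
q = np.array([0.2, 0.2, 0.6])   # column player's mixed strategy
p_avg = np.zeros(3)
q_avg = np.zeros(3)

for _ in range(T):
    p_avg += p / T
    q_avg += q / T
    # Each player observes their expected payoff vector and updates independently.
    p_new = hedge_update(p, A @ q, eta)        # row player maximizes p^T A q
    q_new = hedge_update(q, -(A.T @ p), eta)   # column player minimizes p^T A q
    p, q = p_new, q_new

# The iterates themselves cycle, but the time averages approximate the equilibrium.
print(p_avg, q_avg)       # both close to the uniform distribution
print(p_avg @ A @ q_avg)  # close to the game value 0
```

The guarantee behind the extraction step is the classic folk theorem: the duality gap of the averaged strategies is bounded by the sum of the players' average regrets, which Hedge drives to zero.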
Jacob Abernethy is an Assistant Professor in Computer Science at Georgia Tech. He started his faculty career in the Department of Electrical Engineering and Computer Science at the University of Michigan. In October 2011 he finished a PhD in the Division of Computer Science at the University of California at Berkeley, and then spent nearly two years as a Simons postdoctoral fellow at the CIS department at UPenn, working with Michael Kearns. Abernethy's primary interest is in Machine Learning, with a particular focus on sequential decision making, online learning, online algorithms and adversarial learning models. He did his Master's degree at TTI-C, and his Bachelor's degree at MIT. Abernethy's PhD advisor was Prof. Peter Bartlett.
An interpretation of GANs via online learning and game theory
Generative Adversarial Networks (GANs) have become one of the most powerful paradigms in learning real-world distributions. Despite this success, their minimax nature makes them fundamentally different from more classical generative models, raising novel challenges, most notably in terms of training and evaluation. Indeed, finding a saddle point is in general a harder task than converging to an extremum. We view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building upon ideas from online learning and game theory, we propose (i) a novel training method with provable convergence to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is an arbitrary network, and (ii) a natural metric for detecting non-convergence, namely the duality gap.
Paulina Grnarova is a PhD student at ETH Zurich, supervised by Prof. Thomas Hofmann. Prior to her PhD she did her Master's studies at EPF Lausanne. Her research mainly focuses on the development and understanding of generative models for complex data. Recently she proposed an online-learning-inspired algorithm for training GANs with better convergence guarantees. Since 2016, she has also been a Google AI collaborator and works on improving text generation as part of a research team.
Smooth game optimization?
Invited talk #1: Improving Generative Adversarial Networks using Game Theory and Statistics
Provable Non-Convex Min-Max Optimization.
Solving differential games by methods for finite-dimensional saddle-point problems.
Generalized Mirror Prox Algorithm for Variational Inequalities.
On the convergence of stochastic forward-backward-forward algorithms with variance reduction in pseudo-monotone variational inequalities.
A Variational Inequality Perspective on Generative Adversarial Networks.
|10:30||Poster session + Coffee break|
|11:00||Invited talk #2: Smooth Games in Machine Learning Beyond GANs|
|11:40||Contributed talk #1: Finding Mixed Nash Equilibria of Generative Adversarial Networks.|
|12:00||Contributed talk #2: Pier Giuseppe Sessa, Bounding Inefficiency of Equilibria in Continuous Actions Games using Submodularity and Curvature.|
|14:00||Invited talk #3: Building Algorithms by Playing Games|
|14:40||Contributed talk #3: Reyhane Askari Hemmat, Negative Momentum for Improved Game Dynamics.|
|15:30||Contributed talk #4: Regret Decomposition in Sequential Games with Convex Action Spaces and Losses.|
|15:50||Invited talk #4: An interpretation of GANs via online learning and game theory|
Model Compression with Generative Adversarial Networks.
An Adversarial Labeling Game for Learning from Weak Supervision.
Multi-objective training of Generative Adversarial Networks with multiple discriminators.
A GAN framework for Instance Segmentation using the Mutex Watershed Algorithm.
|Panel with the invited speakers: Constantinos Daskalakis, Niao He, Jacob Abernethy, Paulina Grnarova|
Simon Lacoste-Julien is a CIFAR fellow and an assistant professor at Mila and DIRO from Université de Montréal. His research interests are machine learning and applied math, with applications to computer vision and natural language processing. He obtained a B.Sc. in math., physics and computer science from McGill, a PhD in computer science from UC Berkeley, and did a post-doc at the University of Cambridge. He spent a few years as a research faculty at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016.
Simon has published several papers at the intersection of mathematical programming and machine learning, in particular on solving min-max games. He is a frequent participant in the NeurIPS OPT workshop series, and co-organized the NeurIPS 2009 workshop on “The Generative & Discriminative Learning Interface”.
Ioannis Mitliagkas is an assistant professor in the department of Computer Science and Operations Research (DIRO) at the University of Montréal. Before that, he was a Postdoctoral Scholar with the departments of Statistics and Computer Science at Stanford University. He obtained his Ph.D. from the department of Electrical and Computer Engineering at The University of Texas at Austin. His research includes topics in optimization, statistical learning and inference, and efficient large-scale and distributed algorithms.
He is particularly interested in the dynamics of optimization, such as momentum methods, in the presence of system dynamics, adaptivity and, lately, smooth two-player games (ongoing work).
Gauthier Gidel received the Diplôme de l’École Normale Supérieure in 2017 (ULM MPI2013) and the Master of Science MVA from École Normale Supérieure Paris-Saclay in 2016. Gauthier is currently pursuing his PhD at Mila and DIRO from Université de Montréal under the supervision of Simon Lacoste-Julien.
Gauthier’s PhD thesis topic revolves around saddle point optimization (a.k.a. min-max problems) for machine learning, and more generally variational inequalities, on which Gauthier has published several papers [Gidel et al. 2017, Gidel et al. 2018].
Vasilis Syrgkanis is a Researcher at Microsoft Research, New England. His research lies at the intersection of theoretical computer science, machine learning and economics/econometrics. He received his Ph.D. in Computer Science from Cornell University and then spent two years as a postdoc researcher at Microsoft Research, NYC, as part of the Algorithmic Economics and the Machine Learning groups.
Vasilis has published numerous papers on algorithmic game theory, including one that received a best paper award at NeurIPS in 2015 for providing fast convergence guarantees of regularized learning in games [Syrgkanis et al. 2015].
Éva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University, and she was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest. She joined the faculty at Cornell in 1989. Tardos’s research interests are algorithms and algorithmic game theory. She is best known for her work on network-flow algorithms and quantifying the efficiency of selfish routing. She has been elected to the National Academy of Engineering, the National Academy of Sciences, the American Academy of Arts and Sciences, and is an external member of the Hungarian Academy of Sciences.
Éva is a leading figure in the algorithmic game theory field. She is the recipient of a number of fellowships and awards, including the Packard Fellowship, the Gödel Prize, the Dantzig Prize, the Fulkerson Prize, the EATCS prize, and the IEEE Technical Achievement Award. She is Editor-in-Chief of the Journal of the ACM, was previously Editor-in-Chief of the SIAM Journal on Computing, and has been an editor of several other journals, including Combinatorica; she has served as a program committee member for many conferences, and was program committee chair for the ACM-SIAM Symposium on Discrete Algorithms (1996), as well as FOCS 2005 and EC 2013.
Léon Bottou received the Diplôme d’Ingénieur de l’École Polytechnique (X84) in 1987, the Magistère de Mathématiques Fondamentales et Appliquées et d’Informatique from École Normale Supérieure in 1988, and a Ph.D. in Computer Science from Université de Paris-Sud in 1991. His research career took him to AT&T Bell Laboratories, AT&T Labs Research, NEC Labs America and Microsoft. He joined Facebook AI Research in 2015.
The long-term goal of Léon’s research is to understand how to build human-level intelligence. Although reaching this goal requires conceptual advances that cannot be anticipated at this point, it certainly entails clarifying how to learn and how to reason. Léon Bottou's best-known contributions are his work on “deep” neural networks in the 90s, his work on large-scale learning in the 00s, and possibly his more recent work on causal inference in learning systems. Léon has also recently worked on the geometry of probability measures applied to Generative Adversarial Networks.
Sebastian Nowozin is a Principal Researcher at Microsoft Research, Cambridge, UK, managing the Machine Intelligence and Perception group. He develops novel algorithms and models for artificial intelligence and machine learning applications.
Sebastian received the 2016 German Pattern Recognition Award and has published several fundamental papers on generative adversarial networks.
Abernethy, J.D., Bartlett, P.L., Rakhlin, A., Tewari, A., Optimal strategies and minimax lower bounds for online convex games. In COLT 2009.
Arora, S., Ge, R., Liang, Y., Ma, T., Zhang, Y., Generalization and Equilibrium in Generative Adversarial Nets (GANs). In ICML 2017.
Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. and Graepel, T., 2018. The Mechanics of n-Player Differentiable Games. In ICML 2018.
Daskalakis, C., Goldberg, P., Papadimitriou, C., The Complexity of Computing a Nash Equilibrium. SIAM J. Comput., 2009.
Daskalakis, C., Ilyas, A., Syrgkanis, V., Zeng, H., Training GANs with Optimism. In ICLR 2018.
Ewerhart, C., Ordinal Potentials in Smooth Games (SSRN Scholarly Paper No. ID 3054604). Social Science Research Network, Rochester, NY, 2017.
Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A.M., Mohamed, S., Goodfellow, I., Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. In ICLR 2018.
Gidel, G., Jebara, T., Lacoste-Julien, S. Frank-Wolfe Algorithms for Saddle Point Problems. In AISTATS 2017.
Gidel, G., Berard,H., Vincent, P., Lacoste-Julien, S., A Variational Inequality Perspective on Generative Adversarial Networks. arXiv:1802.10551 [cs, math, stat], 2018.
Grnarova, P., Levy, K.Y., Lucchi, A., Hofmann, T., Krause, A., An Online Learning Approach to Generative Adversarial Networks. In ICLR 2018.
Harker, P.T., Pang, J.-S., Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Mathematical Programming, 1990.
Hazan, E., Singh, K., Zhang, C., Efficient Regret Minimization in Non-Convex Games, in ICML 2017.
Karlin, S., Weiss, G., The Theory of Infinite Games, Mathematical Methods and Theory in Games, Programming, and Economics, 1959.
Lipton, R.J., Young, N.E., Simple Strategies for Large Zero-sum Games with Applications to Complexity Theory, in STOC 94.
Mescheder, L., Nowozin, S., Geiger, A., The Numerics of GANs. In NeurIPS 2017.
Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V., Algorithmic Game Theory. Cambridge University Press, 2007.
Pfau, D., Vinyals, O., Connecting Generative Adversarial Networks and Actor-Critic Methods. arXiv:1610.01945 [cs, stat], 2016.
Roughgarden, T., Intrinsic Robustness of the Price of Anarchy, in: Communications of The ACM - CACM, 2009.
Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.-S., Convex Optimization, Game Theory, and Variational Inequality Theory. IEEE Signal Processing Magazine, 2010.
Syrgkanis, V., Agarwal, A., Luo, H., Schapire, R.E., Fast Convergence of Regularized Learning in Games, in NeurIPS 2015.
Von Neumann, J., Morgenstern, O., Theory of Games and Economic Behavior. Princeton University Press, 1944.