Advances in generative modeling and adversarial learning have given rise to renewed interest in differentiable two-player games, with much of the attention falling on generative adversarial networks (GANs). Solving these games introduces challenges distinct from those of the standard minimization tasks that the machine learning (ML) community is used to; a symptom of this gap is ML and deep learning (DL) practitioners applying minimization tools to game-theoretic problems. Recent work seeks to rectify this situation by bringing game-theoretic tools into ML. At NeurIPS 2018 we held “Smooth games optimization in ML”, a workshop with this scope and goal in mind. Last year’s workshop addressed theoretical aspects of games in machine learning, their special dynamics, and typical challenges. Talks by Costis Daskalakis, Niao He, Jacob Abernethy and Paulina Grnarova emphasized fundamental topics in a pure, simplified theoretical setting, and a number of contributed talks and posters tackled similar questions. The workshop culminated in a panel discussion that identified a number of interesting open questions. The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the status of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We are looking for contributions that identify and discuss open, forward-looking problems of interest to the NeurIPS community.
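To make the difficulty concrete, here is a minimal sketch (a standard bilinear toy game, not an example taken from the workshop itself) of why minimization tools can fail on games: on min_x max_y x*y, simultaneous gradient descent-ascent spirals away from the equilibrium (0, 0), whereas plain gradient descent on a single smooth objective would converge.

```python
def gda_step(x, y, lr=0.1):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y."""
    # grad_x f = y (x descends), grad_y f = x (y ascends)
    return x - lr * y, y + lr * x

x, y = 1.0, 1.0
for _ in range(100):
    x, y = gda_step(x, y)

# Each step multiplies the squared distance to the equilibrium,
# x^2 + y^2, by exactly (1 + lr^2) = 1.01, so the iterates diverge:
print(x * x + y * y)  # 2 * 1.01**100 ≈ 5.41 (started at 2)
```

The iterates rotate around the equilibrium while slowly drifting outward, a behavior with no analogue in single-objective minimization.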

We are soliciting contributions that address one of the below questions, or secondarily, another question on the intersection of modern machine learning and games. This year we are particularly interested in accepting work that uses non-standard formulations and applications for games in ML.

- How can we integrate learning with game theory? (e.g. [Schuurmans et al., 2016])
- How can we inject deep learning into games and vice versa (e.g., actor-critic formulations can be cast as a game)?
- What are the practical implications and applications?
- How do we go beyond the standard GAN discussion and model general agents that interact with each other in a learning context?
- What can we say about the existence and uniqueness results of equilibria in smooth games?
- Do approximate mixed equilibria have better properties than exact ones? [Arora et al., 2017] [Lipton et al., 2002]
- Can we define a weaker notion of solution than Nash Equilibria? [Papadimitriou, Piliouras, 2018]
- Can we compare the quality/performance of Nash equilibria and cycles? Are there points with a better quality/outcome than Nash equilibria? [Kleinberg et al., 2011]
- How do we design efficient algorithms that are guaranteed to achieve the desired solutions?
- Finally, how do we design better objectives to match a specific ML task at hand?
- Abstract submission deadline: September 16, 2019 (11:59pm AoE) via CMT
- Acceptance notification: October 1, 2019
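Several of the questions above concern algorithms with convergence guarantees. As a small illustration (a hedged sketch on the standard bilinear toy game min_x max_y x*y, not a prescribed method of the workshop), the extragradient method takes a look-ahead step before updating and converges where naive simultaneous descent-ascent diverges:

```python
def extragradient_step(x, y, lr=0.1):
    """One extragradient step on f(x, y) = x * y."""
    # Extrapolation ("look-ahead") step
    x_mid = x - lr * y
    y_mid = y + lr * x
    # Update using the gradients at the extrapolated point
    return x - lr * y_mid, y + lr * x_mid

x, y = 1.0, 1.0
for _ in range(500):
    x, y = extragradient_step(x, y)

# Each step contracts x^2 + y^2 by exactly (1 - lr^2)^2 + lr^2 = 0.9901,
# so the iterates converge to the Nash equilibrium (0, 0):
print(x * x + y * y)  # 2 * 0.9901**500 ≈ 0.014 (started at 2)
```

The contrast between the two update rules on the same game is one reason smooth-game optimization calls for its own algorithmic toolbox.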

A submission should take the form of an **anonymous** extended abstract (2-4 pages long **excluding references**) in PDF format using the following modified NeurIPS style. The submission process will be handled via CMT.
Previously published work (or work under review) is acceptable, though it must be clearly indicated as such when submitting; please add a footnote in the PDF indicating the venue where the work has been published or submitted.
Submissions can be accepted as contributed talks, spotlights, or poster presentations (all accepted submissions can have a poster). Extended abstracts must be submitted by September 16, 2019 (11:59pm AoE). Final versions will be posted on the workshop website (and are archival but **do not constitute a proceedings**).

TBA.

TBA.

Fei Fang is an Assistant Professor at the Institute for Software Research in the School of Computer Science at Carnegie Mellon University. Before joining CMU, she was a Postdoctoral Fellow at the Center for Research on Computation and Society (CRCS) at Harvard University. She received her Ph.D. from the Department of Computer Science at the University of Southern California in June 2016.

TBA

TBA

Éva Tardos received her Dipl.Math. in 1981 and her Ph.D. in 1984 from Eötvös University, Budapest, Hungary. She joined Cornell in 1989 and was Chair of the Department of Computer Science 2006-2010. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, is an external member of the Hungarian Academy of Sciences, and is the recipient of a number of fellowships and awards including the IEEE John von Neumann Medal, the Packard Fellowship, the Gödel Prize, the Dantzig Prize, and the Fulkerson Prize. She was editor-in-chief of the SIAM Journal on Computing 2004-2009, is currently editor-in-chief of the Journal of the ACM, and is an editor of several other journals including Theory of Computing and Combinatorica.

TBA

TBA.

David Balduzzi is a researcher at Google DeepMind. He did his PhD in representation theory and algebraic geometry at the University of Chicago. After that he worked on computational neuroscience at UW-Madison and machine learning at the MPI for Intelligent Systems, ETH Zürich and Victoria University Wellington. He now works on game theory and machine learning at DeepMind.

TBA

TBA.

Since completing her graduate studies at MIT in 2003, Dr. Ozdaglar has been a faculty member in the Electrical Engineering and Computer Science Department at MIT. She is affiliated with LIDS and the Operations Research Center. Her research focuses on problems that arise in the analysis and optimization of large-scale dynamic multi-agent networked systems including communication networks, transportation networks, and social and economic networks.

Ioannis Mitliagkas is an assistant professor in the department of Computer Science and Operations Research (DIRO) at the University of Montréal. Before that, he was a Postdoctoral Scholar with the departments of Statistics and Computer Science at Stanford University. He obtained his Ph.D. from the department of Electrical and Computer Engineering at The University of Texas at Austin. His research includes topics in optimization, statistical learning and inference, and efficient large-scale and distributed algorithms.

He is particularly interested in the dynamics of optimization, like momentum methods, in the presence of system dynamics, adaptivity, and lately, smooth two-player games (ongoing work).

Gauthier Gidel received the Diplôme de l’École Normale Supérieure in 2017 (ULM MPI2013) and the Master of Science MVA from École Normale Supérieure Paris-Saclay in 2016. Gauthier is currently pursuing his PhD at Mila and DIRO at Université de Montréal under the supervision of Simon Lacoste-Julien.

Gauthier’s PhD thesis topic revolves around saddle-point optimization (a.k.a. min-max problems) for machine learning and, more generally, variational inequalities, on which he has published several papers [Gidel et al., 2017; Gidel et al., 2018].

Niao He is an assistant professor in the Department of Industrial and Enterprise Systems Engineering and Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. Before joining Illinois, she received her Ph.D. degree in Operations Research from Georgia Institute of Technology in 2015 and B.S. degree in Mathematics from University of Science and Technology of China in 2010. Her research interests are in large-scale optimization and machine learning, with a primary focus in bridging modern optimization theory and algorithms with core machine learning topics, like Bayesian inference, reinforcement learning, and adversarial learning. She is also a recipient of the NSF CISE Research Initiation Initiative (CRII) Award and the NCSA Faculty Fellowship.

Reyhane Askari is a PhD student at Mila, Université de Montréal. She works under the supervision of Ioannis Mitliagkas (UdeM) and Nicolas Le Roux (Google Brain). Prior to her PhD, she received her Master's in Computer Science from Université de Montréal and worked as a machine learning engineer for two years at Mila. During that time she contributed to several open-source deep learning software projects such as Theano, Orion, and Cortex. She did her bachelor's in Computer Engineering at Amirkabir University of Technology (Tehran Polytechnic).

Her research interests are on understanding accelerated methods in single objective and multi-objective settings using tools from dynamical systems.

Nika Haghtalab is an Assistant Professor in the Department of Computer Science at Cornell University. She works broadly on the theoretical aspects of machine learning and algorithmic economics. She especially cares about developing a theory for machine learning that accounts for its interactions with people and organizations, and the wide range of social and economic limitations, aspiration, and behavior they demonstrate. Prior to Cornell, she was a postdoctoral researcher at Microsoft Research, New England, in 2018-2019.

She received her Ph.D. from the Computer Science Department of Carnegie Mellon University, where she was co-advised by Avrim Blum and Ariel Procaccia. Her thesis, titled Foundations of Machine Learning: By the People, For the People, received the CMU School of Computer Science Dissertation Award (2018) and a SIGecom Dissertation Honorable Mention Award (2019).

Simon Lacoste-Julien is a CIFAR fellow and an assistant professor at Mila and DIRO at Université de Montréal. His research interests are machine learning and applied math, with applications to computer vision and natural language processing. He obtained a B.Sc. in math, physics, and computer science from McGill, a PhD in computer science from UC Berkeley, and did a postdoc at the University of Cambridge. He spent a few years as a research faculty member at INRIA and École normale supérieure in Paris before coming back to his roots in Montreal in 2016.

Simon has published several papers at the intersection of mathematical programming and machine learning, in particular on solving min-max games. He is a frequent participant in the NeurIPS OPT workshop series and co-organized the NeurIPS 2009 workshop on “The Generative & Discriminative Learning Interface”.

Abernethy, J.D., Bartlett, P.L., Rakhlin, A., Tewari, A., Optimal strategies and minimax lower bounds for online convex games. In COLT 2009.

Arora, S., Ge, R., Liang, Y., Ma, T., Zhang, Y., Generalization and Equilibrium in Generative Adversarial Nets (GANs). In ICML 2017.

Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. and Graepel, T., 2018. The Mechanics of n-Player Differentiable Games. In ICML 2018.

Daskalakis, C., Goldberg, P., Papadimitriou, C., The Complexity of Computing a Nash Equilibrium. SIAM J. Comput., 2009.

Daskalakis, C., Ilyas, A., Syrgkanis, V., Zeng, H., Training GANs with Optimism. In ICLR 2018.

Ewerhart, C., Ordinal Potentials in Smooth Games (SSRN Scholarly Paper No. ID 3054604). Social Science Research Network, Rochester, NY, 2017.

Fedus, W., Rosca, M., Lakshminarayaan, B., Dai, A.M., Mohamed, S., Goodfellow, I., Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. In ICLR 2018.

Gidel, G., Jebara, T., Lacoste-Julien, S. Frank-Wolfe Algorithms for Saddle Point Problems. In AISTATS 2017.

Gidel, G., Berard, H., Vincent, P., Lacoste-Julien, S., A Variational Inequality Perspective on Generative Adversarial Networks. arXiv:1802.10551 [cs, math, stat], 2018.

Grnarova, P., Levy, K.Y., Lucchi, A., Hofmann, T., Krause, A., An Online Learning Approach to Generative Adversarial Networks. In ICLR 2018.

Harker, P.T., Pang, J.-S., Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Mathematical Programming, 1990.

Hazan, E., Singh, K., Zhang, C., Efficient Regret Minimization in Non-Convex Games, in ICML 2017.

Karlin, S., Weiss, G., The Theory of Infinite Games, Mathematical Methods and Theory in Games, Programming, and Economics, 1959.

Lipton, R.J., Young, N.E., Simple Strategies for Large Zero-sum Games with Applications to Complexity Theory. In STOC 1994.

Mescheder, L., Nowozin, S., Geiger, A., The Numerics of GANs. In NeurIPS 2017.

Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V., Algorithmic Game Theory. Cambridge University Press, 2007.

Pfau, D., Vinyals, O., Connecting Generative Adversarial Networks and Actor-Critic Methods. arXiv:1610.01945 [cs, stat], 2016.

Roughgarden, T., Intrinsic Robustness of the Price of Anarchy. Communications of the ACM, 2009.

Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.-S., Convex Optimization, Game Theory, and Variational Inequality Theory. IEEE Signal Processing Magazine, 2010.

Syrgkanis, V., Agarwal, A., Luo, H., Schapire, R.E., Fast Convergence of Regularized Learning in Games, in NeurIPS 2015.

Von Neumann, J., Morgenstern, O., Theory of Games and Economic Behavior. Princeton University Press, 1944.

Schuurmans, D., Zinkevich, M.A., Deep Learning Games. In NeurIPS 2016.

1055 Canada Pl, Vancouver, BC V6C 0C3

Canada Place