Smooth Games Optimization and Machine Learning Workshop:

Bridging Game Theory and Deep Learning

NEWS: Find videos of each talk in the schedule!

Dec 14th, NeurIPS 2019, Vancouver


Advances in generative modeling and adversarial learning have given rise to renewed interest in differentiable two-player games, with much of the attention falling on generative adversarial networks (GANs). Solving these games introduces challenges distinct from the standard minimization tasks that the machine learning (ML) community is used to. A symptom of this issue is that ML and deep learning (DL) practitioners often apply single-objective optimization tools to game-theoretic problems. Recent work seeks to rectify this situation by bringing game-theoretic tools into ML.

At NeurIPS 2018 we held "Smooth Games Optimization in ML", a workshop with this scope and goal in mind. Last year's workshop addressed theoretical aspects of games in machine learning, their special dynamics, and typical challenges. Talks by Costis Daskalakis, Niao He, Jacob Abernethy and Paulina Grnarova emphasized various fundamental topics in a pure, simplified theoretical setting, and a number of contributed talks and posters tackled similar questions. The workshop culminated in a panel discussion that identified several interesting open questions.

The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the status of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We are looking for contributions that identify and discuss open, forward-looking problems of interest to the NeurIPS community.
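To make the distinction concrete, consider the textbook bilinear zero-sum game min_x max_y xy, whose unique equilibrium is the origin: the standard optimization recipe (simultaneous gradient descent-ascent) spirals away from the equilibrium, while a game-aware method such as extragradient converges to it. The sketch below is purely illustrative and not tied to any particular workshop paper:

```python
# Bilinear zero-sum game: min_x max_y f(x, y) = x * y.
# The unique equilibrium is (x, y) = (0, 0).

def simultaneous_gda(x, y, lr=0.1, steps=1000):
    """Simultaneous gradient descent-ascent: the standard optimization
    recipe. On this game its iterates spiral outward and diverge."""
    for _ in range(steps):
        gx, gy = y, x                    # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy  # descend in x, ascend in y
    return x, y

def extragradient(x, y, lr=0.1, steps=1000):
    """Extragradient (Korpelevich, 1976): take a lookahead step, then
    update using the gradients evaluated at the lookahead point."""
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x  # extrapolation (lookahead) step
        x, y = x - lr * yh, y + lr * xh  # update with lookahead gradients
    return x, y

gda_dist = sum(v * v for v in simultaneous_gda(1.0, 1.0)) ** 0.5
eg_dist = sum(v * v for v in extragradient(1.0, 1.0)) ** 0.5
print("GDA distance from equilibrium:", gda_dist)            # grows without bound
print("Extragradient distance from equilibrium:", eg_dist)   # shrinks toward 0
```

The per-step GDA map has spectral radius above 1 on this game, so divergence is not a tuning artifact; the extrapolation step is exactly the kind of game-theoretic correction the workshop is about.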

Invited Speakers

Morning Schedule

Time Speaker Title
8:15 Ioannis Mitliagkas Opening remarks
8:30 Invited talk, Eva Tardos Learning in dynamic multi-agent environments [abstract] [video]
9:10 Poster spotlights [video] starts at 55:33
• David Fridovich-Keil: Stable, Efficient Solutions for Differential Games with Feedback Linearizable Dynamics [PDF]
• Eric Mazumdar: Policy Gradient in Linear Quadratic Dynamic Games Has No Convergence Guarantees [PDF]
• Olya Ohrimenko: Collaborative Machine Learning Markets [PDF]
• Yan Yan: Sharp Analysis of Simple Restarted Stochastic Gradient for Min-Max Optimization [PDF]
• Guojun Zhang: Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games [PDF]
• Shuang Li: Cubic Regularization for Differentiable Games [PDF]
• Shuang Li: Geometry Correspondence between Empirical and Population Games [PDF]
• Kevin Lai: Last-iterate convergence rates for min-max optimization [PDF]
• Mingrui Liu: Decentralized Parallel Algorithm for Training Generative Adversarial Nets [PDF]
• Lisa Lee: Efficient Exploration via State Marginal Matching [PDF]
9:30 Poster session + Coffee break
11:00 Invited talk, David Balduzzi Composition, learning, and games [abstract] [video]
11:40 Contributed Talk, Praneeth Netrapalli What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? [video] starts at 41:35
12:05 Contributed talk, Tanner Fiez Characterizing Equilibria in Stackelberg Games [video] starts at 1:06:03
12:30 Lunch break

Afternoon Schedule

Time Speaker Title
14:00 Invited talk, Fei Fang Integrating Machine Learning with Game Theory for Societal Challenges [abstract] [video]
14:40 Contributed talk, Yuanhao Wang On Solving Local Minimax Optimization: A Follow-the-Ridge Approach [video] starts at 35:05
15:05 Contributed talk, Elizabeth Bondi Exploiting Uncertain Real-Time Information from Deep Learning in Signaling Games for Security and Sustainability [video] starts at 1:01:44
15:30 Coffee break
16:00 Invited talk, Aryan Mokhtari Understanding the role of optimism in minimax optimization [abstract] [video]
16:40 Poster spotlights [video] starts at 36:16
• Andrew Bennett: Deep Generalized Method of Moments for Instrumental Variable Analysis [PDF]
• Moksh Jain: Proximal Policy Optimization for Improved Convergence in IRGAN [PDF]
• Ryan D'Orazio: Bounds for Approximate Regret-Matching Algorithms [PDF]
• Benjamin Chasnov: Opponent Anticipation via Conjectural Variations [PDF]
• Hongkai Zheng: Implicit competitive regularization in GANs [PDF]
• Christos Tsirigotis: Objectives Towards Stable Adversarial Training Without Gradient Penalties [PDF]
• Ian Gemp: The Unreasonable Effectiveness of Adam on Cycles [PDF]
• Konstantin Mishchenko: Revisiting Stochastic Extragradient [PDF]
• Gabriele Farina: Compositional Calculus of Regret Minimizers [PDF]
• Adam Lerer: Search in Cooperative Partially Observable Games [PDF]
• Ioannis Panageas: Last-Iterate Convergence: Zero-Sum Games and Constrained Min-Max Optimization [PDF]
17:00 Discussion panel: David Balduzzi, Elizabeth Bondi, Noam Brown, Praneeth Netrapalli, Eva Tardos, Jakob Foerster [video] starts at 1:00:00
17:30 Organizers Concluding remarks and afternoon poster session [video] starts at 1:43:20
18:30 Workshop ends

Accepted Contributions

Call for Contributions

We are soliciting contributions that address one of the questions below, or, secondarily, another question at the intersection of modern machine learning and games. This year we are particularly interested in accepting work that uses non-standard formulations and applications of games in ML.

  • How can we integrate learning with game theory? (e.g. [Schuurmans et al., 2016])
  • How can we inject deep learning into games and vice versa (e.g., actor-critic formulations can be cast as a game)?
  • What are the practical implications and applications?
  • How do we go beyond the standard GAN discussion and model general agents that interact with each other in a learning context?
  • What can we say about the existence and uniqueness of equilibria in smooth games?
  • Can approximate mixed equilibria have better properties than exact ones? [Arora et al., 2017] [Lipton et al., 2002]
  • Can we define a weaker notion of solution than Nash Equilibria? [Papadimitriou, Piliouras, 2018]
  • Can we compare the quality/performance of Nash equilibria and cycles? Are there points with a better quality/outcome than Nash equilibria? [Kleinberg et al., 2011]
  • How do we design efficient algorithms that are guaranteed to achieve the desired solutions?
  • Finally, how do we design better objectives to match a specific ML task at hand?
Submission details

    A submission should take the form of an anonymous extended abstract (2-4 pages long, excluding references) in PDF format using the following modified NeurIPS style. The submission process will be handled via CMT. Previously published (or under-review) work is acceptable, but it must be clearly indicated as such when submitting; please add a footnote in the PDF indicating the venue where the work has been published or submitted. Submissions can be accepted as contributed talks, spotlight, or poster presentations (all accepted submissions can have a poster). Extended abstracts must be submitted by September 16, 2019 (11:59pm AoE). Final versions will be posted on the workshop website (they are archival but do not constitute a proceedings).

    A limited number of NeurIPS registration slots will be available for accepted talks and posters at this workshop. We do not control the number, so not all accepted posters may receive a slot. We strongly advise you to first try to register through the NeurIPS lottery to increase your chances of getting a registration slot. A workshop slot guarantees the ability to register, but you will still have to pay the registration fee.

    Key Dates:

    • Abstract submission deadline: September 16, 2019 (11:59pm AoE) via CMT
    • Acceptance notification: October 1, 2019


Acknowledgement to TPC

We would like to thank the following members of the technical program committee for participating in the review process for the workshop.

Abhishek Gupta, Aryan Mokhtari, Bert Huang, Chi Jin, Chidubem G Arachie, Damien Scieur, Daniel Hennes, David Balduzzi, Jan Balaguer, Jason Lee, Konstantin Mishchenko, Marc Lanctot, Maxim Raginsky, Nicolas Loizou, Panayotis Mertikopoulos, Pavel Dvurechensky, Sarath Pattathil, Thomas Anthony, Tianbao Yang, Volkan Cevher, Yair Carmon

Relevant References

Abernethy, J.D., Bartlett, P.L., Rakhlin, A., Tewari, A., Optimal strategies and minimax lower bounds for online convex games. In COLT 2009.

Arora, S., Ge, R., Liang, Y., Ma, T., Zhang, Y., Generalization and Equilibrium in Generative Adversarial Nets (GANs). In ICML 2017.

Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. and Graepel, T., 2018. The Mechanics of n-Player Differentiable Games. In ICML 2018.

Daskalakis, C., Goldberg, P., Papadimitriou, C., The Complexity of Computing a Nash Equilibrium. SIAM J. Comput., 2009.

Daskalakis, C., Ilyas, A., Syrgkanis, V., Zeng, H., Training GANs with Optimism. In ICLR 2018.

Ewerhart, C., Ordinal Potentials in Smooth Games (SSRN Scholarly Paper No. ID 3054604). Social Science Research Network, Rochester, NY, 2017.

Fedus, W., Rosca, M., Lakshminarayaan, B., Dai, A.M., Mohamed, S., Goodfellow, I., Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. In ICLR 2018.

Gidel, G., Jebara, T., Lacoste-Julien, S. Frank-Wolfe Algorithms for Saddle Point Problems. In AISTATS 2017.

Gidel, G., Berard,H., Vincent, P., Lacoste-Julien, S., A Variational Inequality Perspective on Generative Adversarial Networks. arXiv:1802.10551 [cs, math, stat], 2018.

Grnarova, P., Levy, K.Y., Lucchi, A., Hofmann, T., Krause, A., An Online Learning Approach to Generative Adversarial Networks. In ICLR 2018.

Harker, P.T., Pang, J.-S., Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Mathematical Programming, 1990.

Hazan, E., Singh, K., Zhang, C., Efficient Regret Minimization in Non-Convex Games, in ICML 2017.

Karlin, S., Weiss, G., The Theory of Infinite Games, Mathematical Methods and Theory in Games, Programming, and Economics, 1959.

Lipton, R.J., Young, N.E., Simple Strategies for Large Zero-sum Games with Applications to Complexity Theory. In STOC 1994.

Mescheder, L., Nowozin, S., Geiger, A., The Numerics of GANs. In NeurIPS 2017.

Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V., Algorithmic Game Theory. Cambridge University Press, 2007.

Pfau, D., Vinyals, O., Connecting Generative Adversarial Networks and Actor-Critic Methods. arXiv:1610.01945 [cs, stat], 2016.

Roughgarden, T., Intrinsic Robustness of the Price of Anarchy. Communications of the ACM, 2009.

Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.-S., Convex Optimization, Game Theory, and Variational Inequality Theory. IEEE Signal Processing Magazine, 2010.

Syrgkanis, V., Agarwal, A., Luo, H., Schapire, R.E., Fast Convergence of Regularized Learning in Games, in NeurIPS 2015.

Von Neumann, J., Morgenstern, O., Theory of Games and Economic Behavior. Princeton University Press, 1944.

Schuurmans, D., Zinkevich, M.A., Deep Learning Games. In NeurIPS 2016.


Vancouver Convention Centre, Canada Place
1055 Canada Pl, Vancouver, BC V6C 0C3