Smooth Games Optimization and Machine Learning Workshop:

Bridging Game Theory and Deep Learning




Dec 14th, NeurIPS 2019, Vancouver.


Overview

Advances in generative modeling and adversarial learning have given rise to renewed interest in differentiable two-player games, with much of the attention falling on generative adversarial networks (GANs). Solving these games introduces distinct challenges compared to the standard minimization tasks that the machine learning (ML) community is used to. A symptom of this is ML and deep learning (DL) practitioners applying single-objective optimization tools to game-theoretic problems. Recent work seeks to rectify this situation by bringing game-theoretic tools into ML.

At NeurIPS 2018 we held "Smooth games optimization in ML", a workshop with this scope and goal in mind. Last year's workshop addressed theoretical aspects of games in machine learning, their special dynamics, and typical challenges. Talks by Costis Daskalakis, Niao He, Jacob Abernethy and Paulina Grnarova emphasized various fundamental topics in a pure, simplified theoretical setting, and a number of contributed talks and posters tackled similar questions. The workshop culminated in a panel discussion that identified a number of interesting open questions.

The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the state of our understanding of the interplay between smooth games and their applications in ML, as well as existing tools and methods for dealing with them. We are looking for contributions that identify and discuss open, forward-looking problems of interest to the NeurIPS community.
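One of these challenges can be seen in a two-line example. In the bilinear game min_x max_y xy, naive simultaneous gradient descent-ascent, the direct analogue of gradient descent, spirals away from the unique equilibrium at the origin, whereas game-aware updates such as the extragradient method converge (see, e.g., Mescheder et al., 2017; Gidel et al., 2018). The sketch below is a minimal toy illustration of this dynamic; the objective, step size, and iteration budget are arbitrary illustrative choices.

    # Toy bilinear game: min_x max_y  x * y, with unique equilibrium at (0, 0).
    # The gradient of x * y is y with respect to x, and x with respect to y.

    def simultaneous_gda(x, y, lr=0.1, steps=100):
        """Naive simultaneous gradient descent-ascent: rotates and spirals away from (0, 0)."""
        for _ in range(steps):
            x, y = x - lr * y, y + lr * x  # descent step on x, ascent step on y
        return x, y

    def extragradient(x, y, lr=0.1, steps=100):
        """Extragradient: a look-ahead (prediction) step, then an update using look-ahead gradients."""
        for _ in range(steps):
            x_half, y_half = x - lr * y, y + lr * x  # prediction step
            x, y = x - lr * y_half, y + lr * x_half  # correction step using gradients at the look-ahead point
        return x, y

    print("simultaneous GDA:", simultaneous_gda(1.0, 1.0))  # distance from (0, 0) grows
    print("extragradient:   ", extragradient(1.0, 1.0))     # distance from (0, 0) shrinks

The contrast between these two updates is one small example of why optimization intuition from single-objective problems does not transfer directly to games, and of the kind of question this workshop aims to discuss.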

Call for Contributions

We are soliciting contributions that address one of the questions below or, secondarily, another question at the intersection of modern machine learning and games. This year we are particularly interested in accepting work that explores non-standard formulations and applications of games in ML.

  • How can we integrate learning with game theory? (e.g. [Schuurmans et al., 2016])
  • How can we inject deep learning into games and vice versa (e.g., actor-critic formulations can be cast as a game)?
  • What are the practical implications and applications?
  • How do we go beyond the standard GAN discussion and model general agents that interact with each other in a learning context?
  • What can we say about the existence and uniqueness results of equilibria in smooth games?
  • Can approximate mixed equilibria have better properties than exact ones? [Arora et al., 2017] [Lipton et al., 2002]
  • Can we define a weaker notion of solution than Nash equilibria? [Papadimitriou, Piliouras, 2018]
  • Can we compare the quality/performance of Nash equilibria/cycles? Are there points that have a better quality/outcome than Nash equilibria? [Kleinberg et al. 2011]
  • How do we design efficient algorithms that are guaranteed to achieve the desired solutions?
  • Finally, how do we design better objectives to match a specific ML task at hand?
Submission Details

A submission should take the form of an anonymous extended abstract (2-4 pages long, excluding references) in PDF format using the following modified NeurIPS style. The submission process will be handled via CMT. Previously published or under-review work is acceptable, though it must be clearly indicated as such when submitting; please add a footnote in the PDF indicating the venue where the work has been published or submitted. Submissions can be accepted as contributed talks, spotlights, or poster presentations (all accepted submissions can have a poster). Extended abstracts must be submitted by September 16, 2019 (11:59pm AoE). Final versions will be posted on the workshop website (they are archival but do not constitute a proceedings).

A limited number of NeurIPS registration slots will be available for authors of accepted talks and posters at this workshop. We do not control this number, so it may happen that not all accepted posters get a slot. We strongly advise you to first try to register through the NeurIPS lottery to increase your chances of getting a registration slot. A registration slot guarantees you the ability to register, but you will still have to pay the registration fee.

Key Dates:

• Abstract submission deadline: September 16, 2019 (11:59pm AoE) via CMT
• Acceptance notification: October 1, 2019

Invited Speakers

Accepted Contributions


Organizers

Relevant References

Abernethy, J.D., Bartlett, P.L., Rakhlin, A., Tewari, A., Optimal strategies and minimax lower bounds for online convex games. In COLT 2009.

Arora, S., Ge, R., Liang, Y., Ma, T., Zhang, Y., Generalization and Equilibrium in Generative Adversarial Nets (GANs). In ICML 2017.

Balduzzi, D., Racaniere, S., Martens, J., Foerster, J., Tuyls, K. and Graepel, T., 2018. The Mechanics of n-Player Differentiable Games. In ICML 2018.

Daskalakis, C., Goldberg, P., Papadimitriou, C., The Complexity of Computing a Nash Equilibrium. SIAM J. Comput., 2009.

Daskalakis, C., Ilyas, A., Syrgkanis, V., Zeng, H., Training GANs with Optimism. In ICLR 2018.

Ewerhart, C., Ordinal Potentials in Smooth Games (SSRN Scholarly Paper No. ID 3054604). Social Science Research Network, Rochester, NY, 2017.

Fedus, W., Rosca, M., Lakshminarayanan, B., Dai, A.M., Mohamed, S., Goodfellow, I., Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step. In ICLR 2018.

Gidel, G., Jebara, T., Lacoste-Julien, S. Frank-Wolfe Algorithms for Saddle Point Problems. In AISTATS 2017.

Gidel, G., Berard, H., Vincent, P., Lacoste-Julien, S., A Variational Inequality Perspective on Generative Adversarial Networks. arXiv:1802.10551 [cs, math, stat], 2018.

Grnarova, P., Levy, K.Y., Lucchi, A., Hofmann, T., Krause, A., An Online Learning Approach to Generative Adversarial Networks. In ICLR 2018.

Harker, P.T., Pang, J.-S., Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Mathematical Programming, 1990.

Hazan, E., Singh, K., Zhang, C., Efficient Regret Minimization in Non-Convex Games, in ICML 2017.

Karlin, S., Weiss, G., The Theory of Infinite Games, Mathematical Methods and Theory in Games, Programming, and Economics, 1959.

Lipton, R.J., Young, N.E., Simple Strategies for Large Zero-sum Games with Applications to Complexity Theory. In STOC 1994.

Mescheder, L., Nowozin, S., Geiger, A., The Numerics of GANs. In NeurIPS 2017.

Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V., Algorithmic Game Theory. Cambridge University Press, 2007.

Pfau, D., Vinyals, O., Connecting Generative Adversarial Networks and Actor-Critic Methods. arXiv:1610.01945 [cs, stat], 2016.

Roughgarden, T., Intrinsic Robustness of the Price of Anarchy. Communications of the ACM, 2009.

Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.-S., Convex Optimization, Game Theory, and Variational Inequality Theory. IEEE Signal Processing Magazine, 2010.

Syrgkanis, V., Agarwal, A., Luo, H., Schapire, R.E., Fast Convergence of Regularized Learning in Games, in NeurIPS 2015.

Von Neumann, J., Morgenstern, O., Theory of Games and Economic Behavior. Princeton University Press, 1944.

Schuurmans, D., Zinkevich, M.A., Deep Learning Games. In NeurIPS 2016.

Address


Vancouver Convention Centre
Canada Place
1055 Canada Pl, Vancouver, BC V6C 0C3