A Generalized Framework for Self-Play Training
2019 (English). In: 2019 IEEE Conference on Games (CoG), IEEE, 2019, p. 1-8. Conference paper, published paper (refereed).
Abstract [en]
Throughout scientific history, overarching theoretical frameworks have allowed researchers to grow beyond personal intuitions and culturally biased theories. Such frameworks make it possible to verify and replicate existing findings, and to link otherwise disconnected results. The notion of self-play, although often cited in multiagent reinforcement learning, has never been grounded in a formal model. We present a formalized framework, with clearly defined assumptions, which encapsulates the meaning of self-play as abstracted from various existing self-play algorithms. This framework is framed as an approximation to a theoretical solution concept for multiagent training. In a simple environment, we qualitatively measure how well a subset of the captured self-play methods approximate this solution when paired with the well-known PPO algorithm. The results indicate that the trained policies exhibit cyclic evolutions throughout training, showing that self-play research is still at an early stage.
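The record gives no implementation details, but as a rough illustration of the kind of loop the abstract alludes to (a learning agent trained with PPO against opponents drawn from its own past policies), here is a minimal sketch. All names in it (Menagerie, sample_opponent, ppo_update, snapshot_every) are hypothetical, the PPO step is stubbed out, and this is not the paper's framework, only a generic self-play skeleton under those assumptions.

```python
# Generic self-play training skeleton (illustrative only; not the paper's API).
import copy
import random

class Menagerie:
    """Pool of frozen policy snapshots that supplies training opponents."""
    def __init__(self, initial_policy):
        self.snapshots = [copy.deepcopy(initial_policy)]

    def sample_opponent(self):
        # Naive uniform sampling over past snapshots; concrete self-play
        # schemes differ mainly in how this opponent distribution is chosen.
        return random.choice(self.snapshots)

    def add(self, policy):
        self.snapshots.append(copy.deepcopy(policy))

def ppo_update(policy, opponent):
    # Placeholder for a real PPO step on trajectories gathered by playing
    # `policy` against `opponent`; here we just perturb a toy weight.
    policy["w"] += random.gauss(0.0, 0.01)

def self_play(num_iters=1000, snapshot_every=100):
    policy = {"w": 0.0}  # toy stand-in for network parameters
    menagerie = Menagerie(policy)
    for t in range(1, num_iters + 1):
        opponent = menagerie.sample_opponent()
        ppo_update(policy, opponent)
        if t % snapshot_every == 0:
            # Freeze the current policy so it can serve as a future opponent.
            menagerie.add(policy)
    return policy, menagerie

if __name__ == "__main__":
    final_policy, pool = self_play()
    print(f"final toy weight: {final_policy['w']:.3f}, pool size: {len(pool.snapshots)}")
```

The cyclic policy evolutions the abstract reports would show up in a loop like this as later snapshots losing to much earlier ones, rather than the pool improving monotonically.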
Place, publisher, year, edition, pages
IEEE, 2019. p. 1-8
Series
IEEE Conference on Computational Intelligence and Games, ISSN 2325-4270
National Category
Computer Sciences; Software Engineering
Identifiers
URN: urn:nbn:se:uu:diva-494395
DOI: 10.1109/CIG.2019.8848006
ISI: 000843154300088
ISBN: 978-1-7281-1884-0 (electronic)
OAI: oai:DiVA.org:uu-494395
DiVA, id: diva2:1728294
Conference
IEEE Conference on Games (IEEE COG), August 20-23, 2019, London, England
Available from: 2023-01-18. Created: 2023-01-18. Last updated: 2023-01-18. Bibliographically approved.