Workshop 1: Population-based Multi-agent Reinforcement Learning

Organizers:

Abstract:

Population-based multi-agent reinforcement learning (PB-MARL) refers to a family of methods that combine dynamic population selection with multi-agent reinforcement learning (MARL) algorithms. In recent years, PB-MARL has shown great potential for non-trivial multi-agent tasks such as real-time strategy (RTS) games and poker. This workshop will bring together researchers working at the intersection of population-based learning and multi-agent reinforcement learning. We hope it will help interested researchers outside the field gain a high-level view of the current state of the art and potential directions for future contributions.

Schedule (Jan 3, 2022, 14:00-17:15 UTC+8, Room B):

Welcome and Introduction, 14:00-14:05

Ming Zhou, Shanghai Jiao Tong University, 14:05-14:50

Xidong Feng, University College London, 15:15-15:50

Le Cong Dinh, University of Southampton, 15:50-16:30

Stephen McAleer, University of California, Irvine, 16:30-17:10

Workshop 2: Workshop on Learning and Optimization

Organizers:

Abstract:

In recent years, optimization and computational intelligence methods have achieved remarkable results in domains including robotics, games, circuit design, and large-scale scheduling engines. This workshop will bring together researchers working in the fields of optimization, reinforcement learning, and evolutionary computation, and will cover cutting-edge research from both the academic and industrial communities. It aims to help interested researchers both inside and outside the field gain a high-level view of the current state of the art and of potential directions and applications for the future.

Date: Jan 4, 2022, 9:00-18:00 UTC+8.

Schedule: https://learnopt2021.github.io/2021/