Special Issue on Cutting Edge of Reinforcement Learning and its Hybrid Methods
Keiki Takadama and Kazuteru Miyazaki
Professor, The University of Electro-Communications, Japan
Associate Professor, National Institution for Academic Degrees and Quality Enhancement of Higher Education, Japan
Machine learning has been attracting significant attention again since the potential of deep learning was recognized. Not only has machine learning been improved, but it has also been integrated with “reinforcement learning,” revealing other potential applications, e.g., deep Q-networks (DQN) and AlphaGo proposed by Google DeepMind. It is against this background that this special issue, “Cutting Edge of Reinforcement Learning and its Hybrid Methods,” focuses on both reinforcement learning and its hybrid methods, including reinforcement learning combined with deep learning or evolutionary computation, to explore new potentials of reinforcement learning.
Of the many contributions received, we finally selected 13 works for publication. The first three propose hybrids of deep learning and reinforcement learning for single-agent environments, which include the latest research results in the areas of convolutional neural networks and DQN. The fourth through seventh works are related to the Learning Classifier System, which integrates evolutionary computation and reinforcement learning to develop a rule discovery mechanism. The eighth and ninth works address problems related to goal design or reward design, an issue that is particularly important to the application of reinforcement learning. The last four contributions deal with multiagent environments.
These works cover a wide range of studies, from the expansion of techniques incorporating simultaneous learning to applications in multiagent environments. All works are on the cutting edge of reinforcement learning and its hybrid methods. We hope that this special issue makes a significant contribution to the development of the reinforcement learning field.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.