Recently, there has been tremendous progress in artificial intelligence (AI), computational intelligence (CI), and games. In 2015, Google DeepMind published the Nature paper “Human-level control through deep reinforcement learning,” demonstrating that AI&CI methods can learn to play Atari video games directly from screen capture. In 2016, it published the Nature cover paper “Mastering the game of Go with deep neural networks and tree search,” introducing the computer Go program AlphaGo. In March 2016, AlphaGo beat the world’s top Go player Lee Sedol 4:1. In early 2017, Master, a variant of AlphaGo, won 60 matches against top Go players. In late 2017, AlphaGo Zero, which learned only from self-play, beat the original AlphaGo without a single loss (Nature 2017). This marked a new milestone in AI&CI history, at the core of which is the algorithm of deep reinforcement learning (DRL). The achievements of DRL in games extend further still. In 2017, AI programs beat expert players in Texas Hold’em poker (Science 2017); OpenAI developed an AI that outperformed the champion in 1v1 Dota 2; Facebook released a large StarCraft I database; and Blizzard and DeepMind turned StarCraft II into an AI research environment with a more open interface. In all of these games, DRL plays an important role.
Needless to say, the first great achievements of DRL were obtained in the domain of games, and it is timely to report major advances in a special issue of IEEE Computational Intelligence Magazine. IEEE Transactions on Neural Networks and Learning Systems and IEEE Transactions on Computational Intelligence and AI in Games organized similar special issues in 2017.
DRL outputs control signals directly from input images, integrating the perceptual capacity of deep learning (DL) with the decision-making capability of reinforcement learning (RL). This mechanism has many similarities to human modes of thinking. However, much work remains. Theoretical analysis of DRL, e.g., its convergence, stability, and optimality, is still in its early days. Learning efficiency needs to be improved, whether by proposing new algorithms or by combining DRL with other methods. DRL algorithms also need to be demonstrated in more diverse practical settings. The aim of this special issue is therefore to publish the most advanced research and state-of-the-art contributions in the field of DRL and its application in games. We expect this special issue to provide a platform for international researchers to exchange ideas and present their latest work on relevant topics. Specific topics of interest include, but are not limited to:
· Survey on DRL and games;
· New AI&CI algorithms in games;
· New algorithms of DL, RL and DRL;
· Theoretical foundation of DL, RL and DRL;
· DRL combined with search algorithms or other learning methods;
· Challenges of AI&CI in games, such as limitations in strategy learning;
· Applications of DRL or AI&CI game techniques in realistic and complex systems.
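To illustrate the mechanism described above, the sketch below shows the temporal-difference update at the heart of RL, which DRL agents such as DQN minimize with a deep network over raw pixels instead of a lookup table. It is a minimal, hypothetical example: the 5-state chain environment and all hyperparameters are illustrative choices, not part of any system discussed in this issue.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # toy chain; actions: 0 = left, 1 = right
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

def step(s, a):
    """Move along the chain; reaching the right end gives reward 1."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))      # in DRL, a deep network replaces this table
for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection balances exploration and exploitation
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: the same temporal-difference target that DQN
        # minimizes by gradient descent on the network weights
        Q[s, a] += ALPHA * (r + GAMMA * (0.0 if done else Q[s2].max()) - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # greedy action per state; 1 = move right
```

The open theoretical questions noted above (convergence, stability, optimality) arise precisely when the table here is replaced by a nonlinear function approximator.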
Submission Deadline: October 1st, 2018
Notification of Review Results: December 10th, 2018
Submission of Revised Manuscripts: January 31st, 2019
Submission of Final Manuscript: March 15th, 2019
Special Issue Publication: August 2019 Issue
D. Zhao, Institute of Automation, Chinese Academy of Sciences, China, Dongbin.firstname.lastname@example.org
Dr. Zhao is a professor at the Institute of Automation, Chinese Academy of Sciences, and also a professor with the University of Chinese Academy of Sciences, China. His current research interests include deep reinforcement learning, computational intelligence, adaptive dynamic programming, games, and robotics. Dr. Zhao is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems and IEEE Computational Intelligence Magazine, among others. He is the Chair of the Beijing Chapter, and the past Chair of the Adaptive Dynamic Programming and Reinforcement Learning Technical Committee, of the IEEE Computational Intelligence Society (CIS). He has served as guest editor for several renowned international journals, including as leading guest editor of the IEEE Transactions on Neural Networks and Learning Systems special issue on Deep Reinforcement Learning and Adaptive Dynamic Programming.
S. Lucas, Queen Mary University of London, UK, email@example.com
Dr. Lucas was a full professor of computer science in the School of Computer Science and Electronic Engineering at the University of Essex until July 31, 2017, and is now Professor and Head of the School of Electronic Engineering and Computer Science at Queen Mary University of London. He was the Founding Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games, and co-founded the IEEE Conference on Computational Intelligence and Games, first held at the University of Essex in 2005. He is the Vice President for Education of the IEEE Computational Intelligence Society. His research has gravitated toward game AI: games provide an ideal arena for AI research and also make an excellent application area.
J. Togelius, New York University, USA, firstname.lastname@example.org.
Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, and fair and relevant benchmarking of AI through game-based competitions. He is the Editor-in-Chief of IEEE Transactions on Computational Intelligence and AI in Games, and a past chair of the IEEE CIS Technical Committee on Games.
1. IEEE CIM requires all prospective authors to submit their manuscripts electronically, as a PDF file. The maximum length for papers is typically 20 double-spaced typed pages in 12-point font, including figures and references. Submitted manuscripts must be written in English in single-column format. Authors should specify up to five keywords on the first page of the submitted manuscript. Additional submission guidelines and information for authors are provided on the IEEE CIM website. Submissions will be made via https://easychair.org/conferences/?conf=ieeecimcitbb2018.
2. Also send an email to guest editor D. Zhao (email@example.com) with the subject “IEEE CIM special issue submission” to notify us of your submission.
3. Early submissions are welcome. We will start the review process as soon as we receive your contribution.