XCSR Learning from Compressed Data Acquired by Deep Neural Network
Kazuma Matsumoto*, Takato Tatsumi*, Hiroyuki Sato*, Tim Kovacs**, and Keiki Takadama*
*The University of Electro-Communications
1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
**The University of Bristol
MVB, Woodland Rd., Bristol, BS8 1UB, United Kingdom
Deep learning has improved the classification accuracy of neural networks, and in some fields their accuracy now exceeds human performance. This paper proposes a hybrid system of a neural network and the Learning Classifier System (LCS), an evolutionary rule-based machine learning method that employs reinforcement learning. To increase classification accuracy, the neural network compresses the input data and the LCS learns from the compressed representation. Benchmark experiments were conducted to verify the proposed system. The experiments revealed that: 1) the classification accuracy of the proposed system is higher than that of the conventional LCS (XCSR) and of a plain neural network; and 2) the covering mechanism of XCSR raises the accuracy of the proposed system.
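The covering mechanism mentioned above can be illustrated with a minimal sketch, assuming the center-spread interval representation that Wilson's XCSR uses for real-valued conditions: when no existing classifier matches an input, covering creates a new classifier whose condition is centered on that input, so the input is matched by construction. The function names and the spread parameter `s0` here are illustrative, not taken from the paper.

```python
import random

def matches(condition, state):
    """XCSR interval matching: each attribute is a (center, spread) pair,
    and the state value must fall inside [center - spread, center + spread]."""
    return all(c - s <= x <= c + s for (c, s), x in zip(condition, state))

def cover(state, s0=0.5, rng=random):
    """Covering: build a condition centered on the unmatched input,
    with each spread drawn uniformly from [0, s0)."""
    return [(x, rng.uniform(0, s0)) for x in state]

# An unmatched input always satisfies the condition created to cover it.
state = [0.2, 0.8]
condition = cover(state)
assert matches(condition, state)
```

Because every spread is non-negative and each interval is centered exactly on the input that triggered covering, the new classifier is guaranteed to match that input, which is how covering keeps the match set from being empty.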
-  G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, Vol.18, No.7, pp. 1527-1554, 2006.
-  J. H. Holland, “Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems,” Machine Learning: An Artificial Intelligence Approach, Vol.2, pp. 593-623, 1986.
-  S. W. Wilson, “Get real! XCS with continuous-valued inputs,” Learning Classifier Systems, pp. 209-219, Springer, 2000.
-  K. O. Stanley and R. Miikkulainen, “Evolving neural networks through augmenting topologies,” Evolutionary computation, Vol.10, No.2, pp. 99-127, 2002.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” Nature, Vol.518, No.7540, pp. 529-533, 2015.
-  R. S. Sutton, “Learning to predict by the methods of temporal differences,” Machine learning, Vol.3, No.1, pp. 9-44, 1988.
-  G. W. Cottrell and P. Munro, “Principal components analysis of images via back propagation,” Visual Communications and Image Processing ’88: 3rd in a Series, pp. 1070-1077, International Society for Optics and Photonics, 1988.
-  W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” The bulletin of mathematical biophysics, Vol.5, No.4, pp. 115-133, 1943.
-  D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Cognitive Modeling, Vol.5, No.3, 1988.
-  S. W. Wilson, “Classifier fitness based on accuracy,” Evolutionary Computation, Vol.3, No.2, pp. 149-175, 1995.
-  M. V. Butz and S. W. Wilson, “An Algorithmic Description of XCS,” Soft Computing, Vol.6, No.3-4, pp. 144-153, 2002.
-  M. V. Butz, D. E. Goldberg, and K. Tharakunnel, “Analysis and Improvement of Fitness Exploitation in XCS: Bounding Models, Tournament Selection, and Bilateral Accuracy,” Evolutionary Computation, Vol.11, No.3, pp. 239-277, 2003.
-  G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, Vol.313, No.5786, pp. 504-507, 2006.
-  R. P. Gorman and T. J. Sejnowski, “Analysis of hidden units in a layered network trained to classify sonar targets,” Neural Networks, Vol.1, No.1, pp. 75-89, 1988.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.