
JACIII Vol.21 No.5 pp. 868-875
doi: 10.20965/jaciii.2017.p0868
(2017)

Paper:

Exemplar-Based Learning Classifier System with Dynamic Matching Range for Imbalanced Data

Hiroyasu Matsushima* and Keiki Takadama**

*The National Institute of Advanced Industrial Science and Technology (AIST)
Tsukuba Center 1, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8560, Japan

**The University of Electro-Communications
1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan

Received: March 21, 2017
Accepted: July 21, 2017
Published: September 20, 2017
Keywords: learning classifier system, exemplar, knowledge extraction, imbalanced data set, single-step problems
Abstract

In this paper, we propose a method that improves ECS-DMR so that it produces appropriate outputs for imbalanced data sets. To control the generalization of the LCS on imbalanced data, the proposed method applies the imbalance ratio of the data set to a sigmoid function and then updates the matching range accordingly. Compared with our previous work (ECS-DMR), the proposed method automatically controls the generalization of the matching range so as to extract exemplars that cover the given problem space, which consists of an imbalanced data set. The experimental results suggest that the proposed method provides stable performance on imbalanced data sets and demonstrate the effect of the sigmoid function that takes the data balance into account.
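The abstract describes controlling generalization by feeding the data set's imbalance ratio through a sigmoid function and using the result to adjust the matching range. The following is a minimal Python sketch of that general idea only; the function names, the direction of the adjustment, and the parameters (base_range, gain, offset) are illustrative assumptions and are not the update rule defined in the paper.

import numpy as np

def imbalance_ratio(labels):
    """Ratio of majority-class count to minority-class count (>= 1)."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.min()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_matching_range(base_range, labels, gain=0.1, offset=1.0):
    """Narrow the matching range as imbalance grows.

    base_range, gain, and offset are illustrative parameters, not values
    taken from the paper; the paper's actual rule may scale differently.
    """
    ir = imbalance_ratio(labels)
    # sigmoid(gain * (ir - offset)) approaches 1 as imbalance increases,
    # so the factor (1 - sigmoid) narrows the range for highly imbalanced data.
    return base_range * (1.0 - sigmoid(gain * (ir - offset)))

# Example: a 9:1 imbalanced label set narrows an initial range of 0.5.
labels = np.array([0] * 90 + [1] * 10)
print(update_matching_range(0.5, labels))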

Cite this article as:
H. Matsushima and K. Takadama, “Exemplar-Based Learning Classifier System with Dynamic Matching Range for Imbalanced Data,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.5, pp. 868-875, 2017.
