JACIII Vol.13 No.3 pp. 331-337
doi: 10.20965/jaciii.2009.p0331
(2009)

Paper:

Variable Ranking for Online Ensemble Learning

Hassab Elgawi Osman

Image Science and Engineering Lab, Tokyo Institute of Technology, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8503, Japan

Received:
September 26, 2008
Accepted:
February 10, 2009
Published:
May 20, 2009
Keywords:
ensemble learning, on-line learning, feature selection, random forests, NIPS 2003
Abstract
We propose incremental feature selection based on correlation ranking (CR) for classification problems, develop on-line training using the random forests (RF) algorithm, and evaluate the performance of the combination on an NIPS 2003 Feature Selection Challenge dataset. Results show that our approach achieves performance comparable to that of other batch learning algorithms, including RF.
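The abstract combines two ingredients: correlation ranking (CR) to score and select features, and on-line training of random forests on the surviving features. As a rough illustration of the CR step only, the Python sketch below ranks features by the absolute Pearson correlation between each feature column and the class label; the function name, the top-k cutoff, and the use of plain Pearson correlation are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def correlation_ranking(X, y):
        # Score each feature by |Pearson correlation| with the label,
        # then return feature indices sorted best-first.
        # (Illustrative sketch; the paper's exact CR scoring may differ.)
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        Xc = X - X.mean(axis=0)          # center each feature column
        yc = y - y.mean()                # center the labels
        cov = Xc.T @ yc                  # per-feature covariance with y (unnormalized)
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        scores = np.abs(cov / denom)     # absolute Pearson correlation per feature
        return np.argsort(scores)[::-1]

    # Hypothetical usage: keep the 10 top-ranked features of a toy dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(float)
    top10 = correlation_ranking(X, y)[:10]
    X_selected = X[:, top10]

In an on-line setting, the same ranking could be maintained from running sums of each feature, the label, and their product, so scores update with each new example instead of requiring a stored batch.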
Cite this article as:
H. Osman, “Variable Ranking for Online Ensemble Learning,” J. Adv. Comput. Intell. Intell. Inform., Vol.13 No.3, pp. 331-337, 2009.
References
  [1] L. Breiman, “Bagging predictors,” Machine Learning, Vol.24, No.2, pp. 123-140, 1996.
  [2] R. Schapire, Y. Freund, P. Bartlett, and W. Lee, “Boosting the margin: a new explanation for the effectiveness of voting methods,” Ann. Statist., Vol.26, No.5, pp. 1651-1686, 1998.
  [3] L. Breiman, “Random Forests,” Machine Learning, Vol.45, No.1, pp. 5-32, 2001.
  [4] H. Elgawi Osman, “Online Random Forests based on CorrFS and CorrBE,” in Proc. IEEE Workshop on Online Classification (CVPR'08), pp. 1-7, 2008.
  [5] H. Elgawi Osman, “Variable Ranking for Online Ensemble Learning,” in Proc. 24th Annual ACM Symposium on Applied Computing (ACM SAC), 2009.
  [6] R. Kohavi and G. John, “Wrappers for feature subset selection,” Artificial Intelligence, Vol.97, No.1-2, pp. 273-324, 1997.
  [7] H. Liu, H. Motoda, and L. Yu, “A selective sampling approach to active feature selection,” Artificial Intelligence, Vol.159, No.1-2, pp. 49-74, 2004.
  [8] H. Motoda and H. Liu, “Data reduction: feature selection,” in Handbook of Data Mining and Knowledge Discovery, Oxford University Press, New York, NY, 2002.
  [9] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” J. Machine Learning Research, Vol.3, pp. 1157-1182, 2003.
  [10] S. Perkins, K. Lacker, and J. Theiler, “Grafting: fast, incremental feature selection by gradient descent in function space,” J. Machine Learning Research, Vol.3, pp. 1333-1356, 2003.
  [11] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, “Classification and Regression Trees,” Wadsworth Inc., Belmont, California, 1984.
  [12] R. E. Banfield, L. O. Hall, K. W. Bowyer, D. Bhadoria, W. P. Kegelmeyer, and S. Eschrich, “A comparison of ensemble creation techniques,” in Proc. Int. Conf. on Multiple Classifier Systems, 2004.
  [13] I. Guyon, “Design of experiments of the NIPS 2003 variable selection benchmark,” http://www.nipsfsc.ecs.soton.ac.uk/papers/Datasets.pdf, 2003.
  [14] I. H. Witten and E. Frank, “Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations,” Morgan Kaufmann, San Francisco, 1999.
